
The Philly sheriff’s good news headlines? AI generated them.

The sheriff's campaign broke its silence on Monday, acknowledging that news stories attributed to local outlets were not real. It said a consultant had used ChatGPT.

The "news" headlines published by Sheriff Rochelle Bilal's campaign described her first term in glowing terms. They were fake, the campaign acknowledged Monday.
The "news" headlines published by Sheriff Rochelle Bilal's campaign described her first term in glowing terms. They were fake, the campaign acknowledged Monday.Read moreMatt Rourke / AP

Philadelphia Sheriff Rochelle Bilal’s campaign is claiming that a consultant used an artificial intelligence chatbot to generate dozens of phony news articles that were posted on her campaign website to highlight her first-term accomplishments.

The campaign broke its silence, releasing a statement in response to an Inquirer article published Monday morning that raised questions about the veracity of 31 favorable articles attributed to local news organizations including NBC10, CBS3, WHYY, and The Inquirer, each with supposed dates of publication.

Representatives for those organizations were not able to find any of the articles.


Last week, Bilal spokesperson Teresa Lundy declined to comment and referred questions to Bilal’s campaign manager, though she said she did not know who that person was. Bilal and her campaign also did not respond to requests for comment.

The article was picked up Monday by the Associated Press and the New York Post.

By Monday afternoon, the link that had previously directed readers to the phony headlines displayed a “page not found” message.

“After review, it has been determined that an outside consultant for the reelection campaign utilized ChatGPT in support of initiatives that were in fact completed by the Philadelphia Sheriff’s Office under the administration of Sheriff Rochelle Bilal,” read an unsigned statement released by Friends of Rochelle Bilal.

The campaign did not respond to follow-up questions seeking the name of Bilal’s campaign manager or the outside consultant.

“It is now clear that the artificial intelligence service generated fake news articles to support the initiatives that were part of the AI prompt,” the statement read. “Our campaign provided the outside consultant talking points which were then provided to the AI service.”

Bilal was elected in 2019, vowing to reform an office that has long been susceptible to corruption and dysfunction. She was reelected last November.

The headlines posted on Bilal’s website referred to her community outreach efforts, a program to distribute gun locks, antiviolence initiatives, and other topics related to law enforcement.

But it does not appear that any of those articles was ever published. It is unclear where the publication dates originated, or how the phony headlines came to be attributed to real news organizations.

The campaign, for instance, had claimed that NBC10 ran a dozen stories about the sheriff. But a station spokesperson said the digital team could not find any of them.

“We have one video similar to the Sheriff’s Office’s headline about the Sheriff’s Office handing out free gun locks,” NBC10 spokesperson Diana Torralvo said by email last week. “However, that story was done in 2016, before Rochelle Bilal was in office.”

Monday’s statement from the Bilal campaign said the sheriff “has been the subject of many positive media articles over the past four years,” and it provided links to two articles from 2021 by WPVI and Fox 29.

“Unfortunately,” the statement said, “ChatGPT did not provide a link to this and many other powerful stories of Sheriff Bilal’s impact on the community.”

Experts in media ethics said the Bilal episode, although bizarre and even comical, poses a real danger: Fabricated headlines can erode trust in public institutions and the news media, and can confuse voters who must do more work to wade through AI-generated information and “pink-slime journalism” created by partisan interests.

“You just keep spewing stuff out and it fatigues people and they don’t know what to believe,” said Matthew Jordan, professor of media studies and director of Penn State’s News Literacy Initiative.

Last week, as The Inquirer was reporting on the phony news articles, Bilal’s campaign temporarily pulled the headlines from her main campaign page, then restored the link with a “public disclaimer” stating that the campaign could not guarantee the “completeness, accuracy, reliability, suitability or availability with respect to the website or the information provided.”