
By Margaret Shreiner
In July 2025, the Wisconsin State Journal published, on the front page of its Sunday paper, an article about a development plan for downtown Madison that contained several factual errors. Later that day, the original article was taken down and replaced with a “re-reported” version and a disclaimer stating that the original story contained information and quotes from a fabricated source generated by unauthorized AI use. The reporter, Audrey Korte, was then fired for using AI software that Lee Enterprises had installed on her work computer.
The incident has since raised questions about who is at fault and whether local newsrooms should have clearer protocols for using AI.
As artificial intelligence begins to affect everything from health care to social media, news organizations face a changing environment. While larger, national news organizations are embracing what AI can do for their business, many local news organizations find themselves playing catch-up.
How can local newsrooms, operating with limited resources and often without guidelines, implement AI tools ethically and effectively?
Tomás Dodds, a professor at the UW–Madison School of Journalism and Mass Communication, has been leading these conversations with news organizations locally and internationally.
“I think the easy solution there is to pin it on the journalist, and to say there are basic things that a journalist should know, even if they don’t know how AI should be used, which is a fair argument,” Dodds said. “At the same time, you’re working in an organization that has no clear guidelines, no clear structures and no training for its journalists about how to use any of these technologies.”
The incident prompted Judith Davidoff, editor at the Isthmus, a Madison-based news organization that originally broke the news about Korte’s story, to create an AI policy for her newsroom.
After conversations with Dodds about AI policies and tools in the newsroom, Isthmus worked with Dodds and the Public Tech Media Lab at the University of Wisconsin–Madison to develop its own AI policies based on its specific needs.
“Tomás led two sessions with our staff where he discussed such things as how we use AI and how we don’t want to use AI,” Davidoff said. “Tomás is now drafting a policy for us that we will review and finalize.”
Implementing AI policies for seemingly small uses, such as transcribing interviews with Otter.ai — an AI platform that uses Large Language Models (LLMs) trained on voice data to transcribe and summarize meetings or interviews — might seem arbitrary, but Dodds argues the opposite.
“Those are not small uses; using AI for transcription, for example, is huge,” Dodds said. “If you upload a complete audio file to transcribe an interview, that interview is part of the [company’s] data set. LLM companies will say, ‘We use the data carefully,’ but you don’t know how it’s actually operating behind the scenes.”
Even a single line describing how AI is used to support certain tasks is important for maintaining trust and transparency, especially when private companies could access confidential communications between a source and a reporter.
For example, if a reporter were interviewing a government whistleblower who wanted to remain anonymous, the reporter should question whether using Otter.ai as a transcription service could expose the source to being identified. Likewise, a reporter interviewing a source about sensitive personal information cannot guarantee that information will stay private if it passes through a third-party AI system.
Established AI policies not only maintain transparency with the audience, but they also guide new reporters on proper reporting practices for that organization. Korte, for example, told Isthmus in an interview that despite having an AI program installed on her work computer, she received no training from the Wisconsin State Journal and was not given a written AI policy.
For newsrooms overburdened by reduced funding and fewer staff positions, AI tools could be extremely beneficial for productivity if implemented responsibly.
“We need to educate people, though, on how to use it. You must be aware that these things hallucinate. You have to take responsibility for your content, you need to bring your own voice into the writing,” said Edward C. Malthouse, a professor at Northwestern’s Medill School of Journalism and Integrated Marketing Communications.
To develop policies around the ethical use of AI in newsrooms, local organizations may also look to existing models to guide them.
For example, Mark Treinen, editor of the Madison-based Capital Times, said the paper has yet to solidify an AI policy but is reviewing models from other news organizations, attending training sessions applicable to The Capital Times and considering uses that would protect its credibility and ensure transparency with audience members.
Dodds explains that working with AI at the local level requires tailoring policies to the intended audience.
With fewer resources and staff to keep pace with developments in AI, local newsrooms risk simply mimicking larger newspapers. But those larger papers have vastly different audiences and business models, so their policies may not reflect a local paper’s guiding principles.
The result can be generic, ineffective guidelines for reporters.
“Newsrooms are different, they have different needs, and they serve different audiences,” Dodds said. “‘Man in the loop’ is this thing that newsrooms say, which is basically saying, AI will not operate alone, and we are going to be involved in the process. But when and how much oversight do you have?”
To anticipate and adapt to rapid technological advancements, newsrooms could develop a “living document” for AI policies, meaning the policies are continually updated to account for new uses of AI in journalism.
For example, while most newspapers may approve AI use only sparingly, such as for transcription or spell-checking, others have implemented AI tools on a larger scale.
The Washington Post continues to develop a Large Language Model based on its newspaper archives to power a chatbot, similar to OpenAI’s ChatGPT or Microsoft’s Copilot, that responds to readers’ prompts using the paper’s published work. As it refines the technology, the Post discloses to users that the tool is imperfect and still being improved.
Local newsrooms may also look to private companies for AI tools. For example, Lee Enterprises — the organization that owns the Wisconsin State Journal — installed Microsoft’s Copilot on company devices, and it was that tool Korte used to assist in her reporting.
But installing privately run AI tools means newsrooms have less insight into, and less reason to trust, both the tools themselves and the data the LLMs are trained on.
“You cannot tell your audience where the data is coming from because you don’t know, and so you cannot promise the audience that the data you’re working with is bias-free,” Dodds said.
Using off-the-shelf private tools like OpenAI’s ChatGPT or Microsoft’s Copilot may seem inexpensive at first because they eliminate the time, funds and people needed to build in-house LLMs, but they could prove costly and less effective in the long run if papers become hyper-dependent on the software.
“That’s the danger, and that’s what we want to avoid,” Dodds said. “We really want to encourage newsrooms to have as much of their own infrastructure as possible, that they own, that they can use and shape however their needs change with time.”
To combat overreliance on private companies, other organizations have emerged to guide journalists in using AI tools and in developing policies to follow.
For example, Dodds created the Public Tech Media Lab in the School of Journalism and Mass Communication at UW–Madison to work with newsrooms on developing their own in-house AI tools and policies, decreasing the likelihood of hyper-dependence on the private sector.
Similarly, another nonprofit organization, Hacks/Hackers, has begun working with newsrooms nationwide on innovative, ethical media technologies. Recently, the organization opened the Newsroom AI lab to support small newsrooms in developing Large Language Models.
AI is dynamic, constantly evolving and improving, and mistakes are likely along the way. But if newspapers are transparent with audiences about their use of AI from the beginning, they reduce the reputational risk that such mistakes can bring.
“People know the journalists are using AI; nobody’s fooled, so just tell them that you are using AI,” Dodds said.
Margaret Shreiner was a 2025 fellow at the Center for Journalism Ethics and is a recent graduate of the School of Journalism and Mass Communication at the University of Wisconsin–Madison.
The Center for Journalism Ethics encourages the highest standards in journalism ethics worldwide. We foster vigorous debate about ethical practices in journalism and provide a resource for producers, consumers and students of journalism.