Newser's AI Policy

Newser has made summarizing our business since 2007. Our journalists (who hail from places like Gannett properties, the Washington Post, and the Chicago Sun-Times) have fine-tuned the art of boiling the day’s most important and interesting stories into succinct news briefs.

Across hundreds of thousands of stories, we have come to understand and internally define what makes a summary successful: which facts are most essential, which filler should be omitted, which sources are most trustworthy, and which gray areas need deeper research and clarification.

The advent of generative AI could, on the one hand, seem like an existential threat to what we do. But we're choosing a different view: our expertise leaves us exceptionally well positioned to direct and edit the summarizing that AI is primed to do. Dan Shipper, CEO and co-founder of Every, summed up the potential new paradigm perfectly in this piece:

“Summarizing used to be a skill I needed to have, and a valuable one at that. But before it had been mostly invisible, bundled into an amorphous set of tasks that I’d called ‘intelligence’—things that only I and other humans could do. But now that I can use ChatGPT for summarizing, I’ve carved that task out of my skill set and handed it over to AI. Now, my intelligence has learned to be the thing that directs or edits summarizing, rather than doing the summarizing myself.”

Our first foray into using AI to summarize the news is underpinned by a chatbot we built with AnyforSoft using ChatGPT as its core technology. It’s an experiment. Here are the parameters we’ve set:

  • Our AI-generated stories will be entirely written by AI.
  • All AI-generated stories will have a byline identifying them as such, along with a link to this policy.
  • No AI-generated stories will appear on our homepage grid or most popular controls. These stories will live in their own controls and be clearly marked as Newser.ai.
  • No AI-generated art is currently being used anywhere on the site. If that changes, we’ll update the policy and clearly identify it.
  • Our AI-generated stories will be reviewed by an editor before posting. The editor will be tasked with fact-checking for errors and hallucinations and fixing anything unreadable or overly confusing, but not with rewriting the story to make it more Newser-like. The goal is to see what AI can produce and to keep refining the prompts to improve what we're presenting.
  • Mistakes are inevitable, as our human editors know. If you see one, let us know. If we correct one, we’ll make a note of it at the end of the story.

We have plans for subsequent phases in which these AI-generated stories will start to look and sound increasingly like Newser stories, but no one at Newser will lose their job because of AI. This is an effort to increase our output by having our journalists direct AI to create more stories for our readers.

If this works as we hope, we should be able to expand from our current 30 to 40 stories per day to hundreds, with more extensive coverage in areas such as business, sports, international news, and science. With more stories, we can launch software we have already developed that will suggest stories based on our readers' past consumption, making for a more personalized Newser experience.