Background
Carl Bernstein and Bob Woodward are the famous journalists who broke the Watergate scandal in 1972, which ultimately led to the resignation of U.S. President Nixon in 1974. They built their Watergate exposé using established journalistic techniques: they started by knocking on doors after a burglary and ended up exposing the crimes of a U.S. President. They remain respected figures in U.S. journalism, both for their award-winning reportage and for their famous book about Watergate – a book that Hollywood adapted into the film “All the President’s Men.”
The AI-Generated Article
AI’s rapid rise has generated fears of privacy loss, the spread of misinformation, and job losses. The AI program ChatGPT has gained wide adoption in the past year. To find out what Bernstein and Woodward thought about its ability to write journalistic content, a BBC reporter read them an excerpt the AI had created about them. It briefly summarised their biographical details and went on to say that their work “had inspired a new generation of journalists and established a new standard for investigative reporting.”
Their Response
Carl Bernstein was not impressed, describing the AI-generated article as “an amalgam of things that have been written about us.” Parts of it were taken virtually word-for-word from a promotional brochure for a conference. In spite of these current shortcomings, he felt that AI would become a huge force that we are simply going to have to deal with.
He went on to say, “We need to know what’s real as opposed to what’s false. The press is the essential element in a community of being able to attain that. … Truth is the bottom line for anything in your life.” When asked why anyone would want to become a reporter today, he said: to find “the best obtainable version of the truth.”
Problems With Using AI in Journalism
Discussing the disadvantages of AI in journalism, Woodward said, “I can call the Pentagon and say, ‘I’d like to talk to the chairman of the Joint Chiefs, the top military man’, and he’s either going to talk or maybe not. AI can’t do that.” Another serious problem is the propensity of AI systems to “hallucinate”: to generate erroneous facts they have simply invented, which are then mixed in with real facts and presented authoritatively. Because of this, any content generated by an AI needs to be thoroughly fact-checked before it is published or distributed.