Through automation and the use of Artificial Intelligence, our ability to understand complex subjects and apply ourselves as journalists is rapidly diminishing. As AI software is refined, media literacy becomes increasingly complex.
We must be able to detect and recognize the techniques used by the AI to generate news and what is being produced to satisfy what we may see as the desire of powerful social and political actors. This may pose a danger to society, especially to the disadvantaged.
I didn’t write the previous two paragraphs. They were written by a bot, an AI program that uses natural language processing and machine learning to generate prose. I gave the bot a topic, a few sentences to get started, and it “wrote” a whole blog post. I just sampled one section of the resulting article.
Writing created by artificial intelligence and machine learning has dramatically improved in quality and adoption over the last few years. Such robot-generated text is now used in newsrooms around the globe as well as by companies looking to generate content quickly and cheaply. Worse, these tools are also being used by those creating fake news.
Does this software, as the AI itself seems to be telling us, “pose a danger to society”? I think it does. The modern world runs on truth and good faith. Robots have neither, and AI-generated text is yet more evidence of the need for far greater media literacy.
The technology, known as GPT-3, has its limitations. Tolstoy it is not. In the sample paragraphs, “detect and recognize” is needlessly redundant, for example. The word choice is slightly off throughout, and the phrasing of the third sentence is a mess. But despite the syntactic awkwardness and heavy use of cliché, the text is somewhat convincing.
Bloomberg and the Associated Press adopted such tools early on to produce articles in specialized areas, where producing human-written journalism would be too laborious and expensive. Economist editor Kenneth Cukier said of the new technology: “We didn’t cling to the quill in the age of the typewriter, so we shouldn’t resist this either. It’s a scale play serving niche markets that wouldn’t be cost-effective to reach otherwise.”
Automated journalism has grown over time and is particularly successful in numbers-heavy fields, where it can translate raw data, like the stats from a baseball game or the performance of a group of stocks, into short, cogent stories. Others have used such tools to produce marketing copy for brochures and websites.
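At its simplest, the data-to-text pipeline described above is template filling: structured fields are slotted into pre-written sentence patterns, with small rules choosing the phrasing. A minimal sketch of the idea, with field names and templates that are illustrative assumptions rather than any real newsroom system:

```python
# Minimal sketch of template-based data-to-text generation, the
# simplest form of automated sports or finance reporting.
# All field names, templates, and thresholds here are illustrative.

def generate_recap(game: dict) -> str:
    """Turn a structured box score into a one-sentence game recap."""
    margin = abs(game["home_score"] - game["away_score"])
    if game["home_score"] > game["away_score"]:
        winner, loser = game["home_team"], game["away_team"]
    else:
        winner, loser = game["away_team"], game["home_team"]
    # Rule-based word choice: real systems use larger rule sets
    # to vary phrasing based on the numbers.
    verb = "edged" if margin <= 2 else "defeated"
    high = max(game["home_score"], game["away_score"])
    low = min(game["home_score"], game["away_score"])
    return f"{winner} {verb} {loser} {high}-{low} on {game['date']}."

print(generate_recap({
    "home_team": "Cubs", "away_team": "Mets",
    "home_score": 5, "away_score": 3, "date": "Sunday",
}))
# → Cubs edged Mets 5-3 on Sunday.
```

Systems like this are reliable precisely because they are constrained: every sentence is traceable to a data field and a rule, which is why they suit numbers-heavy beats and not complex interpretive reporting.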
Researchers cite a number of potential problems with automated text, especially if it is used for more complex topics. Good writers don’t just list facts; they interpret them, make decisions about which ones are important, and provide relevant context. There are dangers that automated news will lack such context, reproduce bias, and mislead readers.
The greater danger is AI-generated fake content. Recent years have, of course, already shown how much damage false and misleading news stories can do to public discourse and democratic processes. Artificially generated content can be an enormously productive tool for propagandists and spreaders of disinformation. The developers of GPT-3 even withheld access to an earlier version of the technology at first, amid fears that it would be misused.
The wider adoption of AI tools among bad actors will pose serious challenges to online platforms, governments, and news consumers. The danger is not just that specific fake news stories will spread, but that they will further degrade the information ecosystem and erode trust in journalism and media, which are already in crisis.
Faced with a confusing media environment, more people may fall prey to fake news, but still more may simply throw up their hands and give up, judging the whole media landscape untrustworthy. Authoritarian and illiberal leaders thrive on this kind of confusion, exhaustion, and skepticism, which fuels the desire for the scapegoating, divisiveness, and simple solutions they offer.
What can be done? Social media companies and other developers will likely build tools to detect AI-generated content. But these can’t be the only solution. For one thing, it’s hard to detect fake writing, and the AI tools themselves will likely stay one step ahead.
The solution must be more holistic, involving measures both to improve the information ecosystem and to strengthen the general public’s ability to navigate it. With respect to the former, social media platforms and journalists must find ways to foreground factual, useful information and downgrade disinformation and sensationalist opinion, without encroaching on freedom of speech.
But even with the best policies and practices in place, the information environment is only going to get more complicated. More needs to be done to equip people, beginning at a young age, with the critical thinking and media literacy skills to navigate this environment. Explicit instruction in media literacy is important. But the nation must also recommit to a liberal arts education that teaches students critical thinking, including skills like how to cope with ambiguity and uncertainty, formulate and analyze arguments, and manage their emotions.
This is a time of great technological change, much of it beneficial. But if people want to maximize these benefits, while avoiding the drawbacks, they need to learn how to think clearly in an environment where information — and disinformation — threaten to drown out reasoning.