What future for journalism in the age of AI?
The article you are about to read was written by a human.
This kind of disclaimer will become an everyday occurrence as chatbots, or large language models, infiltrate deeper into our media space. Doubts about the veracity of such disclaimers will also become commonplace.
With the leaps and bounds machine learning and large language models have made over the past couple of years, it is becoming increasingly difficult to prove that a human is on the other side of a written or spoken communication.
How would I prove to you that these words were the product of human creativity and exertion? Perhaps through the originality of an idea or the novelty of a turn of phrase? Maybe by cracking a joke or employing irony? How about by expressing humane empathy as only a human could supposedly do?
Just how fast AI is progressing and how deeply it is infiltrating the media was put on stark display recently when Germany’s largest tabloid, Bild, announced that it was laying off a third of its staff and migrating their functions to machines. This follows BuzzFeed’s decision in January to use AI to generate quizzes and its quiet experimentation with AI-generated content ever since, especially SEO-primed travel guides.
“The functions of editor-in-chief, layout artist, proofreader, publisher and photo editor will no longer exist in the future as we know them today,” Bild’s editor-in-chief said in an email to staff. “Artificial intelligence has the potential to make independent journalism better than it ever was – or simply replace it,” Mathias Doepfner, the CEO of Axel Springer, the owner of Bild, claimed in an internal letter when the idea was first floated in the spring.
This raises the pertinent question of what effect AI will have on the profession I love and how it will shape the future of journalism and the media.
First, it is important to acknowledge that artificial intelligence has been causing tectonic shifts in the media landscape for years now, both directly and indirectly.
One direct and largely positive way in which AI has affected the media is the emergence of Big Data journalism, which covers everything from crunching through the data in big leaks like the Panama Papers to examining the consequences of the climate crisis. Without powerful algorithms, journalists would likely not have been able to comb through and decipher the mountains of data at their disposal, identify the telling statistical patterns and use these to tell compelling and useful stories.
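To give a sense of what that pattern-hunting looks like in practice, here is a minimal, purely illustrative sketch. The file name and column names (entity, intermediary, jurisdiction) are invented for the example and not taken from any real leak.

```python
# Minimal sketch of data-journalism pattern hunting (illustrative only).
# Assumes a hypothetical CSV of leaked records with invented column names:
# entity, intermediary, jurisdiction, incorporation_date.
import pandas as pd

records = pd.read_csv("leaked_records.csv", parse_dates=["incorporation_date"])

# Which intermediaries set up an unusually large number of shell companies?
per_intermediary = records.groupby("intermediary")["entity"].nunique()
threshold = per_intermediary.mean() + 3 * per_intermediary.std()
outliers = per_intermediary[per_intermediary > threshold].sort_values(ascending=False)

# Which offshore jurisdictions dominate the leak?
by_jurisdiction = records["jurisdiction"].value_counts().head(10)

print(outliers.head(20))
print(by_jurisdiction)
```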
Media organisations also use AI for many back-office tasks, such as recommending content, transcribing interviews, subtitling videos, analysing audience interests, preferences and engagement, and finding ways to boost their all-important SEO ranking. That last example hints at a hugely important way in which AI has indirectly influenced the media landscape: search engines and social media channels act as gatekeepers and curators of content, and the algorithms they employ to that end exercise a profound influence over the flow of revenue – or lack thereof – to media outlets.
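To make one of those back-office tasks concrete, here is a toy content recommender of the kind that sits behind "related articles" widgets – a sketch built on standard text-similarity tools, with invented article snippets, not any outlet's actual system.

```python
# Toy content recommender of the kind newsrooms run behind the scenes
# (a sketch, not any outlet's actual system). Articles are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = {
    "Panama Papers: the offshore trail": "leaked records reveal offshore shell companies ...",
    "Climate data shows record heat": "temperature datasets point to another record year ...",
    "Newsroom layoffs accelerate": "media companies cut staff as ad revenue shifts online ...",
}

titles = list(articles)
tfidf = TfidfVectorizer(stop_words="english")
matrix = tfidf.fit_transform(articles.values())

def recommend(just_read: str, top_n: int = 2) -> list[str]:
    """Return the articles most similar to the one the reader just finished."""
    idx = titles.index(just_read)
    scores = cosine_similarity(matrix[idx], matrix).ravel()
    ranked = scores.argsort()[::-1]
    return [titles[i] for i in ranked if i != idx][:top_n]

print(recommend("Panama Papers: the offshore trail"))
```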
But now, with the release of sophisticated large language models, we are on the cusp of AI moving out of the periphery and penetrating the very heart and soul of journalism: content creation. Just as sophisticated language is central to our identity as humans, writing, speaking and storytelling are, for many of us who took up the profession, possibly the defining features of being a journalist.
I know it was my profound fascination with human stories that drew me to the profession almost a quarter of a century ago, and that keeps me clinging to it, despite having switched careers.
For a journalist, few things can match the joy, or the frustration, of crafting a coherent and compelling story or narrative out of a raw jumble of ideas, words and information.
This may explain why, even though I’ve experimented with querying chatbots, I have not yet succumbed to the temptation of using ChatGPT or other large language models to draft or edit text: it gives me too much professional satisfaction and personal pleasure to do it myself. I also don’t fully trust these tools to do the job properly. Using a bot would be tantamount to a chef offering guests a microwave meal, a bespoke tailor selling off-the-rack suits or a cabinet maker assembling IKEA furniture.
Writing an article, like producing a video or radio report, may be an arduous and labour-intensive process, but it is also a highly rewarding one which, when done well, pays huge dividends both for the journalist and the audience.
Many expect that the advent of machine learning will not dent interest in and demand for high-quality, human-generated journalism. “I’m not sure how any conscientious journalist or writer worth reading would be adversely affected,” says columnist and essayist Tracy Quan, who wrote about some of the early experiments with AI in the 1990s.
Of course, demand for quality, well-produced content will remain, though how many people will be willing to pay premium rates to access it is open to question. And even though journalists will certainly wish to continue to create content, their bosses may not agree, citing the need to cut costs or to “remain competitive”. Although there is little doubt that humans will remain at the heart of the process, how many and what role they will play is the real question. While the arrival of AI in the newsroom is being hyped poetically as a tool for liberating journalists from drudgery so they can focus on the important aspects of their job, the reality is far more prosaic.
Just as the rollout of digital technology and the internet did away with untold jobs in the media, the advent of AI may cull many of those that remain. The day may not be far off when the buzz of newsrooms is a thing of the distant past and they become – almost – worker-less factories. For routine content, machines will churn out endless copy at superhuman speed, which will, hopefully, be thoroughly checked and edited by a handful of human editors. Providing the eyes, ears and legs the machines lack, reporters will rove the outside world conducting investigations and interviews. The raw material they collect will then be transformed into content by the bots, either partially or fully, depending on how much of a human touch is deemed necessary.
Top media organisations will likely retain their star reporters and columnists. These celebrities will help distinguish one machine-driven media behemoth from another, make them appear more human and accessible, and provide a marketable brand identity for the outlet. That being said, these human stars are likely to be joined by AI-generated celebrities in the near future.
Junior, mid-ranking or less famous journalists won’t be so fortunate. Many are likely to be unceremoniously given their marching orders, and those who are not may stay on under more precarious conditions and at lower pay. Many may be forced to join the swelling ranks of the media’s gig economy, where they will increasingly have to work like machines in order to compete with the machines.
If this sounds overly pessimistic, consider that the trend has been in full swing since the digitalisation of the media. American newspapers employed close to half a million people (458,000) in 1990; by 2016, that figure stood at 183,000, according to the US Bureau of Labor Statistics – a decline of roughly 60 percent. The volume of work each journalist is expected to generate, in contrast, has risen dramatically. And without remedial action, these trends will almost certainly worsen and accelerate as AI takes over more and more roles previously performed by humans.
Beyond the devastating effect on employment and job security, there are the profound social and environmental consequences of this rapid and unstudied dash to deploy AI in the media. Chatbots not only consume vast amounts of energy, they remain notoriously inaccurate, even delusional. And rather than admit they don’t know, they often simply invent facts.
To test ChatGPT’s accuracy, I asked it to write about a subject I knew intimately: I requested that it write a short biography of me. “Use only verifiable sources. Cite the sources you use,” I insisted. This injunction notwithstanding, ChatGPT’s flattering biography of me was riddled with errors, ranging from where I was born and studied to where I had worked. More bafflingly still, even though the sources it cited sounded like they could be authentic, none of them existed in the real world.
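For anyone who wants to repeat this kind of spot-check, a minimal sketch using OpenAI’s Python client follows. The model name, prompt wording and subject are assumptions for illustration; the point is simply to compare the output against facts you already know.

```python
# A minimal sketch of the same spot-check via OpenAI's Python client.
# The model name, prompt wording and subject are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; substitute whatever you have access to
    messages=[
        {
            "role": "user",
            "content": (
                "Write a short biography of <a person you know intimately>. "
                "Use only verifiable sources. Cite the sources you use."
            ),
        }
    ],
)

print(response.choices[0].message.content)
# Then check every claim and every cited source by hand; in my test,
# 'sources' that sounded plausible turned out not to exist at all.
```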
Until these fatal kinks are ironed out, this inaccuracy alone makes the use of large language models in journalism risky and irresponsible, especially as properly fact-checking their output could take as long as researching it in the first place. In addition, using AI both to generate and curate highly personalised content could have the unintended consequence of narrowing people’s worldviews. “If news consumption becomes highly personalised and driven by algorithms, there is a risk of narrowing the diversity of perspectives and limiting exposure to contrasting viewpoints, potentially leading to echo chambers,” was one potential risk ChatGPT itself identified.
Moreover, growing overly reliant on these AI systems could lead to a situation in which we would not know when they were misleading us. AI could perpetuate existing biases or even create new ones, and unless we question and analyse everything it does, this could happen without us even being aware of it.
Then, there is the power AI grants bad-faith actors. The propaganda and misinformation potential of AI, for everyone from extremists and radicals to authoritarian governments and vested political interests, is terrifying. In fact, the power of deepfake technology has become such that it can undermine and even upend the concept of a common reality. “I’m concerned that with audio and video deepfaking, ‘smoking gun’ moments in documentary film, TV and radio will lose their effect,” admits Adam Grannick, a freelance filmmaker and documentary producer. “That’s the best-case scenario. [The] worst case is that video and audio will cease to be used for reliable journalism and documentary, period.”
Naturally, it is not all gloom and doom. Like other digital technologies before it, AI will bring along a democratisation of sorts, alongside the greater inequalities it is likely to engender. By lowering the cost barriers, it can empower individuals and small outfits to channel their limited resources towards work that really matters and produce truly remarkable output.
For media organisations that see it as a complement to humans rather than a replacement for them, it can revive the waning fields of investigative and documentary journalism. If staffing levels are maintained or increased, AI can truly relieve human journalists of some of the drudgery of their work, liberating them to go out into the world and report on it in depth and humanely.
But for us to get the most out of AI, in the media and in other domains, the process cannot be guided solely by profit-driven tech companies and dominated by a few billionaires. Every ethical, social and environmental aspect must be considered first and, in the true spirit of democracy, decisions about the future role of artificial intelligence in society need to involve us all because it affects us all.