Artificial Intelligence (AI) is revolutionizing many industries, including journalism. In recent years, AI has played an increasingly important role in creating news content, analyzing data, and even in making editorial decisions. While some argue that AI could potentially replace human journalists, others believe that AI can enhance the work of journalists and improve the quality and accuracy of news reporting.
One of the most significant benefits of AI in journalism is its ability to process vast amounts of data quickly and accurately. This has made it possible for journalists to produce news stories faster and more efficiently than ever before. AI can also be used to analyze data and provide insights that human journalists may not have the time or resources to uncover on their own. For example, AI-powered tools can be used to identify trends, detect patterns, and predict the outcome of events.
However, the use of AI in journalism also raises some concerns. One of the biggest concerns is the potential for bias in AI algorithms. AI algorithms are only as good as the data they are trained on, and if that data is biased, the algorithms will be biased as well. This could lead to the perpetuation of stereotypes, discrimination, and other forms of social injustice in news reporting.
Another concern is the potential for AI to replace human journalists. While AI can be used to automate some aspects of news production, such as fact-checking and data analysis, it cannot replace the critical thinking and creativity of human journalists. The ability to ask questions, conduct interviews, and write compelling stories requires human judgement and intuition that cannot be replicated by machines.
In conclusion, AI has the potential to transform journalism in positive ways by making it faster, more efficient, and more accurate. However, it is essential to ensure that the use of AI in journalism is ethical and transparent, and that it does not perpetuate bias or replace the essential role of human journalists. Ultimately, the most effective approach to AI in journalism is one that harnesses its power to support human journalism, rather than replace it.
————————————————————————————————————
If you hadn’t noticed already, up until this point this article has been written by ChatGPT. I put in the prompt “write an opinion article on the role of AI in journalism”, and this is its full, unedited response.
When I opened up ChatGPT and put this idea in, I had no idea what sort of thing would come out – I didn’t know whether I’d be writing the headline “AI Revolution makes journalists unnecessary”, or “Don’t worry: ChatGPT isn’t that good…yet”. Either way, I thought, it would make a great article – either AI is very good at journalism, or it’s very bad, but whatever comes out will be interesting.
In the end, I don’t think I can write either headline, because the article it produced is so much more intriguing and nuanced than a binary “AI good” or “AI bad”. In some ways what it produced, and produces for many other purposes, is terrifying in its fluency; and yet its weaknesses are glaringly obvious.
Let’s start with what makes the article above good. What I think is most impressive is that it did, in fact, write an article on the role of AI in journalism. At first glance this might not sound very impressive, but if you pause and think for a few seconds, this is actually a really difficult task.
We can sometimes get caught up in comparing AI to humans, and understandably so – after all, that’s the part that scares and fascinates in equal measure. And yet, the fact that all I had to do was write that prompt and it produced a pretty coherent article examining the different ways AI can be used in journalism, identifying where it can best help human journalists, and flagging the potential drawbacks and areas of concern, is almost unbelievable in itself.
I think the simple fact that an AI is able to do these kinds of tasks and we hardly bat an eyelid – indeed we begin to criticise its ability – is a testament to how rapidly the AI Revolution has developed. Even just a few years ago we’d have been astonished that AI could do such creative tasks, let alone that it could be plausibly compared to a human’s ability. And yet, the article it produced contains all of the relevant information, presented in an orderly structure appropriate for the format dictated, and in a register that befits a news article – not too academic; not too informal.
I decided to run an experiment on my friends the other day. I asked ChatGPT to write the same essay title that I’d done the week before, and read out its introduction paragraph. I then read out my own, and asked them to decide which was mine. Only one got it right.
I find that terrifying, and much worse than if they had been split 50/50 on it. That shows not only that AI can make itself seem like a human writer, which is worrying enough, but that it’s gone one step further. It can convince people that it’s more human than us.
Even more impressive is the speed with which it does its job. Think about how long it would take you to do the same task. You might think about the question for a minute or two, identifying the main arguments on either side. You would have to do a bit of research, seeing where AI has been used in journalism already, and where experts think it might be used in future. You’d plan out the structure of the article, and then sit down to type it out. All in all, maybe that’s half an hour, maybe that’s 45 minutes, maybe it’s a bit longer; the point is that it’s measured in minutes or hours.
And so when I put the prompt in, and the incongruously retro-looking cursor blinks once, twice, three times, and then sets off from the start line, producing the text in its wake, and completes the article in a matter of seconds, I would call that really impressive.
Even if the quality is worse than what a human could produce, the sheer speed with which it is able to complete these tasks is breathtaking. A three-verse poem with full metre and rhyme scheme? Complete within twenty-five seconds. It’s not the best ever written, but it’s finished at perhaps twenty times the speed. A speech about the inaccuracies of Jurassic Park written in the style of Donald Trump? 34.7 seconds until it reached “thank you and God bless America”.
And yet, however impressive these achievements are, in other ways ChatGPT is simply not great. When I began this part of the article with “if you hadn’t noticed already” I hesitated for a second, because I really, really hoped that you would have noticed. In fact, I’d be offended if you read the AI-generated section and thought that it could have been written by an Oxford student.
There’s nothing wrong with the content, to be sure. I don’t think it’s missed any key points, or included any irrelevant material. As a purely factual article about AI in journalism, it’s quite good. And yet, it doesn’t really fulfil the prompt I gave it. I didn’t ask it to give me a briefing guide, or a summary. I asked it to write an opinion article, and it didn’t.
Firstly, it isn’t really able to give an opinion. It presents both sides of the issue – AI is useful, AI is dangerous – with a dogmatic impartiality. Half the article on one side, half on the other. Two examples of uses, two examples of dangers. There is no ‘argument’ embedded within the article, no central theme that runs through the whole piece and links the paragraphs, no hint of any sort of opinion at all until the conclusion, where it proceeds to write the blandest, most vanilla take on AI. It is the final boss of ‘on the one hand…on the other’, the ultimate compromise, the epitome of neutrality.
I don’t know why AI-generated writing isn’t able to be properly opinionated yet, but no matter how many different phrasings of the prompt I used, I couldn’t get it to properly express a hot take. The best I could do was use the input “write a boldly opinionated article on AI in journalism”. It took the line that AI is beneficial for the field, but even then it still used phrases like “some people argue” and “on the other hand”, and still presented both sides of the debate. The first expression of an opinion came in the fourth paragraph, and the only real change from the structure of the article it generated above was the insertion of the phrase “in my opinion”.
Secondly, what it produced isn’t really an article either. What I mean by that is that it’s not written very engagingly – while the language is of the appropriate setting, the presentation seems more fitting for a finished-the-night-before school project than a news article. When I think of good articles, there’s usually a tantalising hook at the beginning – some sort of anecdote, a shocking statistic, an element of human interest, or a little scene-setting.
For example, a recent article about the British ‘third parties’ published in The Oxford Blue’s Opinions section written by a human writer started with Charles Russell’s predictions of apocalypse; the debate article about Trump’s role in the Republican Party used such obviously exaggerated or satirical language as “a medieval court of obsequious concubines” and “the President-turned-Picasso”.
Essentially, articles written by people don’t stick to the point. It’s the inefficiency of using these hooks, of their unique perspectives, of not plunging straight into ‘some argue…others believe’ that makes them different from Wikipedia; that makes them articles rather than factual summaries.
What I’m saying here, perhaps out of a desire to nurse my ego after being deemed less human-sounding than an AI, is that ChatGPT writes badly. It’s not engaging, it’s not fun, it can’t be meaningfully creative. There’s a difference between the creativity inherent in the task of writing an article from scratch, the creativity of writing sentences that make sense and are on topic, and the creativity necessary to make it sound good and read well. ChatGPT may have artificial creativity, but it doesn’t have natural creativity; it can create, but can we really call that ‘creativity’ in the same sense as ours?
I found this out when asking it to write romantic poetry about one of my friends who studies engineering. The first attempt was really impressive, with such beautiful lines as “with hands that build and eyes that gleam/he creates wonders, his mind a dream”. However, as we got it to write more, we rapidly realised it wasn’t actually very creative any more. Across the three poems we got it to write, it used lines about fire and burning six times, and never in such a way that fire was the theme of the poem.
It kept ending lines with ‘joy’ to make the rhyme scheme easier, and it repeated lines and line structures not only within poems (where repetition could be part of the idea) but across all three. Two of the poems used ‘dream’ as the crux of the first stanza. And when we asked it to write the third one in the style of Shakespeare, the only meaningful change was the occasional ‘doth’, ‘thy’ and ‘thine’.
So, though I criticised ChatGPT for its fence-sitting, my conclusion is mixed. Is it seriously impressive that AI can write virtually anything you could think of, in nearly any format, in less than a minute? Yeah, of course. If you wrote this in a 2010s sci-fi book, critics would say it’s unrealistic, that this technology would be decades away. But is AI going to replace us? Can journalism be done without humans? Should I be spending my time coding rather than writing? Of course not, or at least not for a long time yet.
If every article were written like the one at the top, no one would read a newspaper ever again, or they would use it as a treatment for insomnia. Ultimately, I feel pretty secure about humanity’s survival for the time being – AI can’t take over the world if it can’t form the opinion to do so. “Some people think destroying the humans is a good idea, but on the other hand…” is hardly the most inspirational rallying call.