Illustration by Beastman. Licensed under CC BY-SA 3.0.

Imagine you walk into a gallery and see the two works above. I tell you that one has been generated by artificial intelligence and that the other is real. But which is which?

Overview

Artificial intelligence isn’t new. Its underlying mechanisms are continuously being developed and remodelled, but the thinking behind AI has been around for more than half a century, together with an awareness of its vast potential. So what changed in March of 2023? Why did AI suddenly become the talk of the town? And why is it shaping up to be a defining topic of the twenty-first century?

Most importantly, computing power has grown to the point that it can support whatever tasks AI is instructed to complete. Greater computing power means the ability to process more data, which means more impressive answers. What really brought AI to the fore, however, was the subsequent decision to release these more powerful models for public use. New ‘large language models’ (LLMs), the algorithms which power ChatGPT, not only surprised their creators with unexpected abilities but became immensely popular tools. Artificial intelligence began to attract significant attention, moving out of academic circles and into public consciousness.

Correspondingly, AI has dominated recent news. You will likely have read articles or listened to commentators having their say on the future of the technology, the positives and negatives of its use, and how it could lead to the end of the world. While many of these takes are excessively dramatic, the growth and spread of AI does come with immediate implications.

Political Implications

The first concern is the potential for bias. Because AI can only complete tasks using the data it has been given, the answers it provides will inherit whatever biases are present in that data and in the choices of the humans who designed it. This is precisely the problem Amazon faced back in 2014, when a team of engineers tried to build an algorithm to review job applicants’ résumés. Because the existing pool of Amazon’s software engineers was overwhelmingly male, the new software learned to favour male applicants.

This becomes a particularly difficult problem to solve in the case of political bias. Even if it were agreed that AI should be politically neutral, how would that be achieved? One solution might be to have future AI models built by groups that are representative of all views, races, genders, and so on. But such collaboration will be difficult in a privatised industry. Moreover, how would you be able to tell whether a given AI model is politically biased in the first place? You might analyse the data it is trained on for language that leans conservative or liberal, but who decides what a given phrase implies, and how would this be policed?

Such implications are only amplified by the potential influence an AI model can have. It is now very easy for thousands, if not millions, of people to gain access to AI which has been deliberately manipulated. It might be used by a politician trying to rig an election, a group of scammers building influence networks, or anyone with an incentive to spread fake news. It is already terrifyingly difficult to tell whether an image or text is AI-generated, and the supply of disinformation will only increase.

It is such concerns, among others, that have prompted many of those involved in the AI industry to call for greater regulation. We saw this back in March, in the form of an open letter coordinated by the Future of Life Institute, signed by figures such as Elon Musk, Steve Wozniak, and Emad Mostaque. More recently, Sam Altman, the CEO of OpenAI, has testified before the US Senate, while Professor Yoshua Bengio, one of the so-called godfathers of AI, has said he feels he has lost control over his work. The capabilities of artificial intelligence have grown rapidly, but humanity and the infrastructure in place seem unprepared for their full release.

So far, governments have taken different approaches to such regulation. At one end, Britain has adopted a “light-touch” approach, applying only limited, existing regulations with the aim of attracting investment to a less restricted British AI industry. The EU has proposed a law which monitors AI models increasingly closely in proportion to their perceived risk. At the other end, various governments have proposed that AI should be treated much like medicine or food, with strict testing and approval required before any model is made publicly accessible. This is the case in China, though strict testing there also serves more specific political ends, such as ensuring that an AI reflects the “core values of socialism”.

Implications for Everyday Life

At the moment, then, leaders within the AI industry are trying to take back some control over their expanding technology and address the concerns described above. If such regulation is effective, and the unwanted side-effects of artificial intelligence are largely controlled, the technology will be a tremendous force for good. This is easy to forget amid the plethora of opinions on the subject: AI is inevitable, it is already a part of everyday life, and it will largely change the world for the better.

Large language models such as ChatGPT have already begun to exert an influence on media, how we communicate, and who has access to what information, much like the internet did decades ago. And this is only one form of artificial intelligence. AI in all its versions has the power to automate whole industries, such as the production of food, energy, medicine, and much more. Holon unveiled its prototype self-driving shuttle in January of this year, claiming it is the world’s first shuttle built to automotive standards and that production will start in the US in 2025. It is quite clear that AI will make things easier to do, or simply do them for you.

Where Next?

All of this likely sounds like a nice, neat summary. These are the reasons why AI burst onto the scene, some of the concerns relating to its use, and a few of the potential benefits. The issue is that this is only the tip of the iceberg. Like any tool, AI can be useful, and it can be misused. But artificial intelligence is the most powerful tool ever created, and we are only beginning to see its impact and potential. We also don’t know how its use will spread: these implications apply only to those with access to AI, in developed societies, and such disparities in access are likely to widen as the technology grows.

At the moment, we have a good understanding of how AI models work, and so we hold them to high standards. Take again the example of self-driving cars. Thousands of people die in car accidents every day, yet we strive to make autonomous vehicles as close to perfect as possible, because our approach is deterministic: we know that the reason an AI has caused an accident is encoded in its algorithm, and we can change that. But what happens when AI labs begin to produce models that “no one – not even their creators – can understand, predict, or reliably control”? That is how the Future of Life Institute’s open letter described the present use of AI models, three months ago.

And there are more questions to ask. If you give a model all the information and data that we possess, what will it be able to produce? What will it mean for humanity when a non-human intelligence becomes better than most of us at drawing images, composing music, or telling stories?

Art as a Test Case

You probably recognised that the image on the right, at the start of the article, is a real work by Piet Mondrian. It is, after all, one of his best and most famous: ‘Composition with Large Red Plane, Yellow, Black, Gray and Blue’ (1921). It showcases a mastery of tone and spacing, creating a subtle sense of optical depth.

But now imagine you walk into a gallery, see the two works above, and I tell you that both are by Mondrian. Chances are you wouldn’t question the claim. In truth, I gave Stable Diffusion v1.5 the prompt to create something in “Piet Mondrian style, with a few coloured boxes and white background”, and in five seconds it produced an image similar enough to the original to fool anyone less familiar with the artist’s work.

This isn’t really ground-breaking. Browse popular sites such as r/midjourney and you’ll find thousands of posts showcasing AI’s ability to generate images from textual prompts. But AI’s presence in art does raise questions that will become increasingly pertinent. Consider the AI-generated artwork which caused controversy after winning the Colorado State Fair’s fine arts competition last September. Is that image art, and would you go to an exhibition of such images? And if AI companies train systems to imitate artists’ works, what legal repercussions should follow?

And again, what should we do in a world where AI can complete tasks as well as we can? I think it is hyperbolic to claim that AI will entirely replace humanity. At least in art, AI is more likely to be incorporated as a new technique or medium, just as the computer once was. To stop creating because of what we’ve allowed AI to create would be to end humanity. But AI will redefine what people can do: one reason why scriptwriters in Hollywood are striking right now is the fear that their work will be replaced by AI tools. And perhaps equally importantly, AI will redefine what people choose to do. Once it possesses the ability to complete tasks for us, we will have to re-evaluate why we do things, what we enjoy doing, and ultimately what it means to be alive.

The Importance of Asking Questions

So the use of AI is inevitable, and we don’t know what the technology might one day be capable of. We can react to its immediate implications, and encourage its benefits, but new forms and uses are being designed and developed all the time. How should we react? I think the most important thing we can do is keep asking questions, like those I have proposed in this article, and many more. While Professor Bengio admitted that concerns over AI are emotionally challenging for someone who has dedicated his life to the technology, he emphasised that “you have to keep going and you have to engage, discuss, encourage others to think with you”.

UK Prime Minister Rishi Sunak has said that AI has the potential to “positively transform humanity”. AI will change the needs of society and, while it won’t replace humans, it will certainly transform what it means to be human. Whether or not that transformation is positive, however, can be up to us. Thinking more about AI, and also about ourselves, will allow humanity to adapt to and direct an AI world. The sooner we do that, the better.