ChatGPT is a chameleon. It has become a go-to search engine, essay writer, artist, medical encyclopedia, and therapist, all rolled into one. Hundreds of millions of people use ChatGPT every day. While this exponential rise in its use invites important discussions about reliability and integrity, the main problem with people becoming so reliant on AI is its environmental impact. This remains unseen – or simply ignored.
The first problem with AI is a very general one. It is worryingly easy to be fed false information by generative AI, since it is not searching a factual database but generating (the clue is in the name) content based on patterns in what is already available on the internet. As children, we all learnt not to trust everything we read online, and OpenAI itself now admits that the responses it produces from that same internet “may not always be accurate”. Despite this, we are predisposed to believe what ChatGPT says, because it is designed to tell us exactly what we want to hear. It is easy to become reliant on a tool like that.
It is also worryingly easy for us to start seeing ChatGPT as a person, since its responses are designed to mimic natural human conversation. One recent study found that GPT-4.5 was judged to be human 73% of the time in a Turing-test setting; in other words, we genuinely mistake it for one of us. This is disturbing for lots of reasons, but one that often goes under the radar is the unnecessary environmental cost. If we see ChatGPT as a person, we actually “chat” to it: we talk to it about our problems, greet it politely, and say “please” and “thank you”. Extra interactions like this, which come naturally once we start to think of ChatGPT as a person rather than a computer, cost OpenAI tens of millions of dollars – and a great deal of energy.
But ChatGPT is not a person. Its backbone is a complex set of supercomputers housed in data centres. The hardware required to run a large-scale AI model uses vast amounts of raw materials, including rare earth metals, which are often mined in environmentally destructive ways. And the data centres that power AI are already burning through enough energy to power a small country: by 2026, data centres worldwide are predicted to consume roughly as much electricity as Japan.
Companies such as Microsoft, Google and Amazon are projecting a 78% increase in the number of data centres they will need to keep up with AI’s cloud storage and computing demands. This demand stems from the fact that using generative AI for basic online activities can use 30-40 times as much energy as doing the same task without it. Most of this energy will come from fossil fuels, wasting more finite resources and releasing more greenhouse gases.
People used to search Google whenever they had a question, so much so that “to Google” something became a verb in its own right. Now, more and more people are turning their backs on simple “Googling”, preferring ChatGPT to answer their burning questions. “Googling” a question uses around 0.3 watt-hours of electricity. “ChatGPT-ing” it, although the term hasn’t caught on yet, uses around 2.9 watt-hours – nearly ten times as much.
The general public’s concern about greenhouse gas emissions means that tech companies have been monitoring and attempting to offset their emissions for years. Google set itself the goal of reaching net zero emissions across all its operations by 2030, but its recent forays into AI have made that goal look less attainable every year. Since 2019, Google’s greenhouse gas emissions have risen by 48%, a rise the company ascribes largely to increased energy consumption in its data centres.
Then there’s the problem of water. The large computers that power AI models heat up quickly and must be cooled with water: freshwater, not seawater, or else the equipment would corrode. As well as being sited inland, away from corrosive sea air, new data centres are usually built in arid areas, because low humidity further reduces the risk of corrosion. Building, on a large scale, technology that requires a constant supply of clean water in areas that lack freshwater will only ever cause problems – both environmental and logistical.
Amazon is building three new data centres in Aragon, Spain, which are projected to use more electricity than the entire region currently consumes. This will accelerate the rapid desertification of Spain, which is already harming biodiversity and agriculture. If generative AI remains people’s first port of call for writing everything from shopping lists to eulogies, this will be the fate of many more arid, rural places across the world. Water-scarce regions across Africa and the Middle East will inevitably bear the brunt of it, with electricity and water prices driven to dizzying heights.
The tech giants themselves are not going to fix this. Google has pledged to be “water positive”, replenishing 120% of the freshwater it uses in its operations by 2030. In 2023 it replenished just 18% of the water it consumed, so far from the goal that it is almost comical. We often forget that freshwater is a finite resource: the Earth is covered in water, but only around 0.5% of it is freshwater that is actually available for us to use. We need it to survive; we cannot afford to waste it on cooling AI servers.
Of course, there is no doubt that artificial intelligence is a new frontier in technology. It has been hailed as a potential lifesaver in areas such as cancer research, disease diagnosis and the prediction of natural disasters. Many argue that these projects do enough good to justify, or almost justify, the danger they pose to the environment. That is debatable, but it is beside the point when it comes to the recreational use of generative AI (getting it to write an email for you, or, like a certain US president, to dream up a picture of you as the Pope). It is these unnecessary uses of AI that really drive all of these data centres being built, all of this energy being expended, and all of this water being wasted.
AI isn’t going anywhere. But the next time you have a burning question for ChatGPT, ask yourself: is it worth the planet?