Giving his keynote lecture at the VC’s Colloquium on AI last term, Professor Ian Goldin predicted that AI would usher in a dynamic new age of innovation: a modern renaissance. The same term was used in the Trump administration’s AI Action Plan, though for very different reasons.

That is not to say that these claims are unfounded. Between 2013 and 2024, global investment in AI totalled $1.6tn, a figure Gartner projects will reach $2.5tn in 2026. In just three years, the adoption rate of generative AI has reached 53%; by contrast, the personal computer and the internet reached only 20% and 30% respectively in the same time span. AI developers have made grand predictions about the productivity gains the technology will create, and the masses of wealth that will follow: admittedly a good sales pitch, though one that has been called into question. AI is also increasingly being used as a tool in scientific research, with the 2024 Nobel Prize in Chemistry being awarded for the development of an AI model that predicts protein structures. The technology evidently has great promise. How, then, did AI become the object of such a heated political debate?

The area of contention: AI regulation

With great reward comes great risk. As with any powerful technology, AI’s potential for destruction is roughly proportional to its potential for good. Professor Goldin’s analogy of an AI renaissance alluded not only to that vibrant age of innovation, but also to the shadow of violence and societal instability that followed it. Fears of the possible negative consequences of AI have prompted political actors across the ideological divide to seek regulations for AI developers. However, pushback from the US federal government and the developers themselves has made adopting such legislation difficult.

Currently, two of the few comprehensive state-level regulations on AI are the Responsible AI Safety and Education (RAISE) Act in New York and the Transparency in Frontier AI Act (TFAIA) in California. Though there are some differences between them, both acts aim to place transparency, safety, and accountability obligations on large AI developers. These companies will be required to publish safety protocols before the release of a new model, including information about how to identify, assess, mitigate, and respond to risks of “critical harm”. RAISE is set to come into force in January 2027, while TFAIA has been in effect since January 2026. Yet an act passed in Colorado in 2024 has repeatedly been postponed due to legal challenges from xAI, as well as the federal government. Beyond transparency obligations, this act gives developers a duty to protect consumers from risks of “algorithmic discrimination” that would result in “unlawful differential treatment”. The act is scheduled to take effect in June 2026, though the date will likely be postponed again.

Proponents of such legislation have come from across the political spectrum, and the issue has even caused a rift within the Republican Party. Bernie Sanders, an Independent aligned with the Democratic Party, has cautioned against the concentration of corporate power that would result from unrestricted AI development. Similarly, across the political aisle, Republican Governor of Florida Ron DeSantis has accused the US president of aiming to “kneecap the states and let Big Tech write the rules”. Even more radical was the decision of President Trump’s former Press Secretary and incumbent Arkansas Governor Sarah Huckabee Sanders to publicly break ties with the administration over the issue of AI. It seems as though this is not a debate in which many are willing to compromise.

Out of the large AI companies, only Anthropic currently supports AI regulation. The developer of Claude has provided funding for Public First Action, an organisation which finances the election campaigns of pro-regulation politicians. Anthropic’s CEO, Dario Amodei, has often expressed his concern for safety, both in the models themselves, and how they may later be used. In an essay, he outlines specific risks he sees in AI’s development, including economic disruption, concentration of wealth, increased cyberattacks, the creation of bioweapons, and the facilitation of authoritarian regimes by way of sweeping surveillance and autonomous weapons.

None of these concerns is new. At an earlier point in OpenAI’s history, AI safety was a central topic in Sam Altman’s pitch to investors. Since Trump’s return to office, however, the company seems to have changed its tune. Alongside executives at Meta, xAI, and Palantir, OpenAI President Greg Brockman has poured millions of dollars into organisations that oppose the campaigns of pro-regulation politicians and back candidates who oppose regulation.

A central concern of anti-regulation actors is that legislation will slow the rate of AI development and make US companies less competitive on the global stage. A controversial Executive Order signed by Trump in December 2025 directs lawmakers to avoid disparate state regulations that would stifle innovation and interstate commerce, and instead aim for a “uniform Federal policy framework”. Such a national framework was proposed by the administration in March of this year. Although it includes important points about child protection, intellectual property rights, and the protection of communities, the protections against “critical harm” provided by RAISE and TFAIA are notably absent.

Space Race 2.0?

Why is the US administration so concerned about remaining competitive in AI? Because there is another competitor in the race, and they are catching up fast. The administration’s AI Action Plan elaborates on this sentiment, comparing today’s race against China to reach greater AI capabilities to the Space Race of the 1950s and 60s. It is in this context that the dream of an AI ‘renaissance’ is invoked. The differing connotations attributed to this term reveal the particular attitude of the US administration towards AI: instead of conveying Professor Goldin’s careful enthusiasm, it implies an urgent zeal for innovation.

It is also a matter of differing priorities. While pro-regulation actors are most concerned with AI safety, the administration is preoccupied with gaining global AI superiority, which Trump Advisor David Sacks has labelled a ‘national security imperative’. The AI Action Plan claims that whoever wins the race will set global AI standards, and that it is imperative the US does this in order to counter “authoritarian influence” and protect free speech. In the plan, the administration raises concerns over how Chinese companies may promote agendas that are contrary to “American values” and that set unfavourable standards for public surveillance. Similarly, the National Center for AI Standards and Innovation, a US government agency, warns of “security shortcomings and censorship” in current Chinese models, such as DeepSeek. Naturally, economic benefits play an equally important role in the administration’s aims. In any case, the stakes now seem significantly higher than they were when Sputnik left Earth in 1957.

On the Chinese front, developments are looking promising. As of March 2026, Anthropic’s Claude, the US’s top model, leads China’s top model, from DeepSeek, by just 2.7%. Meanwhile, China’s lead in robotics is only becoming more significant: in 2025, Chinese companies accounted for over 80% of humanoid robot installations and over half of all industrial robot installations. Unlike US developers seeking AI ‘superintelligence’, Chinese developers are focusing more on integrating AI into the real world, specifically in industrial production.

Apart from aims of attaining a “fully AI-powered” society, the Chinese government has expressed a desire to lead global AI cooperation, calling for the creation of an AI ecosystem that is “diverse, open, and innovative” and that “jointly promote[s] international exchanges and dialogue on AI governance.” Echoing this sentiment, Scott Singer, fellow at the Technology and International Affairs Program at the Carnegie Endowment for International Peace, told DW that China is not seeking AI dominance, but looking to replace an increasingly withdrawn and self-involved US as a global leader. Considering recent US actions that have strained trust between the superpower and its allies, these efforts may prove to be successful, though the jury is still out on this issue. 

Public sentiment on regulation

Fewer people have bought into the Space Race rhetoric than might be expected. Polling has found that two-thirds of Americans are concerned about the rapid development of AI, and 76% believe it needs to be regulated. In another study, almost 80% of Trump voters expressed support for regulation. When asked about the rise of AI in daily life, 50% of people in the US said they were more concerned than excited. Interestingly, trust in AI is higher in the EU than in the US, presumably because greater moves to regulate the technology have reassured the public that their interests and safety are being taken into account.

Some citizens have sought to make their discontent heard. Parents, alongside 37 state attorneys general, are calling for xAI’s Grok to be held accountable for allowing the generation of non-consensual images of women and children. In another controversy, ChatGPT uninstalls surged by 295% after the signing of a contentious deal with the Department of Defence, propelled by the internet movement QuitGPT. Meanwhile, communities are fighting against the construction of data centres, wary that, if built, they may cause disruptions in key resources such as water, land, and electricity. The town of St. Charles, Missouri was the first to place a year-long ban on the construction of a new data centre, with residents proposing a permanent ban. Though the AI framework proposed in March attempts to address these issues, there is still much work to be done to write these concerns into law.

With so many competing interests in the AI regulation debate, it is impossible to predict what the global order will look like in the coming decades. For who can deny that AI will transform our societies, our education systems, our economies, and how we value work? If the core of what we consider important in these systems is not protected, we may wake up one day and realise that we no longer recognise anything that we once knew and found meaningful. If we do not mould AI to fit our values, it will not hesitate to mould us to its own.