
At the same time, ethical concerns have grown ever more pressing.
AI has infiltrated financial markets, defense industries and many other areas of life. Unfortunately, governments have been slow to create oversight of AI’s impact, and there is no global consensus on how AI should be governed. This confusion endangers not only AI’s progress but also the people affected by its use.
In 2018, OpenAI released GPT-1, the first “Generative Pretrained Transformer.” Successive models grew ever larger until late 2024, when a GPT-5-class AI was released and open-source LLMs (e.g., Mistral, Falcon, Mixtral) began to proliferate. By early 2025, many people were using AI for a wide variety of tasks, from proofreading to writing articles, creating images and videos, and much more. However, the rapid development of AI has left regulators in its dust, leaving AI without real controls.
The EU has developed a very strict AI compliance regime, which critics suggest will stifle innovation, as the cost of compliance creates a barrier to entry into the AI arena. The UK, Asia and the USA have much more relaxed regimes that encourage innovation, but these regimes differ from one another. For example, the USA’s approach to AI relies on executive orders and voluntary frameworks; muddying the regulatory waters further, states including California and New York have created their own AI bills, producing a patchwork approach (Brookings, 2025).
In contrast, China requires that every generative model be registered centrally. Strangely, this does not seem to stifle innovation, so long as development aligns with Chinese political objectives. Those coordinated national objectives include military AI integration, domestic surveillance, and global market capture (SCMP, 2025).
It may sound as though global regulators are well behind the AI development curve, but international efforts do exist: there are G7 codes of conduct, ethical principles laid down by the OECD and UNESCO, and even the UN’s new High-Level Advisory Body on AI. However, there is little enforcement, as countries hold differing views on AI.
The result is that companies base themselves in countries where their AI development is less constrained, making a mockery of individual countries’ regulatory efforts.
Regulation has now become a form of soft power. Countries hesitate to regulate their domestic markets too heavily, lest they hand a rival the win in the race for strategic AI dominance; at the same time, each country brings its strengths to bear to ensure that those who buy its AI products are bound by its regulations.
This race to the bottom makes regulatory cooperation almost impossible, even as risks escalate (Carnegie Endowment, 2025).
Following the UK’s 2025 AI Safety Summit, proposals have circulated for AI-safety benchmarks, compute-licensing regimes, and international risk registries. Some have also suggested a global watchdog with oversight of the most significant training runs, to provide transparency. However, these laudable attempts remain constrained by national-security concerns and the usual competitive tensions between companies and countries.
There is still hope, but global regulation and cooperation must accelerate; otherwise, AI will develop outside of any real oversight, and we may not like the results!

