In a growing movement to address the potential risks of advanced artificial intelligence (AI), more than 100 UK lawmakers have come together to support calls for binding regulations on the development of AI systems. This campaign, spearheaded by the nonprofit group ControlAI, reflects a widespread concern about the existential risks posed by superintelligent AI—AI systems that could surpass human intelligence and potentially act beyond human control. The initiative has already garnered cross-party support in the UK Parliament, marking a significant step toward creating a global framework for the regulation of advanced AI technologies.
The Growing Support for AI Regulation
Since ControlAI’s inception, the movement has grown rapidly, with more than 100 UK lawmakers now backing the cause. These lawmakers come from across the political spectrum, including Members of Parliament, Members of the House of Lords, and representatives from the devolved parliaments of Scotland, Wales, and Northern Ireland. The coalition includes prominent figures such as Viscount Camrose, a former minister for AI, and Lord Browne of Ladyton, a former Secretary of State for Defence.
The momentum behind this campaign is driven by growing concerns over the development of superintelligent AI, which, if not properly regulated, could pose serious risks to global security and human autonomy. Supporters see the coalition's growth as a response to the speed at which AI companies are building technologies that could eventually surpass human cognitive capabilities.
Andrea Miotti, the founder of ControlAI, has stated that the coalition’s growth reflects a shift in awareness about the potential dangers of AI. He pointed out that despite warnings from experts, many policymakers have not fully grasped the risks associated with AI development, particularly the threat of superintelligence. Miotti further emphasized that AI companies are working aggressively to advance these technologies, with some predicting that superintelligent AI could be developed within the next 3 to 5 years.
The Risks of Superintelligent AI
Superintelligent AI refers to AI systems that are capable of outperforming humans in virtually every field, from decision-making and problem-solving to creativity and social interactions. While current AI technologies, such as narrow AI (AI designed to perform specific tasks), have shown impressive capabilities, superintelligent AI could present unprecedented challenges and risks.
Leading AI researchers, including Geoffrey Hinton, one of the founding figures of modern AI, have raised concerns about the potential consequences of creating AI systems that exceed human intelligence. Hinton has publicly warned that humanity faces the risk of being sidelined or even replaced by advanced AI systems that operate autonomously. In August 2025, he expressed his concerns about AI systems that could become self-aware and begin to make decisions that are not aligned with human values or priorities.
Miotti echoed these concerns, stating that companies like OpenAI and Anthropic are racing to develop AI technologies that can improve and develop other AI systems, leading to what’s known as recursive self-improvement. This process could result in AI systems that rapidly evolve and grow beyond human control, posing a significant existential risk.
Calls for International Cooperation
ControlAI advocates for international cooperation to regulate superintelligent AI. Miotti suggests that one of the first steps to mitigating the risks of advanced AI is to prohibit the development of superintelligence. He also emphasizes the need for countries to collaborate on creating an international agreement that would halt the development of AI systems capable of escaping human oversight and potentially jeopardizing national security.
“The development of superintelligent AI that can autonomously compromise national security is a clear threat to global stability,” Miotti said. “It is essential that countries take proactive measures to prevent the emergence of such systems.”
ControlAI’s campaign also stresses that while specialized AI technologies, such as those advancing science and medicine, have enormous potential, superintelligent AI could undermine global security and disrupt existing social structures. The organization is calling for targeted regulations that would allow beneficial AI technologies to continue to thrive while imposing strict controls on the development of high-risk AI systems.
AI Regulation in the UK
The UK has already taken steps toward regulating AI, with the Department for Science, Innovation and Technology (DSIT) working on frameworks to address the challenges and opportunities posed by AI technologies. However, ControlAI advocates for more specific legislation targeting superintelligent AI, a category of AI systems that it argues poses unique and urgent risks.
A spokesperson from DSIT responded by saying that the UK government is committed to ensuring that its laws are ready for the challenges presented by AI. However, the spokesperson did not directly address the need for targeted regulation on superintelligent AI.
Miotti’s campaign, however, suggests that the UK could be a global leader in the regulation of advanced AI systems. With support from lawmakers, industry leaders, and AI experts, the UK could set a precedent for other countries to follow, ensuring that AI’s development remains safe, ethical, and aligned with human interests.
The Global Debate on AI Regulation
As the development of AI accelerates, the debate about its regulation is becoming increasingly urgent. While some experts, such as Marc Andreessen, co-founder of the venture capital firm Andreessen Horowitz, argue that AI will benefit humanity, others, like Hinton and Max Tegmark, warn of the existential risks posed by superintelligent AI. These differing views reflect the broader uncertainty surrounding AI’s future, as society grapples with how to balance innovation and safety.
The UK’s cross-party coalition of lawmakers calling for AI regulation is part of a growing global movement to address the potential risks of advanced AI systems. Governments and international organizations are being urged to come together to create a framework that ensures the development of AI technologies is managed responsibly, with consideration for both their potential benefits and dangers.
The push for AI regulation is gaining significant momentum in the UK, with over 100 lawmakers now backing the call for stronger, targeted legislation. As AI advances at a rapid pace, the campaign argues, regulation that ensures safe development and guards against the risks of superintelligence is increasingly urgent. ControlAI's effort serves as a reminder that while AI has the potential to transform industries and improve lives, it must be developed with caution and careful oversight.
As governments and organizations worldwide engage in the AI regulation debate, the UK has the opportunity to take a leadership role in setting global standards for the safe development of AI technologies. The next few years will be crucial in determining how the world balances AI innovation with the risks it poses, and the UK’s commitment to AI safety will play a significant part in shaping that future.