Can Global Governance Keep Pace with the Rapid Rise of Artificial Intelligence?

Artificial Intelligence (AI) is no longer a distant aspiration – it is a transformative geopolitical and economic force reshaping industries, democracies, and national security architectures worldwide. Global AI investment surpassed $91.9 billion in 2022, and the number of AI-related scientific publications has grown by over 35% annually since 2015. Yet, international governance mechanisms remain fragmented, reactive and largely voluntary. This brief argues that global governance, in its current form, cannot adequately keep pace with AI’s rapid advance and proposes a multi-tiered, binding and inclusive international framework to address this institutional gap.

Policy Position: The World’s Governance Systems Are Falling Behind

This brief strongly disagrees with the current approach of weak, voluntary, and disconnected AI rules across different countries. It calls for a unified, legally binding, and truly global system of AI governance. The European Union’s AI Act (2024) is an important step forward, but one region acting alone is not enough. Without a global system similar to how the International Atomic Energy Agency (IAEA) manages nuclear energy or how the World Health Organization (WHO) handles global health, AI could easily become a tool for deepening inequality, enabling authoritarianism, and causing harm that no single country can fix on its own.

Background: Just How Big Is the Gap?

The speed of AI development keeps leaving international institutions behind. In 2023, the computing power used to train the most capable AI models was doubling roughly every six months. Meanwhile, the UN AI Advisory Body, one of the fastest international responses to AI so far, still took a full 18 months to produce a report that is not binding on anyone. This gap is not simply about lacking the will to act. It reveals a deeper problem: the world does not yet have the right systems or structures to govern a technology that moves this fast.

Region / Body | Key Framework | Legally Binding? | Year | Key Gap
European Union | EU AI Act | Yes | 2024 | Hard to enforce outside EU borders
United States | Executive Order on AI | No | 2023 | No national law yet
China | Generative AI Regulations | Yes | 2023 | Does not align with global standards
United Nations | AI Advisory Body Report | No | 2024 | Only advisory, no enforcement
G7 / Hiroshima AI Process | Guiding Principles & Code of Conduct | No | 2023 | Voluntary; most developing nations excluded
Table 1: How Different Parts of the World Are Governing AI (2023–2024)

Why Governance Is Failing: Three Key Problems

1. AI Moves Fast, Laws Move Slow

In most countries, passing a new law takes anywhere from 3 to 7 years, but new generations of AI tools come out every 6 to 12 months. The EU AI Act, though historic, took four full years to become law. In that same time, tools like ChatGPT and Gemini became products used by hundreds of millions of people, changing the very risks the law was trying to address. The world needs to move away from slow, one-time rule-making and towards flexible, regularly updated regulation that can keep pace with technology.

2. Every Country Wants to Do It Their Own Way

AI governance is deeply tied to national interests and politics. The United States Executive Order on AI (October 2023) and China’s Generative AI Regulations are built on very different values and priorities. India’s approach through NITI Aayog, based on the ‘AI for All’ philosophy, shows that developing countries have their own priorities, such as ensuring AI benefits their own people rather than only wealthy nations. Any global framework must be fair to all sides: richer countries should lead on safety, while developing countries should get real help building their own capacity.

3. Most of the World is Left Out of the Conversation

The Bletchley AI Safety Summit (2023) and the G7 Hiroshima AI Process were important meetings, but they left out most of the world. Out of 193 UN member states, fewer than 30 have any serious AI governance system in place. This creates dangerous gaps in global regulation, where AI companies or even authoritarian governments can operate with no checks. The OECD (Organisation for Economic Co-operation and Development) AI Principles (2019) are a good starting point, but they are not enforceable. They need to become real, binding rules that every country can access and follow.

What This Means for India

India is at a turning point. The IndiaAI Mission (2024) has committed ₹10,371.92 crore to build AI computing infrastructure, create datasets, and support Indian AI startups. India is clearly aiming to become a major global player in AI. At the same time, India used its G20 Presidency to push for responsible, inclusive, and human-centric AI governance in the New Delhi Leaders’ Declaration. Now India needs to back up these words with action by leading the push for a UN Convention on AI Safety and Ethics that speaks for the interests of developing nations, while also building stronger AI rules at home.

What Should Be Done: Clear Recommendations

Short-Term (0–2 Years)

1. Create a UN AI Agency: India should build a coalition of developing nations to propose a dedicated UN Artificial Intelligence Agency at the UN General Assembly. This agency would work much as the IAEA does for nuclear energy: setting basic safety standards, inspecting powerful AI systems, and helping countries that lack the resources to govern AI on their own.
2. Make AI Incident Reporting Compulsory: Any country building or using powerful AI should be required to report serious AI failures or harms to a shared international database, much as the aviation industry requires mandatory reporting of accidents and near misses.

Medium-Term (2–5 Years)

3. Create a Global AI Risk Treaty: Taking inspiration from the EU AI Act but scaling it to the international level, a Global AI Risk Treaty should require all member countries to ban the most dangerous uses of AI (such as mass social scoring or fully autonomous weapons), tightly regulate AI used in sensitive areas (such as hospitals, courts, and power grids), and allow lighter rules for everyday low-risk AI tools.
4. Set Up a Technology Transfer Fund: Wealthier nations should contribute 0.1% of their AI industry revenues into a shared AI Governance Capacity Fund. This money would help smaller and poorer nations build the skills, institutions and digital infrastructure they need to actually govern AI in their own countries.

Long-Term (5–10 Years)

5. Build In Regular Reviews: Any treaty or framework must be updated automatically every three years, without renegotiating the entire agreement from scratch. AI changes too fast for any fixed rulebook to stay relevant for long.

Conclusion

The real question is no longer whether AI will change the world – it already is. The question now is who gets to make the rules. If the world continues with its current fragmented approach, the gap will be filled by the most powerful countries and companies, often in ways that benefit the few and harm the many. Global governance today is not keeping up with AI. But that does not have to be permanent. With a fair, binding, and inclusive global AI framework, and with India stepping up as a leader for the developing world, humanity can still make sure that the most powerful technology ever created works for everyone, not just those at the top.
