
Guardrails or Gatekeepers? U.S. Challenges Global AI Oversight

Written by Maria-Diandra Opre | Nov 13, 2025 1:33:42 PM

When the United Nations General Assembly convened recently in New York, the U.S. took a defiant position, rejecting the idea of centralized international oversight of AI.

“We totally reject all efforts by international bodies to assert centralized control and global governance of AI,” Michael Kratsios, Director of the U.S. Office of Science and Technology Policy, told the assembly, casting global frameworks as “bureaucratic management” that would stifle sovereign innovation (NBC News, 2025).

The world’s governments gathered to confront what U.N. Secretary-General António Guterres called “the fastest-moving technology in human history.” Artificial Intelligence was not merely on the agenda; it was the agenda itself (The New York Times, 2025).

President Donald Trump, addressing the Assembly a day earlier, struck a similarly nationalist tone, announcing that the U.S. would launch an AI-powered verification tool to enforce the Biological Weapons Convention. “It could be one of the great things ever,” Trump said of AI, “but it also can be dangerous.”

During the meeting, the U.N. unveiled its Global Dialogue on AI Governance, billed as the first truly inclusive multilateral body dedicated to international AI coordination (Press UN, 2025). A 40-member scientific panel will follow, with formal sessions beginning in Geneva in mid-2026.

Guterres, in launching the initiative, argued that the goal was not top-down control, but shared guardrails. “Without guardrails, [AI] can be weaponized,” he warned (Press UN, 2025). Others echoed the call for shared governance. Spain’s Prime Minister Pedro Sánchez drew a pointed contrast with the U.S. position, urging the U.N. to take the lead: “We need to coordinate a shared vision of AI at a global level, with the U.N. as the legitimate and inclusive forum to forge consensus around common interests.” China’s Vice Foreign Minister Ma Zhaoxu was more direct, condemning “unilateralism and protectionism” and backing a central U.N. role (Yahoo News, 2025).

This tension, between the U.S. preference for bilateral and “minilateral” arrangements and the push from the Global South and the G7 for shared oversight, is a critical fault line in 21st-century technology governance.

From Washington’s perspective, global AI regulation poses risks that cannot be ignored: to U.S. tech dominance, to innovation speed, and to American political sovereignty. The U.S. strategy is to maintain informal alignment with “like-minded nations” while avoiding any formal ceding of oversight to U.N. bodies, which it sees as slow-moving and politically diffuse.

Yet as AI becomes more integrated into critical infrastructure, from military systems to health diagnostics and financial services, its risks are no longer contained within national borders. The effects of misuse in one jurisdiction can ripple out globally; AI does not respect sovereignty. That is the paradox the U.S. position now faces: how to lead a global conversation while rejecting its institutional framework.

The new U.N. Global Dialogue is scheduled to meet for the first time in Geneva in summer 2026, coinciding with the AI for Good summit. Its associated expert panel will include co-chairs from both developed and developing nations, an arrangement that reflects geopolitical balance but is unlikely to sway Washington.

The State Department’s post-summit statement left little doubt: the U.S. will support governance that “promotes innovation, reflects American values, and counters authoritarian influence,” but it will not back any move toward legally binding global standards.

Renan Araujo of the Institute for AI Policy and Strategy offered a more nuanced view: “No one wants a burdensome, bureaucratic governance structure and the U.S. has succeeded in starting bilateral and minilateral coalitions,” Araujo said (AOL, 2025). “At the same time, we should expect AI-related challenges to become more transnational in nature as AI capabilities become more advanced.”

The U.S. is building an ecosystem that prioritizes strategic advantage, but the U.N. is attempting to build a framework that prioritizes collective responsibility. For now, those two blueprints are incompatible. But as AI systems continue to evolve and outpace regulatory models, the world will need more than rhetoric. Whether leadership comes from New York, Geneva, or Washington, someone will have to write the rules.