US Capitol building during legislative session

Something truly notable happened last month in Johannesburg that was barely picked up by the US tech press: the Group of Twenty (G20) leaders, despite uneven participation from some members, adopted a declaration that signals a growing global alignment on AI governance. The key takeaway is simple: the world is increasingly viewing AI not just as a geopolitical weapon or a commercial product, but as a public good that requires collective, multilateral governance.

And the US, with its focus on domestic strategy and a preference for innovation over regulation, risks being left out of the room when the global rules are set.

AI as a "Public Good"

The G20 declaration insists that AI must be "human-centered" and "development-oriented," linking data governance and ethical guidelines directly to economic necessity and democratic resilience. Countries from the Global South, notably South Africa, Brazil, and India, are pushing hard for this framework, determined to stop this new wave of innovation from "hard-wiring the injustices of the last."

This stands in stark contrast to the US approach, which, despite an Executive Order on AI, is largely defined by a desire to remove perceived impediments to innovation and to maintain technological dominance. When the US defaults to market dominance as its primary strategy, it tells the rest of the world that governance can wait. The G20's message is that it cannot.

The Patchwork of Policy

The global policy landscape is becoming a complicated patchwork:

  • Europe (EU AI Act): Taking a risk-based approach, banning systems like mass biometric surveillance and placing strict requirements on high-risk systems used in employment or law enforcement.
  • Australia (National AI Plan): Avoiding a standalone AI Act, instead choosing to uplift existing laws, encouraging industry-led governance, and prioritizing investment and job creation over new restrictions.
  • Indonesia (OJK Code of Ethics): Refining ethics guidelines for the financial technology industry to mitigate risks like algorithmic bias, data leaks, and hallucinations from generative AI.

The problem for US tech giants is that the EU's rules apply to any provider serving the EU market, regardless of where the company is headquartered, effectively exporting European regulatory standards. If the US chooses a fragmented, domestic approach, its companies will simply end up complying with rules set by others. Governance norms are sticky; once set, they embed themselves in expectations and standards.

The Danger of Ignoring the Conversation

A friend of mine, an international policy expert, says the US is making a dangerous bet: that its market power will let it shape the rules later. "It’s a sign of arrogance," he told me. "The G20 didn't wait for US support; they moved forward, recognizing that global cooperation cannot wait for universal participation."

The policy discussion is moving beyond safety and security into fairness in financial products, human rights, and democratic values. By treating AI almost exclusively as a commercial and security asset, the US is missing the broader conversation about AI as a developmental necessity.

My Take

The US approach is shortsighted. A fragmented regulatory landscape might allow faster innovation in the short term, but it creates uncertainty for companies and still leaves them complying with external standards (like the EU's).

The real win would be aligning with the G20's "human-centered" vision. The US helped create the foundations for these principles, and by engaging—especially with the upcoming India AI Impact Summit in 2026—it can influence the shape of global norms instead of risking becoming a rule-taker. We shouldn't let a handful of billionaires and our own domestic political squabbles dictate the global future of a technology that is clearly a force for humanity, not just profit.