
I’ve been tracking the AI policy debates closely, and one thing is clear: governments and tech companies are chasing a moving target. Australia’s National AI Plan is conservative, opting to uplift existing laws rather than legislate anew. The EU AI Act is comprehensive but complicated. US policy is, frankly, a mess: heavy on innovation and security, light on human rights.

But a movement is quietly emerging that I think is the actual solution to the ethics problem: Fiduciary AI.

The Problem with Unfettered Capitalism

As one founder, Amber Stewart of GuardianSync, put it: "I'm not afraid of the tech. I'm afraid of unfettered capitalism, of people releasing powerful tools with no accountability."

That's the core of the issue. AI is being released into the world with minimal guardrails, leading to everything from algorithmic bias against marginalized communities (like facial recognition systems that have misidentified people of color) to private data being used to train models without consent. We have a system where AI reflects and magnifies human behavior and errors, with no duty to act in the user's best interest.

A fiduciary framework is simple: just as a lawyer or doctor has a duty to act in your best interest, an AI system should have a duty of care toward the people whose data it relies on.

Coding Ethics into the Infrastructure

This isn't just theory; it's about making ethics enforceable by building them into the infrastructure. Instead of relying on vague terms of service, the focus is on creating a digital trust layer that protects things like biometrics, creative work, and user data through consent, transparency, and accountability.
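To make that concrete, here is a minimal sketch of what a consent-gated trust layer could look like, assuming a design where every data access passes a consent check and every attempt is logged for later audit. All names here (ConsentRecord, TrustLayer, grant, access) are hypothetical illustrations, not an existing API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """A user's explicit, scoped, time-limited consent (illustrative)."""
    user_id: str
    scopes: set        # e.g. {"biometrics", "creative_work", "training_data"}
    expires: datetime

@dataclass
class TrustLayer:
    consents: dict = field(default_factory=dict)   # user_id -> ConsentRecord
    audit_log: list = field(default_factory=list)  # (time, user, scope, allowed)

    def grant(self, record: ConsentRecord) -> None:
        """Transparency: consent is recorded with explicit scopes and an expiry."""
        self.consents[record.user_id] = record

    def access(self, user_id: str, scope: str) -> bool:
        """Allow access only under live, scoped consent; log every attempt."""
        record = self.consents.get(user_id)
        allowed = (
            record is not None
            and scope in record.scopes
            and datetime.now(timezone.utc) < record.expires
        )
        # Accountability: every attempt, allowed or denied, goes into an
        # audit trail that an independent reviewer could replay later.
        self.audit_log.append((datetime.now(timezone.utc), user_id, scope, allowed))
        return allowed

# Usage: access is denied by default and granted only for consented scopes.
layer = TrustLayer()
layer.grant(ConsentRecord("alice", {"biometrics"},
                          datetime(2030, 1, 1, tzinfo=timezone.utc)))
assert layer.access("alice", "biometrics") is True
assert layer.access("alice", "training_data") is False
```

The design choice that matters is the default: nothing is accessible unless a live consent record says otherwise, and the audit log exists whether or not the company wants it to.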

This is a technical challenge, not just a legal one. How do you prove in court that an AI was used to commit fraud? How do you audit a black-box algorithm for bias? The fiduciary approach forces companies to design their systems for independent auditing and compliance, earning a visible "consumer seal of trust" that signals their commitment to ethical standards.
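As one concrete example of what independent auditing could mean, here is a hedged sketch of a black-box bias check: treat the model as opaque, run labeled examples through it, and compare false positive rates across demographic groups. The function names, group labels, and the 0.05 disparity threshold are my illustrative assumptions, not an established standard.

```python
from collections import defaultdict

def false_positive_rates(predictions, labels, groups):
    """Per-group false positive rate, computed from inputs and outputs alone."""
    fp = defaultdict(int)         # false positives per group
    negatives = defaultdict(int)  # ground-truth negatives per group
    for pred, label, group in zip(predictions, labels, groups):
        if label == 0:
            negatives[group] += 1
            if pred == 1:
                fp[group] += 1
    return {g: fp[g] / n for g, n in negatives.items()}

def audit_disparity(predictions, labels, groups, max_gap=0.05):
    """Flag the model if the FPR gap between any two groups exceeds max_gap."""
    rates = false_positive_rates(predictions, labels, groups)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "passes": gap <= max_gap}
```

The point of this design is that the auditor needs nothing but inputs and outputs, so the check works even when the vendor refuses to open the model. That is exactly the property a "consumer seal of trust" would have to certify.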

The Dangers of Centralized Power

This movement is a necessary counter-force to the concentration of power we see today. The fact that the US is betting on market dominance to shape AI norms is troubling. The ultimate goal of AI—to help us reduce scarcity, cut working hours, and foster cooperation—is at risk of being overridden by current political and economic incentives.

We're in a race where commercial interests constantly outpace policy and ethics. The creation of a dedicated class of "ethics technocrats" will be crucial here: people who understand both the technical stack and the human cost of unregulated systems.

My Take

I am skeptical of any solution that relies purely on government regulation because Congress is just too slow. The Fiduciary AI approach bypasses some of that slowness by putting the onus on the companies to make their systems trustworthy by design.

We need to stop asking whether AI will replace us (a misconception, as AI simply reflects and magnifies human patterns) and start demanding that the companies building it have a legal and ethical obligation to protect us. The market should reward trust, and this framework gives consumers the tools to identify who is actually doing the hard, expensive work of building an ethical AI stack versus who is just using ethics as marketing fluff.