Corporate Governance and Oversight

An advisory commission just told OpenAI to stay under nonprofit control because AI is "too consequential" to be governed by a corporation alone.

My first reaction was "yeah, obviously." My second reaction was "but that ship sailed years ago, didn't it?" And my third reaction was "wait, maybe it didn't?"

The commission's report, released last week, is fascinating not because it's binding (it isn't) but because it represents a rare moment of someone actually saying the quiet part loud: maybe we shouldn't let trillion-dollar companies control the most powerful technology ever created with zero accountability beyond shareholders.

Hot take incoming: I think they're right.

The Commission Actually Has a Point

The commission—which includes labor organizer Dolores Huerta and was convened by Daniel Zingale, a former advisor to three California governors—isn't making some radical anti-capitalism argument. They're making a pretty straightforward case:

AI is infrastructure-level technology. Like roads, power grids, and water systems, it's too important to be controlled solely by private profit motives or even solely by government.

Their proposal is a "common sector" model that facilitates democratic participation. Which sounds nice but vague until you read their actual recommendations:

  • The nonprofit should get significant resources from the for-profit arm
  • It should focus on closing economic opportunity gaps
  • It should invest in AI literacy
  • It should be accessible to and governed by everyday people
  • Oh, and a human should lead it (which... okay, that's a sign of the times)

That last one got a lot of jokes, but it's actually a serious point about ensuring human judgment remains in the loop for governance decisions.

Why This Matters More Than It Seems

OpenAI started as a nonprofit in 2015 with a mission to ensure AGI benefits all humanity. Then they needed money. Lots of it. Billions of dollars.

So they created this weird hybrid structure: a nonprofit board overseeing a for-profit subsidiary. The nonprofit is supposed to maintain control, but the for-profit has a $300 billion valuation and all the actual resources.

The question the commission is really asking is: can that structure actually work, or is it just governance theater?

Because right now, OpenAI's nonprofit reported just $23 million in assets in its 2023 tax filing. The for-profit side is valued at $300 billion. That's not oversight. That's a rounding error with oversight responsibilities.
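
If you want to see just how lopsided that is, here's a quick back-of-the-envelope check. It's just the two figures quoted above, nothing more:

    # Rough ratio of the nonprofit's reported assets to the for-profit's valuation.
    # Both numbers are the ones cited in this post, not independently verified.
    nonprofit_assets = 23_000_000            # USD, per the 2023 tax filing
    for_profit_valuation = 300_000_000_000   # USD, reported valuation

    ratio = nonprofit_assets / for_profit_valuation
    print(f"{ratio:.4%}")  # prints 0.0077% -- less than a hundredth of a percent

Less than a hundredth of a percent. "Rounding error" is not an exaggeration.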

The commission is saying: if you're serious about nonprofit control, the nonprofit needs actual resources and power, not just veto authority it will never realistically use.

The AI-Is-Infrastructure Argument

I keep coming back to the infrastructure comparison because it's more apt than people realize.

When we built the electrical grid, we didn't say "let's have competing private companies each build their own incompatible power systems and may the best one win." We recognized that electricity was too fundamental to leave entirely to market forces.

Same with roads, water, telecommunications (eventually). Some things are so foundational to society that pure profit motive leads to bad outcomes.

AI is getting to that point. Not in a distant sci-fi future—right now. ChatGPT has 800 million weekly users. That's nearly 10% of the global population. Whatever OpenAI decides to do with their technology affects a massive chunk of humanity.

Should that be governed solely by "what maximizes shareholder value"? Or should there be some other consideration?

The commission says there should be. I'm inclined to agree.

The Democratic Participation Thing

The commission's big emphasis is on democratic participation and accessibility. Making sure AI development is "known, seen, and shaped by the people it claims to serve."

This sounds fluffy until you think about what it means in practice.

Right now, decisions about AI development happen inside OpenAI's offices. A small group of engineers and executives decides what models to train, what capabilities to release, and what safety measures to implement. The rest of us find out after the fact.

That's not necessarily malicious—they're trying to do the right thing. But it's fundamentally undemocratic for such consequential technology.

The commission is proposing mechanisms for public input into AI development priorities. Not in a "let's vote on every model parameter" way, but in a "maybe communities affected by AI should have some say in how it's developed and deployed" way.

I can already hear the objections: "The public doesn't understand AI. We can't let uninformed people make technical decisions."

Fair point. But also: the public doesn't understand power grid engineering either, yet we don't let power companies do whatever they want with zero accountability. We have regulatory bodies, public comment periods, community input.

Why should AI be different?

The Elon Musk Elephant in the Room

OpenAI is currently fighting a lawsuit from Elon Musk, one of their original founders, who's trying to block them from converting to a for-profit.

The commission's recommendations conveniently align with Musk's position, which makes everything more complicated.

Is this genuinely about good governance, or is it a strategic move in an ongoing legal and commercial battle?

Probably both? Reality is usually messy.

But even if the motivation is partly strategic, that doesn't make the underlying argument wrong. Musk can have terrible motives and happen to be right that OpenAI shouldn't abandon nonprofit governance.

Broken clocks, twice a day, etc.

What OpenAI Should Actually Do

The commission made specific recommendations OpenAI should consider seriously:

1. Fund the nonprofit properly. Give it real resources—not just token amounts, but enough to actually do meaningful work. The commission suggested this could include funding for theater, art, health—"human-to-human activities" that AI doesn't replace.

That's actually kind of brilliant. Use AI profits to fund the things AI can't do.

2. Create transparency mechanisms. Let people see how decisions are made. Not proprietary model details, but governance processes. Who decides what gets released? What safety considerations are weighed? What tradeoffs are being made?

3. Establish meaningful public input. Not fake consultation where you ignore feedback, but actual mechanisms for affected communities to shape priorities.

4. Keep the nonprofit in control. Not just on paper, but in practice. That means giving it resources and authority to actually govern.

My Skeptical Side

Look, I'm not naive. I know how this probably plays out.

OpenAI will thank the commission for their thoughtful recommendations. They'll implement some cosmetic changes. Maybe they'll fund some nonprofit initiatives. But the fundamental power structure—where the for-profit arm has all the money and makes all the real decisions—won't actually change.

Because changing it would require OpenAI to voluntarily limit its own power and profitability. And companies don't typically do that unless forced.

But here's the thing: sometimes it's valuable to clearly articulate what should happen, even if it probably won't.

The commission's report establishes a framework for judging OpenAI's actions. If they ignore these recommendations, we can point to this document and say "see, here's what good governance looks like, and they chose not to do it."

That matters. It creates accountability, even if it's just reputational.

The Broader AI Governance Question

This isn't really about OpenAI specifically. It's about how we govern transformative AI more broadly.

Should it be:

  • Pure market competition? (Let companies do whatever, may the best one win)
  • Government regulation? (Heavy oversight, strict rules)
  • Nonprofit governance? (Mission-driven rather than profit-driven)
  • Some hybrid? (Public-private partnerships with actual accountability)

Different countries are trying different approaches. China is going with heavy government control. The EU is going with regulation. The US is doing... honestly, it's unclear what the US is doing beyond letting companies self-regulate.

OpenAI's experiment with nonprofit oversight of a for-profit subsidiary is one model. If it works, it could be a template. If it fails, it shows the limitations of trying to have it both ways.

Either way, we need some governance model beyond "companies do whatever makes money." Because AI is too consequential for that to work.

What I Actually Think Should Happen

Hot take: I think OpenAI should split.

Keep the for-profit subsidiary doing commercial products—ChatGPT, APIs, whatever makes money. Let that compete in the market.

But spin out the core research—the cutting-edge AGI development, the safety research, the fundamental breakthroughs—into a genuinely independent nonprofit with real resources.

The nonprofit focuses on advancing AI for human benefit, with meaningful public input and democratic governance. The for-profit commercializes proven technology with normal corporate incentives.

That way you get both:

  • Market competition driving product quality and accessibility
  • Nonprofit mission-driven research on the most important and risky stuff

Is that realistic? Probably not. Would it solve everything? Definitely not. But it's better than the current "we promise the nonprofit is in control even though the for-profit has all the money" fiction.

Why I'm Writing About This

I'm not usually one for corporate governance analysis. But this matters because OpenAI's structure—and whether it works—will influence how we govern AI more broadly.

If they prove that nonprofit oversight of for-profit AI development can actually work, that's a model other companies might adopt. If they prove it can't work, we'll need different approaches.

Right now, we're in the early stages of figuring out AI governance. The decisions made in the next few years will shape the industry for decades.

The commission's report is saying: don't let this just happen by default. Make intentional choices about who controls AI and who it benefits.

That seems... correct?

I don't know if OpenAI will listen. Probably not in any meaningful way. But I hope someone does. Because the alternative—purely profit-driven AI development with zero public accountability—feels like a recipe for something going badly wrong eventually.

And by "eventually" I mean "possibly quite soon."

So yeah. AI is too important to be controlled by corporations alone. The commission is right about that. Now let's see if anyone actually does anything about it.

I'm not holding my breath, but I'm still paying attention.