In a revealing conversation with Lex Fridman, OpenAI CEO Sam Altman discusses the painful board saga that nearly destroyed the company, lessons about trust and governance, and why compute may become the world's most precious commodity. He shares insights on GPT-5's development, Sora's breakthrough capabilities, and the realities of building artificial general intelligence (AGI).
"Compute is gonna be the currency of the future. I think it'll be maybe the most precious commodity in the world. I expect that by the end of this decade, and possibly sooner, we'll have remarkable systems that make us say 'wow'."
Altman describes the November 2023 board conflict as his "most painful professional experience," revealing the governance flaw in OpenAI's nonprofit structure: a board with unchecked power. While acknowledging the board members' well-meaning intentions, he notes the episode exposed critical vulnerabilities as OpenAI approaches AGI and prompted structural changes to the board.
The experience fundamentally altered Altman's approach to trust: "I've always been extremely trusting. This has definitely changed how I think about default trust and planning for bad scenarios."
Regarding Musk's lawsuit, Altman contextualizes OpenAI's structural evolution: "We started as a research lab with no product plans. When we needed massive capital, we patched the structure iteratively." He clarifies that "open" means democratizing powerful tools, not necessarily open-sourcing everything. On Musk's departure, Altman states: "He wanted full Tesla control. We wanted independence. That's fine."
Altman analyzes OpenAI's video generation model Sora as a leap in world simulation: "When objects reappear correctly after occlusion, you see glimmers of physical understanding." Current limitations, like "cats sprouting extra limbs," stem from both architectural constraints and scaling challenges.
Altman calls GPT-4 "underwhelming" relative to future systems: "We're on an exponential curve. GPT-4 will soon look like GPT-3 does now." Key developments include:
- 128K context windows enabling "lifetime memory" integration
- "Smarter across the board" models with enhanced reasoning and task execution
Altman highlights reasoning as OpenAI's current focus: "The ability to break down 10-step problems independently remains limited but crucial."
Altman avoids defining AGI rigidly but predicts: "By decade's end, we'll have systems so capable they materially change the global economy." He emphasizes governance:
"No single person should control AGI. I never wanted super-voting control. We need government frameworks – this isn't regulatory capture."
He downplays "theatrical" AI extinction risks while prioritizing tangible concerns: biased systems, economic disruption, and security threats from state actors.
Altman positions compute as tomorrow's fundamental currency: "Unlike smartphones, compute demand is elastic. Cheaper compute enables applications from email assistants to cancer research." He identifies energy as the critical bottleneck to scaling it.
While confirming OpenAI will "return to robotics," Altman frames physical AI as essential: "It's depressing if AGI can't act in the physical world."
Altman remains hopeful about humanity: "Our societal scaffolding—science, technology, collective knowledge—enables progress no individual could achieve." When pressed about existential risks, he notes: "I don't expect to get shot over AI, but the chance isn't zero." Ultimately, he believes AGI should accelerate human flourishing: "If we can significantly increase scientific discovery rates, that's the real milestone."