In a revealing TED2025 conversation, OpenAI CEO Sam Altman discussed ChatGPT's explosive growth, ethical dilemmas in creative AI, and the impending shift to agentic systems that could redefine humanity's relationship with artificial intelligence.
When TED's Chris Anderson demonstrated Sora's video generation capabilities—including a simulated TED talk scene—Altman acknowledged the model's contextual understanding. The real breakthrough emerged when GPT-4o generated a conceptual diagram distinguishing intelligence from consciousness.
Anderson confronted the elephant in the room: AI-generated Charlie Brown content produced without licensing. Altman outlined OpenAI's current policy on such uses.
Altman acknowledged creative professionals' polarized reactions while advocating for systems where artists could opt into style usage with revenue sharing—a complex challenge given AI's training on collective human creativity.
When questioned about open-source rivals like DeepSeek, Altman acknowledged strategic shifts in OpenAI's approach.
Despite competition, Altman emphasized that model intelligence is commoditizing, shifting competition toward product integration and personalized experiences like ChatGPT's evolving "Memory" feature.
The conversation pivoted to AI agents—systems that autonomously execute tasks. Anderson demonstrated "Operator" booking a restaurant and highlighted the key challenges such autonomy raises.
Altman framed safety and capability as converging priorities: "A good product is a safe product." He revealed OpenAI's preparedness framework for evaluating agent risks before deployment.
Altman challenged conventional AGI definitions, arguing that current systems still lack three critical capabilities.
He proposed focusing less on an AGI milestone and more on managing the exponential capability curve: "We have to build a society to get the tremendous benefits of this and figure out how to make it safe."
Addressing safety team departures, Altman pointed to OpenAI's operational record while acknowledging that the challenges continue to evolve.
He defended iterative deployment as a safety learning tool: "The way we learn to build safe systems is this iterative process of deploying them... while the stakes are relatively low."
Confronted with the "moral authority" question by GPT-4o itself, Altman reflected:
"Our goal is to make AGI and distribute it safely for humanity. By all accounts, we've made significant progress despite tactical shifts. If invited back next year, you might criticize our open-source approach—there are always tradeoffs."
He connected his recent fatherhood to a heightened sense of responsibility for the future, envisioning a world in which his son views today's limitations as archaic: "They lived such horrible lives. They were so limited. The world sucked so much. I think that's great."