Jeremy Utley, an adjunct professor at Stanford University and a practical AI specialist, shares actionable insights on moving beyond simple prompts to engineering the full context of AI interactions, transforming the AI from an eager but unskilled intern into a powerful collaborative partner.
A common misconception is to treat AI like traditional software. Jeremy Utley argues for a different perspective: AI is not good software, but it is good "people." It behaves like an exceptionally eager, tireless, and capable intern who is predisposed to say "yes," reluctant to push back or set boundaries for fear of appearing unhelpful.
This innate desire to please can lead to unhelpful outputs. For instance, if an AI says, "Check back in a couple of days," it's often its way of avoiding the admission, "I can't do this." Recognizing this fundamental nature is the first step toward more effective collaboration.
Context engineering is an evolution of prompt engineering. It moves beyond a simple instruction to carefully curate all the necessary information an AI needs to perform a specific task accurately.
The core principle is that AI cannot read your mind. All implicit assumptions and knowledge must be made explicit. A simple test is the "test of humanity": if you walked down the hall and gave the same prompt and documentation to a human colleague, could they execute the task? If not, the AI will also struggle.
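The "test of humanity" can be made mechanical: assemble everything a colleague would need into one explicit brief. The sketch below is a minimal illustration of that habit; the task, customer, and constraints are hypothetical examples, not from the talk.

```python
def build_context_prompt(task: str, background: list[str], constraints: list[str]) -> str:
    """Assemble a prompt that passes the 'test of humanity': a colleague
    handed only this text could execute the task without asking you anything."""
    sections = [
        "## Task\n" + task,
        "## Background (everything you need to know)\n"
        + "\n".join(f"- {b}" for b in background),
        "## Constraints\n" + "\n".join(f"- {c}" for c in constraints),
    ]
    return "\n\n".join(sections)

# Hypothetical example: make the implicit assumptions explicit.
prompt = build_context_prompt(
    task="Draft a renewal email to our longest-standing customer.",
    background=[
        "Customer: Acme Corp, on an annual plan since 2019.",
        "Their contract expires at the end of next month.",
    ],
    constraints=["Friendly but professional tone", "Under 150 words"],
)
print(prompt)
```

If a human reading the printed brief would still have questions, the AI will too, and the missing context belongs in the prompt.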
Inspired by the human cognitive benefit of "thinking out loud," this technique significantly improves AI output. Since Large Language Models (LLMs) predict the next word sequentially, asking them to articulate their reasoning forces a more deliberate thought process.
How to use it: Append this sentence to any prompt: "Before you respond to my query, please walk me through your thought process step by step."
Instead of immediately generating an email that starts with "Dear Friend," the AI will first explain its considerations for tone, audience, and objectives. This baked-in reasoning leads to a more thoughtful and appropriate final output and allows you to audit its assumptions.
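Because the technique is a fixed suffix, it is trivial to apply programmatically. A minimal sketch (the helper name and sample prompt are illustrative, not from the talk):

```python
# The exact sentence recommended in the talk, appended to any prompt.
COT_SUFFIX = ("Before you respond to my query, please walk me through "
              "your thought process step by step.")

def with_chain_of_thought(prompt: str) -> str:
    """Append the chain-of-thought instruction so the model reasons
    out loud before producing its final answer."""
    return f"{prompt.rstrip()}\n\n{COT_SUFFIX}"

# Hypothetical usage:
print(with_chain_of_thought("Write a follow-up email about the delayed shipment."))
```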
AI is an exceptional imitation engine. Without guidance, it imitates the average of the internet. Few-shot prompting provides concrete examples of what you consider a good (and sometimes bad) output.
How to use it: Provide the AI with 1-3 quintessential examples of the output you desire (e.g., your best sales emails). For even better results, provide a counter-example of what to avoid. You can even ask the AI to generate a bad example based on your good one, using chain-of-thought reasoning to understand the differences.
A key tenet of treating AI as a teammate is giving it permission to ask questions. AI, trying to be helpful, will often invent information (e.g., sales figures) rather than trouble you for it.
How to use it: End your prompt with a phrase like: "...and before you get started, ask me for any information you need to do a good job." This instructs the AI to pause and request necessary data (e.g., "I will need the Q2 sales numbers for product X"), leading to more accurate and reliable results.
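Like chain-of-thought, this is a reusable suffix. A minimal sketch (helper name is illustrative):

```python
# Suffix that gives the model permission to request missing data
# instead of inventing it.
ASK_SUFFIX = ("Before you get started, ask me for any information "
              "you need to do a good job.")

def with_permission_to_ask(prompt: str) -> str:
    """Instruct the model to pause and request necessary inputs
    (e.g. real sales figures) rather than fabricating them."""
    return f"{prompt.rstrip()}\n\n{ASK_SUFFIX}"

# Hypothetical usage:
print(with_permission_to_ask("Draft the Q2 board update for product X."))
```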
Assigning a role tells the AI where in its vast knowledge base to focus. It triggers deep associations and connections related to that domain.
How to use it: Instead of "review this email," try "Act as a professional communications expert, specifically channeling the principles of Dale Carnegie. How would he improve this email?"
You can also impose creative constraints to spark innovation: "How would Jerry Seinfeld solve this customer service problem?" or "How would Amazon's leadership team approach this logistics issue?"
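In chat-style APIs, the natural home for a role assignment is the system message. A minimal sketch using the system/user message format common to most LLM chat APIs (the persona text follows the talk's Dale Carnegie example; the email body is hypothetical):

```python
def role_messages(role_description: str, user_prompt: str) -> list[dict]:
    """Build a chat-message list that assigns the model a persona
    via the system message before the actual request."""
    return [
        {"role": "system", "content": f"Act as {role_description}."},
        {"role": "user", "content": user_prompt},
    ]

msgs = role_messages(
    "a professional communications expert, specifically channeling "
    "the principles of Dale Carnegie",
    "How would he improve this email?\n\nDear Friend, ...",
)
```

The same structure works for the creative-constraint variants: swap the persona for "Jerry Seinfeld" or "Amazon's leadership team" and keep the user prompt unchanged.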
Utley demonstrates a powerful framework for using AI as a "conversation flight simulator," built on three dedicated AI interactions.
This process allows you to iterate. If the character was too agreeable, you can refine the profile instructions to add more edge and practice again, all before the high-stakes real conversation.
The most effective AI users are not necessarily developers. They are coaches, teachers, and mentors—people skilled at eliciting exceptional output from other intelligences (human or artificial).
The current primary limitation of AI is not its technology but the limits of human imagination. As more people master the art of collaboration and context engineering, the collective "adjacent possible"—the spectrum of what we can imagine and build—expands exponentially.
The key takeaway: The skills you've developed to work with people are precisely the skills you need to work effectively with AI.