AI Chatbots Are Like Observational Comics
Both lose their magic when talking about something you know
As a child of the 90s, I was raised watching observational comedy specials. Seinfeld, Carlin, Hedberg, Rock, Poundstone, Izzard… I watched it all. It wasn’t just the jokes I enjoyed. I was in awe of each comedian’s ability to fully engage you (and the entire audience), managing your expectations and attention. By themselves, alone on stage. By just talking.
You don’t notice how much work they’re doing – they make it appear effortless. But a decade later, I caught a post on Twitter from Seinfeld saying he’d be trying out new material at a Manhattan club that night. I quickly snagged tickets, and two hours later we watched him walk on stage at a Wednesday open mic.
Watching Seinfeld try out new material gives you a peek behind the curtain. Before each bit he’d lift up his notepad, push up his glasses, hunch over, and quietly read the note aloud to himself: “Five hour energy drink… five hours is a weird amount of time.”
Every time he read his notes, you’d cringe a bit. “C’mon Jerry, that’s not a Seinfeld joke,” you’d think, “That’s a joke from somebody making fun of Seinfeld.”
The room never laughed when he read his notes.
Then he’d put the notes away, stand up straight, and switch into performance mode. The Seinfeld you’d always known magically reappeared and did the bit he’d just read to a dead room – and people ate it up. The notepad was gone[1], all the work he’d put into honing the joke was hidden. It was just him, on stage, making it look easy.
ChatGPT is kinda like that.
Chatbots are practiced performers.
We only see chatbots’ responses. The mountains of data (~250 billion webpages just from Common Crawl), cumulative decades of work from unknown contractors teaching them how to converse, and billions of dollars of GPUs… It’s all invisible. We only see the confident, polished performance. And people eat it up.
I see people who don’t know how LLMs are built treat them like all-knowing experts, trusting everything that comes out[2]. Well… Nearly everything.
This brings us to another way AI chatbots are like observational comedy: they both lose their magic when talking about your expertise.
Nothing breaks the spell of an observational comic like a joke about something you know well. You might chuckle a bit, but the spell snaps and you think, “Actually, there’s a very good reason it’s like that…”
This effect was captured well during a Chris Rock guest appearance on King of the Hill. Voicing stand-up comic Buddha Sack, he trades “yo mama” jokes with noted propane expert, Hank Hill:
Buddha Sack: It’s been so long since yo mama’s last bath that her hairy armpits smell like propane gas.
Hank Hill: Now excuse me, hold on there fella. A joke’s a joke, but now you’ve gone too far. Propane has no natural odor. What you smell is actually put there by man for safety purposes.
After a string of well-received jokes, touching on Hank’s expertise spoiled the mood. Comedy performances are less impressive when a joke touches on your expertise.
Time and time again, when talking to people who rely on ChatGPT, Claude, Perplexity, and other general AI tools, I hear them say, “AI is incredible. It handles nearly everything I throw at it.”
“What does it fumble with?” I’ll ask.
“Well, it still gets things wrong when it comes to my line of work.”
Lawyers say this. So do accountants, marketers, researchers, salespeople, and engineers.
Chatbots know everything, but they make mistakes when it comes to things I know.
🤔
To be clear, these people frequently use chatbots to help them with their work. They just keep them on a short leash, reviewing and revising the output. This is the Intern use case: “Supervised copilots that collaborate with experts, focusing on grunt work.”
Programming is probably the best example going at the moment.
The new model of a software startup is a couple people armed with Cursor licenses, shipping apps in a handful of months. Previously, it would have taken a full team a couple of years to achieve this quality. Gone are the days of software startups needing piles of cash to build their product[3].
This isn’t vibe coding. These are A-tier programmers using AI to help them ship faster. They design the architecture, pick the tools and libraries, sketch out the apps, then use Cursor and other AI tools to implement everything faster. The LLMs aren’t perfect – but the experts driving can easily mitigate their shortcomings.
Last week, a friend in the gaming industry told me this is happening with the best artists in gaming as well. Talented designers will sketch out models, then leave it to AI systems to perform the tedious work of constructing the wireframes. The tools aren’t perfect – designers are always tweaking and polishing the output before shipping the asset. But smaller teams of the best people are doing work only large corporations could previously achieve.
John Carmack just captured this idea perfectly:
AI tools will allow the best to reach even greater heights, while enabling smaller teams to accomplish more, and bring in some completely new creator demographics. Yes, we will get to a world where you can get an interactive game (or novel, or movie) out of a prompt, but there will be far better exemplars of the medium still created by dedicated teams of passionate developers.
This is the pattern: experts, firmly in the driver’s seat, using AI to go farther, faster.
This is a good pattern because the expert covers AI’s errors. The pattern we have to worry about is when laypeople hand over a job to AI and fully trust the output.
Personally, I don’t worry about superintelligent AGI enslaving humanity. But I do worry about people using AI to make consequential decisions affecting others’ lives in domains where they themselves lack expertise. Because that’s happening now.
Chatbots are like observational comics. They’re incredibly good at creating authority through performance, but the trick fails when you’re an expert on the topic at hand. This doesn’t mean you shouldn’t use them – but always be cautious when you’re doing consequential work outside your area of expertise.
Remember that the best in their fields – programmers, designers, lawyers, accountants, writers, and more – never let the AI drive when it comes to their respective expertise.
[1] A friend informs me the notebook isn’t exactly hidden: Seinfeld published a selection of his notebooks in 2020. It even includes the notes for the 5 hour energy bit.

[2] To be fair, people who do know how LLMs work are also impressed, but they have better mental models for what to ask and how far to trust them.

[3] How this dynamic is going to affect the VC ecosystem is a topic for another day…