I had the chance to attend the Vancouver AI Summit this year, and as expected, it was packed with thoughtful conversations — from practical AI implementations to thornier topics like governance and data sovereignty. But the moment that stuck with me most? A single slide from Tyler Akidau of Redpanda.
It was a grid of Muppets. Many Muppets. Nine Muppets plotted across two axes: Lawful → Chaotic and Good → Evil.

Original source: Alignment Chart – The Muppet Show
This is called an alignment chart—popularized by Dungeons & Dragons and used by many role-playing games to describe a character’s traits and behaviours. The chart maps two axes: Lawful to Chaotic (how much they follow rules and structure) and Good to Evil (their intentions and outcomes). Let’s avoid a debate over whether everyone is correctly placed 🙂 But Kermit sits in Lawful Good—dependable, structured, trying to do right by everyone. Meanwhile, Gonzo lives somewhere in the Chaotic Good zone—well-meaning but wildly unpredictable.
So why did this slide land so well with me?
Because when you think about AI agents, especially the ones we’re trying to deploy in real-world workflows, they’re often well-intentioned (Good or Neutral), but they can be wildly unpredictable. Overconfident, prone to hallucinations, and in a word—chaotic.
And that’s a problem if you’re trying to build reliable systems.
Sound familiar? Bring on an agent and you've essentially hired Gonzo.
Turning Gonzo into Kermit
Sometimes you want a lot of unstructured ideas, and in those cases Gonzo is great. But often you don’t. So how do we move our AI assistants from Gonzo to Kermit? How do we help them go from Chaotic Good to Lawful Good, into the reliable, responsible team member we actually want in production?
It comes down to a couple of things:
Good = Policies and Governance. The goodness of AI behaviour isn’t accidental. It’s built through thoughtful governance—setting guardrails for data use, bias management, and transparency, and assigning clear ownership within the organization.
Lawful = Workflow and Discipline. Making AI lawful requires structure: disciplined prompting, appropriate task scoping, workflow generation, and well-defined evaluation loops. And, importantly, keeping humans at key control points (a rough sketch of what that can look like follows below).
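To make the "Lawful" half concrete, here is a minimal sketch in Python. It isn't any particular framework or real API: call_model(), passes_guardrails(), and the toy scoring below are placeholder stand-ins for whatever model, policies, and evals you actually use. The point is the shape of the loop: a scoped prompt, automated checks, and a human sign-off before anything ships.

```python
# Minimal sketch of a "lawful" agent loop: scoped task, guardrail check,
# evaluation gate, and a human approval step before anything is released.
# call_model() and the checks below are placeholders, not a real API.

def call_model(prompt: str) -> str:
    """Placeholder for whatever LLM/agent API you actually use."""
    return "DRAFT: summary of Q3 churn report"

def passes_guardrails(output: str) -> bool:
    """Policy checks: no restricted data, no banned content, etc."""
    banned_terms = ["ssn", "credit card"]          # illustrative policy only
    return not any(term in output.lower() for term in banned_terms)

def evaluate(output: str) -> float:
    """Automated eval loop: score output against a rubric or test cases."""
    return 0.9 if output.startswith("DRAFT:") else 0.2   # toy scoring

def human_approves(output: str) -> bool:
    """Human control point: a person signs off before the result is used."""
    answer = input(f"Approve this output? (y/n)\n{output}\n> ")
    return answer.strip().lower() == "y"

def run_task(task: str, max_attempts: int = 3) -> str | None:
    # Scope the prompt tightly instead of handing the agent an open goal.
    prompt = (
        "You are drafting an internal summary.\n"
        f"Task: {task}\n"
        "Rules: cite sources, flag uncertainty, do not include personal data."
    )
    for _ in range(max_attempts):
        output = call_model(prompt)
        if passes_guardrails(output) and evaluate(output) >= 0.8:
            if human_approves(output):
                return output            # reviewed and released
        # Otherwise tighten the prompt and try again (or escalate to a human).
        prompt += "\nPrevious attempt rejected; be more conservative."
    return None                          # fail closed rather than ship chaos

if __name__ == "__main__":
    result = run_task("Summarize the Q3 churn report for the exec team")
    print(result or "Escalated to a human; no automated output released.")
```

The design choice that matters here is failing closed: nothing leaves the loop without passing the guardrails, the eval threshold, and a human gate.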
This process sounds like more work than just “using AI.” Because it is. But the payoff? AI that's reliable. Systems you can trust. Impactful automation.
Everyone loves Gonzo’s creativity, but when it’s time to deliver, you want Kermit leading the show.
-- Bill
#AIGovernance #VancouverAI #AISummit2025
