Coming soon · Track 01 in production

Self-paced courses for QA leaders.

The same operating model Loop runs in private engagements, packaged as on-demand tracks you can take at your own pace. Track 01 ships first; the rest land in waves through 2026.

Catalog · 4 tracks planned

The four-track curriculum.

Every track is a companion to one of the published books. Take them in sequence or only the ones that match what you're doing this quarter.

TRK-001 · In production

Doing More With Less in QA

Companion to: the entry course

The compression problem, leverage gap, Last 20 Bugs, regression drag audit, AI leverage vs theater, the 90-day plan.

  • 6 modules
  • ~3.5 hours
  • 5 worksheets
  • 1 capstone plan
TRK-002 · Drafting

Quality Engineering Operating Model

Companion to: AI-Native Quality Engineering

Layered tests, named owners, leverage metrics, and the 5-bucket maturity framework. Applied to your team.

  • 8 modules
  • ~5 hours
  • Maturity assessment
  • Adoption plan
TRK-003 · Planned

AI-Driven Test-Driven Development

Companion to: AI-Driven TDD

Tests define intent, AI accelerates implementation, engineers own verification. Translated to your stack.

  • 7 modules
  • ~4 hours
  • Reference repo
  • Pairing sessions
TRK-004 · Planned

Bespoke Agentic Pipelines

Companion to: Bespoke Agentic Pipelines

Roles, permissions, observability, project-specific operating rules. For teams running AI-generated code.

  • 9 modules
  • ~6 hours
  • Pipeline starter kit
  • Cohort review

Watch · Companion videos

Talks behind the courses

Subscribe on YouTube · @benfellows-dev
Set Up Policy as Code in 1 Hour (Control AI Code Fast)

Apr 28, 2026


If you want to start controlling AI-generated code today, this is the simplest way I’ve found to do it. In the previous videos, I talked about why agentic development breaks at scale and introduced the concept of policy as code as a way to fix it. In this video, I’m showing how to actually get started.

The idea is straightforward. Instead of relying only on prompts, rules, or memory to guide AI, you introduce a deterministic layer that scans your codebase and flags violations. Think of it as a much more comprehensive, fully customizable linting system that works alongside tools like Claude.

What surprised me is how easy it is to get a first version working. In this walkthrough, I show how you can go from zero to a basic policy-as-code setup in a very short amount of time. We start by generating a small set of rules, then wire up a simple scanner and immediately run it against a real codebase. Even with a basic setup, you’ll start catching issues and inconsistencies right away.

This is not the full system I use in production. At scale, this turns into hundreds or even thousands of rules, with more advanced concepts like evidence layers, caching, and reporting. But the goal of this video is to show that you don’t need any of that to begin.

If you’re using AI to write code and you’re starting to see drift, inconsistency, or quality issues over time, this is a practical way to start putting guardrails in place. What I’ve found is that as you add more rules, drift drops significantly and the system becomes more reliable without slowing development down.

If you haven’t watched the earlier videos in this series, I’d recommend starting with those for more context on why this approach exists and how it fits into a larger agentic workflow. If you try this yourself, I’d be interested to hear what kinds of rules you end up writing and what it catches in your codebase.
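A first version of the scanner described above fits in a few dozen lines. This is a hypothetical sketch, not the setup from the video: it assumes a rule is just a name, a regex, and a message, and it walks a source tree flagging any line that matches. The rule names and patterns here are illustrative examples.

```python
import re
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Rule:
    name: str
    pattern: re.Pattern
    message: str

# A tiny starter rule set. These two rules are made-up examples,
# not rules from the video; real rule sets grow to hundreds of entries.
RULES = [
    Rule("no-print", re.compile(r"^\s*print\("),
         "use the logger instead of print()"),
    Rule("no-todo", re.compile(r"#\s*TODO"),
         "resolve or ticket TODOs before merging"),
]

def scan(root: str) -> list[str]:
    """Walk the tree and return one violation string per rule hit."""
    violations = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            for rule in RULES:
                if rule.pattern.search(line):
                    violations.append(f"{path}:{lineno} [{rule.name}] {rule.message}")
    return violations

# Usage: a non-empty result from scan("src") means the agent (or CI)
# should treat the change as failed until the violations are fixed.
```

Because the layer is deterministic, the same codebase always produces the same violations, which is what makes it usable as a gate rather than a suggestion.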

Watch on YouTube →
I Tried Building with Agentic Factories. They Failed. Here’s What Worked Instead.

Apr 27, 2026


I spent time building with “agentic factories”: multi-agent pipelines that promise fully autonomous workflows. On paper, they look like the future. In practice, they broke down in ways that matter: reliability, coordination, and real-world constraints. In this video, I break down where these systems failed, why they fail structurally, and what actually worked instead in production. If you're building with AI agents, this will save you time (and probably some pain).

Watch on YouTube →
How We Use Policy as Code to Control Claude and AI Agents

Apr 24, 2026


Claude and other AI agents are incredibly good at writing code. The problem is they don’t stay consistent over time. In the first few iterations, everything looks great. Output is fast, patterns are mostly correct, and it feels like you’ve unlocked a new level of development speed. But as the codebase grows, small inconsistencies start to compound. Patterns drift, structure degrades, and eventually the system becomes harder to maintain than it was before. That’s the problem this video is about.

In this walkthrough, I break down how we use a concept called policy as code to control AI-generated code in real systems. Instead of relying only on prompts, rules files, or memory, we introduce a deterministic layer that enforces how code is allowed to be written. Every time an agent makes changes, those changes are checked against a large set of rules. If something doesn’t match the expected patterns, it fails. The agent has to fix it before moving forward.

This ends up acting like a much more comprehensive version of linting, but tailored specifically to your architecture, your patterns, and your codebase. The result is that we’re able to keep the speed benefits of AI while dramatically reducing drift and long-term degradation.

This video focuses on how the system works in practice: what kinds of rules we write, how they’re structured, and how they integrate into an agentic workflow using tools like Claude.

If you’re experimenting with AI coding and running into issues with inconsistency or quality over time, this is one approach that has worked well for us. I’ll also be doing follow-up videos on how to implement this from scratch and how it fits into larger agentic pipeline systems. If you’ve tried something similar or have different approaches to controlling AI-generated code, I’d be interested to hear about it.
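The check-and-reject loop described above can be sketched as a gate the agent's changes must pass before they land. This is a hypothetical illustration, not the production system from the video: it assumes a policy is a named predicate over a file's text plus a remediation hint, and that the agent receives the failure list as feedback.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    id: str
    check: Callable[[str], bool]  # returns True when the text complies
    hint: str

# Two made-up policies for illustration; real systems encode
# architecture- and codebase-specific patterns.
POLICIES = [
    Policy("P001", lambda text: "eval(" not in text,
           "eval() is banned; use explicit parsing instead"),
    Policy("P002", lambda text: not any(len(l) > 120 for l in text.splitlines()),
           "keep lines under 120 characters"),
]

def gate(changed_files: dict[str, str]) -> list[str]:
    """Run every policy against every changed file.

    An empty list means the change may land; otherwise the failures
    are returned to the agent, which must fix them before moving on.
    """
    failures = []
    for path, text in changed_files.items():
        for p in POLICIES:
            if not p.check(text):
                failures.append(f"{p.id} {path}: {p.hint}")
    return failures
```

The design point is that the gate is code, not a prompt: it runs after every agent edit and fails closed, so drift is caught at the moment it is introduced rather than discovered months later.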

Watch on YouTube →

Common questions

Before you sign up.

When does Track 01 open?
Final cut is in production. Beta enrollment opens first to anyone on the notify list. We'll send a date as soon as we have one.
Will the courses cost the same as the live workshop?
The on-demand pricing will be lower than the live cohort. Live cohorts include the working sessions and Q&A. Courses are pure curriculum.
Can my team buy seats together?
Yes. Team-license pricing is on the roadmap. Email us and we'll set up early access for your team.
Will it be just video?
Each module ships with the video, the worksheet templates, a printable summary, and a community thread for the cohort.

Two doors right now

Get notified or take the live version.

Template

90-Day QA Leverage Plan

Coming soon