diff --git a/content/posts/2025-03-04-plan-assign-build-retro/events-filtered.png b/content/posts/2025-03-04-plan-assign-build-retro/events-filtered.png
new file mode 100644
index 00000000..7e14c80e
Binary files /dev/null and b/content/posts/2025-03-04-plan-assign-build-retro/events-filtered.png differ
diff --git a/content/posts/2025-03-04-plan-assign-build-retro/index.md b/content/posts/2025-03-04-plan-assign-build-retro/index.md
new file mode 100644
index 00000000..9c799e51
--- /dev/null
+++ b/content/posts/2025-03-04-plan-assign-build-retro/index.md
@@ -0,0 +1,170 @@
+---
+title: "Plan, Assign, Build, Retro: A Replicable Workflow for AI-Augmented Development"
+date: 2025-03-04T12:00:00-05:00
+draft: false
+tags:
+ - ai
+ - development
+ - workflow
+ - claude
+ - svelte
+description: "I built a community events board from scratch, while making biang biang noodles. Here's the four-phase methodology that made it repeatable: structured planning, detailed tickets, supervised builds, and automated review."
+lastmod: 2025-03-05T04:22:48.537Z
+---
+
+{{< callout type="note" title="Originally published on the Infinity Interactive blog" >}}
+This post originally appeared on the [Infinity Interactive blog](https://iinteractive.com/resources/blog/plan-assign-build-retro-a-replicable-workflow-for-ai-augmented-development). Reprinted here with minor edits.
+{{< /callout >}}
+
+I spent 2 hours and 10 minutes actively supervising the construction of a complete web application. Twenty-one tickets, 53 story points, 21 pull requests merged, roughly 8,500 lines of code. During one epic I was in the kitchen making biang biang noodles from scratch. During another I was helping my daughter with history homework. My total hands-on-keyboard time for the build phase was about what most developers spend in a single standup cycle.
+
+
+_Biang biang noodles with chili oil and broccolini in a green ceramic bowl. The Epic 2 build session: 13 story points, 3 PRs, 10 minutes of my attention._
+
+But the numbers aren't what made it repeatable. The process is.
+
+Over the past year, I've been developing a methodology for AI-augmented web development across multiple client projects at Infinity Interactive. What started as "let me try having Claude write some code" evolved into a structured four-phase workflow that consistently compresses multi-month timelines into weeks of part-time work. The methodology got tighter with each project. By the most recent one, I'd stopped thinking of AI as a tool I used and started thinking of it as a team member I managed.
+
+To validate that the methodology was teachable, not just something that worked in my hands, I designed a training exercise for my team: a small community event board built from scratch using the full workflow. I ran through it myself first, documenting everything obsessively so my teammates could see exactly how the decisions got made. This post is what I learned.
+
+## How the Loop Works
+
+The workflow has four phases that repeat in a sprint cycle: Plan, Assign, Build, Retro.
+
+
+_The core loop: Plan → Assign → Build → Retro, with role labels showing who does what at each phase._
+
+In **Plan**, I'm having an architecture conversation with Claude Desktop. I describe what we're building, what the tech stack is, and what the constraints are. Claude asks questions I haven't thought of yet, then structures my thinking into documentation, epics, and detailed tickets. This phase typically takes 60 to 120 minutes and produces everything the build phase needs.
+
+In **Assign**, I sequence the tickets, set priorities, identify dependencies. This is 15 minutes of manual work. Infrastructure before features, data layer before UI.
+
+In **Build**, Claude Code (the CLI tool) does the implementation, ticket by ticket. I supervise. That means: kick off a ticket, let it work, check the diff when it's ready, approve the commit, approve the push, review the PR. Automated review agents catch what I miss. I merge when satisfied.
+
+In **Retro**, Claude Code writes the initial retrospective document from the sprint artifacts and conversation history. Then Claude Desktop reviews and expands it with additional context, cross-referencing the planning docs and build logs. I review the analysis, and Claude Desktop adjusts tickets and epics for upcoming sprints accordingly.
+
+The role division is the important part. I'm the architect who makes decisions. During planning, Claude is the stenographer who structures my thinking. During building, Claude Code is the developer who writes code. During retro, Claude is the analyst. At every stage, I approve every commit, every push, every merge. No exceptions.
+
+## What I Built
+
+The training project was a community event board for a fictional neighborhood association. Public-facing static site where residents can browse, search, and filter local events. Familiar domain, exercises the full workflow, mirrors real client patterns.
+
+I chose SvelteKit with Svelte 5, static adapter, MDsveX for markdown-based event content. Eighteen sample events with search-as-you-type, category filters, responsive design, accessibility compliance. The whole thing [deploys to GitHub Pages](https://iinteractive.github.io/community-board-eric/).
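
One markdown file per event is the whole content pipeline. As a sketch, here's what a hypothetical MDsveX event file might look like — the frontmatter field names and body copy are illustrative, not the project's actual schema:

```markdown
---
title: "Nightmare 5K Fun Run"
date: "2025-10-18"
category: "Sports"
location: "Elm Street Park"
---

Lace up for a moonlit loop through the neighborhood. Costumes encouraged.
```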
+
+
+_The Community Events page with the Social category filter active, showing 3 of 18 events. Search bar, category pills, and event cards with the Nightmare on Elm Street theme._
+
+The creative direction was a Nightmare on Elm Street theme, because the fictional client was the "Elm Street Community Association" and I couldn't resist. Dark palette, blood red accents, Freddy's sweater green as the secondary color, event titles like "Nightmare 5K Fun Run" and "Elm Street Séance & Social Hour." One of the hero section components has a barely-visible striped Easter egg at 6% opacity.
+
+Silly? Sure, but it's a quirk that worked. The strong creative direction forced the AI to make consistent aesthetic choices instead of defaulting to generic templates. Every event description, every color pairing, every piece of copy had a guiding sensibility to follow. Creative constraints helped here the same way they usually do: they gave the AI something specific to aim at rather than a vacuum to fill with safe defaults.
+
+## What 105 Minutes of Planning Produced
+
+Here's where most people's mental model of "AI coding" breaks down. They imagine typing a prompt and getting a project back. The reality is that the build phase was easy _because_ I spent 105 minutes planning before a single line of code existed.
+
+The planning conversation was a real conversation, not a prompt. I described the architecture. Claude asked whether search should be client-side or server-side. I said client-side, it's a static site with 20 events, no need for a server round-trip. Claude asked about the content pipeline. I said MDsveX, one markdown file per event, frontmatter for metadata. Claude asked about the data model. I described the fields. It structured all of this into an epic plan with dependency chains and story point estimates.
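
To make the client-side search decision concrete: at this scale, search plus category filtering can be a single pure function over the loaded events. This is a sketch with an illustrative `EventItem` shape and function name, not the project's actual code:

```typescript
// Illustrative event shape; the real frontmatter has more fields than this.
interface EventItem {
  title: string;
  description: string;
  category: string;
}

// Case-insensitive substring match across title and description,
// optionally narrowed by an active category filter.
function filterEvents(
  events: EventItem[],
  query: string,
  category: string | null = null
): EventItem[] {
  const q = query.trim().toLowerCase();
  return events.filter((e) => {
    const inCategory = category === null || e.category === category;
    const matchesQuery =
      q === '' ||
      e.title.toLowerCase().includes(q) ||
      e.description.toLowerCase().includes(q);
    return inCategory && matchesQuery;
  });
}
```

With roughly 20 events, this can run on every keystroke with no debouncing or search index, which is the whole argument for skipping the server round-trip.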
+
+A note on story points, since they're controversial even in conventional development: I'm not using them to estimate time. In this workflow, time-to-implement is almost meaningless as a planning metric. I use them as a shorthand for how complex the _ask_ is — how much architectural thinking the ticket requires, how carefully I'll need to review the output, how many moving parts are involved. A 2-point ticket means I'll glance at the diff and approve. A 5-point ticket means I'm reading every line and probably testing in the browser. They're a gauge for my review effort, not the AI's build effort.
+
+That conversation produced three reference documents: a Technical Architecture spec (stack decisions, data flow, component hierarchy), a Design Brief (color theory, typography rationale, mood references, component-level direction), and an Event Content Guide (character names, tone, category definitions). Each document was detailed enough that a specialized AI agent could work from it independently, without needing me to re-explain context.
+
+Then we wrote tickets. Twenty-one of them across five epics, each with detailed acceptance criteria. A good ticket for this workflow reads like a spec: exact field names, expected sort behavior, specific responsive breakpoints, accessibility requirements. This matters because the ticket becomes the prompt. When I hand a ticket to Claude Code, it should contain everything needed to complete the work without me re-explaining the architecture.
+
+
+_Jira ticket CBERIC-22 showing the SearchBar component spec: acceptance criteria with file paths, styling requirements, keyboard behavior, and a Technical Context section with code references and cross-ticket dependencies. Story Points: 2._
+
+Here's a concrete example. The acceptance criteria for the EventCard component included: full-card link wrapped in `<a>`, `<article>` landmark with `aria-labelledby`, metadata rendered as `