AIDD Framework

Deep Dive into AI-Driven Development

Lecture 2

Mastering the six-phase workflow for production-ready AI coding

Why We Need AIDD

The gap between AI capability and code quality

Without Methodology

  • Code works initially, breaks at scale
  • Duplicate logic scattered everywhere
  • AI forgets context mid-project
  • No test coverage
  • Security vulnerabilities undetected

With AIDD

  • Production-ready from day one
  • Single source of truth enforced
  • Vision document maintains context
  • TDD guarantees correctness
  • Review phase catches issues early

Key insight: AI is a powerful tool, but tools need processes. AIDD is that process.

The AIDD Workflow

1. Discover: Map journeys
2. Plan: Create tasks
3. Review: Simplify
4. Execute: TDD build
5. Commit: Push code
6. Test: Validate

Planning Phase

Discover + Plan + Review
Goal: Define exactly what to build

Building Phase

Execute with TDD
Goal: Build with tests first

Validation Phase

Commit + Test
Goal: Verify it works

The Vision Document: Your Project's North Star

Every AIDD project starts with vision.md

Project Overview

What we're building and why it matters

Target Audience

Who uses this and their needs

Primary Goals

Must-achieve objectives

Non-Goals

Explicit scope boundaries

Technical Stack

Languages, frameworks, services

Architecture Decisions

Key choices with reasoning

Constraints

Performance, security, compliance

Success Metrics

How we measure achievement

Why it matters: AI consults this before every task, preventing context drift.
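
For reference, a minimal vision.md skeleton covering these eight sections might look like this (the headings mirror the list above; the placeholder content is illustrative):

```markdown
# Vision: [Project Name]

## Project Overview
One paragraph on what we're building and why it matters.

## Target Audience
- Who uses this, and what they need from it

## Primary Goals
- [ ] Must-achieve objective 1

## Non-Goals
- Explicitly out of scope: ...

## Technical Stack
- Languages, frameworks, services

## Architecture Decisions
- Key choice + one-line reasoning

## Constraints
- Performance, security, compliance requirements

## Success Metrics
- Measurable targets, e.g. "p95 latency under 200 ms"
```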

Phase 1: Discover

Discover - Map User Journeys
Read vision.md first. Map user journeys for [feature name]:

1. Personas - Who are the users? What are their roles?
2. Goals - What do they want to achieve?
3. Entry Points - Where do they start?
4. User Flows - What steps do they take?
5. Success Criteria - How do they know it worked?
6. Edge Cases - What could go wrong?

Output as a structured story map.

User Personas

Define distinct user types and their characteristics

Story Mapping

Visual representation of user activities

Edge Cases

What happens when things go wrong?

Out of Scope

Explicitly define what we won't build
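
To make the expected output concrete, here is a sketch of a structured story map for a hypothetical password-reset feature (the persona, steps, and scope are invented for illustration):

Feature: Password Reset
Persona: Registered user who forgot their password
Goal: Regain account access without contacting support
Entry point: "Forgot password?" link on the login page
Flow: submit email → receive time-limited reset link → set new password
Success criteria: user can log in with the new password
Edge cases: unregistered email (show a generic message, don't leak accounts), expired or reused link
Out of scope: SMS-based reset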

Phase 2: Plan

Plan - Create Task Epics
Based on the user journeys for [feature], create task epics. For each epic:

1. Title - Clear, actionable name
2. User Story - "As a [user], I want [action] so that [benefit]"
3. Acceptance Criteria - Testable conditions (checkboxes)
4. Dependencies - What must exist first?
5. Complexity - S / M / L estimate

Group related tasks. Order by dependency.

Example Epic

Epic: User Registration
Story: As a visitor, I want to create an account so I can access the app
Criteria:
  [ ] Email validated
  [ ] Password hashed
  [ ] Confirmation sent
  [ ] Error handling
Dependencies: Database schema, Email service
Size: M
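
If you want epics to be machine-readable (for tooling, or for feeding back to the AI verbatim), one possible encoding is a small TypeScript type. The shape below is an illustration, not a prescribed AIDD format:

```typescript
// Illustrative shape for a task epic; the field names are assumptions,
// not part of any official AIDD specification.
type Complexity = "S" | "M" | "L";

interface Epic {
  title: string;
  userStory: string;            // "As a [user], I want [action] so that [benefit]"
  acceptanceCriteria: string[]; // each entry should be independently testable
  dependencies: string[];       // what must exist before this epic starts
  complexity: Complexity;
}

const userRegistration: Epic = {
  title: "User Registration",
  userStory: "As a visitor, I want to create an account so I can access the app",
  acceptanceCriteria: [
    "Email validated",
    "Password hashed",
    "Confirmation sent",
    "Error handling",
  ],
  dependencies: ["Database schema", "Email service"],
  complexity: "M",
};
```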

Phase 3: Review

Review - Simplify & Validate
Review the task epics for [feature]. Check for:

1. Duplication - Are any requirements repeated across tasks?
2. Coupling - Are tasks too interdependent?
3. Vision Alignment - Does this match vision.md goals & non-goals?
4. Missing Cases - Are error states and edge cases covered?
5. Over-engineering - Can anything be simplified?
6. Security - Are there obvious vulnerabilities?

Suggest improvements and flag concerns.

Common Issues

  • Duplicate validation logic (see the sketch below)
  • Missing error states
  • Scope creep (non-goals)

After Review

  • Tasks are independent
  • Single source of truth
  • Clear test criteria
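
As a concrete illustration of the first common issue, duplicate validation logic is usually fixed by extracting one shared validator. The sketch below assumes a TypeScript codebase in which registration and profile-update code each carried their own email check:

```typescript
// validators.ts - the single source of truth for email validation.
// The regex is deliberately simplified for illustration; real projects
// often delegate to a vetted library instead.
export function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

// registration.ts and profile.ts now both import isValidEmail
// rather than re-implementing the check locally.
```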

Phase 4: Execute

Execute - TDD Implementation
Implement [task name] using strict TDD.

Step 1: Write a failing test for the first acceptance criterion
Step 2: Write minimal code to make the test pass
Step 3: Refactor if needed (keep tests green)
Step 4: Repeat for next acceptance criterion

RULES:
- Do NOT implement multiple requirements at once
- Show me the test FIRST, wait for approval, then implement
- Each commit should be one red-green-refactor cycle

The TDD Cycle

RED: Write failing test → GREEN: Write minimal code to pass → REFACTOR: Clean up while keeping green
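
As a concrete example, one red-green cycle for the "Email validated" criterion from the earlier epic might look like this with Jest (file names are illustrative; the two files are shown in a single block for brevity):

```typescript
// --- validators.test.ts (RED: written first; fails until the code exists) ---
import { isValidEmail } from "./validators";

test("rejects an email without an @ sign", () => {
  expect(isValidEmail("not-an-email")).toBe(false);
});

test("accepts a well-formed email", () => {
  expect(isValidEmail("user@example.com")).toBe(true);
});

// --- validators.ts (GREEN: the minimal code that makes both tests pass;
// the same shared validator sketched in the Review phase above) ---
export function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}
```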

Phase 5: Commit

Commit - Push Clean Code

Commit Guidelines

  • All tests must pass before commit
  • One logical change per commit
  • Descriptive commit messages
  • Reference task/epic in message
  • No commented-out code

Pre-Commit Checks

  • Linting (ESLint, Prettier, etc.)
  • Type checking (TypeScript)
  • Unit tests (Jest, pytest)
  • Security scanning (Snyk, etc.)
  • Code coverage thresholds (see the config sketch below)

Prepare commit for [completed task]:

1. Run all tests and verify they pass
2. Check code against linting rules
3. Write commit message following format: type(scope): description

Example: feat(auth): add user registration endpoint
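
Coverage thresholds in particular can be enforced automatically. For a Jest-based project, that is one coverageThreshold block in the config; the 80% figures below are an arbitrary example, not an AIDD requirement:

```typescript
// jest.config.ts - fail the test run (and therefore the pre-commit check)
// whenever coverage drops below the configured thresholds.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};

export default config;
```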

Phase 6: Test

Test - Human + Automated Validation

Human Testing

Think-Aloud Protocol:

  • User verbalizes thoughts while testing
  • Reveals usability issues
  • Catches UX problems AI misses
  • 3-5 users find 65-85% of issues

Automated Testing

E2E Test Scripts:

  • Playwright/Cypress for UI tests (see the sketch below)
  • API integration tests
  • Performance benchmarks
  • Screenshot comparisons

Generate test scripts for [feature]:

1. Human Test Script: Step-by-step tasks, questions to ask, what to observe
2. Automated E2E Tests: Happy path + edge cases with assertions
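
For the automated half, a minimal Playwright happy-path test for the registration epic might look like the sketch below (the URL, labels, and expected text are placeholders for whatever the real app exposes):

```typescript
// registration.e2e.ts - happy-path E2E test for user registration.
import { test, expect } from "@playwright/test";

test("visitor can register with a valid email", async ({ page }) => {
  await page.goto("http://localhost:3000/register");

  await page.getByLabel("Email").fill("user@example.com");
  await page.getByLabel("Password").fill("correct-horse-battery");
  await page.getByRole("button", { name: "Create account" }).click();

  // Maps to the epic's "Confirmation sent" acceptance criterion.
  await expect(page.getByText("Check your email to confirm")).toBeVisible();
});
```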

The AIDD Iteration Loop

Continuous refinement, not one-time execution

After Testing

Issues found? Go back to Discover for new requirements or Execute for fixes

Scope Changes

New requirements? Update vision.md, then restart from Discover

Bug Reports

Create new epic, Review for impact, Execute with TDD fix

Refactoring

Review existing code, Execute improvements (tests stay green)

Key principle: The workflow is iterative. Each loop improves quality and understanding.

Choosing the Right AI Tool for Each Phase

| Phase    | Best Tools              | Why                                        |
| -------- | ----------------------- | ------------------------------------------ |
| Discover | Claude, ChatGPT         | Strong reasoning for requirements analysis |
| Plan     | Claude, Cursor          | Task decomposition, codebase awareness     |
| Review   | Claude Code, Amazon Q   | Code analysis, security scanning           |
| Execute  | Claude Code, Cursor     | Multi-file edits, TDD support              |
| Commit   | GitHub Copilot          | Commit message generation                  |
| Test     | Claude Code, Playwright | Test script generation                     |

Pro tip: Use Claude Sonnet 4.5 or higher for complex reasoning tasks.

Common AIDD Mistakes to Avoid

Skipping Discover

"Just build it" leads to rework. Always map requirements first.

No Vision Document

Without it, AI loses context; vision.md is the source of truth.

Batching Tests

Writing tests after code defeats TDD's purpose.

Skipping Review

Duplicate code and coupling go unnoticed.

No Human Testing

AI can't catch UX issues. Users must validate.

Giant Commits

Hard to review, hard to revert. Small commits win.

AIDD Best Practices

1. Start with Vision

Create vision.md before writing any code. Update it when requirements change.

2. One Task at a Time

Complete one epic fully before starting the next. Avoid context switching.

3. Test First, Always

Write the failing test, show it to AI, then implement. Never batch.

4. Review Everything

AI makes mistakes. Review phase catches them before they become tech debt.

The goal isn't to write code faster. It's to write better code, faster.

AIDD Framework Summary

6 - Phases: Discover, Plan, Review, Execute, Commit, Test

1 - Vision document as the single source of truth

TDD - Test-Driven Development at the core of Execute

1. Discover → 2. Plan → 3. Review → 4. Execute → 5. Commit → 6. Test

Questions?

Deep Dive into AIDD Framework

Next: Effective Prompt Engineering
