# Your 90-Day Action Plan
Three phases. Twenty-three milestones. One quarter to transform your team.
Every milestone below maps to a concrete action you can take this week. Work through them in order or jump around. When you're ready to make the case, copy the plan and send it to your CTO.
## Phase 1: Foundation (Days 1-30)
- Audit test coverage and identify critical paths with less than 80% coverage
- Set up a project instruction file (called CLAUDE.md) that teaches the AI agent your codebase's rules and conventions (see the starter sketch after this list)
- Choose one senior developer as the AI pilot
- Install and configure agentic tools (Claude Code, Cursor, etc.)
- Pre-approve routine operations (running tests, checking code style) so the agent doesn't stop and ask permission for every small step
- Pick one well-defined feature for the first AI-assisted build
- Measure baseline: how long does a typical feature take today?
- Establish AI code review guidelines (what to look for in AI output)
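The instruction file from the second item is plain markdown that the agent reads at the start of every session. Here is a minimal starter sketch; the section names and rules are illustrative (the commands assume a Node project), so write whatever your codebase actually needs:

```markdown
# Project conventions

## Commands
- Run tests: `npm test`
- Lint: `npm run lint`

## Rules
- New code needs unit tests next to the source file.
- Never edit generated files under `dist/`.
- Follow the existing error-handling helpers; don't throw raw strings.

## Gotchas
- The dev server is usually already running on port 3000; don't start another.
```

Keep it short: the agent re-reads this file constantly, so every line should earn its place.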
## Phase 2: Expansion (Days 31-60)
- Roll out agentic tools to the full development team
- Train team on effective prompting and agent oversight
- Integrate AI code review into your PR process
- Set up automated security scanning for AI-generated code
- Set up automatic quality gates that check the agent's code for errors and style violations on every edit (see the gate-runner sketch after this list)
- Start tracking AI-assisted vs. manual feature completion times
- Identify codebase gaps that slow down AI (missing tests, types, docs)
- Begin filling those gaps as part of regular sprint work
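One way to build the quality gates above is a single script that runs your checks and fails loudly, wired in wherever your setup allows: as a post-edit hook if your agent supports hooks, or as a CI step on every push. A minimal Python sketch; the `ruff` and `pytest` commands are assumptions for a Python stack, so substitute your own tools:

```python
import subprocess
import sys

# Each gate is a (name, command) pair. The commands assume a Python
# project using ruff and pytest -- swap in your stack's linter,
# type checker, test runner, security scanner, etc.
GATES = [
    ("lint", ["ruff", "check", "."]),
    ("tests", ["pytest", "-q"]),
]

def run_gates() -> int:
    for name, cmd in GATES:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            # Surface the full output so the agent (or the CI log)
            # sees exactly what to fix, then stop at the first failure.
            print(f"GATE FAILED: {name}\n{result.stdout}{result.stderr}")
            return result.returncode
        print(f"gate passed: {name}")
    return 0

if __name__ == "__main__":
    sys.exit(run_gates())
```

The exit code is the point: both agents and CI treat a nonzero exit as "stop and fix," which is exactly the behavior you want after every edit.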
## Phase 3: Optimization (Days 61-90)
- Train senior devs on running parallel AI sessions (3-5 concurrent)
- Define project-specific AI conventions (what patterns to follow, what to avoid)
- Set up automated AI-assisted security reviews
- Redesign sprint planning to account for AI-boosted velocity
- Document your team's AI playbook (what works, what doesn't)
- Calculate ROI: effective output vs. tool costs vs. baseline (worked example after this list)
- Plan next quarter's roadmap based on new velocity
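For the ROI line item, the arithmetic is worth keeping in a script so the comparison stays consistent quarter over quarter. A sketch of one way to frame it (velocity gain converted to equivalent extra headcount); every number below is a placeholder to replace with your own measurements:

```python
# All figures are placeholders -- substitute your team's data.
team_size = 6
loaded_cost_per_dev_month = 15_000   # salary + overhead, USD
tool_cost_per_dev_month = 200        # agent subscriptions + API spend

baseline_features_per_month = 10     # your Phase 1 baseline
current_features_per_month = 16      # your Phase 2-3 tracking data

velocity_gain = current_features_per_month / baseline_features_per_month
# Express the gain as equivalent extra developers, then price it.
equivalent_extra_devs = (velocity_gain - 1) * team_size
value_per_month = equivalent_extra_devs * loaded_cost_per_dev_month
cost_per_month = team_size * tool_cost_per_dev_month

print(f"velocity gain: {velocity_gain:.2f}x")
print(f"value/month: ${value_per_month:,.0f} vs. cost/month: ${cost_per_month:,.0f}")
print(f"return multiple: {value_per_month / cost_per_month:.1f}x")
```

With these placeholders the tooling pays for itself many times over; the real question your CTO will ask is whether the velocity numbers hold up, which is why the Phase 1 baseline matters.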
## Sprint Planning — AI-Augmented Workflow
### Story Classification
Tag each story before estimation:
**AI-SOLO** - Agent can complete with minimal oversight
Examples: CRUD endpoints, unit tests, data migrations,
boilerplate components, documentation updates
Estimate: 10-20% of pre-AI estimate
**AI-ASSIST** - Developer leads, agent accelerates
Examples: New features, refactoring, integrations,
bug fixes with clear reproduction steps
Estimate: 30-50% of pre-AI estimate
Note: Open-ended or greenfield work fits here, but
requires detailed PRDs/specs upfront. The harder the
problem, the more the effort shifts from coding to
specifying. Budget time for spec writing accordingly.
**HUMAN-ONLY** - Requires human judgment throughout
Examples: Architecture decisions, security-critical code,
performance optimization, vendor evaluations
Estimate: 80-100% of pre-AI estimate
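These tiers reduce to multipliers on your pre-AI estimates, which makes them easy to fold into planning. A sketch using the midpoint of each range above; the tags and percentages come from this section, while the backlog entries are illustrative:

```python
# Midpoints of the estimate ranges defined above, as fractions
# of the pre-AI estimate.
MULTIPLIERS = {
    "AI-SOLO": 0.15,     # 10-20% of pre-AI estimate
    "AI-ASSIST": 0.40,   # 30-50%
    "HUMAN-ONLY": 0.90,  # 80-100%
}

def adjusted_estimate(pre_ai_points: float, tag: str) -> float:
    """Scale a pre-AI story estimate by its classification tag."""
    return pre_ai_points * MULTIPLIERS[tag]

# Illustrative backlog: (story, tag, pre-AI points).
backlog = [
    ("CRUD endpoints for invoices", "AI-SOLO", 5),
    ("Payment provider integration", "AI-ASSIST", 8),
    ("Auth architecture decision", "HUMAN-ONLY", 5),
]

adjusted = sum(adjusted_estimate(pts, tag) for _, tag, pts in backlog)
original = sum(pts for _, _, pts in backlog)
print(f"adjusted sprint load: {adjusted:.1f} points (pre-AI: {original})")
```

Use the low end of each range until a few sprints of tracking data tell you otherwise.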
### Capacity Planning Adjustments
- Senior dev with AI: 2-3x previous velocity on AI-SOLO/ASSIST
- Junior dev with AI: 1.2-1.5x (still learning to review)
- AI-SOLO stories can run in parallel (batch 3-5 per dev)
- Budget 20% of sprint for AI infrastructure improvements
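Put together, the adjustments above turn capacity planning into a short calculation. A sketch using the conservative end of each multiplier range; the headcounts and base velocity are placeholders:

```python
# Placeholders -- substitute your team's numbers.
base_velocity_per_dev = 10        # pre-AI points per sprint
seniors, juniors = 3, 2
AI_INFRA_RESERVE = 0.20           # 20% of the sprint for AI infrastructure

senior_capacity = seniors * base_velocity_per_dev * 2.0   # low end of 2-3x
junior_capacity = juniors * base_velocity_per_dev * 1.2   # low end of 1.2-1.5x

raw = senior_capacity + junior_capacity
plannable = raw * (1 - AI_INFRA_RESERVE)
print(f"raw capacity: {raw:.0f} points, plannable: {plannable:.0f}")
```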
### Sprint Review Additions
- Track AI-assisted vs. manual completion times (see the logging sketch below)
- Review AI-generated code quality (bugs found post-merge)
- Update CLAUDE.md with new patterns discovered
(e.g., if the agent keeps starting the dev server when it's
already running, add one line to CLAUDE.md and it never
happens again. Encode the fix, not the complaint.)
- Share effective prompts across the team
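For the tracking item above (and its Phase 2 counterpart), even a flat CSV beats memory when sprint review comes around. A minimal logging sketch; the file name and fields are illustrative:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("feature_times.csv")  # illustrative location

def log_completion(story: str, tag: str, hours: float, ai_assisted: bool) -> None:
    """Append one completed story; write the header on first use."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "story", "tag", "hours", "ai_assisted"])
        writer.writerow([date.today().isoformat(), story, tag, hours, ai_assisted])

def summary() -> None:
    """Mean completion hours, AI-assisted vs. manual."""
    with LOG.open() as f:
        rows = list(csv.DictReader(f))
    for flag, label in (("True", "AI-assisted"), ("False", "manual")):
        hours = [float(r["hours"]) for r in rows if r["ai_assisted"] == flag]
        if hours:
            print(f"{label}: {sum(hours)/len(hours):.1f}h mean, {len(hours)} stories")
```

A spreadsheet works just as well; the point is to capture the tag and the hours at completion time, not to reconstruct them at review.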
Most teams can work through Phase 1 on their own. Where it gets tricky is the codebase-specific stuff: which tests to write first, how to structure the CLAUDE.md for your architecture, what conventions the agent needs to be productive. That's where having someone who's done it before saves you weeks of false starts.