What Your Team Needs to Change
Your Codebase Might Not Be Ready
Agentic coding doesn't just work on any codebase. I've assessed teams where the codebase simply wasn't ready: no tests, no typing, no conventions. In those cases, I start by creating basic CLAUDE.md files and setting up automated PR reviews and security scanning with Claude. These entry points help the team benefit from AI agents without requiring developers to change their entire workflow overnight. The investment you make in code quality directly multiplies your AI productivity.
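A minimal CLAUDE.md might look like the sketch below. The project name, commands, and directory layout are illustrative, not a prescribed template; the point is to give the agent the commands it can run and the conventions it must follow.

```markdown
# CLAUDE.md

## Project overview
Payment-processing API in TypeScript (Express + Postgres).  <!-- illustrative stack -->

## Commands
- `npm test` — run the test suite (required after every change)
- `npm run lint` — lint and format check
- `npm run build` — type-check and compile

## Conventions
- Every new code path needs a test
- Database access goes through `src/repositories/`; never query from controllers
- No hardcoded secrets; read configuration from environment variables
```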
Creator of XP/TDD · Nov 2025

Why Good Practices Are Now Infrastructure
Before AI agents, test coverage was a best practice. Now it’s infrastructure. An agent that can run tests after every change catches its own mistakes and iterates to a correct solution. Without tests, the agent writes code that looks right but may not work, and you won’t know until production.
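A concrete sketch of what that safety net looks like: a small function with its edge cases pinned down by tests the agent can rerun after every change. The function and values here are invented for illustration.

```python
def apply_discount(total: float, percent: float) -> float:
    """Apply a percentage discount, rejecting values outside 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError(f"discount must be 0-100, got {percent}")
    return round(total * (1 - percent / 100), 2)

def test_apply_discount():
    # Edge cases an agent would otherwise get "plausibly" wrong.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(100.0, 0) == 100.0    # no discount
    assert apply_discount(100.0, 100) == 0.0    # full discount
    try:
        apply_discount(100.0, 150)              # invalid input must raise
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount()
```

An agent that runs this suite after each edit catches a broken boundary check immediately, instead of shipping code that merely looks correct.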
Code Review Needs Redesign
When AI generates 70-80% of the code, your review process needs to change. Developers shift from reviewing each other’s line-by-line changes to evaluating AI output for correctness, security, and architectural fit. That’s a different skill, and your team needs to develop it intentionally.
Security Is Non-Negotiable
AI-generated code needs both automated security scanning and human oversight. Agents can inadvertently introduce vulnerabilities that pass tests but create attack surfaces. Automated security scanning (SAST for static analysis of source code, DAST for dynamic testing of running apps; tools like Snyk, SonarQube, and ZAP) plus human security review create the right safety net.
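One way to wire SAST into every pull request is a CI workflow like the sketch below, shown here with GitHub Actions and CodeQL. The filename and language matrix are assumptions; swap in Snyk, Semgrep, or SonarQube the same way.

```yaml
# .github/workflows/security.yml — illustrative; adapt to your stack
name: security-scan
on: [pull_request]

jobs:
  sast:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript   # match your codebase
      - uses: github/codeql-action/analyze@v3
```

Running the scan on every PR means AI-generated changes get the same automated scrutiny as human ones before any reviewer looks at them.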
Skill Composition Creates Emergent Capabilities
When you combine strong typing, comprehensive tests, clear architecture docs, and fast CI, the agent becomes dramatically more capable than with any single practice. A well-prepared codebase makes the agent 5-10x more effective than a messy one.
Is Your Codebase AI-Ready?
- [ ] Test coverage above 80% on critical paths
- [ ] Type system in use (TypeScript, Python type hints, etc.)
- [ ] Automated build-and-test pipeline (CI) runs in under 10 minutes
- [ ] Code follows consistent naming conventions
- [ ] Architecture is documented (even briefly)
- [ ] Code style checking and formatting are automated
- [ ] Dependencies are up to date (within 6 months)
- [ ] No hardcoded secrets in the repository
- [ ] Clear separation of concerns (not a monolithic tangle)
- [ ] API contracts are well-defined
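The "no hardcoded secrets" item is one of the easiest to automate. A toy sketch of the idea, assuming a few hand-picked regex patterns; real scanners such as gitleaks or trufflehog ship far larger rule sets and should be preferred in practice.

```python
import re

# Illustrative patterns only -- not a complete rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(text: str) -> list[str]:
    """Return substrings that look like hardcoded secrets."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

# Usage: scan a source snippet before it lands in the repo.
snippet = 'db_password = "hunter2hunter2"\nregion = "us-east-1"'
print(find_secrets(snippet))
```

Hooking a check like this into a pre-commit hook or CI step stops the most obvious leaks before review even starts.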
## PR Review Checklist — AI-Generated Code
### Correctness
- [ ] Does the code actually solve the stated problem?
- [ ] Are edge cases handled (null, empty, overflow)?
- [ ] Do all tests pass, including new ones?
- [ ] Is the logic correct, not just plausible-looking?
### Security
- [ ] No hardcoded secrets or credentials
- [ ] User input is validated and sanitized
- [ ] No SQL injection, XSS, or CSRF vulnerabilities
- [ ] Authentication/authorization checks in place
- [ ] SAST scan passes clean
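The SQL injection item is worth showing concretely, since it is a mistake AI-generated code still makes. A minimal sketch using Python's built-in sqlite3 with an invented schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"   # classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query.
# rows = conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()

# Safe: a parameterized query treats the payload as a literal value.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)   # [] -- the payload matches no real user
```

In review, the tell is any query built with f-strings or concatenation; parameter placeholders are what you want to see.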
### Architecture
- [ ] Follows existing patterns in the codebase
- [ ] No unnecessary abstractions or over-engineering
- [ ] Changes are in the right layer (service/controller/model)
- [ ] No circular dependencies introduced
### Performance
- [ ] No N+1 queries or unbounded loops
- [ ] Database queries are indexed appropriately
- [ ] No memory leaks (event listeners, subscriptions cleaned up)
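N+1 queries are worth a concrete illustration, because generated code often loops over one query's results and fires another query per row. A sketch with an invented two-table schema in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'intro'), (2, 1, 'update'), (3, 2, 'hello');
""")

# N+1 pattern: one query for authors, then one query *per author* for posts.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
for author_id, _name in authors:
    conn.execute("SELECT title FROM posts WHERE author_id = ?",
                 (author_id,)).fetchall()

# Fix: a single JOIN fetches the same data in one round trip.
rows = conn.execute(
    "SELECT a.name, p.title FROM authors a JOIN posts p ON p.author_id = a.id"
).fetchall()
print(len(rows))  # 3
```

With 2 authors the loop is harmless; with 10,000 it is 10,001 round trips, which is exactly what this checklist item exists to catch.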
### Maintainability
- [ ] Code is readable without AI-generated comments
- [ ] No dead code or unused imports
- [ ] Variable names are meaningful, not generic
- [ ] Complex logic has tests, not just comments
Scored lower than you'd like? Most teams do. Scored high? The gaps that matter most are the ones a checklist can't catch. My free AI Readiness Audit covers your actual codebase and shows you exactly what to fix first.
Book your free audit →