TechLead
Lesson 21 of 25
AI-Native Engineering

AI-Native Team Practices

Build an AI-native engineering culture — from shared CLAUDE.md standards and AI tool policies to hiring for AI literacy and measuring team AI maturity

From Individual to Team AI Adoption

Individual AI productivity is powerful. Team-level AI adoption is transformational. When one engineer uses Claude Code well, they can be 5-10x more productive on suitable tasks. When the entire team uses AI tools with shared standards and practices, the compound effect changes what the team can deliver. But team adoption requires intentional practices: you cannot just tell everyone to "use AI" and expect consistent results.

The Four Pillars of Team AI Adoption

  • Standards: Shared CLAUDE.md, .cursorrules, and conventions that every team member follows
  • Policies: Clear rules about what AI tools can and cannot do (security, code ownership, review requirements)
  • Knowledge Sharing: Regular sharing of effective prompts, workflows, and tips across the team
  • Measurement: Tracking AI adoption and its impact on team output and quality

Creating Shared CLAUDE.md Standards

# Team CLAUDE.md strategy:

# 1. Root CLAUDE.md (committed to repo — team-wide)
# Contains: tech stack, architecture, coding conventions,
# testing requirements, deployment procedures
# Maintained by: tech lead or rotating owner
# Review cadence: updated with every architecture change

# 2. Per-package CLAUDE.md (for monorepos)
# Contains: package-specific conventions and gotchas
# Maintained by: package owners

# 3. Personal .claude/CLAUDE.md (gitignored)
# Contains: individual preferences, editor settings
# Each engineer maintains their own

# Team agreement template:
# "Our CLAUDE.md is a living document. When you add a new
#  convention, update CLAUDE.md in the same PR. When Claude
#  makes a mistake due to missing instructions, add the
#  instruction to CLAUDE.md so it does not happen again."
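As an illustration, a root CLAUDE.md following this strategy might look like the sketch below. The stack, commands, and paths are placeholders for your team's actual conventions, not recommendations:

```markdown
# CLAUDE.md (root — committed, team-wide)

## Tech stack
- TypeScript, React, Node 20, PostgreSQL

## Coding conventions
- Named exports only; no default exports
- Every new module ships with tests; reuse helpers from app/test/utils.ts

## Testing requirements
- Run `npm test` before committing
- Components: test rendering, user interactions, edge cases, error states

## Deployment
- Merges to main deploy automatically; never push directly to main
```

Keeping this file short and concrete matters: the point is to encode the conventions Claude keeps getting wrong, not to duplicate your full engineering handbook.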

AI Tool Policies

| Policy Area | Recommended Policy | Rationale |
| --- | --- | --- |
| Approved Tools | Claude Code, Cursor, GitHub Copilot (approved list) | Consistent experience, security vetting, license compliance |
| Code Ownership | Human reviewer owns all AI-generated code | Accountability, quality standards |
| Review Requirements | AI code gets the same review standard as human code | Quality consistency, trust building |
| Secrets | Never paste credentials, .env files, or PII into AI tools | Security: data sent to AI APIs may be logged |
| Sensitive Code | Auth, payments, and PII handling require human-written or human-verified code | Higher scrutiny for security-critical paths |
| Dependencies | AI-suggested dependencies must be vetted before installation | Supply chain security |
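The "Secrets" policy is easiest to enforce with automation rather than memory. A minimal pre-commit sketch is below; the regexes are illustrative only, and a real setup would lean on a dedicated scanner such as gitleaks or truffleHog:

```python
import re

# Illustrative patterns only — production scanners ship far larger rule sets.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "Env assignment": re.compile(r"(?im)^(API_KEY|SECRET|PASSWORD|TOKEN)\s*=\s*\S+"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of secret patterns found in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    # In a pre-commit hook, `text` would come from `git diff --cached`.
    print(scan_text("API_KEY=sk-live-123\nprint('hello')\n"))
```

Wiring this into a pre-commit hook (or CI) means a pasted credential is caught before it ever reaches an AI tool or the repository history.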

Onboarding Engineers to AI-Native Workflows

## AI-Native Onboarding Checklist (Week 1)

### Day 1: Tool Setup
- [ ] Install Claude Code (npm install -g @anthropic-ai/claude-code)
- [ ] Install Cursor and import VS Code settings
- [ ] Read the team CLAUDE.md and understand conventions
- [ ] Set up personal .claude/CLAUDE.md with preferences

### Day 2: First Tasks with AI
- [ ] Complete a bug fix using Claude Code
- [ ] Write tests for one module using Claude Code
- [ ] Do a Cmd+K inline edit in Cursor
- [ ] Review an AI-generated diff and provide feedback

### Day 3-4: Intermediate Workflows
- [ ] Implement a small feature using Claude Code plan mode
- [ ] Use Claude Code for a code review (/review-pr)
- [ ] Refactor a file using AI assistance
- [ ] Generate documentation for a module

### Day 5: Team Practices
- [ ] Share one effective prompt or tip with the team
- [ ] Read the team's "Effective Prompts" document
- [ ] Understand the team's AI tool policies
- [ ] Pair with a senior engineer on an AI-assisted task

### Ongoing (Weeks 2-4)
- [ ] Gradually increase task complexity
- [ ] Contribute improvements to CLAUDE.md
- [ ] Share learnings in the team's AI tips channel
- [ ] Build comfort with reviewing AI-generated diffs

Knowledge Sharing About Effective Prompts

# Create a shared "Effective Prompts" document or Slack channel

# Example entries:
# -------
# TASK: Write tests for a React component
# PROMPT: "Write comprehensive tests for [Component] using our
#   testing patterns from [ExistingTest.test.tsx]. Cover: rendering,
#   user interactions, edge cases, error states. Use our test
#   utilities from app/test/utils.ts."
# WHY IT WORKS: References existing patterns, specifies scope
# -------

# TASK: Debug a production error
# PROMPT: "[error message + stack trace]. This started after
#   [recent change]. Expected: [X]. Actual: [Y]. Check [specific
#   files] and trace the data flow."
# WHY IT WORKS: Provides complete context, narrows scope
# -------

# Run monthly "AI Tips" sessions where team members share
# their best prompts, workflows, and discoveries

Measuring Team AI Maturity

| Level | Description | Indicators |
| --- | --- | --- |
| Level 1: Curious | Some engineers experiment with AI tools individually | Sporadic use, no standards, inconsistent results |
| Level 2: Adopting | Team has chosen tools and started using them regularly | Approved tool list, basic CLAUDE.md, some shared prompts |
| Level 3: Integrated | AI tools are part of the daily workflow for most tasks | Comprehensive CLAUDE.md, AI review in CI, team prompt library |
| Level 4: Native | AI is embedded in every engineering process | Custom MCP servers, internal AI tools, AI in CI/CD, hiring for AI skills |
| Level 5: Multiplied | Team output is 5-10x its pre-AI baseline, with higher quality | Measurable throughput increase, lower defect rates, faster onboarding |
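One lightweight way to put a number behind these levels is to measure the share of commits that were AI-assisted. This assumes your tools leave a commit trailer (Claude Code, for example, adds a `Co-Authored-By: Claude` trailer by default); adjust the marker to whatever your tools actually emit. A sketch:

```python
AI_TRAILER = "Co-Authored-By: Claude"  # assumption — match your tools' actual trailer

def ai_commit_share(log: str) -> float:
    """Estimate the fraction of commits carrying the AI trailer.

    `log` is the output of: git log --format='%B%n--END--'
    (each commit body followed by a sentinel line).
    """
    commits = [c for c in log.split("--END--") if c.strip()]
    if not commits:
        return 0.0
    assisted = sum(1 for c in commits if AI_TRAILER in c)
    return assisted / len(commits)
```

Treat the result as one input among several: pair it with defect rates and review turnaround so adoption is not optimized in isolation from quality.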

AI Literacy as a Hiring Criterion

As AI becomes core to engineering workflows, AI literacy becomes a hiring criterion. This does not mean candidates need to be AI experts. It means they should: (1) be comfortable using AI tools in their workflow, (2) know how to review AI-generated code critically, (3) understand the limitations and risks of AI, (4) be adaptable as tools evolve. Add an AI-assisted coding exercise to your interview process — not to test AI skills specifically, but to see how candidates think with AI tools available.

Team AI Adoption Playbook

  • Month 1: Choose tools, write the team CLAUDE.md, create policies, run a kickoff workshop
  • Month 2: Everyone uses AI for at least one task daily; share prompts weekly; iterate on CLAUDE.md
  • Month 3: Add AI review to CI, build a first custom tool or MCP server, measure throughput
  • Month 4+: Continuous improvement — refine workflows, share advanced techniques, hire for AI literacy

The key is starting with small, daily usage and building from there.

Summary

Team-level AI adoption requires more than individual tool usage — it requires shared standards, clear policies, knowledge sharing, and measurement. Start with a team CLAUDE.md and approved tool list. Build onboarding checklists and prompt libraries. Measure maturity and iterate. The teams that adopt AI systematically will outperform those where AI usage is ad-hoc by a widening margin.
