TechLead
Lesson 20 of 30
5 min read
Project Management

Metrics, KPIs, and Dashboards

Master key project metrics such as velocity, DORA metrics, cycle time, and burndown charts, and build effective dashboards for tracking project health

Why Metrics Matter

"You cannot improve what you do not measure." — Peter Drucker. Metrics provide objective data to answer questions like: Are we on track? Are we getting faster? Is quality improving? Without metrics, decisions are based on gut feeling and the loudest voice in the room.

But beware: Goodhart's Law states "When a measure becomes a target, it ceases to be a good measure." Track metrics to understand, not to punish. Never use velocity to compare teams or evaluate individuals.

Key Project Metrics

| Metric | What It Measures | Target | Watch Out For |
|---|---|---|---|
| Velocity | Story points completed per sprint | Stable (not increasing) | Inflating points to look productive |
| Cycle Time | Time from work started to done | Decreasing trend | Large items skewing the average |
| Lead Time | Time from request to delivery | Decreasing trend | Backlog wait time not included |
| Throughput | Items completed per time period | Stable or increasing | Counting trivial items |
| Sprint Burndown | Remaining work vs. time in sprint | Smooth downward curve | Flat lines (no progress) or a cliff (everything done on the last day) |
| Escaped Defects | Bugs found in production | Near zero | Not testing in a production-like environment |
| Team Happiness | Team morale and satisfaction | >= 4/5 | Burnout signals in declining scores |
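
As a concrete illustration, cycle time can be computed directly from work-item records. This is a minimal sketch; the `WorkItem` shape and its field names are assumptions for illustration, not any specific tool's API.

```typescript
// Hypothetical work-item record; field names are illustrative.
interface WorkItem {
  startedAt: Date;   // when work actually began
  completedAt: Date; // when the item moved to done
  points: number;
}

// Cycle time in days for a single item: started -> done.
function cycleTimeDays(item: WorkItem): number {
  const ms = item.completedAt.getTime() - item.startedAt.getTime();
  return ms / (1000 * 60 * 60 * 24);
}

// Average cycle time across completed items. As the table notes, a few
// large items can skew this average, so teams often report the median
// or a high percentile alongside it.
function averageCycleTimeDays(items: WorkItem[]): number {
  if (items.length === 0) return 0;
  const total = items.reduce((sum, i) => sum + cycleTimeDays(i), 0);
  return total / items.length;
}
```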

DORA Metrics

The DORA (DevOps Research and Assessment) metrics are the gold standard for measuring software delivery performance. Research by Google's DORA team has shown these four metrics reliably predict organizational performance.

The Four DORA Metrics

  • Deployment Frequency: How often code is deployed to production. Elite teams deploy on-demand (multiple times per day).
  • Lead Time for Changes: Time from code commit to production. Elite teams: less than 1 hour.
  • Change Failure Rate: Percentage of deployments causing a failure. Elite teams: 0-15%.
  • Mean Time to Recovery (MTTR): Time to restore service after an incident. Elite teams: less than 1 hour.

// Metrics Dashboard Data Model
interface ProjectDashboard {
  projectId: string;
  sprintMetrics: SprintMetricsData;
  doraMetrics: DORAMetrics;
  qualityMetrics: QualityMetrics;
  teamMetrics: TeamMetrics;
}

interface SprintMetricsData {
  currentSprint: {
    number: number;
    goal: string;
    plannedPoints: number;
    completedPoints: number;
    daysRemaining: number;
    burndownData: { date: string; remaining: number; ideal: number }[];
  };
  velocityHistory: { sprint: number; planned: number; completed: number }[];
  averageVelocity: number;
  velocityTrend: 'increasing' | 'stable' | 'decreasing';
}

interface DORAMetrics {
  deploymentFrequency: {
    value: number;
    unit: 'per-day' | 'per-week' | 'per-month';
    rating: 'elite' | 'high' | 'medium' | 'low';
  };
  leadTimeForChanges: {
    value: number;
    unit: 'hours' | 'days' | 'weeks';
    rating: 'elite' | 'high' | 'medium' | 'low';
  };
  changeFailureRate: {
    value: number; // percentage
    rating: 'elite' | 'high' | 'medium' | 'low';
  };
  mttr: {
    value: number;
    unit: 'minutes' | 'hours' | 'days';
    rating: 'elite' | 'high' | 'medium' | 'low';
  };
}

interface QualityMetrics {
  codeCoverage: number;
  escapedDefects: number; // per release
  technicalDebtRatio: number; // percentage
  codeReviewTurnaround: number; // hours
  bugFixRate: number; // bugs fixed per sprint
  openBugs: { critical: number; high: number; medium: number; low: number };
}

interface TeamMetrics {
  happinessScore: number; // 1-5
  focusTime: number; // hours per day of uninterrupted work
  meetingLoad: number; // hours per week in meetings
  onCallBurden: number; // hours per week
}

// DORA rating thresholds
function rateDORA(metric: string, value: number): string {
  const thresholds: Record<string, { elite: number; high: number; medium: number }> = {
    deploymentFrequency: { elite: 1, high: 0.14, medium: 0.033 }, // deploys per day
    leadTimeHours: { elite: 1, high: 24, medium: 168 },
    changeFailureRate: { elite: 5, high: 15, medium: 30 }, // percentage
    mttrHours: { elite: 1, high: 24, medium: 168 },
  };

  const t = thresholds[metric];
  if (!t) return 'unknown';

  // Deployment frequency is the only metric where higher is better
  if (metric === 'deploymentFrequency') {
    if (value >= t.elite) return 'elite';
    if (value >= t.high) return 'high';
    if (value >= t.medium) return 'medium';
    return 'low';
  }

  // Lead time, change failure rate, and MTTR: lower is better
  if (value <= t.elite) return 'elite';
  if (value <= t.high) return 'high';
  if (value <= t.medium) return 'medium';
  return 'low';
}

Metrics Anti-Patterns

  • Vanity Metrics: Metrics that look good but do not drive decisions (e.g., "lines of code written"). Focus on actionable metrics.
  • Individual Metrics: Tracking individual developer output creates perverse incentives. Measure team performance instead.
  • Too Many Metrics: If you track 50 metrics, you track none. Pick 5-7 key metrics and focus on those.
  • Comparing Teams: Each team has different context, tech stack, and domain complexity. Never compare velocity across teams.

Dashboard Best Practices

  • One Page: A dashboard that requires scrolling is a report, not a dashboard. Fit everything on one screen.
  • RAG Colors: Use red/amber/green consistently to draw attention to what needs it.
  • Trends Over Snapshots: Show trends (last 5 sprints) not just current values. A single data point means nothing.
  • Automate: Manually updated dashboards are always out of date. Connect to Jira/GitHub APIs.
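
For example, deployment frequency can be derived automatically from deployment timestamps (pulled from GitHub, your CI system, or wherever deploys are recorded). This sketch assumes you already have the timestamps as `Date` objects and only does the arithmetic; how you fetch them is left out.

```typescript
// Average deployments per day over the last `windowDays`, counting
// only timestamps that fall inside the window ending at `now`.
function deploymentFrequencyPerDay(
  deployTimes: Date[],
  windowDays: number,
  now: Date = new Date()
): number {
  if (windowDays <= 0) return 0;
  const windowMs = windowDays * 24 * 60 * 60 * 1000;
  const cutoff = now.getTime() - windowMs;
  const inWindow = deployTimes.filter(
    (d) => d.getTime() >= cutoff && d.getTime() <= now.getTime()
  ).length;
  return inWindow / windowDays;
}
```

The result is in "deploys per day", which matches the unit the earlier `rateDORA` sketch expects for `deploymentFrequency`.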
