Testing in CI/CD Pipelines

Configure GitHub Actions for automated testing, parallel execution, test splitting, flaky test management, and coverage gates

Why CI/CD Testing Matters

Running tests automatically on every push and pull request prevents broken code from reaching production. A well-configured CI pipeline runs tests fast, provides clear feedback, and blocks merges when quality gates fail.

CI Testing Goals:

Fast feedback (under 10 minutes), reliable results (no flaky tests), clear reporting, and automated quality gates for code coverage and test pass rates.

GitHub Actions Test Workflow

# .github/workflows/test.yml
name: Tests
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - run: npm ci
      - run: npm run test -- --coverage --ci
      - uses: actions/upload-artifact@v4
        with:
          name: coverage-${{ matrix.node-version }}
          path: coverage/

  e2e-tests:
    runs-on: ubuntu-latest
    needs: unit-tests
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npm run build
      - run: npx playwright test
      - uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: playwright-report
          path: playwright-report/

  coverage-gate:
    runs-on: ubuntu-latest
    needs: unit-tests
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - name: Run tests with coverage thresholds
        # Jest fails this step when any global threshold is missed
        run: |
          npm run test -- --coverage --ci \
            --coverageThreshold='{"global":{"branches":80,"functions":80,"lines":80,"statements":80}}'

Parallel Execution & Test Splitting

# Parallel test execution with sharding
# .github/workflows/parallel-tests.yml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npx playwright install --with-deps
      - name: Run tests (shard ${{ matrix.shard }}/4)
        run: npx playwright test --shard=${{ matrix.shard }}/4 --reporter=blob
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: blob-report-${{ matrix.shard }}
          path: blob-report/

  merge-reports:
    needs: test
    runs-on: ubuntu-latest
    if: always()
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - uses: actions/download-artifact@v4
        with:
          pattern: blob-report-*
          path: all-blob-reports
          merge-multiple: true
      - run: npx playwright merge-reports --reporter html ./all-blob-reports

# Jest parallel with workers
# jest.config.js
module.exports = {
  maxWorkers: '50%',  // Use half of available CPUs
  // Or set to specific number
  // maxWorkers: 4,
};

Flaky Test Management

// Playwright retry configuration
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  retries: process.env.CI ? 2 : 0, // Retry twice in CI only
  reporter: [
    ['html'],
    ['json', { outputFile: 'test-results.json' }],
  ],
});

// Tag and track flaky tests
import { test, expect } from '@playwright/test';

test('sometimes flaky network test @flaky', async ({ page }) => {
  // Mark test with annotation
  test.info().annotations.push({
    type: 'issue',
    description: 'JIRA-1234: Flaky due to network timing',
  });

  // Add retry logic for known flaky operations
  await expect(async () => {
    const response = await page.goto('/api/data');
    expect(response?.status()).toBe(200);
  }).toPass({ timeout: 10000 });
});

// Jest built-in retry (jest-circus runner, the default since Jest 27)
// jest.setup.js -- wired up via setupFilesAfterEach in jest.config.js
jest.retryTimes(process.env.CI ? 2 : 0, { logErrorsBeforeRetry: true });

// Track flaky tests over time by parsing the JSON report for retried tests
// track-flaky.js
const results = require('./test-results.json');
const allSpecs = (suites = []) =>
  suites.flatMap((s) => [...(s.specs ?? []), ...allSpecs(s.suites)]);
const flaky = allSpecs(results.suites).filter((spec) =>
  spec.tests.some((t) => t.results.length > 1)
);
console.log(flaky.map((spec) => spec.title));

Pre-Commit Hooks with Husky

# Install Husky and lint-staged
npm install --save-dev husky lint-staged
npx husky init

# .husky/pre-commit
npx lint-staged

# .husky/pre-push
# Run only tests affected by commits not yet on origin/main
npm run test -- --bail --changedSince=origin/main

# package.json - lint-staged config
{
  "lint-staged": {
    "*.{ts,tsx}": [
      "eslint --fix",
      "prettier --write"
    ],
    "*.test.{ts,tsx}": [
      "jest --bail --findRelatedTests"
    ]
  }
}

# Only run tests for changed files on commit
# Run full suite in CI

Test Reporting in PRs:

  • GitHub Actions annotations: Use --reporters=github-actions (Jest 28+) to surface failures as inline annotations on the PR diff.
  • Coverage comments: Use actions like jest-coverage-comment to post coverage diffs.
  • Required checks: Configure branch protection to require test jobs to pass.

Key Takeaways

  • Run unit tests on every push, E2E tests on PRs, and full suites before deploy
  • Use test sharding to parallelize slow test suites across multiple CI runners
  • Fix flaky tests immediately -- retries are a band-aid, not a solution
  • Set coverage thresholds as gates but focus on meaningful coverage, not just numbers
  • Use pre-commit hooks for fast feedback and CI for comprehensive validation
