Why Prioritization Is Hard
Every engineering team has more work to do than time to do it. Features, bug fixes, technical debt, infrastructure improvements, security patches, and developer experience investments all compete for the same finite engineering capacity. As a Tech Lead, you are constantly making prioritization decisions, whether explicitly in planning meetings or implicitly by choosing what to work on next.
The temptation is to prioritize based on who shouts loudest, what feels most urgent, or what is most technically interesting. These are all poor heuristics. Effective prioritization requires frameworks that help you evaluate work objectively, communicate trade-offs clearly, and make decisions that align with your team's goals.
RICE Scoring
RICE is a quantitative framework developed by Intercom that scores initiatives on four dimensions:
RICE Components
| Factor | Definition | Scale |
|---|---|---|
| Reach | How many users/customers will this affect per quarter? | Number of people |
| Impact | How much will this affect each person? | 3=Massive, 2=High, 1=Medium, 0.5=Low, 0.25=Minimal |
| Confidence | How confident are you in your estimates? | 100%=High, 80%=Medium, 50%=Low |
| Effort | How many person-months will this take? | Number of person-months |
```typescript
// RICE Score = (Reach * Impact * Confidence) / Effort
interface RICEItem {
  name: string;
  reach: number;      // users per quarter
  impact: number;     // 0.25 | 0.5 | 1 | 2 | 3
  confidence: number; // 0.5 | 0.8 | 1.0
  effort: number;     // person-months
}

function calculateRICE(item: RICEItem): number {
  return (item.reach * item.impact * item.confidence) / item.effort;
}

// Example: Search improvements
// Reach: 50,000 users/quarter
// Impact: 2 (high - significantly faster results)
// Confidence: 80% (we have prototype data)
// Effort: 2 person-months
// Score: (50000 * 2 * 0.8) / 2 = 40,000

// Example: Admin dashboard redesign
// Reach: 50 users/quarter (internal admins)
// Impact: 1 (medium - nicer but works today)
// Confidence: 100%
// Effort: 3 person-months
// Score: (50 * 1 * 1.0) / 3 = 16.7
```
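Once each item is scored, the point of RICE is to force-rank the backlog. A minimal sketch of that step, using the two examples above (the `calculateRICE` helper is repeated here so the snippet stands alone):

```typescript
// calculateRICE as defined above, repeated so this snippet runs standalone.
const calculateRICE = (i: {
  reach: number;
  impact: number;
  confidence: number;
  effort: number;
}): number => (i.reach * i.impact * i.confidence) / i.effort;

const backlog = [
  { name: "Search improvements", reach: 50000, impact: 2, confidence: 0.8, effort: 2 },
  { name: "Admin dashboard redesign", reach: 50, impact: 1, confidence: 1.0, effort: 3 },
];

// Sort highest score first to get a force-ranked list.
const ranked = [...backlog].sort((a, b) => calculateRICE(b) - calculateRICE(a));
// Search improvements (40,000) ranks far above the dashboard redesign (16.7),
// mostly because its reach is three orders of magnitude larger.
```

Note how reach dominates here: a modest improvement for 50,000 users outranks a solid improvement for 50, which is exactly the trade-off RICE is designed to surface.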
ICE Scoring
ICE is a simpler alternative to RICE, useful for faster prioritization:
- Impact: How much will this move the needle? (1-10)
- Confidence: How sure are we about the impact? (1-10)
- Ease: How easy is this to implement? (1-10, where 10 is easiest)
ICE Score = Impact x Confidence x Ease. It trades RICE's precision for speed and is well-suited for backlog grooming sessions where you need quick relative comparisons.
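A quick sketch of ICE in code, with hypothetical items and scores chosen for illustration:

```typescript
// ICE: three 1-10 scores multiplied together; coarse, but fast to apply
// in a grooming session. Items and scores below are hypothetical.
interface ICEItem {
  name: string;
  impact: number;     // 1-10
  confidence: number; // 1-10
  ease: number;       // 1-10, where 10 is easiest
}

const iceScore = (i: ICEItem): number => i.impact * i.confidence * i.ease;

const candidates: ICEItem[] = [
  { name: "Fix flaky checkout test", impact: 6, confidence: 9, ease: 8 },
  { name: "Rewrite billing service", impact: 9, confidence: 4, ease: 2 },
];

candidates.sort((a, b) => iceScore(b) - iceScore(a));
// The flaky-test fix (6 * 9 * 8 = 432) outranks the rewrite (9 * 4 * 2 = 72):
// high-confidence, easy wins surface ahead of risky, expensive bets.
```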
MoSCoW Method
MoSCoW is a categorical framework that is especially useful for defining scope within a fixed timeline (e.g., "What must we ship by the launch date?"):
MoSCoW Categories
- Must Have: Non-negotiable requirements. The release is a failure without these. Must Haves should consume no more than about 60% of capacity.
- Should Have: Important but not critical. Significant value, and painful to leave out, but the release is still viable without them. Roughly 20% of capacity.
- Could Have: Desirable features that are easy to include or drop. Nice to have. Roughly 20% of capacity.
- Won't Have (this time): Explicitly out of scope for this release. Important to name so stakeholders do not expect them.
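The 60% guardrail for Must Haves is easy to check mechanically. A minimal sketch, with hypothetical item names and estimates:

```typescript
// Check a proposed release scope against the MoSCoW capacity guidance:
// Must Haves should be no more than ~60% of in-scope effort.
type MoscowCategory = "must" | "should" | "could" | "wont";

interface ScopedItem {
  name: string;
  category: MoscowCategory;
  personDays: number;
}

function mustHaveShare(items: ScopedItem[]): number {
  const inScope = items.filter((i) => i.category !== "wont");
  const total = inScope.reduce((sum, i) => sum + i.personDays, 0);
  const must = inScope
    .filter((i) => i.category === "must")
    .reduce((sum, i) => sum + i.personDays, 0);
  return total === 0 ? 0 : must / total;
}

// Hypothetical release scope; "Won't Have" items are named but excluded.
const scope: ScopedItem[] = [
  { name: "Payment flow", category: "must", personDays: 15 },
  { name: "Email receipts", category: "should", personDays: 5 },
  { name: "Dark mode", category: "could", personDays: 5 },
  { name: "Multi-currency", category: "wont", personDays: 20 },
];

// 15 / 25 = 60% -- at the limit; adding another Must Have means dropping one.
const share = mustHaveShare(scope);
```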
Impact Mapping
Impact mapping connects engineering work to business outcomes by asking four questions:
- Goal: What business outcome are we trying to achieve? (e.g., increase monthly active users by 20%)
- Actors: Who can help us achieve or prevent this goal? (e.g., new users, existing users, marketing team)
- Impacts: How should the actors' behavior change? (e.g., new users should complete onboarding, existing users should return weekly)
- Deliverables: What can we build to create those behavioral changes? (e.g., onboarding wizard, weekly digest email)
Impact mapping prevents the common failure of building features that are technically impressive but do not serve business goals.
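The four questions form a tree, which makes the technique easy to capture as data. A minimal sketch mirroring the MAU example above (field names are illustrative):

```typescript
// An impact map is a tree: goal -> actors -> impacts -> deliverables.
interface ImpactMap {
  goal: string;
  actors: {
    name: string;
    impacts: {
      behaviorChange: string;
      deliverables: string[];
    }[];
  }[];
}

const mauMap: ImpactMap = {
  goal: "Increase monthly active users by 20%",
  actors: [
    {
      name: "New users",
      impacts: [
        { behaviorChange: "Complete onboarding", deliverables: ["Onboarding wizard"] },
      ],
    },
    {
      name: "Existing users",
      impacts: [
        { behaviorChange: "Return weekly", deliverables: ["Weekly digest email"] },
      ],
    },
  ],
};

// Every deliverable traces back through a behavior change to the goal --
// exactly the check that catches goal-free feature work.
const deliverables = mauMap.actors.flatMap((a) =>
  a.impacts.flatMap((i) => i.deliverables)
);
```

A deliverable you cannot place in this tree is a candidate for cutting: if no behavior change justifies it, it does not serve the goal.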
Prioritizing Technical Work
Technical debt, infrastructure improvements, and developer experience investments are notoriously hard to prioritize against feature work because their value is indirect. Use these strategies:
- Quantify the cost: "This technical debt costs us 10 engineering hours per week in workarounds. Fixing it pays for itself in 3 weeks."
- Attach to features: "To build the new reporting feature, we first need to upgrade the database driver (adds 3 days to the feature estimate)."
- Use the 20% rule: Reserve 20% of each sprint for technical investments. This is non-negotiable and does not require stakeholder justification for each item.
- Track velocity impact: If velocity is declining over time, it is evidence that technical debt is accumulating and needs attention.
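The "quantify the cost" strategy reduces to a payback-period calculation. A sketch using the numbers from the example above (the 30-hour fix cost is an assumed figure, chosen to match the stated 3-week payback):

```typescript
// Payback period: weeks until a one-time fix cost is recouped by
// eliminating a recurring weekly cost.
function paybackWeeks(fixCostHours: number, weeklyCostHours: number): number {
  return fixCostHours / weeklyCostHours;
}

// Net hours saved after a given number of weeks.
function netSavingsHours(
  fixCostHours: number,
  weeklyCostHours: number,
  weeks: number
): number {
  return weeklyCostHours * weeks - fixCostHours;
}

// 10 hours/week of workarounds, assumed 30-hour fix:
// pays for itself in 3 weeks, and saves 90 net hours over a 12-week quarter.
const weeks = paybackWeeks(30, 10);
const quarterSavings = netSavingsHours(30, 10, 12);
```

Framing debt this way turns "we should clean this up" into a number stakeholders can weigh against feature work.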
Prioritization Anti-patterns
- HiPPO-driven: The Highest-Paid Person's Opinion determines priority regardless of data
- Squeaky wheel: The loudest stakeholder gets their work prioritized
- Shiny object syndrome: Chasing the newest, most exciting idea at the expense of boring but important work
- Sunk cost: Continuing to invest in a failing project because of prior investment
- Everything is P0: When everything is the highest priority, nothing is. Force rank ruthlessly.
Summary
Prioritization frameworks bring objectivity and transparency to one of the hardest aspects of engineering leadership. Use RICE for data-driven scoring, MoSCoW for scope definition, ICE for quick comparisons, and impact mapping for strategic alignment. The framework you choose matters less than having a framework at all. Consistent, transparent prioritization builds trust with stakeholders and keeps the team focused on what matters most.