
What is RICE prioritization — and how to score it without fooling yourself.

Short answer

RICE is a prioritization framework that scores product ideas by four factors: Reach (how many users are affected per period), Impact (how much it moves the needle for each user, scored 0.25–3), Confidence (how sure you are of the estimates, 20%–100%), and Effort (person-months to build). RICE score = (Reach × Impact × Confidence) / Effort. Higher score = higher priority. Developed at Intercom, it's one of the most widely used product prioritization methods.

RICE was developed by Sean McBride at Intercom and published in 2016. It was designed to replace gut-feel prioritization debates with a consistent numerical framework. The formula produces a single score per initiative that makes comparison straightforward.

The four dimensions.

Reach. How many users will this affect in a defined time period (typically one quarter)? Use real data where possible — daily active users who see this feature, number of customers who'll use this workflow. Don't guess; pull numbers from your analytics.

Impact. How much will this move the needle for each user who experiences it? Use a fixed scale to keep estimates comparable:

  • 3 = massive impact
  • 2 = high impact
  • 1 = medium impact
  • 0.5 = low impact
  • 0.25 = minimal impact

This is the most subjective dimension. Ground it in user research, A/B test data from similar changes, or structured expert estimation.

Confidence. How confident are you in your Reach and Impact estimates? Expressed as a percentage:

  • 100% = high confidence, backed by data
  • 80% = medium confidence, some assumptions
  • 50% = low confidence, mostly hypothesis
  • 20% = rough guess

Confidence penalizes items where you're estimating blindly. Holding reach and effort equal, a high-impact item you're only 20% confident about (3 × 0.2 = 0.6) scores lower than a medium-impact item you're 80% confident about (1 × 0.8 = 0.8).

Effort. Total person-months required from all team members (design, engineering, QA, data). Use your team's standard estimation method — story points converted to time, or direct time estimates. Effort is in the denominator, so higher effort means lower RICE score.

The formula. RICE score = (Reach × Impact × Confidence) / Effort

Example: A feature that reaches 500 users per quarter, has medium impact (1), at 80% confidence, requiring 2 person-months: Score = (500 × 1 × 0.8) / 2 = 200
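To make the arithmetic concrete, here's the same calculation as a few lines of Python (a minimal sketch; the function name rice_score is just illustrative):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Compute RICE = (Reach x Impact x Confidence) / Effort.

    reach: users affected per period (e.g., per quarter)
    impact: 0.25-3 on the fixed scale above
    confidence: 0.2-1.0 (i.e., 20%-100%)
    effort: total person-months across the team
    """
    return (reach * impact * confidence) / effort

# The worked example above: 500 users, medium impact, 80% confidence, 2 person-months.
print(rice_score(500, 1, 0.8, 2))  # 200.0
```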

Running a RICE session. Build a table on the whiteboard: initiatives in rows, RICE dimensions in columns. Have each team member estimate each dimension independently, then average or discuss to reach a working estimate. Calculate scores. Rank by score.
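Here's a minimal sketch of the scoring and ranking step in Python, assuming the team has already averaged its independent estimates into one working number per dimension (the initiatives and figures below are made up):

```python
# Hypothetical initiatives: (reach per quarter, impact, confidence, effort in person-months).
initiatives = {
    "Onboarding checklist": (2000, 0.5, 1.0, 1),
    "CSV export":           (500,  1,   0.8, 2),
    "Realtime sync":        (300,  3,   0.5, 6),
}

# Compute each RICE score.
scored = {
    name: (reach * impact * confidence) / effort
    for name, (reach, impact, confidence, effort) in initiatives.items()
}

# Rank highest score first.
for name, score in sorted(scored.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.0f}")
# Onboarding checklist: 1000
# CSV export: 200
# Realtime sync: 75
```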

The ranking surfaces non-obvious priorities — items that seemed unimportant often score high because they're low-effort with broad reach. Items that felt urgent often score low because they're low-confidence.

RICE's limitations. It's not a substitute for strategy — a high RICE score doesn't mean an initiative is strategically right. Items that serve different user segments can't be directly compared without normalizing by segment size. Use RICE for tactical backlog prioritization within a defined strategic direction.

Snap the RICE scoring whiteboard with BoardSnap. The AI reads the estimates and formula results and produces a ranked list with scores — ready to paste into your product backlog.

Frequently asked

Who created the RICE framework?

Sean McBride, then a product manager at Intercom, developed and published RICE in 2016. Intercom's blog post on the framework became one of the most shared product management articles of that year and established RICE as a standard method across the industry.

What's the difference between RICE and ICE scoring?

ICE (Impact, Confidence, Ease) is a simpler three-factor model. It lacks a Reach dimension, which makes it less useful when you're choosing between initiatives that affect very different numbers of users. RICE is better suited to product backlog prioritization; ICE works well for quickly ranking experiment ideas or smaller feature sets where reach is roughly constant.
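For comparison, ICE as a formula (a sketch; conventions vary by team, but Impact, Confidence, and Ease are commonly each rated on a 1-10 scale, with higher Ease meaning less work):

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    # ICE multiplies three ratings; with no Reach term, it implicitly
    # assumes every idea touches a similar number of users.
    return impact * confidence * ease

print(ice_score(7, 6, 8))  # 336; rank ideas by this number, highest first
```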

How often should you re-score your RICE backlog?

Monthly or at the start of each sprint planning cycle. RICE scores change as you learn more (confidence improves), as your user base grows (reach changes), or as engineering estimates are refined (effort changes). A score that was accurate in January may be meaningfully different in April.

See it work in ten seconds.

BoardSnap is free on the App Store. Snap a board — get a summary and action plan.

  • Free · 1 project, 30 boards
  • Pro · $9.99/mo · everything unlimited
  • Pro · $69.99/yr · save 42%