RICE Scoring Model

How Does the RICE Scoring Model Work?

To use the RICE scoring model, you evaluate each of your competing ideas (new products, product extensions, features, etc.) by scoring them according to the following formula:

RICE score = (Reach × Impact × Confidence) / Effort
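The calculation, reach times impact times confidence, divided by effort, can be sketched in a few lines (the function name and example inputs are illustrative, not from Intercom's tooling):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score = (Reach * Impact * Confidence) / Effort."""
    if effort <= 0:
        raise ValueError("effort must be positive (in person-months)")
    return (reach * impact * confidence) / effort

# e.g. 150 customers reached, high impact (2), 80% confidence, 2 person-months
print(rice_score(150, 2, 0.80, 2))  # → 120.0
```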



Reach

The first factor in your RICE score is an estimate of how many people your initiative will reach in a given timeframe.

You have to decide both what “reach” means in this context and the timeframe over which you want to measure it. You can choose any time period—one month, a quarter, etc.—and you can decide that reach will refer to the number of customer transactions, free-trial signups, or how many existing users try your new feature.

Your reach score will be the number you’ve estimated. For example, if you expect your project will lead to 150 new customers within the next quarter, your reach score is 150. On the other hand, if you estimate your project will deliver 1,200 new prospects to your trial-download page within the next month, and that 30% of those prospects will sign up, your reach score is 360.
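The second example above is just the prospect estimate multiplied by the expected signup rate (both numbers are the estimates from the text):

```python
prospects = 1200     # estimated visitors to the trial-download page next month
signup_rate = 0.30   # estimated share of prospects who sign up
reach = prospects * signup_rate
print(reach)  # → 360.0
```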


Impact

Impact can reflect a quantitative goal, such as how many new conversions your project will generate when users encounter it, or a more qualitative objective such as increasing customer delight.

Even when using a quantitative metric (“How many people who see this feature will buy the product?”), measuring impact will be difficult, because you won’t necessarily be able to isolate your new project as the primary reason (or even a reason at all) for why your users take action. If measuring the impact of a project after you’ve collected the data will be difficult, you can assume that estimating it beforehand will also be a challenge.

Intercom developed a five-tiered scoring system for estimating a project’s impact:

  • 3 = massive impact
  • 2 = high impact
  • 1 = medium impact
  • .5 = low impact
  • .25 = minimal impact
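Intercom's tiers map naturally to a small lookup table. This is a sketch; the label strings are mine, only the numeric values come from the tiers above:

```python
# Intercom's five impact tiers (labels are illustrative)
IMPACT_TIERS = {
    "massive": 3,
    "high": 2,
    "medium": 1,
    "low": 0.5,
    "minimal": 0.25,
}

print(IMPACT_TIERS["low"])  # → 0.5
```

Constraining impact to these five discrete values keeps teams from debating fine-grained differences that the estimate can't actually support.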


Confidence

The confidence component of your RICE score helps you control for projects in which your team has data to support one factor of your score but is relying more on intuition for another factor.

For example, if you have data backing up your reach estimate but your impact score represents more of a gut feeling or anecdotal evidence, your confidence score will help account for this.

As it did with impact, Intercom created a tiered set of discrete percentages to score confidence, so that its teams wouldn’t get stuck here trying to decide on an exact percentage number between 1 and 100. When determining your confidence score for a given project, your options are:

  • 100% = high confidence
  • 80% = medium confidence
  • 50% = low confidence

If you arrive at a confidence score below 50%, consider it a “moonshot” and assume your priorities need to be elsewhere.
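The confidence tiers and the "moonshot" cutoff can be sketched the same way (function and label names are mine):

```python
# Intercom's three confidence tiers, expressed as multipliers
CONFIDENCE_TIERS = {"high": 1.00, "medium": 0.80, "low": 0.50}

def is_moonshot(confidence: float) -> bool:
    """Below 50% confidence, treat the project as a moonshot and deprioritize it."""
    return confidence < 0.50

print(is_moonshot(0.40))                     # → True
print(is_moonshot(CONFIDENCE_TIERS["low"]))  # → False
```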


Effort

All of the factors we have discussed to this point (reach, impact, and confidence) together form the numerator of the RICE scoring equation. Effort represents the denominator.

In other words, if you think of RICE as a cost-benefit analysis, the other three components are all potential benefits while effort is the single score that represents the costs.

Quantifying effort in this model is similar to scoring reach. You estimate the total amount of work (across product, design, engineering, testing, etc.) required to complete the initiative, measured in a unit of time, typically "person-months," and that number is your score.

In other words, if you estimate a project will take a total of three person-months, your effort score will be 3. (Intercom scores anything less than a month as a .5.)
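Putting all four factors together, here is a worked example using the tier values described above (the specific inputs are illustrative):

```python
def rice_score(reach: float, impact: float, confidence: float,
               effort_person_months: float) -> float:
    """(Reach * Impact * Confidence) / Effort, with effort in person-months."""
    return (reach * impact * confidence) / effort_person_months

# 150 new customers next quarter, medium impact (1), 80% confidence,
# and three person-months of combined product/design/engineering effort
print(rice_score(150, 1, 0.80, 3))  # → 40.0
```

Because effort sits in the denominator, doubling the estimated person-months halves the score, which is exactly the cost-benefit trade-off described above.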
