ICE Score Model: A Smart & Fast Way to Prioritize New Software Features

Written by Jonathan Richter

As business leaders and decision-makers, we’re constantly bombarded with too much information and given too little time. If you run a software or app-driven business, one of your biggest challenges is choosing which new features to prioritize.

Will these new software features make a significant impact? How complex or costly will the features be? These are the types of questions and concerns that are important to consider, but can also slow down decision-making if there’s no process in place.

So what’s a simple, effective strategy to help you make decisions quickly and with greater certainty? In this article, I’ll outline the ICE Score Model (Impact-Confidence-Ease), what it is, and how it can help your team. As a result, you’ll be able to prioritize new software features in a way that is smart, easy, and totally free!

What is the ICE Score Model?

Hacking Growth author Sean Ellis is credited with developing the ICE Score Model. The model has been used to help companies like Dropbox and Eventbrite prioritize their software features, and in doing so achieve explosive growth!

Hacking Growth by Sean Ellis and Morgan Brown

The ICE score model is also our preferred method for prioritizing new software features for ourselves and our clients. Using the ICE model not only helps you make better decisions, but it also helps you streamline your decision-making process and development. Therefore, the ICE model is great for organizations of all shapes and sizes!

So what is the ICE score model and what does it stand for? ICE is an acronym for three factors that are independently evaluated: impact, confidence, and ease.

The ICE Score Model – definition by Winnona Partners

Impact

How significant is the feature in question? That’s what the impact score represents.

Will the feature provide much-needed support or functionality to a wide user base, or only a small subset of users? Can the feature help provide better analytics or increase revenue? These are the types of questions you’ll want to consider when deciding what score to assign in the feature’s Impact column.

Confidence

How confident are you that the feature in question will have an impact? Are the details and roadmap for the feature accurate and complete? These are just a couple of questions you’ll want to consider when determining your confidence score.

The confidence score might seem like a surprising metric to evaluate, but it’s actually an extremely important factor in deciding whether or not a feature should be an immediate priority. Organization leaders, project managers, and even developers should collaborate on this score. To that end, a useful synonym for confidence might be ‘clarity’.

Do we have all the necessary info ready to roll out this feature? Can we rely on data to validate our assumption that this is an important feature? If so, then the confidence score should be pretty high. On the other hand, if there are a lot of questions, unknowns, or no process in place, the confidence score should be relatively low.

Ease

If you’re developing a software project, you’ll want to rely on developers or tech experts to help determine the ease score. Ease refers to how technically difficult something will be to execute in terms of time, effort, and cost. Similar phrases for the Ease score are “level of difficulty” or “level of effort”.

Will the feature take hours or days to implement? If so, you can award it a high Ease score. Alternatively, if the feature will take months to complete, it should receive a lower Ease score.

How does the ICE Score Model work? A 5-Step Guide

Here’s our 5-step guide to how the ICE score model works. Each step is covered in greater depth below:

  1. Create a simple spreadsheet
  2. Define your 1-5 rating for each ICE column (B, C, and D)
  3. Evaluate each ICE column independently and provide a score of 1-5 for each
  4. Calculate the total priority score and rank the features in order of highest to lowest
  5. Continue to adjust scores over time as needed
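
To make the mechanics concrete, the five steps above can be sketched in a few lines of Python. The feature names and scores below are made up for illustration only:

```python
# A minimal sketch of the ICE process: score each feature 1-5 on
# Impact, Confidence, and Ease, then rank by the total.
features = {
    "Push notifications": {"impact": 4, "confidence": 3, "ease": 5},
    "Offline mode":       {"impact": 5, "confidence": 2, "ease": 1},
    "Dark theme":         {"impact": 2, "confidence": 4, "ease": 4},
}

# Step 4: total priority score = Impact + Confidence + Ease
totals = {name: sum(scores.values()) for name, scores in features.items()}

# Rank features from highest to lowest priority
ranked = sorted(totals.items(), key=lambda item: item[1], reverse=True)
for name, total in ranked:
    print(f"{name}: {total}")
```

Running this prints the features in priority order, exactly the ordering you’d get by sorting Column E in the spreadsheet.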

1. Create the Spreadsheet

Creating your spreadsheet for the ICE Score Model is easy. Using Google Sheets or Excel, simply start a new sheet. You’ll only need 5 columns:

  • Write out the detailed feature description in Column A
  • Impact score in Column B
  • Confidence score in Column C
  • Ease score in Column D
  • Total Priority score calculation in Column E

Here’s an example ICE score Google Sheet created by Winnona Partners:

ICE Score Model Spreadsheet Example by Winnona Partners

As you can see in the example above, each ICE column header has a thorough description of what is being evaluated and what those evaluations mean (see Step 2 for more details). In addition, I recommend the following:

  • Freeze Row 1 so the header stays visible as you scroll (highlight the row, then select View > Freeze > 1 row)
  • Wrap the text of each column so the text doesn’t run into the next columns (highlight the columns, then select Format > Text wrapping > Wrap)
  • Set a SUM function to calculate the sum of Columns B, C, and D (see Step 4)

2. Define your 1-5 rating system for each column

Step 2 of the ICE score model is one that not every team follows, but those who do will have a much better chance at success.

In the past, we would simply assign a rating of 1 to 5 for each of the ICE categories, and we felt pretty good about that approach. However, in a collaborative setting it quickly becomes clear that my understanding of what constitutes a Confidence score of 4 can be very different from a client’s, or even from my fellow business partners’ interpretation of what a “4” rating means!

When you think about it, assigning a numeric value to something is futile if the meaning of what those numbers represent is vague, arbitrary, or leaves too much room for interpretation.

I first came across this conundrum in Daniel Kahneman’s book Noise. Ever since then, we’ve become more diligent about defining what each numeric value represents as specifically as possible. The result? Greater transparency company-wide, and everyone is on the same page about exactly what each score means.

Noise by Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein

So how should you go about doing this? Collaborate with your team to define what each score represents based on your team structure and average deployment cycles. For some projects, here’s how we’ve defined our Ease scores (measured in time/cost):

  • 1 – Very difficult, or we have no solution figured out yet
  • 2 – Difficult and will take 2-3 months to implement
  • 3 – Moderately difficult and will take about 1 month to implement
  • 4 – Not too difficult and will take 1-2 weeks
  • 5 – Can be implemented within days

Notice that for most of the scores, we don’t just describe the level of difficulty; we also provide explicit time ranges to ensure everyone has the same schedule in mind when assigning a score.
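
One way to keep a rubric like this from drifting is to write it down right next to wherever the scores live. Here’s a small Python sketch of that idea; the wording mirrors the example Ease scale above and is purely illustrative:

```python
# Ease rubric (1-5), with explicit time ranges so everyone shares
# the same schedule when assigning a score.
EASE_RUBRIC = {
    1: "Very difficult, or no solution figured out yet",
    2: "Difficult; 2-3 months to implement",
    3: "Moderately difficult; about 1 month",
    4: "Not too difficult; 1-2 weeks",
    5: "Can be implemented within days",
}

def describe_ease(score: int) -> str:
    """Return the agreed-upon meaning of an Ease score."""
    if score not in EASE_RUBRIC:
        raise ValueError("Ease scores must be an integer from 1 to 5")
    return EASE_RUBRIC[score]

print(describe_ease(4))  # Not too difficult; 1-2 weeks
```

In a spreadsheet, the equivalent is simply pasting these definitions into the column header or a legend tab, as shown in the example sheet.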

So although defining your scale isn’t something ICE Score Model creator Sean Ellis talks about, you can certainly see how doing so clarifies exactly what each point represents! If you skip this step, you will almost certainly run into score misinterpretation, unwanted variance, or conflict.

3. Evaluate and Score


After collaborating with your team to define your rating system for each ICE column, it’s time to start making assessments! If you’ve taken the time to define your ratings (Step 2), then this step should go relatively quickly.

While evaluating and scoring your new software feature, do your best to rate each column independently. For instance, when discussing the Impact a new feature might bring to users, try your best to only talk about the Impact–not the other variables!

You might notice that your mind is being heavily influenced by the Ease score (cost/time). This is very normal! Most people just want to know how difficult or costly something is going to be to develop. However, the goal of the ICE exercise is to help guide your decisions in a more objective way based on importance and certainty (Impact and Confidence), not just dollars and hours (Ease).

Host discussions with all relevant parties involved: project managers, developers, clients, and even customers! In fact, analytics and customer feedback/surveys should be the primary motivators for your Impact and Confidence scores. As for determining a feature’s Ease score, you’ll want to rely on your developers or other tech experts.

4. Calculate the total priority score

After assigning individual scores for each ICE column for each new software feature in question, it’s time to calculate the total priority scores! If you’ve created a spreadsheet using the same format as our example in this article, you can use a simple SUM function in the first open cell in column E. In our example, the first open cell is Row 3.

=SUM(B3,C3,D3)

Notice the =SUM equation next to the fx symbol above the column headers.

Once you’ve set up this calculation correctly, you can apply it to the rest of the rows in Column E by clicking and holding the lower right-hand corner of the highlighted cell (E3) and dragging down. Each row in Column E should then show the proper calculation based on Columns B, C, and D.
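
If you ever export the sheet as a CSV file, the same Column E calculation can be reproduced outside of Sheets. Here’s a minimal Python sketch, assuming a layout like the example above (the feature names and scores are invented):

```python
import csv
import io

# Hypothetical CSV export of the ICE sheet: Columns A-D, header in Row 1.
sheet = io.StringIO(
    "Feature,Impact,Confidence,Ease\n"
    "Push notifications,4,3,5\n"
    "Offline mode,5,2,1\n"
)

rows = list(csv.DictReader(sheet))
for row in rows:
    # Column E: the equivalent of =SUM(B3,C3,D3) for each row
    row["Priority"] = int(row["Impact"]) + int(row["Confidence"]) + int(row["Ease"])

print([(r["Feature"], r["Priority"]) for r in rows])
```

For most teams the spreadsheet alone is plenty; a script like this only becomes useful if you want to feed the scores into reports or dashboards.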

Congratulations! You now have a method for prioritizing your new software features in a way that is clear, logical, and practical. It’s now fairly simple to see at a glance which features you should execute on first, and which can wait or need more clarification.

What do you notice when you do this exercise? Are there some features that you thought needed to be executed on immediately but turn out to be more confusing, difficult, or time consuming than you originally thought? That’s the beauty of the ICE score model!

By optimizing your decision-making process with this rather simple “algorithm”, you’ll find that your entire team is able to develop features in a manner that’s quick, confident, and impactful.

5. Continually adjust your scores over time

The final step in the ICE Score Model process is one that is important and ongoing. Setting up your initial ICE spreadsheet is a big step towards optimizing your decision-making process. However, once it’s set up, you’ll want to continue evaluating and adjusting your ICE column scores as things progress.

Have you received user feedback indicating that something you thought would have low impact is in fact really important? Then it’s time to raise the Impact score. Are significant data trends emerging that validate your assumptions about a lower priority feature? Then it’s time to give a higher Confidence score.

Your business is like a living organism, constantly adapting and evolving to new market conditions. Whenever new information or developments come to light, you’ll want to make sure someone is on top of managing and maintaining your ICE spreadsheet. That way, everyone can be on the same page about which features should be prioritized and executed on.
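
In spreadsheet terms, adjusting a score just means editing a cell, but it’s worth seeing how a single adjustment can reshuffle the ranking. A tiny sketch (feature names and scores are made up):

```python
# Two hypothetical features with their current ICE scores.
features = {
    "Offline mode": {"impact": 2, "confidence": 3, "ease": 1},
    "Dark theme":   {"impact": 3, "confidence": 4, "ease": 4},
}

# New user feedback shows "Offline mode" matters far more than we thought,
# so raise its Impact score and recompute the ranking.
features["Offline mode"]["impact"] = 5

ranked = sorted(features, key=lambda f: sum(features[f].values()), reverse=True)
print(ranked)
```

Here the adjustment narrows the gap but doesn’t flip the order, which is exactly the kind of insight worth surfacing at a prioritization meeting rather than guessing at.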


Conclusion

Business leaders, developers, and customers all have different roles and priorities. When you’re tasked with addressing the various needs everyone throws at you, you need some sort of system for organizing your decision-making in a calculated way.

If you take the time to set up an ICE Score Model spreadsheet (it usually takes minutes, not hours), you’ll find that you can make better decisions faster and more cohesively. At the end of the day, having any system at all is better than having no system, even if it’s a relatively simple algorithm like the ICE Score Model.

So next time you’re trying to determine which new software features to develop, give the ICE Score Model a shot!

Want to learn about more business ideas and concepts? Check out our Tech Business Glossary for more!


Stay informed about our projects and events! You can follow Winnona Partners on Google News, Facebook, LinkedIn, and Instagram.

By Jonathan Richter

Jonathan is CEO of Winnona Partners, a custom software development company based in Atlanta, GA that specializes in helping startups and small businesses thrive. He's also a classical guitarist, and has studied Chinese language, music and culture extensively.
