8 months! That’s how long it took Slack to become a unicorn, probably the fastest in history. A big contributor to its success was that Slack’s founders understood the importance of prioritizing their product’s unique features.
They went all in on prioritization, and it paid off big time, as the company is now valued at over $27 billion.
Prioritization frameworks are essential because they help product teams determine which features to ship to grow revenue and even the customer base. They ensure that these decisions are strategic rather than based on anyone’s whims.
Must have, should have, could have, will not have (MoSCoW) and reach, impact, confidence, effort (RICE) are very popular frameworks that companies around the world use, but what do they mean exactly, and what do they entail?
What Is RICE Prioritization?
Ever heard of Intercom? It is a software system used by Amazon, Notion, and so many other amazing companies to communicate with their customers.
But outside of providing this amazing service, Intercom also gave us the RICE prioritization framework.
Who invented the RICE model?
Intercom has a lot of competing project ideas, and it was a struggle for them to find a suitable prioritization model for their product managers, so they developed their own.
With this framework, they decided to consider four factors (reach, impact, confidence, and effort) and came up with a formula for calculating and combining them. They decided that the score from this formula could be applied to any feature at any given time and be used to make an objective decision on what to prioritize.
Each word in the acronym RICE is meant to answer different questions.
- Reach: “How many people will this affect?”
- Impact: “How much will this impact people?”
- Confidence: “How confident are we about the reach and impact scores?”
- Effort: “How difficult will this be to achieve?”
How do you calculate RICE?
Reach
The very first thing you need to determine with RICE is the reach score. The reach score is an estimate of the number of people a feature will affect within a given time frame. The time frame can be one month, a quarter, or a year, while the effect can refer to the number of customer transactions or sign-ups you can get from a new feature.
For example, if you expect that implementing a new feature will lead to 100 new customers within the next quarter, your reach score is 100.
With reach, if you have a feature idea that will affect everyone who signs up and another idea that will only affect 5% of signups, then the first idea has a higher reach and should be prioritized.
These estimates are usually obtained through external surveys or by looking at existing statistics, but reach can be genuinely hard to calculate if there are no existing users or statistics to draw from.
Impact
The impact score measures how strongly your new feature affects your users. How is it different from reach? Reach is about how many people a feature touches, while impact is about how much it influences each of them.
So if reach talks about how many people are likely to sign up due to a new feature, impact talks about how high or how low the likelihood of someone signing up is.
Impact is measured using a five-tiered scoring scale, divided as follows:
- 3 = massive impact
- 2 = high impact
- 1 = medium impact
- .5 = low impact
- .25 = minimal impact
Confidence
The confidence score represents how certain you are of both your reach and impact estimates. It is important because it helps you avoid bias and keeps your decisions grounded in fact. If your reach score is based on statistics and data while your impact score rests on a “gut feeling,” the confidence score helps you decide whether the feature still deserves priority.
Intercom also created a tiered system to score confidence, so that product teams wouldn’t get stuck trying to figure out which exact percentage gives them the go-ahead. For confidence, 100% represents high confidence, 80% represents medium confidence and 50% equals low confidence.
If you land on a score lower than 50%, then you shouldn’t be prioritizing that feature.
Confidence scores should be rooted in data from users like research feedback, experimentation results, and the success or failure of rough models.
Effort
Reach, impact, and confidence measure the potential benefits an idea will have towards a specific goal; effort measures the cost. It estimates how much time a project will take, expressed as the amount of work one team member can do within a given time frame.
For example, if a project requires 3 different people to work on it for a week, that would give an effort score of 3 person-weeks.
Effort makes it easy to prioritize projects that may run for less than the entire quarter or cycle that you're planning for. But in order to come up with the right estimate, you have to work hand in hand with the people involved in building your new feature, like software engineers and tech leads.
Working with these people ensures that your effort score is driven by the complexity of the project and the reality of the existing technical systems.
What You Should Remember About the RICE Prioritization Framework
- The formula for calculating the RICE score is:

  RICE Score = (Reach × Impact × Confidence) ÷ Effort
- For the RICE prioritization framework to be effective, every estimation must be steeped in data and collaboration between all stakeholders.
- The same scoring scale must be used throughout when measuring Reach, Impact, Confidence, and Effort.
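The scoring steps above can be sketched in a few lines of Python. The feature names and scores here are hypothetical examples chosen for illustration, not real product data:

```python
# A minimal sketch of the RICE formula: (Reach x Impact x Confidence) / Effort.
# Feature names and scores below are hypothetical examples.

def rice_score(reach, impact, confidence, effort):
    """Combine the four RICE factors into a single comparable score."""
    return (reach * impact * confidence) / effort

# Hypothetical backlog: (name, reach, impact, confidence, effort)
features = [
    ("In-app file sharing", 500, 2, 0.8, 4),    # 500 users/quarter, high impact
    ("Dark mode",           900, 0.5, 1.0, 2),  # wide reach, low impact
    ("SSO integration",     150, 3, 0.5, 6),    # big impact, low confidence
]

# Rank features by RICE score, highest first
ranked = sorted(features, key=lambda f: rice_score(*f[1:]), reverse=True)
for name, r, i, c, e in ranked:
    print(f"{name}: {rice_score(r, i, c, e):.1f}")
```

Note how the ranking can surprise you: a wide-reach, low-impact idea can outscore a high-impact one once confidence and effort are factored in, which is exactly the kind of bias check the framework is designed to provide.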
What Is The MoSCoW Method?
The MoSCoW method is an acronym for must-have, should-have, could-have, and will not have, and it is a technique used to clarify which features to prioritize when working on a project/product. It is also known as MoSCoW prioritization or MoSCoW analysis.
It is applied at the beginning of a project and helps align a team around its values and expectations. It also helps with visualizing the tasks required and meeting the most important requirements.
Who Invented the MoSCoW Method?
The MoSCoW method was created in 1994 by Dai Clegg, a software developer who worked at Oracle. It began to gain momentum in the early 2000s, and since then many market-leading companies have used the MoSCoW method to align their teams on different projects.
Must Have
For this category, you have to ask yourself which features are absolutely essential for the completion and success of your project. These features are non-negotiable, and you can’t do without them.
Everything placed under this category must belong to the Minimum Usable Subset (MUST). To determine what falls under MUST, a feature must meet at least one of the following criteria:
- You can’t replace it or find a workaround for this feature.
- You are breaking some kind of policy or law by not including this feature.
- Without this feature, there’s no point in even trying to complete your project.
- Your product will be unsafe without this feature.
- Your solution will not solve any problems without this feature.
When trying to file anything under Must-Have, you must also ask yourself: what happens if this feature is not created? If there are no real consequences, it’s probably a Should-Have or a nice-to-have.
Should Have
Features that are not truly critical to the project fall under this category. They often improve the functionality and usability of a product but are not strictly required by the next launch date.
If you were creating a file hosting application, the ability to share your uploaded files would be a Should-Have, not a Must-Have, because the core offering of a file hosting application is storing files on a remote server and viewing them later.
Another good example of a Should-Have is a performance improvement to a feature that is already functional.
A good way to determine whether a feature belongs under Should-Have is to ask yourself, “Leaving this feature out will be painful, but will the end product still be viable?”
Could Have
Could-Have features are generally features that are nice to have.
These features do not directly impact the core functionality of the product and are often negligible. They are the features you build once the Must-Haves and Should-Haves are finished and you have a little time left.
These features can sometimes contribute greatly to the success of the product, but consumers are unlikely to miss them if they are not there.
Another way to judge a Could-Have is to ask whether you could sacrifice the feature if the product launch is already behind schedule.
Will Not Have
The “Will Not Have” category contains features that absolutely will not be created, or at least will not be created right now. The features that fall under this category are usually not important to the success or viability of the final product.
These Will-Not-Have features can be flagged for the second iteration of the product or added to the to-do list of a future release.
What’s great about the “Will Not Have” category is that it tells you where not to focus.
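The four categories above amount to labeling each backlog item and reading the list in priority order. Here is a minimal sketch of that idea; the feature names are hypothetical examples, not a real backlog:

```python
# A minimal sketch of a MoSCoW-labeled backlog.
# Feature names below are hypothetical examples.

MOSCOW_ORDER = ["Must Have", "Should Have", "Could Have", "Will Not Have"]

backlog = {
    "Upload files to remote server": "Must Have",
    "View stored files":             "Must Have",
    "Share uploaded files":          "Should Have",
    "Custom themes":                 "Could Have",
    "Desktop client":                "Will Not Have",  # revisit in a future release
}

# Walk the categories in priority order and list what falls under each
for category in MOSCOW_ORDER:
    features = [name for name, cat in backlog.items() if cat == category]
    print(f"{category}: {', '.join(features)}")
```

Reading the output top to bottom gives the team its working order: ship every Must-Have first, pick up Should-Haves next, treat Could-Haves as schedule buffer, and ignore Will-Not-Haves until a later release.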
Which is the better prioritization framework?
These two frameworks are used at completely different times. The RICE Prioritization framework is used when you are planning a product roadmap, while the MoSCoW method is used when there’s a set deadline, and you need to meet that deadline.
So asking which one is better is not always the right question; instead, decide what you need a prioritization framework for and make your decision from there.
Whichever one you choose, both are structured so that, if followed properly, they will help you meet your goals.
How Can Ramen Club Help?
The great thing about the RICE prioritization framework and the MoSCoW method is that they have been used by a lot of startup founders like you, and what better way to find out which one works best for you than by asking other founders?
Here at Ramen Club, we will provide you with a community of founders you can always learn from, not only on prioritization frameworks but on anything startup-related.