We're often asked what makes the difference between a green and red rating. This post explains our rating system, what we look for and how government departments can improve the quality of their assessments.
Our formal ratings: green and red

First, it is worth being clear that the RPC is commenting on the assessments and associated evidence that support policy proposals, not on the policy itself.
For options assessments (OAs), impact assessments (IAs) and post-implementation reviews (PIRs), we provide formal ratings of either green (fit for purpose) or red (not fit for purpose).
Green means we have no significant concerns over the quality of the department’s assessment. There may be minor issues or points for improvement, but the submission provides adequate support for ministerial/parliamentary decision-making.
Red means we have significant concerns over the quality of evidence and analysis that need to be addressed to present the impacts of the proposal properly.
What we formally rate

For OAs and IAs, we assess three specific areas:

- rationale – has the department explained the problem clearly and why government intervention is needed?
- identification of options – has the department considered a range of alternative options, including a small and micro business assessment (SaMBA)?
- justification for preferred way forward – is the recommended option evidenced properly?

If any one of these is insufficient, the assessment is likely to receive a red rating.
For PIRs, we assess whether the recommendation to retain, amend or remove the regulation is evidenced sufficiently.
Our 4-point quality indicators

Beyond the formal rating, we provide quality indicators on other important areas. These help departments understand where their analysis is strong and where improvements are needed.
For OAs and IAs, we assess:
- the regulatory scorecard
- proposals for monitoring and evaluation

For PIRs, we assess:

- monitoring and implementation
- evaluation

We use four quality indicators:
- Good – addresses the issue well. Analysis is robust, based on high-quality evidence and appropriate assumptions. Could be improved only in minor areas.
- Satisfactory – addresses the issue adequately. Analysis is based on adequate evidence and appropriate assumptions. Some improvements could be made, but it is sufficient to support decision making.
- Weak – analysis is not sufficiently robust. Improvements needed in one or more areas. Provides inadequate support for decision making.
- Very weak – analysis is poor with significant flaws. Significant improvements required. Provides inadequate support for decision making.

Repeated weak or very weak ratings in the same categories will prompt us to work with a department to achieve better outcomes.
Practical tips: avoiding common pitfalls

Based on what we see across government, here are the most common issues that lead to red ratings – and how to avoid them.
1. Engage with us early

Don't wait until an assessment is complete. Contact your Better Regulation Unit or reach out to us directly if you're uncertain about expectations. Early engagement saves time and reduces the risk of significant rework later.
2. Build a clear rationale

A weak rationale is one of the most common reasons for a red rating. Be specific about the problem you're trying to solve and provide evidence that government intervention is necessary. Avoid vague statements – show us the data.
3. Genuinely consider alternative options

We often see assessments where the "do nothing" option and non-regulatory alternative options haven't been explored properly. Even if regulation is clearly the right answer, departments need to demonstrate that they've considered and evidenced why other approaches wouldn't work.
4. Don't neglect your SaMBA

The small and micro business assessment is frequently underdeveloped. Consider whether small businesses could be exempted – this should be the default – or at least given lighter-touch requirements. If exemption isn't appropriate, explain why, with evidence.
5. Make your evidence proportionate but robust

The analysis should be proportionate to the significance of the regulation. But "proportionate" doesn't mean superficial. For significant regulatory changes, we expect robust quantification of costs and benefits. Where there are assumptions, explain and justify them.
6. Plan for monitoring and evaluation from the start

Think about how you'll know whether the regulation has achieved its objectives. A monitoring and evaluation plan isn't an afterthought – it's essential for future post-implementation reviews and demonstrates that a proposal is based on testable assumptions.
7. Review previous RPC opinions

Look at our published opinions on similar assessments from your department or others. Understanding what we've flagged before can help you avoid the same issues.
What this means for departments

Our role is to scrutinise the quality of the evidence and analysis presented – not to judge the policy itself. A well-evidenced assessment helps ministers and parliamentarians make informed decisions, regardless of which option they ultimately choose.
If you’re unsure about any aspect of your submission, contact us early. We're here to help you succeed.
Contact us at enquiries@rpc.gov.uk for help, and subscribe to our blog for more guidance and updates.
Posted at 11:33, 1 April, in Regulatory Policy Committee.