Scoring criteria made simple: keep it clear, keep it strong

As awards veterans, we reckon there are as many ways to set up judging criteria as there are awards programmes. And we’ve seen them all, from overly complicated to frustratingly vague. The best scoring criteria fall somewhere in the middle - not so ambiguous that judges interpret things differently, and not so rigid that you risk missing exceptional entries that don't fit the mould. Pitch it right, and you’ll have a framework that guides judges, reassures entrants, and builds lasting credibility.

Start with crystal-clear objectives
Before writing a single criterion, ask yourself: what does excellence look like in this category? If you're judging ‘Best Innovation,’ does that mean groundbreaking technology, creative problem-solving, or measurable impact? Define it explicitly. Vague criteria like ‘overall quality’ leave too much room for personal preference and unconscious bias.

Make criteria measurable and balanced
Each criterion should be specific enough that different judges reach similar conclusions. Instead of ‘demonstrates leadership,’ try ‘provides evidence of measurable team impact and stakeholder engagement.’ Weight your criteria appropriately too - if innovation matters more than budget size, your scoring should reflect that. The latest awards management software platforms will do all the back-end calculations for you, making criteria weighting a breeze.
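If you're curious what those back-end calculations actually do, here's a minimal sketch in Python of how a weighted score might be computed. The criteria names, weights, and scores are invented purely for illustration, not taken from any particular platform:

# A minimal sketch of weighted criteria scoring.
# All criteria names, weights, and scores below are invented for illustration.

weights = {
    "innovation": 0.4,               # innovation counts most
    "measurable_impact": 0.3,
    "stakeholder_engagement": 0.2,
    "budget_efficiency": 0.1,        # budget size counts least
}

# One judge's raw scores on a 1-10 scale.
scores = {
    "innovation": 8,
    "measurable_impact": 7,
    "stakeholder_engagement": 9,
    "budget_efficiency": 5,
}

# Each score is multiplied by its weight, then summed: 3.2 + 2.1 + 1.8 + 0.5 = 7.6.
weighted_total = sum(scores[name] * weight for name, weight in weights.items())
print(f"Weighted score: {weighted_total:.1f} out of 10")

The point isn't the code itself; it's that with weights like these, a strong innovation score moves the total far more than a frugal budget ever could.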

Don’t overcomplicate it
It might be tempting to set a 100-point judging scale, or pile on criteria for each category, but - and we’re speaking from experience here - it’s best to keep your scoring set-up as simple as possible. Your judges will thank you for it too: scoring out of 100 feels impossibly vague (what's the real difference between a 64 and a 68, for example?), and wading through endless criteria becomes tedious and time-consuming - exactly when you need judges to stay sharp and engaged. For robust, reliable results, use a five or ten-point scale with no more than four criteria per category, each with clear descriptors and point allocations.
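As a purely illustrative sketch of what that kind of set-up might look like written down - the category, criteria, weights, and descriptors here are invented for the example, not a recommended template:

# An invented example of a simple scoring set-up: one category, four criteria,
# a 1-10 scale, and a short descriptor anchoring each criterion.
category_criteria = {
    "Best Innovation": [
        {"criterion": "Originality", "weight": 0.4,
         "descriptor": "1 = incremental tweak, 10 = genuinely new approach"},
        {"criterion": "Evidence of impact", "weight": 0.3,
         "descriptor": "1 = claims only, 10 = independently verified results"},
        {"criterion": "Scalability", "weight": 0.2,
         "descriptor": "1 = one-off, 10 = clearly repeatable elsewhere"},
        {"criterion": "Clarity of entry", "weight": 0.1,
         "descriptor": "1 = hard to follow, 10 = concise and well-evidenced"},
    ]
}

Four criteria, one scale, and a sentence of guidance each - that's usually all judges need to score consistently.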

Test and refine with real judges
We’ll let you into a secret: your first draft won't be perfect, and that's okay. Run a pilot with a small group of judges using sample entries. Where do scores diverge wildly? That's where your criteria need tightening. Where do judges ask questions? That's where you need clearer guidance.

Publish the criteria, and stand behind them
Transparency is everything. Share your criteria publicly before entries open. When entrants know exactly how they'll be evaluated, they submit stronger work and trust the process. Judges appreciate clear frameworks that remove guesswork and protect against challenges.

If you’d like to upgrade your judging experience - and your whole awards programme - we’d love to chat to you about how AwardStage can help. Just drop us a line or request a demo.
