Developed by an MIT scientist, Head to Head Review is a mathematically and scientifically sound method for evaluating a large number of ideas, opportunities, and insights.

The idea behind Head to Head Reviews goes back to the late 18th century, when the Marquis de Condorcet, a political scientist and mathematician, outlined a pairwise method that would be a fair and equitable way to run an election. Condorcet's method returns a full rank order of public preference for the candidates, suitable for electing a president (one winner) or an entire parliament (many winners). A Condorcet vote is considered exceptionally fair and unbiased, but its application has been limited by the complexity of determining the final "right answer," as the sketch illustrates:

Here we have three voters (or reviewers), six options to review (A, B, C, D, E, F), and pairwise reviews represented by arrows. Alice thinks that option A is better than C, A is better than D, and F is better than D. So far she hasn't seen B or E. Similarly, the arrows show Bob's and Chris's pairwise votes. Two things are evident: there is probably a "best answer" composed of all of the reviewers' input, and it's not obvious how to arrive at it! Fortunately, this problem has been thoroughly addressed since Condorcet's time, and we can handle the complexity by embedding the validated algorithms into Idea Central.

Our task is a little different from a political election. First, because we may have any number of factors, it's as if a political ballot asked you multiple questions ("Which candidate is smarter / more experienced / better in foreign policy / a better orator?"). Second, our coverage is usually quite different: in a national election you might have 3 candidates and 30 million voters with one vote each, which means 10 million-fold coverage. In an Idea Central event you might have 100 ideas and 5 reviewers. If each reviewer does 20 reviews, you just achieve 1-fold coverage; if each reviewer sees all ideas, you'd achieve 5-fold coverage. The more consistency between reviewers and the more coverage, the closer the final rank order comes to "absolute truth."
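To make the idea concrete, here is a minimal sketch of how a full rank order can be recovered from scattered pairwise votes. Idea Central's actual algorithm is not documented here, so this uses a simple Copeland-style completion of Condorcet's method (rank by net head-to-head wins); Alice's arrows come from the sketch above, while Bob's and Chris's votes are assumptions for illustration.

```python
from collections import defaultdict

def rank_ideas(pairwise_votes):
    """pairwise_votes: list of (winner, loser) tuples from all reviewers.
    Returns ideas ranked by how many rivals each one beats head to head."""
    margin = defaultdict(int)  # margin[(a, b)] = net votes preferring a over b
    ideas = set()
    for winner, loser in pairwise_votes:
        margin[(winner, loser)] += 1
        margin[(loser, winner)] -= 1
        ideas.update((winner, loser))

    # Copeland-style score: count the opponents each idea beats outright
    score = {idea: 0 for idea in ideas}
    for a in ideas:
        for b in ideas:
            if a != b and margin[(a, b)] > 0:
                score[a] += 1
    return sorted(ideas, key=lambda i: score[i], reverse=True)

votes = [("A", "C"), ("A", "D"), ("F", "D"),  # Alice (from the sketch)
         ("B", "C"), ("A", "B"),              # Bob (assumed)
         ("A", "E"), ("F", "E")]              # Chris (assumed)
print(rank_ideas(votes))  # A ranks first: it beats every idea it was compared to
```

Even with such sparse coverage, a consistent front-runner emerges; more reviews simply sharpen the ordering of the middle of the pack.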

We suggest a minimum of 1-fold coverage by each reviewer (that is, each reviewer should see every idea at least once), and 3-fold coverage overall (that is, a review team of at least 3 people). While the Condorcet math can do a very good job with even less information, these minima are recommended to ensure fairness across ideas and diversity of reviewer knowledge and opinion.


Doing a Head to Head Review takes no training and little if any coaching. Nearly everyone understands just what to do when they see the ideas presented side by side: read the ideas, move the sliders to express the direction and conviction of their opinion, and press “Submit”. That's it!

Now the importance of phrasing in the setup becomes clear: the entered phrases for each factor must complete the sentence, “Which of these Ideas...” with the result that the input form is self-explanatory.

There is also a small progress bar on this form to show each reviewer how many more comparisons they have to complete. We recommend that each reviewer do at least as many pair reviews as there are ideas, to ensure approximately equal coverage by all reviewers. To achieve statistical significance and diversity of opinion, it’s a good idea to have at least three reviewers.

To learn more about the different types of voting and evaluation methodologies within Planbox, please download the following guide.

Now, let’s look at the functionality in more detail.

H2H Configuration - Challenge

A Head to Head session can be initiated at the challenge level. Once a challenge has at least 10 ideas, you can run a session and invite at least three reviewers to participate.

Within the challenge “edit” mode, click on the H2H tab to identify the individual reviewers or groups of reviewers that you would like to involve in the evaluation process for a given challenge.

The users will be automatically notified by email, mobile, Slack, and/or internally within Planbox that it is time to participate in the review process.

Then, you can configure specific evaluation criteria that may differ from the defaults set up at the system level, depending on the challenge.

For example: risk to business, financial impact, alignment with strategy, or ease of deployment.

Now that the configuration is complete, let’s initiate an H2H session for that challenge.

Initiate Head to Head – Challenge

To initiate an H2H session, simply click on “Initiate Head to Head” to start the evaluation process. 

You will then be directed to a page where you can give the Head to Head session a name.

While the default option is to review all the ideas, you can filter ideas by status. This is done by unchecking the “All” option and instead manually choosing the status.

The initiated session can then be accessed by reviewers from “H2H Sessions” in the Manage drop-down list. This entry appears only to reviewers who have been selected to participate in the session.

To view the results, click on “View Head to Head Results”.

Choose the session that has been completed and click on the view results button. 

Now that we have gone over setting up the evaluation process, let’s review the results.

Head to Head Session – Challenge

The Results tool allows you to analyze and prioritize concepts based on various factors and scenarios.

The leftmost column shows the overall weighted score of every concept/idea (a number from 0 to 100), colored from red (“hot”) to blue (“cool”). The center columns show the scores for each idea against the various factors you have selected (the sliders in the review process). Idea author name and title complete each row; clicking on the title opens the idea details.

The sliders above the big table encourage you to explore your data. They default to equal weight across factors, but you can change the weighting simply by moving the sliders left or right. The results table recalculates the overall weighted score to reflect your choices.
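The recalculation behind the sliders can be pictured as a weighted average. This is a hypothetical sketch, not Planbox's internal scoring code: the factor names and scores are invented, and the only assumption is that each slider contributes its weight to a 0–100 weighted mean.

```python
def overall_score(factor_scores, weights):
    """factor_scores: {factor: 0-100 score}; weights: {factor: slider weight}.
    Returns the weighted average, still on the 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(factor_scores[f] * w for f, w in weights.items()) / total_weight

# Hypothetical scores for one idea against three example criteria
idea = {"financial impact": 80, "ease of deployment": 40, "risk to business": 60}

equal = {f: 1 for f in idea}       # the default: equal weight per factor
print(overall_score(idea, equal))  # (80 + 40 + 60) / 3 = 60.0

# Drag "financial impact" to twice the weight of the other sliders:
skewed = {"financial impact": 2, "ease of deployment": 1, "risk to business": 1}
print(overall_score(idea, skewed))  # (160 + 40 + 60) / 4 = 65.0
```

Shifting one slider to the right pulls every idea's total toward its score on that factor, which is why the ranking in the table can reorder as you explore.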

Clicking any total score circle highlights it, and this highlight stays with the idea no matter how the weighting is changed.

We can also look at the results in a chart by clicking on the “Show Chart” button. H2H gives you the ability to make decisions with complex information across multiple factors by using the 2×2 charts. The 2×2 Charts also reflect the weighting selections made in the sliders.

You can look at selected ideas in more detail, change the X and Y axes depending on your desired measures, and map color and size to criteria to visually spot the concepts of potential interest for further discussion.
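The essence of a 2×2 chart is bucketing each idea into a quadrant by two chosen measures. The sketch below is illustrative only: the idea names and scores are invented, and the 50-point midpoint is an assumption rather than a documented Planbox setting.

```python
def quadrant(x, y, midpoint=50):
    """Classify an idea's (x, y) scores into one of the four 2x2 quadrants."""
    horiz = "high x" if x >= midpoint else "low x"
    vert = "high y" if y >= midpoint else "low y"
    return f"{horiz} / {vert}"

# Hypothetical ideas scored on x = financial impact, y = ease of deployment
ideas = {"Idea 1": (80, 70), "Idea 2": (30, 85), "Idea 3": (20, 10)}
for name, (impact, ease) in ideas.items():
    print(name, "->", quadrant(impact, ease))
```

Ideas landing in the high/high quadrant are the natural candidates for further discussion; color and size then add a third and fourth dimension on top of the two axes.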

Now let me walk you through some additional options:

  1. Reset Sliders: Resets the criteria weighting back to equal.

  2. More Tools: Allows you to export the results.

  3. Change Status: Allows you to select ideas and change their status to move them forward.

You have probably also noticed additional tabs above the sliders. Let me quickly explain what each tab represents:

  1. Factors: Shows passion, i.e., how widely (with what comfort and conviction) reviewers have swung the sliders, and which factor has the highest influence on the total score.

  2. Progress: Shows the review progress, to help you determine how many reviews you need to achieve a good tradeoff between minimum reviewer effort and maximum accuracy and reliability of the ranking.

  3. Backtrack: Shows how many comparisons each idea has received, which gives more confidence in the review process.

  4. Reviewers: Shows the amount of coverage (number of reviews) contributed by each reviewer.