MaxDiff’s Max Myths

MaxDiff is widely regarded as the gold standard for prioritization, but that reputation often leads teams to use it by default rather than by design.

This paper challenges common misconceptions about MaxDiff, clarifying what it actually measures (win tendency, not true value) and where it adds value versus where simpler, cheaper methods are more appropriate. It introduces a practical decision framework to help researchers choose the right approach for their specific use case.

In this comprehensive paper you’ll learn:

  • Why MaxDiff does not measure true importance or value intensity, and how misinterpreting scores can lead to flawed decisions.

  • What MaxDiff actually captures, including win-loss signals, choice dominance, and respondent certainty.

  • Why common beliefs like “twice the score = twice the importance” are misleading.

  • When simpler methods like ranking or Top-N selection can deliver the same insights with less cost, time, and respondent fatigue.

  • How to apply a practical decision framework to determine when MaxDiff is the right tool, and when it isn’t.

  • Best practices for analyzing and reporting results, including when to use counts, net scores, or Hierarchical Bayes modeling.
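As a taste of the counts-based analysis mentioned above, here is a minimal illustrative sketch (not from the white paper; item names and responses are hypothetical) of how simple count and net (best-minus-worst) scores are computed from raw MaxDiff task data:

```python
from collections import Counter

# Each MaxDiff task: a respondent sees a subset of items and
# picks one "best" and one "worst". Data below is hypothetical.
tasks = [
    {"shown": ["Price", "Speed", "Support", "Design"], "best": "Price", "worst": "Design"},
    {"shown": ["Price", "Speed", "Support", "Design"], "best": "Speed", "worst": "Design"},
    {"shown": ["Price", "Support", "Design", "Speed"], "best": "Price", "worst": "Support"},
]

best = Counter(t["best"] for t in tasks)    # raw "best" counts
worst = Counter(t["worst"] for t in tasks)  # raw "worst" counts
shown = Counter(item for t in tasks for item in t["shown"])

# Net score = (best picks - worst picks) / times shown.
# Note this is a win-tendency signal, not a measure of value
# intensity: a score twice as high does not mean twice the importance.
net = {item: (best[item] - worst[item]) / shown[item] for item in shown}

for item, score in sorted(net.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item}: {score:+.2f}")
```

Counts like these are quick to compute and explain, which is why the paper contrasts them with heavier approaches such as Hierarchical Bayes modeling.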

Get the White Paper

Enter your information below to receive this informative white paper on MaxDiff’s Max Myths.

We respect your privacy. By clicking, you agree to receive this white paper and occasional UX insights from us; you can unsubscribe at any time. View our Privacy Policy.