Adaptiveness could be the key to predicting consumer preferences
As telecom and technology markets are continually disrupted, our approach to researching them…
Have you ever faced a situation where many items (descriptions, statements or concepts) had to be put in order of preference? Ranking and rating questions are often used here, although in recent years MaxDiff has been gaining attention as a strong alternative.
MaxDiff, unlike rating, is scale-free and therefore carries no scale or cultural bias. It also handles many items well, whereas ranking more than about six items tends to produce meaningless, trivial rank orders. MaxDiff overcomes both of these drawbacks.
MaxDiff is short for Maximum Difference scaling, invented by Jordan Louviere in 1987. In simple terms, a MaxDiff exercise takes a master list of many items, e.g. 30 children’s names, and exposes respondents to several subsets of this list by randomly selecting 5 of the 30 names at a time. From each subset, respondents are asked to pick the item they like most and the item they like least. This choice task is repeated a number of times with a pre-defined set of randomly selected subsets, collecting enough observations for a solid analysis.
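To make the structure of such an exercise concrete, here is a minimal sketch, assuming 30 items shown 5 at a time over 12 tasks. The item names and task count are invented for illustration, and pure random sampling is used for simplicity; production tools generate statistically balanced designs instead.

```python
import random

# Minimal sketch of a MaxDiff exercise: 30 items, shown 5 at a time.
# Purely random subsets are used here for illustration only; real MaxDiff
# designs are balanced so every item appears (and co-appears) equally often.

ITEMS = [f"name_{i}" for i in range(1, 31)]  # hypothetical master list
ITEMS_PER_TASK = 5
TASKS_PER_RESPONDENT = 12  # illustrative; set per study requirements

def build_tasks(seed: int) -> list[list[str]]:
    rng = random.Random(seed)  # fixed seed so the design is reproducible
    return [rng.sample(ITEMS, ITEMS_PER_TASK) for _ in range(TASKS_PER_RESPONDENT)]

for t, subset in enumerate(build_tasks(seed=42), start=1):
    print(f"Task {t}: which of {subset} do you like MOST, and which LEAST?")
```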
On-the-fly utility estimation in SSI/Web helps dig deeper into what motivates people in their choices. It lets you recall a respondent’s best and worst items in follow-up questions to understand why they find those items the best or worst. It also helps frame the choice in context: was the item genuinely appealing, or merely the best of a set in which nothing really appealed? Open-ended questions about motivation prompt respondents to reveal their rationale.
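To illustrate the recall idea, here is a simple count-based sketch: each item scores +1 when chosen as best and -1 when chosen as worst, and the running totals identify which item to pipe into a follow-up question. This counting shortcut is a stand-in, not SSI/Web’s actual estimation method, and the names and answers are invented.

```python
from collections import Counter

# Each tuple records one task's answer: (item picked as best, item picked as worst).
answers = [("Emma", "Bert"), ("Emma", "Carl"), ("Dana", "Bert")]

score = Counter()
for best, worst in answers:
    score[best] += 1   # +1 every time an item is picked as best
    score[worst] -= 1  # -1 every time it is picked as worst

overall_best = max(score, key=score.get)
# Recall the leading item in an open-ended follow-up question:
print(f"Why do you find '{overall_best}' the most appealing name?")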
MaxDiff delivers a ranking of the items tested, a metric distance between the ranked items, and the reasoning behind respondents’ choices. Since the ranks are established at the individual respondent level, they can vary greatly across the sample. Segmentation or cluster analysis can be applied to find similar response patterns at segment level (for example, to uncover characteristics unique to the group within the overall sample that prefers a specific item).
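As a sketch of that segmentation step, the snippet below clusters respondent-level item scores with k-means. The utilities are synthetic placeholders generated for illustration; in a real study they would come from estimating the MaxDiff responses.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic respondent-level utilities: 200 respondents x 30 items.
rng = np.random.default_rng(0)
utilities = rng.normal(size=(200, 30))
utilities[:100, 0] += 2.0   # one planted segment strongly prefers item 0
utilities[100:, 5] += 2.0   # the other planted segment prefers item 5

# Group respondents with similar preference patterns.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(utilities)

for s in (0, 1):
    mean_u = utilities[segments == s].mean(axis=0)
    print(f"Segment {s}: top item = item {mean_u.argmax()}")
```

With real data, profiling each cluster against background variables is what surfaces the characteristics unique to the group preferring a given item.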
At an aggregate level we may see that item “X” wins, while an acceptable alternative, item “Y”, collects more than its fair share of preferences. At segment level, however, preferences vary and systematic differences emerge. Segmentation prevents us from simply declaring the item that performs best at aggregate level the “winner”, when its performance may merely reflect that it is the best middle-of-the-road item, lacking controversy. Looking at the results at segment level may therefore lead to a different, or in other words more directed, conclusion.
MaxDiff can be used whenever you need to support a choice among many items (say, up to 30), where the items are at a comparable level of execution and articulation. It can support selections among words (e.g. brand, range or product names), statements (e.g. benefit articulations or articulations of reasons to believe) or full concepts (e.g. combining full descriptions and visualizations). However, we suggest limiting the number of items you test in a MaxDiff study in proportion to their length and complexity, to reduce unnecessary fatigue and to safeguard data quality.
No tool is perfect, and the MaxDiff approach has its limitations. The most obvious is that MaxDiff delivers only relative, not absolute, measures of item performance: we can show how an item performs relative to the other items in the same set, but not across sets or studies. There are ways around this, such as including a benchmark item in the set, an item with a known performance measured in other sets and studies. Other solutions exist as well and can be discussed on a case-by-case basis.
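A minimal sketch of the benchmark idea follows: scores within a study are indexed against the benchmark item, making results loosely comparable across studies that share the same anchor. The item names and score values are invented for illustration.

```python
# Hypothetical relative scores from one MaxDiff study, including a
# benchmark item whose performance is known from earlier studies.
study_scores = {"benchmark": 12.0, "X": 18.0, "Y": 15.0, "Z": 4.0}

anchor = study_scores["benchmark"]
indexed = {item: round(score / anchor, 2) for item, score in study_scores.items()}

# An index of 1.5 means the item performed 50% better than the benchmark,
# which gives a common yardstick across sets and studies.
print(indexed)
```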