Benchmarking Emotional Reactions to Concept Tests


Yesterday, Josh Kamowitz and Brent Snider of BrainJuicer presented a webinar, “Love at First Test: Why Feelings Matter When Optimizing Concepts.”

Traditionally, concept tests have focused on measuring consumers’ rational reactions to product concepts. First, a concept for a product, service, or ad is described, expressing an overall consumer insight, describing benefits and RTBs (Reasons To Believe), with a possible illustration. Then respondents are interrogated to determine:

  • Purchase intent
  • Advantages of the concept
  • Ways the concept can be improved
  • Value for money
  • Standard concept ratings (how different, believable, relevant, and understandable the concept is; how it fits with the brand)
  • Custom attributes
  • How the concept would be described to a friend (“telephone test”)
  • Anticipated usage, including when and by whom
  • Substitution question

Brent said, “We are not going to dismiss previous business testing but open your eyes to a new way to measure concepts with emotion as the lead metric. Our approach is more in line with the latest thinking of making decisions emotionally first.”

Brent discussed Daniel Kahneman’s book, Thinking Fast and Slow, which describes System 1 and System 2 thinking. System 1 thinking is fast, intuitive, instinctive, metaphoric, and unconscious, while System 2 thinking is slow, analytical, effortful, propositional, and conscious. Fast thinking is emotional reaction, while slow thinking is rational contemplation. Brent said, “Thinking to humans is like swimming to cats: we can do it but we’d rather not.”

Relating this to research, Brent said, “Here’s the rub: the two systems are not equal in weight, but until now marketing has treated the rational and emotional as equals.” Why does traditional research get it wrong? “Quantitative approaches are way too rational; qualitative approaches are way too prescriptive. When is the last time asking ‘so tell us what you dislike about X category’ unearthed a revolution?”

“‘We think much less than we think we think,’” Brent said. “Yet most traditional concept testing tools require people to overthink and overrationalize their answers, while consumer decision making is rarely rational and always highly emotional. If your research is mainly about asking people to think rationally, you could be missing the honest truth about your idea.”

BrainJuicer’s System 1 Concept Optimizer includes all the traditional measures above but starts by asking consumers three questions to measure their emotional reaction to a concept:

  1. “How do you feel about this product?” The respondent is shown 8 facial expressions, each labeled with a universal emotion: contempt, surprise, anger, disgust, fear, happiness, sadness, or “neutral.”
  2. “To what degree did this idea make you feel [selected emotion]?” Respondents are shown a rating scale.
  3. “What was it about this statement that made you feel this way?” Respondents are asked for an open-ended comment.

Based on these three questions, BrainJuicer uses a proprietary calculation to measure “Emotion-into-Action” on a five-point scale, from 1 star to 5 stars. In a follow-up email to me, Brent elaborated on Emotion-into-Action:

While we can’t give away the actual formula of how we calculate the star ratings, we can say that the largest contributor to the Emotion-into-Action score (1 through 5 Stars) is Happiness.  It carries the greatest weight in the calculation, followed by Surprise.  The negative emotions detract from the Star potential and Neutrality is a detractor as well.  We take into consideration all of the emotions felt, as well as the intensity of each of those emotions.  Since there are no degrees of neutrality, it can hurt an idea a lot when it comes to the Star ratings.
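BrainJuicer’s actual formula is proprietary, but the properties Brent describes pin down its general shape: happiness carries the most positive weight, surprise the next most, negative emotions and neutrality detract, each contribution is scaled by intensity, and neutrality has no degrees. A purely illustrative sketch consistent with those properties (all weights and the star mapping are my own assumptions, not BrainJuicer’s):

```python
# Hypothetical Emotion-into-Action sketch. The real formula is proprietary;
# this only mirrors the stated properties: happiness weighs most, surprise
# second, negatives and neutrality detract, intensity scales each emotion.

# Illustrative weights only -- not BrainJuicer's actual values.
WEIGHTS = {
    "happiness": 1.0,   # largest positive contributor
    "surprise": 0.5,    # second-largest positive contributor
    "neutral": -0.5,    # neutrality detracts, with no degrees of intensity
    "sadness": -0.4,
    "anger": -0.4,
    "fear": -0.4,
    "disgust": -0.4,
    "contempt": -0.4,
}

def emotion_into_action(responses):
    """responses: list of (emotion, intensity) pairs, intensity in 0..1.

    Returns a 1-5 star rating. Since "there are no degrees of
    neutrality," a neutral response always counts at full strength.
    """
    raw = 0.0
    for emotion, intensity in responses:
        if emotion == "neutral":
            intensity = 1.0  # neutrality has no intensity scale
        raw += WEIGHTS[emotion] * intensity
    # Map the raw score onto 1-5 stars, clamping at the extremes.
    stars = 3 + round(2 * max(-1.0, min(1.0, raw)))
    return max(1, min(5, stars))
```

Under this sketch, intense happiness plus surprise lands at 5 stars, while a neutral face alone drags a concept down to 2 stars, which matches the remark that neutrality “can hurt an idea a lot.”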

The benchmark database rates over 4,000 tested concepts, from 16 categories in 43 countries, across the 5 Star ratings:

  1. High Loss Risk (17% of concepts) – “Below average concept, do not progress.” An example of a 1-star concept was a “Soft Drink Party Keg.”
  2. Low Returns (28%) – “Around the average, but unlikely to make much impact in its market. ‘Progress with improvements’ where diagnostics show clear way forward.” A 2-star concept was a “Stevia & Cane Sugar Sweetener.”
  3. Solid Investment (35%) – “Above average concepts and in most cases recommended for progression.” A 3-star concept was the Dole Fruit Squish’ems.
  4. Market Beater (14%) – “Very strong versus the database (within the top quintile) and likely to deliver in-market success. Progress.” A 4-star concept was the Fig Newton Triple Berry.
  5. Next Big Thing (6%) – “Rare, top of the database concepts typified by high levels of happiness at a high degree of intensity and likely to deliver strong in-market success. Progress without delay.” A 5-star concept measured by BrainJuicer was Nutella & Go.

The Nutella & Go concept test showed a lot of positive emotion around happiness and surprise. “If a high level of surprise is felt, it can get people to feel differently about a brand or category. Surprise never lasts long and ultimately turns into happiness.”

How did the product do in market? “It is the best-selling confection at front of Walmart and Publix,” said Brent. “It disrupted a fairly substantial category in a short period of time.”

In cases like this, measuring emotion in concept tests can produce happiness and surprise for brand managers!



  1. The glaring inconsistency in their argument is that they use a reflective task (which engages system 2) to measure ‘emotion’!!

    What’s their evidence to support the assertion that consumer decision-making is ‘always highly emotional’? Kahneman’s system 1 encompasses cognition as well as affect. Brainjuicer seem to have cherry-picked affect only for their sales pitch.

  2. Hi Jeff – Tom from BrainJuicer here. Thanks for the comment (and thanks Jeff H for posting the write-up)

    To address your points.

    We use faces in order to make our emotional measurement more intuitive, fast and easy (i.e. closer to “system 1”) than either standard numerical or verbal scales. We’ve found it to be consistently more discriminating, richer and more predictive than the alternatives, and while we wouldn’t claim it as perfect, we think it’s the best currently available method of getting close to System 1 responses (including facial coding, which we’ve experimented extensively with – though it is rapidly improving and we expect it to play more of a role in our testing in future).
    As for the emotional nature of System 1 decisions, you’re right – it was an exaggeration to say decisions are “always” emotional. In our behavioural activation work we stress the roles of habit, for instance. But we strongly stand by our emphasis on emotion – it’s the best and most measurable proxy for the drivers of decision making, particularly when testing new ideas.

    The general point behind both these is that market research works by coming up with methods and metrics that represent the best practical and scalable way of getting to consumer decisions and using them to predict behaviour. Measuring emotion is the best predictor of those decisions, and using faces is the best way we’ve found of getting there.

  3. Hi Tom, many thanks for your reply. I understand your point about faces and would agree that your method is more valid than using ‘traditional’ direct and explicit questions (so credit to you guys for moving the industry forward) but the most valid way to measure system 1 involves automaticity and a reflexive task rather than a controlled and reflective task so I think your claim on your website (‘we are the system 1 agency’) has to be questioned.

    Also, I think (with respect) that you guys need to read up on the studies into emotion. Apart from extreme emotions such as anger and fear, which do promote self-defeating and maladaptive behaviour, there is hardly any evidence that emotions drive behaviour. I know this sounds surprising for people in marketing but read Baumeister for example. He conducted a meta study of 4,000 studies and found only a few with a causal link. What he did find was that emotions are the outcome of behaviour and, because of dopamine tagging and the brain’s learning mechanisms (but that’s cognitive not affective), that behaviour comes to pursue emotion. Also, when the learning’s happened the emotion is gone. So I’m not sure what you’re actually measuring, nor the grounds for your assertion that ‘measuring emotions is the best predictor of those decisions’.

    Finally, what do you mean by ‘emotions’ anyway? There are at least four ways that I’ve heard marketers express this: i. Feelings (conscious feedback, therefore explicit) e.g. joy, well-being, surprise. ii. Attitudes (how one feels about a brand e.g. trust, closeness). iii. Motivations (goal achievement e.g. pride, relief) and iv. Brand personality (character e.g. friendly, authentic). I guess, since you’re using Ekman’s faces, that you mean ‘feelings’ – but these are outcomes not drivers.

