We conduct a lot of customer research. It is at the heart of most of our projects. And surveys are part of the research design for more than half of these studies.
I was reflecting last week with a colleague in media on what we had learned over the years about customers, and the conversation turned to the advantages and limitations of customer surveys.
Customer surveys can be powerful tools when used in the right context and designed rigorously from the outset. They give businesses a window into their customers' minds, opening the way to improved products, better services and new opportunities.
Conversely, surveys can be weak data collectors, and potentially misleading, when applied in the wrong situations or put together poorly.
There are three problem types for which customer surveys are a limited way to gather meaningful data:
1. Self-reporting of reasons for past behaviour
2. Predicting future behaviour
3. Determining why and how customers do what they do
Careful survey design, or alternative and non-traditional research approaches, must be employed to tackle these problem areas.
Self-reporting of reasons for past behaviour
Surveys are solid tools for gathering facts, especially in the present (how many times per month do you shop at…?) or in general (which colour of car do you most prefer?). Behaviours – particularly past behaviours – are much more difficult to measure accurately via survey, because people will later rationalize irrational decision making.
We did some work in the chocolate-milk space a couple of years ago and employed in-field observation and intercepts (convenience and grocery stores) for data gathering. We would ask consumers why they chose the carton or bottle they selected, a few seconds after the customer had pulled it from the cooler. A number of consumers would say something like “I picked the one with the latest freshness date,” and we'd counter with “no you didn't, you just reached in and grabbed the one closest to the front.”
Only after confrontation would consumers say “oh, you're right, I guess I liked the package” or “I was in a real rush.”
Consumers don't lie, but their brains want to impose order and process on decisions whose real drivers were something else entirely – so asking similar questions on surveys can lead to bad data and inaccurate interpretations. In this case, non-traditional qualitative research can yield better results.
Predicting future behaviour
Surveys are normally quite bad at predicting future behaviour as well. That has more to do with survey methodology and design than with the customer taking the survey.
If we think about pricing for example, asking a customer to select one price from three available options for a new offering will normally result in the survey taker choosing the lowest price. A better survey methodology for dealing with pricing is called conjoint analysis, which takes the survey taker through a series of trade-off decisions where prices are tied to other elements of value, such as product features or quality levels. Only in context can customers make intelligent decisions about money on a survey.
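The trade-off logic behind conjoint analysis can be sketched in a few lines. This is a minimal, rating-based illustration only – the attribute names, profiles and ratings below are all synthetic assumptions, not data from any real study – but it shows how part-worth utilities for price and quality levels fall out of a simple regression over the trade-off responses:

```python
import numpy as np

# Hypothetical rating-based conjoint design: each profile combines a
# price level and a quality level, and respondents rate each profile.
# Columns: intercept, price_high (dummy), quality_high (dummy).
profiles = np.array([
    [1, 0, 0],  # low price,  low quality
    [1, 0, 1],  # low price,  high quality
    [1, 1, 0],  # high price, low quality
    [1, 1, 1],  # high price, high quality
])
# Synthetic average ratings (1-10) for the four profiles.
ratings = np.array([5.0, 8.0, 3.0, 6.5])

# Least-squares fit yields a part-worth utility for each attribute level:
# how much that level adds to (or subtracts from) the overall rating.
coef, *_ = np.linalg.lstsq(profiles, ratings, rcond=None)
intercept, pw_price_high, pw_quality_high = coef
print(f"part-worth of high price:   {pw_price_high:+.2f}")
print(f"part-worth of high quality: {pw_quality_high:+.2f}")
```

In this toy data, the negative part-worth for high price and the larger positive part-worth for high quality quantify the trade-off: respondents will accept a higher price only when it is bundled with enough added value.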
Another example involves asking customers about potential new products – “would you like to be able to buy a mobile phone from a vending machine?” Again, out of context it is very difficult for a customer to know how to answer that question. Instead, a much more descriptive narrative followed by a question is often required, where the conceived use is laid out. In this instance, if the conceived use is transit hubs and phones for travellers, a better narrative-followed-by-question would be: “Imagine you are travelling throughout Europe and you've just arrived in Barcelona, late at night, without a mobile phone functional in Spain. Would you like to be able to buy a mobile phone from a vending machine in a public place?”
When trying to model future behaviour, surveys can be employed, but time and thought must be given to providing maximum context (without biasing the results) and potentially employing advanced analytics to get to better answers.
Determining why and how customers do what they do
Surveys are good at four of the five W's: who, what, when and where. These questions are typically fact-based, quantifiable or at least mutually exclusive, and to a large degree free from judgment on the part of the survey taker. 'Why' and 'how' are the opposite – they involve opinion and they are hard to put numbers around. It is difficult to obtain solid 'how' and 'why' data via survey.
The limitation here is a practical one. It is very tricky – even if some qualitative research has been done in advance to inform the survey design – to list adequate answer options for 'how' and 'why' questions.
Why did you purchase your current motorcycle?
Answer options: a) low price, b) great ride, c) high quality components, d) brand, e) other.
The real answer may be a combination of the listed options, or parts of them, or may lie outside the list entirely. Leaving questions open-ended is preferable, but that just moves the problem of interpretation later in the process: it is then up to the analysis team to spot trends and sort the answers into groupings, which is a complex, inexact science.
A preferable way of tackling 'how' and 'why' questions in surveys is to ask many indirect, fact-based questions, then correlate the answers to infer the answer to the underlying problem. For example, if customers are asked about the importance of a number of product attributes, and then about their satisfaction with those same attributes, it may be possible to determine which attributes drove certain decisions (high importance combined with low satisfaction, for instance, hints at why a customer may not have repeated a purchase with a certain brand).
Depending on how the questions are set up and how the math is done, this is known as "derived importance." Derived importance is controversial as a methodology – some research experts argue it is too big a leap from correlation to causation – but it tends to be more accurate than asking customers 'how' and 'why' questions directly.
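The correlation step behind derived importance can be sketched briefly. Everything below is a synthetic illustration – the attribute names, scores and the weighting that generates the data are assumptions, not real survey results – but it shows how an attribute's importance can be inferred from how strongly its satisfaction scores track overall satisfaction, without ever asking the customer 'why':

```python
import numpy as np

# Hypothetical per-respondent satisfaction scores (1-10) for two
# attributes, plus an overall satisfaction score. Synthetic data only.
rng = np.random.default_rng(0)
n = 200
sat_price   = rng.uniform(1, 10, n)
sat_quality = rng.uniform(1, 10, n)
# In this made-up data, overall satisfaction is driven mostly by quality.
overall = 0.2 * sat_price + 0.8 * sat_quality + rng.normal(0, 1, n)

# Derived importance: correlation of each attribute's satisfaction with
# overall satisfaction, in place of a direct "how important is…" question.
derived = {
    "price":   np.corrcoef(sat_price, overall)[0, 1],
    "quality": np.corrcoef(sat_quality, overall)[0, 1],
}
for attr, r in sorted(derived.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: derived importance r = {r:.2f}")
```

Because the synthetic data was generated with quality as the dominant driver, quality's correlation with overall satisfaction comes out well above price's – the inferred ranking recovers the hidden 'why' without the respondent ever stating it.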
A different approach is to go off-survey for answers to 'why' and 'how' questions. Focus groups and/or one-on-one customer interviews tend to be better venues for these answers, since it is possible to ask follow-up questions and go on "fishing trips" with the customer in that setting.
Mark Healy, P.Eng, MBA, founder of Torque Customer Strategy, is now a partner at Satov Consultants – a management consultancy with practice areas in corporate strategy, customer strategy and operations strategy. Mark's focus areas inside the Customer Strategy practice include consumer insights, customer experience, innovation and go-to-market strategy. He is a regular speaker and media contributor on topics ranging from marketing to strategy, in telecom, retail and other sectors. Mark is known as much for his penchant for loud socks and a healthy NFL football obsession as he is for his commitment to Ivey and recent Ivey grads. He currently serves as chair of the Ivey Alumni Association board of directors. Mark lives with his wife Charlotte and their bulldog McDuff in Toronto.