Baking Good Banana Cake: Attitudinal Questions in Quantitative Research

Andrew Robertson – MMResearch™

Quantitative surveys commonly contain three kinds of research question.[1] Demographic questions, or classification questions, permit comparisons between groups of people who differ on one or more dimensions. A respondent's gender is perhaps the most common kind of classification, but the number of dimensions by which respondents can be grouped is almost limitless. In market research, classification questions often cover organisation size, industry type, or family size. In social research, they can include ethnicity, religion, education, income, or even sexual orientation. These questions are generally straightforward and easy to interpret.

Behavioural questions often ask respondents to retrospectively recall the occurrence or frequency of specific actions or events. For example, "How often, in the last month, have you done [activity]?", "How frequently do you purchase [product]?", "How many times have you seen [advertisement]?" Behavioural questions are only as reliable as a person's memory, or their ability to estimate, so they are best suited to measuring the occurrence or frequency of recent or recurring events.

Attitudinal questions gauge respondents' opinions, perceptions, beliefs, and even their psychological dispositions. That is, attitudinal questions measure something internal - and they can be a minefield. Consider the following analogy. A friend finally gives you the secret recipe for his amazing banana cake. You follow the recipe closely, carefully weighing each ingredient. You bake your cake according to your friend's exact specifications. Alas! Your cake is horrible and dry. Has your friend played a mean trick on you? He assures you that he gave you the original recipe. You try again, thinking you may have forgotten something - but this time your cake is even drier than before! Your friend suggests that you check your kitchen scales. Behold! They give you a different reading each time you weigh the same bag of flour! It's time to use more reliable scales.

The same is true for survey research. Your methodology can be brilliant! Your sampling procedure can be flawless! Your analytical skills can be unsurpassed! But are your scales reliable (consistent) and valid (measuring what they are supposed to measure)? Reliable and valid measurement is the key to interpretable and meaningful research results. The problem, however, is that unlike a banana cake, it can sometimes be hard to tell if the final research product is "a little off."

Why are attitudinal questions such a minefield? Let's take a look at customer satisfaction as an example. Satisfaction is subjective; it’s a broad concept that can mean different things to different people. When asked about satisfaction, one respondent might consider their satisfaction in light of the way they were served the last time they contacted their service provider. Another person might feel that satisfaction has more to do with the usefulness and reliability of a product. Another person might not think of their reason for being satisfied at all. Instead, they may consider an overall positive “feeling” that they have for the service provider – rather than for any particular aspect of the service or product.

Or perhaps you want to understand the drivers, needs, or motivations of your customers? This information might be very helpful when designing group-specific marketing strategies or products to meet the needs of particular groups of people. An organisation might suspect that some of their customers are motivated by financial needs and desires. But how does one assess this? If you asked respondents to rate their financial needs and desires on a 1 (strongly disagree) to 7 (strongly agree) scale, the responses would be next to meaningless. What does one mean by financial need? Does this relate to a need for food, shelter, and clothing? Or does it relate to the desire for a BMW M5, rather than just a five-series BMW? Financial need may actually be multi-dimensional. That is, under one umbrella term, financial need, there may be two or more distinct kinds of need. But how are these different types of need identified? Who says they even exist? They may exist, but do they help us to understand people any better?

Fortunately, over the last century social scientists have spent enormous amounts of time and energy developing ways to identify and assess these kinds of "internal" constructs. Many of these methods are statistical, and involve rather complex techniques designed to 'reduce' data down to meaningful (useful) components, or to test a theory's ability to 'explain' survey responses. However, this can only be done effectively if the questions were developed properly in the first place. Here are a few things that you can look for when evaluating the attitudinal questions contained within a quantitative research instrument. Considering the following points will give you some indication as to whether your banana cake may be "a little off."
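To make the 'data reduction' idea a little more concrete, here is a minimal sketch (not a description of any particular provider's procedure) of how the eigenvalues of a correlation matrix can hint at how many distinct constructs a set of items really measures. The data, the two underlying "needs", and the noise levels are all invented for illustration, echoing the earlier financial-need example.

```python
# Minimal, illustrative sketch of "data reduction": how many distinct constructs
# do a handful of survey items really measure? All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical respondents

# Assume two underlying "needs" drive six items (three items each).
basic = rng.normal(size=n)    # e.g., need for essentials
status = rng.normal(size=n)   # e.g., desire for luxuries
items = np.column_stack([
    basic + rng.normal(scale=0.6, size=n),    # items 1-3 reflect the "basic" need
    basic + rng.normal(scale=0.6, size=n),
    basic + rng.normal(scale=0.6, size=n),
    status + rng.normal(scale=0.6, size=n),   # items 4-6 reflect the "status" need
    status + rng.normal(scale=0.6, size=n),
    status + rng.normal(scale=0.6, size=n),
])

# Eigenvalues of the correlation matrix: the number of "large" eigenvalues
# suggests how many distinct components the items measure - here, two.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]
print(np.round(eigenvalues, 2))
```

In a real study this kind of check would sit alongside proper factor-analytic work and, crucially, well-developed questions; the sketch only shows why "financial need" might turn out to be two constructs rather than one.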

 

1. Be wary of questionnaires that assess internal constructs with a single question. The general rationale for attitudinal survey questions is that the attitude, opinion, belief, or personality disposition causes a response to a question (e.g., a ‘6’ on a strongly disagree to strongly agree scale). Any score, however, is made up of two parts: the true score and the error. Sources of error are infinite. Respondents could be interpreting the same question quite differently for a variety of reasons. Responses can even be affected by mood, or the context in which a questionnaire is being completed.

Assessing a single construct with numerous questions can be a little more expensive, but the gain in measurement accuracy is substantial. Asking more than one question allows the researcher to calculate average responses, so the error associated with such things as the misinterpretation of a question is averaged out across all responses. Note, however, that assessing a single construct with numerous questions is not the same as asking the same question again and again. Repetition is pointless for participants, and it throws a huge spanner in the works of some of the more complex statistical techniques and procedures.
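As a rough illustration of the "true score plus error" idea and why averaging helps, here is a small simulation. Everything in it (sample size, noise level, number of items) is an invented assumption rather than real survey data; the point is simply that the average of several noisy items tracks the underlying attitude more closely than any single item does.

```python
# Illustrative simulation: observed response = true score + error.
# Averaging several items cancels out much of the item-level error.
import numpy as np

rng = np.random.default_rng(1)
n_respondents, n_items = 1000, 6

true_score = rng.normal(size=n_respondents)                   # the attitude itself
error = rng.normal(scale=1.0, size=(n_respondents, n_items))  # question-level noise
observed = true_score[:, None] + error                        # one column per question

single_item = observed[:, 0]        # score based on one question
scale_mean = observed.mean(axis=1)  # score based on the average of six questions

# How well does each score track the (normally unknowable) true attitude?
print(round(np.corrcoef(true_score, single_item)[0, 1], 2))  # roughly 0.7
print(round(np.corrcoef(true_score, scale_mean)[0, 1], 2))   # roughly 0.9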

 

2. Be wary of questions that have not been drawn from the population of interest. Good researchers will tell you that there is really no substitute for this, and it is just one reason why qualitative research is so useful. If questions are drawn from the population of interest, they are more likely to be understood by that population – and to be relevant to them. This further reduces your error. Also, if the qualitative phase is broad (wide-ranging) enough, the resulting questions are more likely to tap all facets of the opinion, belief, or attitude being assessed.


3. Be wary of questions that appear to deal with more than one point. This is not to say that long questions are bad; some opinions are complex. However, double-barrelled questions - those that ask about two things at once, such as "The staff were friendly and efficient" - can be confusing for participants, and this will increase your error. Pilot testing can be a huge help when assessing more complex opinions or beliefs.

 

4. Where possible, use established instruments – and avoid changing them. There are a lot of publicly available instruments out there. The trick is determining which ones are well developed and suitable for your population of interest. Don’t be afraid to question your researchers about the use of established measures. They will be able to explain the pros and cons. Generally though, using an established instrument can reduce costs associated with an initial qualitative phase, while at the same time giving a valid and reliable measure.

If you decide to use established measures, it’s good to avoid changing them if you can. If you change the questions, they will reflect your interpretation or the interpretation of the researchers (i.e., your questions will no longer be drawn from the population of interest). Of course, sometimes it is necessary to make ‘context-relevant’ changes. For example, instruments developed in Australia may be very relevant to a New Zealand context, but the use and meaning of some words can be quite different (take the word “thong”, for example). Changes such as this should be considered carefully, and documented along with the presentation of methodology and results. If you change an established measure, you can no longer associate previous reliability and validity information with it.

 

5. Be wary of attitudinal questions that have a limited response format. The whole point of a survey is to make distinctions between respondents, or groups of respondents. Limited response formats (such as yes/maybe/no) work against this, because they produce less variation between respondents. Furthermore, many statistical techniques cannot deal with these sorts of formats. Likert-type (e.g., 1 to 7) response formats are much better suited to these analyses, especially if final scores comprise averages of responses to numerous items.
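For readers who like to see the mechanics, here is a small illustrative sketch of what "averaging responses to numerous items" looks like in practice, together with Cronbach's alpha, a common internal-consistency check. The data and the number of items are invented assumptions; the alpha formula itself is the standard one.

```python
# Illustrative sketch: building a scale score from several 1-7 Likert items
# (simulated here) and checking internal consistency with Cronbach's alpha.
import numpy as np

rng = np.random.default_rng(2)
attitude = rng.normal(loc=4, scale=1, size=300)  # hypothetical underlying attitude

# Four hypothetical items: the attitude plus noise, rounded and clipped to 1-7.
likert = np.clip(np.rint(attitude[:, None] + rng.normal(scale=0.8, size=(300, 4))), 1, 7)

scale_score = likert.mean(axis=1)  # each respondent's average across the four items

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of item totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

print(round(cronbach_alpha(likert), 2))  # high (roughly 0.8-0.9) for these made-up items
```

A yes/maybe/no format would give each respondent one of only three possible values per question, which is exactly the lack of variation the point above warns about.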

 

Trust between a researcher and a client is essential, because a researcher needs to make many independent decisions when developing a research programme. Get to know your research provider. Check that they understand your needs, and who your target population is. As you can hopefully see, there are a lot of issues to consider when developing attitudinal questions, and a good researcher will have the skills to deal with them. So be encouraged! Don’t be afraid to question your research provider. Ask them to justify their question selection, and the process they’ve used to develop questionnaire items.

I’m happy to be at an organisation that works extremely hard to build close relationships with its clients. We listen to our clients and we keep the entire research process open and transparent. Our clients can have input at every phase of their research (if they want to), but we are not afraid to offer guidance to avoid compromising our research methodology – we enjoy good banana cake!

[1] Single questions can also serve more than one purpose.


The oMMniBuild™ , oMMnibus™, MMysteryShop™ and MMDashboard™ brands are owned by MMResearch™.