Do I Have Your Attention?

Inattention or natural human behaviour? Re-examining the case of the failure to “mark a 2”.

I noticed an interesting and puzzling figure in a paper by Burke Research presented at the 2009 CASRO conference. The paper, authored by Miller and Baker-Prewitt, ably demonstrated that at least 20% of respondents failed an attention trap which asked them to “please mark response number 2” at item 40, and again at item 80, in a grid format.

The less obvious finding was the number of people in their experiment who failed the trap when it was placed at the beginning of the survey: just 4%. In addition, people who were exposed to the trap up front were much more likely to pass subsequent iterations of the same trap.

So what is the level of inattention this trap type measures: 4%, 20%, or some other number? Or is this trap not measuring inattention at all?

The Natural “Base of Inattention”

As researchers, we have tended to rely heavily on answers to questions to identify inattention. But other disciplines measure inattention differently. Might their measures be more effective?

Humans are naturally inattentive, even when we try to pay attention, as demonstrated by attention tests like the one where subjects are randomly shown a number between 1 and 9 on a screen for 250 milliseconds (about the blink of an eye) followed by a 900-millisecond mask. During the mask, the subject clicks a device if the number they saw was NOT a 3. The error rate on this test (where the subject is actively trying to pay attention) is about 3%.
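As a rough illustration of this paradigm, the sketch below simulates the trial structure just described. It is purely hypothetical: the function name and the modelling choice of a fixed per-trial lapse probability (set to the 3% error rate quoted above) are invented for this article, not taken from the test itself.

```python
import random

def simulate_attention_test(n_trials=1000, lapse_rate=0.03, seed=1):
    """Hypothetical simulation of the digit/mask attention task described above.

    Each trial shows a digit 1-9; the subject should respond on every trial
    EXCEPT when the digit is 3. With probability `lapse_rate` the simulated
    subject lapses and does the opposite of what the rule requires (an assumed
    simplification, not an empirical model of attention).
    """
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_trials):
        digit = rng.randint(1, 9)           # stimulus shown for ~250 ms
        should_respond = digit != 3         # rule: click unless the digit is 3
        lapsed = rng.random() < lapse_rate  # momentary inattention
        responded = (not should_respond) if lapsed else should_respond
        if responded != should_respond:
            errors += 1
    return errors / n_trials

print(f"Simulated error rate: {simulate_attention_test():.1%}")
```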

So, does the first occurrence of the attention trap in the Burke experiment, with its 4% failure rate, measure a natural base level of inattention? If so, what explains the 20% failure rate of the trap within the grid?

Decision-making processes can be thought of as a set of rules we use to choose an outcome based on a given set of inputs within a certain context. If one of the inputs is missing, then cognitive underspecification occurs. Since the decision must be made (and in the context of a market research study, it is being made rather quickly), the missing input may be either replaced by some heuristic or other, or the context re-assessed. Usually this leads to the correct outcome (recall that Burke found 80% of people did correctly mark 2 as instructed), but not always.

Heuristics that might be employed include the following, amongst many others:

• Frequency bias – choosing the most usual/normal outcome

• Recency bias – repeating what was just done

• Confirmation bias – choosing the outcome expected by “authority”

The context is extremely important, and as long as it remains unchanged, decision processes can continue successfully.

In a research questionnaire setting, the context is a task where the respondent is asked to: “Please consider each of these statements and indicate your agreement or disagreement with each”. The outputs are defined (the answer choices), and the inputs are the attitude statements themselves. All is congruent, unambiguous and fully specified; a memory schema is defined to provide the answers relative to the context. Little additional specification is required to retrieve each answer once the contextual frame is established.

In the middle of this context there then comes an input that is not related to the current context. Some people will correctly re-establish a new context, while others will act more automatically and “fail” this question. When the wrong heuristic is employed, the wrong answer is given (or, occasionally, the right answer is given by accident).
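To make this mechanism concrete, the toy sketch below models only the failure path: the respondent who does not re-establish the context and instead answers the trap with the frequency or recency heuristic. Everything in it (the function, the crude “starts with Please” check, and the example data) is invented for illustration; it is not a model of any real respondent.

```python
from collections import Counter

def answer_grid_item(statement, context, previous_answers):
    """Toy model of the heuristic (underspecified) path through a rating grid.

    If the incoming statement fits the established context (an attitude
    statement rated on an agreement scale), an answer is retrieved from the
    respondent's attitude 'schema'. If it does not fit (here crudely detected
    as an instruction beginning with "Please"), this toy respondent does NOT
    re-establish the context; they fall back on a heuristic instead.
    """
    fits_context = (context["type"] == "agreement_grid"
                    and not statement.startswith("Please"))
    if fits_context:
        return context["schema"].get(statement, "Neither agree nor disagree")

    if previous_answers:
        recency = previous_answers[-1]                              # repeat what was just done
        frequency = Counter(previous_answers).most_common(1)[0][0]  # most usual answer
        return recency if context.get("prefer_recency") else frequency
    return "Neither agree nor disagree"

# Invented grid in progress: the trap arrives after the schema is established.
context = {"type": "agreement_grid",
           "schema": {"The brand is good value": "Agree"},
           "prefer_recency": True}
answers_so_far = ["Agree", "Agree", "Strongly agree", "Agree"]
print(answer_grid_item("Please mark response number 2", context, answers_so_far))
# -> "Agree": the trap is failed, not through carelessness but through the heuristic.
```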

This may explain why people don’t fail the trap when it is the first item on the questionnaire: the entire context (task, outcome and input) hasn’t yet been set. The first-item trap merely measures a background level of inattention – the fewer than 5% we would expect.

The conclusion is that the trap is not uncovering pre-existing inattention – rather the trap itself is causing the failure.

The key to understanding what drives the failure is not to look at the outcome, but to watch and time the processes that lead to it. To do this, we had to recreate the Burke experiment.

Our results further showed that the time taken over the first item (that is, the time taken to read the context, absorb the outputs, take in the first input and respond) is 60% higher when that item is a trap rather than a cognitively fully specified item. We also see that those who saw a trap up front need to take longer over the next item as the (correct) mental schema is re-established. From the third item onwards, the two groups process items at the same speed.

This speed is maintained all the way to the trap buried in the grid. At this point, those who “pass” the trap slow down and take twice as long as those who “fail” – the failures treat this trap as if it were an ordinary item in the grid and use heuristics to answer the question.
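The timing comparison itself is simple to compute. The sketch below is illustrative only: the function name and the latency figures are made up for this article (they are not our data). It compares mean per-item response times, position by position, between a group that saw the trap up front and a control group.

```python
from statistics import mean

def per_item_latency_ratio(group_a, group_b):
    """Ratio of mean response time per item position, group A vs group B.

    Each group is a list of respondents; each respondent is a list of
    per-item latencies in seconds (position 0 = first item shown).
    """
    ratios = []
    for position in range(len(group_a[0])):
        mean_a = mean(respondent[position] for respondent in group_a)
        mean_b = mean(respondent[position] for respondent in group_b)
        ratios.append(mean_a / mean_b)
    return ratios

# Invented example: trap-first respondents take longer over the first two items,
# then settle to the same speed as the control group.
trap_first = [[8.1, 6.0, 4.1, 4.0], [7.8, 5.7, 4.0, 4.2]]
control    = [[5.0, 4.2, 4.1, 4.1], [4.9, 4.0, 3.9, 4.0]]
print([round(r, 2) for r in per_item_latency_ratio(trap_first, control)])
```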

However, when the trap was presented deep into the questionnaire as a standalone question rather than part of a grid, the failure rate was only 4% instead of 19%.

That it is possible for 96% of a sample to correctly follow this direct instruction when it is presented singly, while 19% fail it in the context of a grid, all other things being equal, must suggest that it is the grid that is responsible for the failure and not the respondent.

In the view of this author, this is yet another reason why grids have no place in market research. As Don Dillman warned us long ago, “if the sponsor [client] wants individuals to contemplate each item separately, it is advisable to present each of them as individual items.”

It is clear that failing to follow a direct instruction in this setting is natural human behaviour, well described by the theory of cognitive underspecification. If our trap question were a good measure of inattention, it would measure it anywhere: within a grid or outside one. It patently does not, and therefore the trap cannot be measuring inattention.

In addition, trap failure does not necessarily indicate an ongoing absence of attention on the part of respondents; they simply cannot help themselves. And, indeed, neither could we if we were to complete our own surveys.

SSI Recommendation:

SSI recommends not using grids in questionnaires at all and, if they must be used, not inserting “attention traps” in the middle of them and thereby unfairly penalizing large numbers of respondents.

By Pete Cape, SSI Global Knowledge Director

First published in Asia Research Magazine, Q1 2014