Conquering Uncertainty: Moving from Offline to Online Methodology

There are several reasons why it may be a good idea to switch a research project from face-to-face or telephone to online. But as soon as we consider making major changes and leaving familiarity behind, our minds jump to the risks of the uncertain and unpredictable ways our data may turn out, and we hesitate.

While these concerns are legitimate, we should also take a closer look at the sources of this hesitation and their potential magnitude before we let unfamiliarity hold us back.

Before we go through the likely causes of data changes and the steps we need to take when moving projects online, let’s start by thinking about why a researcher might decide to switch mode for a particular research project.

Two main reasons for switching modes

1. Methodological reasons

While setting the ideal mode for a research project is usually done at the start, the reliability of that mode may change over time. This happens especially when a project has a long life, such as a tracking study that continually measures a population over time.

The way people communicate changes over time, and by not adjusting the way we conduct our research to match these changes, we will inherently bias our longitudinal studies.

In Malaysia, for instance, telephone interviews have traditionally been used to sample the entire country, but these days you have to include mobile phone samples to achieve true representation over the phone.

Another reason for using a different mode is to overcome social desirability effects: the possibility that the participant will not give a full, truthful answer, either because they feel it may show them in a poor light in the eyes of the interviewer or because they want to paint themselves in a good light.

2. Logistics

Cost and convenience are key drivers of business today. Important business decisions need to be made “now”, so getting data into the client’s hands quickly and conveniently is as crucial as, if not more crucial than, obtaining quality data.

So it is not surprising to see clients constantly looking for solutions that will save time and money. If they can get results via different modes, then they will consider switching if they can get those results faster, at less cost, or in a more convenient way.

Ilya Prigogine, a Belgian physical chemist and Nobel Laureate, says: “The future is uncertain… but this uncertainty is at the very heart of human creativity”.

It is not only chemists who take this positive view of risk and uncertainty. In his influential work Risk, Uncertainty and Profit, Frank Knight (one of the twentieth century’s leading economists) carefully distinguishes between economic risk (which is measurable) and uncertainty (which is not). He further argues that uncertainty gives rise to economic profits that perfect competition cannot eliminate.

We should frame our thinking in the same way to help us understand the dynamics involved in switching methodologies. Understanding the likely causes of data changes is the key to a successful transition from offline to online.

There are three essential sources of data changes when moving projects online

1. Sampling effects

One of the primary concerns when moving projects online is the effect of switching from an offline sample, drawn from the total population, to an online-only sample.

The challenge usually stems from researchers looking for a nationally representative sample and seeing the relatively low Internet penetration of a target population.

However, in order to properly understand the effect of non-coverage, we also need to consider the subject matter of the survey, as well as the differential response rates by demographics of interest.
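
To make this concrete, the bias of an online-only estimate can be written as the non-covered share of the population multiplied by the difference between the covered and non-covered groups. A minimal sketch in Python; the penetration and incidence figures are hypothetical, chosen only to illustrate the arithmetic:

```python
# Coverage bias of an online-only estimate of a population mean:
#   bias = (share not covered) x (covered mean - non-covered mean)

def coverage_bias(online_penetration: float,
                  mean_online: float, mean_offline: float) -> float:
    """Expected bias of an online-only estimate of a population mean."""
    return (1.0 - online_penetration) * (mean_online - mean_offline)

# Hypothetical figures: 70% Internet penetration; 20% of the online
# population buys white-label goods vs. 40% of the offline population.
bias = coverage_bias(0.70, mean_online=0.20, mean_offline=0.40)
print(f"Online-only estimate is off by {bias:+.1%}")  # -6.0% (six points low)
```

Note how the bias shrinks as penetration rises or as the gap between the covered and non-covered groups narrows: a general topic with no online/offline gap produces no coverage bias at all, however low the penetration.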

Looking at the chart below, we see the effect of sample biases on data and how certain groups can be over- or underestimated for certain product categories. Measures of white-label goods usage or housing tenure, for example, might be underestimated by an online sample, since these are associated more with the old and the poor – the very people who are under-represented in online samples.

Image 1: Effects of Sample Biases on Data

Usage patterns in a mobile phone study, by contrast, may well remain the same in offline and online samples. This shows that if we are dealing with a general survey topic, we are likely to see similar results; bias can occur, however, if the subject matter is strongly correlated with being online (or not).

When weighing up these issues of coverage, we also need to bear in mind that no data collection method is without bias, and we need to think about who is excluded in each method:

• people who won’t take phone surveys (telephone non-response)
• people who won’t join online panels
• members of the public whom an interviewer chooses not to approach for a face-to-face interview

2. Mode effects

Earlier we mentioned social desirability bias and how this might lead us to shift from an interview mode to a self-administered survey.

With no interviewer present online, participants may be more honest and less likely to give socially desirable responses.

This effect can be clearly seen if we ask the same people the same question at about the same time, using different modes.

To explore this issue, we conducted research online and through telephone interviews, using samples with similar coverage. On the topic of alcohol consumption, we see clear evidence of a social desirability effect: people interviewed over the phone were more reluctant to divulge details of their alcohol consumption, whereas almost half of the online respondents had no problem admitting to drinking alcohol. Nor is this a sample effect caused by Internet access: if we isolate those in the phone sample who have an Internet connection, they are just as unlikely to “admit” to drinking alcohol.

The opposite effect can be seen for a question about a positively regarded behaviour: buying a book. People are more likely to say they bought a book in the past week when talking to an interviewer than in online self-reported data.

Image 2: Done last week…
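
Before attributing a gap like this to mode, it is worth checking that it exceeds what sampling error alone could produce. A minimal sketch of a two-proportion z-test; the counts below are hypothetical, not the actual study figures:

```python
import math

def two_prop_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test; returns (z, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 230 of 500 online respondents vs. 150 of 500
# telephone respondents say they drank alcohol in the past week.
z, p = two_prop_ztest(230, 500, 150, 500)
print(f"z = {z:.2f}, p = {p:.4f}")  # a gap this large is unlikely to be chance
```

If the samples really do have similar coverage, a significant gap that flips direction between “sensitive” and “virtuous” behaviours is the signature of a mode effect rather than a sampling one.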

This social desirability effect can vary in strength and direction depending on culture. Additionally, there is acquiescence bias – the tendency to agree with the interviewer to avoid any conflict or confrontation, or even to please the interviewer with their response.

3. Questionnaire effects

Just as there is less chance of acquiescence or socially desirable answers with an online self-completion method, there is also no interviewer online to clarify an ambiguous question or to probe more deeply into respondents’ answers.

Online, the respondents are on their own to make sense of the questions. They also have as much or as little time to answer as they like.

So simply taking the same questionnaire (especially a poorly designed or poorly worded one) to an online setting can create differences in the data obtained.

Because variations in question wording and format can cause differences in data, the questionnaire needs to be carefully redesigned from a format that is read out by an interviewer to one that is read by the respondent.

To better illustrate, let’s look at a question we asked on customer satisfaction with the price of electricity.

While we don’t expect social desirability effects for this topic, the online results are very different from the data collected by telephone.

This is due to the questionnaire effect – the difference between a question that is read out and listened to, and one that the respondent reads on screen.

What happens is that when the entire scale is constantly visible, as with an online self-completion survey, people tend to gravitate towards the midpoint of the scale.

On the other hand, if the scales are presented verbally, people have a tendency to skew towards the extremes.

Image 3: Satisfaction with Price of Electricity

This is again due to the mode of administration. When data is collected by an interviewer, the interviewer may first ask: were you satisfied or were you dissatisfied? Then, depending on the response, they might probe the level of satisfaction: is that strongly or slightly?
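
One way to quantify this pattern when running parallel tests is to compare the share of midpoint versus extreme answers by mode. A minimal sketch with made-up 5-point ratings, purely to illustrate the shape difference:

```python
from collections import Counter

def scale_shape(responses, scale_max=5):
    """Share of midpoint vs. extreme answers on a 1..scale_max scale."""
    n = len(responses)
    counts = Counter(responses)
    midpoint = (scale_max + 1) // 2
    return {"midpoint": counts[midpoint] / n,
            "extremes": (counts[1] + counts[scale_max]) / n}

# Hypothetical 5-point satisfaction ratings for the two modes.
online    = [3, 3, 4, 3, 2, 3, 4, 3, 3, 2]   # visible scale: clusters mid
telephone = [5, 1, 4, 5, 1, 5, 2, 5, 1, 4]   # verbal scale: skews extreme
print("online:   ", scale_shape(online))     # midpoint-heavy
print("telephone:", scale_shape(telephone))  # extremes-heavy
```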

We also need to remember that in an interview mode the interviewer controls the pace of the interview and the time allowed for answering. This affects the number of answers given to some questions. Spontaneous brand awareness, for example, may increase online, where there is no interviewer in a hurry to move on with the survey. Conversely, open-ended questions yield more data with an interviewer who can engage in conversation, probe, and clarify.
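
Both effects are easy to monitor in parallel-run data, for example by tracking the average number of brands recalled and the average length of open-ended answers by mode. A small sketch with made-up per-respondent data:

```python
# Hypothetical per-respondent data: number of brands spontaneously
# recalled, and open-ended verbatims, by mode (illustration only).
online = {"brands": [4, 3, 5, 4],
          "verbatims": ["too expensive", "ok I guess"]}
phone  = {"brands": [2, 3, 2, 3],
          "verbatims": ["too expensive because the tariff rose twice last year",
                        "mostly fine, though the billing is confusing"]}

def summarise(mode, data):
    mean_brands = sum(data["brands"]) / len(data["brands"])
    mean_words = (sum(len(v.split()) for v in data["verbatims"])
                  / len(data["verbatims"]))
    print(f"{mode}: {mean_brands:.1f} brands recalled, "
          f"{mean_words:.1f} words per open end")

summarise("online", online)  # more brands, shorter verbatims
summarise("phone", phone)    # fewer brands, richer verbatims
```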

With so many forces acting on the data, and with some of them directionally dependent on culture, it is extremely difficult to predict the size and direction of potential data changes in every case.

So, having a good understanding of all the individual effects of each mode is essential to make sense of the cost, coverage, and quality trade-offs when switching methodologies.

Continuing to work with one’s current (traditional) mode can seem like an easier, less scary choice. But we should not let fear of the unknown prevent us from taking advantage of the opportunities that come with new methodologies, nor should we ignore the cost of sticking to the status quo. Imagine how much more comfortable one would be when, having anticipated change in a certain direction, that is exactly the change one finds. Truly, to be forewarned is to be forearmed – and thus can uncertainty be conquered.

This article was first published in the Q3, 2015 edition of the Asia Research Magazine.