Using Video to Reinvigorate the Open Question


Over the past 15 years, online data collection has proved itself the pre-eminent data collection mode. Being both electronic and a form of self-completion, it combines the strengths of both approaches. For all its advantages, however, one question type has suffered in the move online: the open question.

In interview modes, the open question gives the participant a chance to speak in their own words, and gives the interviewer a chance to engage in something approaching a conversation, clarifying and probing until the topic is exhausted. In self-completion there is no conversation; the question is put, and the open box gapes like an empty void to be filled with the written word. Open responses from self-completion surveys are peppered with incoherent statements that beg to be clarified, and with answers that bear little relation to the question posed. Whilst such responses are in the minority, even the majority are thin, lacking colour and substance.

Many devices, however, now come equipped with cameras and microphones as standard. Would a video-recorded answer, then, be any better than a typed one?

In order to make video an integral part of the online survey, the participant needs to be taking the questionnaire on a device with a camera and a microphone. Most participants take surveys on a PC or laptop, and just under half of participants (48%) in the US told us that they have both a camera and a microphone on their device. Unsurprisingly perhaps, there is an age skew towards younger people having video-ready devices (three-quarters of 18–24-year-olds vs one in five of the over-65s). One would expect the prevalence of cameras and microphones to increase over time, but in the meantime, if the subsample doing the video needs to be demographically balanced, it is worth considering skewing the starting sample, for example by oversampling the older age groups.
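As an illustration, here is a minimal sketch in Python of that skew correction: given device-ownership rates per age band (the 75% and 20% figures for the youngest and oldest bands come from above; the intermediate bands and the target population shares are purely hypothetical), it works out how heavily to oversample each band so that the video-capable subsample still matches the target profile.

```python
# Sketch: how much to oversample each age band so that the subset of
# participants with video-ready devices stays demographically balanced.
# The 75% (18-24) and 20% (65+) device rates are from the text; the
# intermediate bands and the target population shares are assumptions.

target_share = {"18-24": 0.12, "25-44": 0.34, "45-64": 0.33, "65+": 0.21}
video_ready  = {"18-24": 0.75, "25-44": 0.60, "45-64": 0.40, "65+": 0.20}

# To hit target_share[band] among video-ready respondents, invite in
# proportion to target_share / video_ready, then renormalise.
raw = {band: target_share[band] / video_ready[band] for band in target_share}
total = sum(raw.values())
invite_share = {band: share / total for band, share in raw.items()}

for band, share in invite_share.items():
    print(f"{band}: invite {share:.1%} of the starting sample")
```

The older bands end up invited at well above their population share, precisely because so few of them will clear the device hurdle.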

To make a video we also need to obtain informed consent, and not everyone agrees to be filmed. In some instances we saw a gender difference in agreement levels: in the US, women were less ‘ready right now’ to do a video, with two-thirds of American men agreeing compared to only half of women. This is not a global finding; in the UK, as many women as men were willing to do a video. Age did not seem to play a part in agreement levels.

Overall, around a third of all participants were both able and willing to do a video, with a distinct age skew. Of those with video-ready devices, around 6 in 10 in the US agreed to do the video, and somewhat fewer (1 in 2) in the UK; in the US that works out at roughly 48% device availability × 60% agreement ≈ 29% of all participants. Maximising the acceptance rate is therefore of key importance.

It is always necessary to get full informed consent from the participant; they need to know who is going to be seeing the video and to what use it is going to be put. They may also need to be reminded not to include anyone else in the video.

We found that acceptance rates were highest when the circulation of the video was at its most restricted (i.e. when only a coder would view the video): two-thirds accepted. Acceptance fell marginally, to 6 in 10, when it was stated that the video might be shown to an external audience. Interestingly, acceptance did not vary depending on whether the video would be seen by a group of researchers at a market research conference or made publicly available via a website.

The language style of video requests is also important. Plain-English requests consistently fared worse than their legal-sounding counterparts, and the more permissions being sought, the greater the advantage of the legalese.

[Chart: ‘Yes, will do video’ acceptance rates by wording of the consent request]

Both the quantity and the quality of the responses gained via video were far better than those from typed-in answers. Since more answers were given (between 1.33 and 1.5 times as many), the relative importance of the answers changed. Second-ranked answers generally remained in second rank but became more important in absolute terms; some answers even rose in rank. As an example, “a reliable source” was mentioned by 8% of people who did not make a video, putting it 6th in rank. Almost three times as many people in the video group mentioned it (21%), making it a substantial answer and moving its rank to 4th. Quality also improved, with between five and six times as many characters spoken as written, providing greater colour and depth.
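To make the rank arithmetic concrete, here is a minimal sketch; only the “a reliable source” figures (8% typed, 21% video, rank 6th moving to 4th) come from the study, and every other answer and mention rate is a hypothetical placeholder.

```python
# Sketch: how higher mention rates in the video group can reorder answers.
# Only "a reliable source" (8% typed, 21% video) is taken from the study;
# all other answers and rates are illustrative placeholders.

typed = {"price": 0.30, "convenience": 0.25, "availability": 0.18,
         "habit": 0.12, "brand": 0.10, "a reliable source": 0.08}
video = {"price": 0.42, "convenience": 0.35, "availability": 0.24,
         "a reliable source": 0.21, "habit": 0.15, "brand": 0.12}

def rank_of(answer, mentions):
    order = sorted(mentions, key=mentions.get, reverse=True)
    return order.index(answer) + 1

for label, data in (("typed", typed), ("video", video)):
    print(f"{label}: 'a reliable source' ranks {rank_of('a reliable source', data)}")
```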

These differences between video-recorded and typed-in answers mean that the two data types cannot simply be added together to produce a total answer; nor can video data be used alone, because for all the gains in quantity and quality there was a twist: willingness to make a video correlated with the saliency of the topic. We were slightly more likely to get a video about home heating choice from people at either extreme of satisfaction with their heating fuel. A second experiment, on the subject of politics, revealed an even greater bias: a mere 17% of those claiming to be not at all interested in politics said they would answer using video, compared to 79% of those extremely interested in politics.

[Chart: ‘Yes, will do video’ acceptance rates by level of interest in politics]
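As a quick illustration of how this self-selection distorts the video subsample, the minimal simulation below applies per-level opt-in rates to a synthetic sample; the 17% and 79% rates at the extremes are from the experiment above, while the interim rates and the population split across interest levels are assumptions.

```python
import random

# Sketch: self-selection into video answering by interest in politics.
# The opt-in rates at the extremes (17% "not at all", 79% "extremely")
# are from the text; the interim rates and the population split across
# interest levels are illustrative assumptions.

random.seed(1)
opt_in = {"not at all": 0.17, "slightly": 0.35, "moderately": 0.55,
          "very": 0.68, "extremely": 0.79}
population = {"not at all": 0.25, "slightly": 0.25, "moderately": 0.25,
              "very": 0.15, "extremely": 0.10}

sample = [lvl for lvl in opt_in for _ in range(int(population[lvl] * 10_000))]
video = [lvl for lvl in sample if random.random() < opt_in[lvl]]

for lvl in opt_in:
    vid_pct = video.count(lvl) / len(video)
    print(f"{lvl:>10}: population {population[lvl]:.0%}, video subsample {vid_pct:.0%}")
```

Even with these modest assumptions, the extremely interested end up heavily over-represented among the videos, and the uninterested all but vanish.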

The importance of capturing measures of interest in, satisfaction with, and saliency of the subject matter cannot be overstressed: these would be the key variables required to weight responses to the open question.
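One straightforward correction, sketched below, is plain cell weighting: each video respondent is weighted by the share of their interest level in the full sample divided by its share in the video subsample. The sample compositions used here are hypothetical, chosen to mirror the simulation above.

```python
from collections import Counter

# Sketch: cell weights that restore the full sample's interest profile
# within the video subsample. Both sample compositions are hypothetical,
# roughly matching the opt-in rates in the previous sketch.

full_sample = (["not at all"] * 25 + ["slightly"] * 25 +
               ["moderately"] * 25 + ["very"] * 15 + ["extremely"] * 10)
video_subsample = (["not at all"] * 4 + ["slightly"] * 9 +
                   ["moderately"] * 14 + ["very"] * 10 + ["extremely"] * 8)

def cell_weights(full, subset):
    """Weight per level = share in the full sample / share in the subset."""
    full_share = {k: n / len(full) for k, n in Counter(full).items()}
    sub_share = {k: n / len(subset) for k, n in Counter(subset).items()}
    return {k: full_share[k] / sub_share[k] for k in sub_share}

for level, w in sorted(cell_weights(full_sample, video_subsample).items(),
                       key=lambda kv: kv[1]):
    print(f"{level:>10}: weight {w:.2f}")
```

Under-represented groups (the uninterested) receive weights above one; over-represented groups (the highly interested) receive weights below one.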

We can clearly see that video offers great advantages in terms of the quantity and quality of data collected. But it will be collected from a minority, and that minority is likely to be skewed both demographically and attitudinally.