
Measuring truth by the millisecond using meta-data

Meta-data (“data about data”) is, at last, gaining a profile in this big-data age. It is already used by researchers, most commonly in website evaluation, but in terms of helping us get more from our everyday online surveys it has thus far been grossly underutilised.
In this article I will be talking about meta-data as that which accumulates in the background, either naturally or by design, as a by-product of respondent activity in an online survey (or indeed any online activity). Meta-data tells us HOW someone responds, and using that information alongside the survey responses themselves can add a great deal to our understanding. Since respondents will have already consented to take part, there are no privacy issues regarding the capture of online survey meta-data, so long as established internet etiquette is observed.

As behavioural science has taught us more about how people make choices, the stock of behavioural data (i.e. data showing what happened, rather than what was said) has risen. Further, neuroscience has confirmed the relative importance of the non-conscious in decision making, and shown how capturing split-second responses can reveal the direction of our implicit motivations. So, what types of meta-data can we access, and how are they useful to us?

Meta-data as a way to access intuitive and non-conscious responses

When programming online surveys it is possible, using JavaScript, to create a wide variety of non-intrusive background timing tasks, recorded in milliseconds. For example, when researching promotional materials, the standard survey data for an Ad test will tell us which execution a respondent declared as her preference, but not how long she spent viewing it relative to the competing executions, which offers insight into its intuitive appeal and engagement. When respondents are asked to highlight the parts of an Ad (or other visual) they like and/or dislike, knowing how long it takes to select each area adds to our understanding of their level of certainty.
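To make this concrete, a minimal sketch of such a background timer is shown below. It assumes the executions are rendered as images with the class 'execution', treats hover as a rough proxy for viewing, and uses a hypothetical storeHiddenValue() helper to stand in for whatever mechanism a given survey platform offers for writing hidden data back with the response.

// Accumulate how long each ad execution is hovered over before the page is
// submitted, using millisecond timestamps from the Performance API.
var viewStart = {};   // execution id -> timestamp when the cursor entered it
var viewTotals = {};  // execution id -> total viewing time so far (ms)

document.querySelectorAll('img.execution').forEach(function (img) {
  img.addEventListener('mouseenter', function () {
    viewStart[img.id] = performance.now();
  });
  img.addEventListener('mouseleave', function () {
    if (viewStart[img.id] !== undefined) {
      viewTotals[img.id] = (viewTotals[img.id] || 0) +
                           (performance.now() - viewStart[img.id]);
      delete viewStart[img.id];
    }
  });
});

// On submit, copy the totals into hidden survey fields.
// storeHiddenValue() is a placeholder, not a real platform API.
document.querySelector('form').addEventListener('submit', function () {
  Object.keys(viewTotals).forEach(function (id) {
    storeHiddenValue('viewtime_' + id, Math.round(viewTotals[id]));
  });
});

Hover time is only a rough proxy for attention; click-to-enlarge timings or visibility events could be substituted depending on how the stimuli are presented.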

Recording timing by the millisecond in the background is the foundation of increasingly popular online tests of implicit association, and the more overt timing of response or activity (for example using interactive on-screen stopwatches) can be an essential ingredient in gamification techniques. Millisecond-level timing can also be used to draw a retrospective visual representation of the path of a cursor across the screen, helping us understand how respondents interact with stimulus materials and determining the extent of non-conscious hesitation and deviation when making survey choices – providing further surrogate measures of certainty.
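Capturing such a cursor trail takes only a few lines. The sketch below thins the mousemove stream to roughly twenty samples per second to keep the data manageable, and again relies on the hypothetical storeHiddenValue() helper.

// Sample cursor position with millisecond timestamps so the path (and any
// hesitation or deviation) can be redrawn and measured after fieldwork.
var cursorTrail = [];
var lastSample = 0;

document.addEventListener('mousemove', function (e) {
  var now = performance.now();
  if (now - lastSample >= 50) {   // keep the trail to roughly 20 points per second
    cursorTrail.push({ t: Math.round(now), x: e.clientX, y: e.clientY });
    lastSample = now;
  }
});

// Serialise the trail into a hidden field before the page is submitted.
// storeHiddenValue() is again a placeholder for the survey platform's own method.
document.querySelector('form').addEventListener('submit', function () {
  storeHiddenValue('cursor_trail', JSON.stringify(cursorTrail));
});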

Meta-data that helps us write better questions

Larger amounts of meta-data can teach us about how online questionnaires work. For example, we can say with confidence that multi-choice questions take almost exactly twice as long to answer as equivalent single-choice questions (i.e. the same number of precodes, excluding dichotomous questions). We know that respondents typically spend at least as long mulling over percentage-based numeric entry questions as they do answering open-ended questions. And our enquiries also show that meta-data tells us precisely when a grid question (i.e. precodes down the side, scale across the top) becomes too big and starts to jeopardise data quality (figure 1).
This relationship between grid size and time taken may at first seem counterintuitive – we might have expected respondents to take extra time when grappling with big grids – but in fact the larger the grid the faster respondents move through each row, a pragmatic approach that shouldn’t really surprise us.
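The analysis behind this kind of finding is straightforward once per-question timings have been exported. The sketch below uses made-up field names and placeholder figures purely to show the shape of the calculation: the average number of seconds spent per row, grouped by grid size.

// Average seconds spent per grid row, grouped by grid size, from exported
// timing meta-data (one record per respondent per grid question).
var records = [
  // Placeholder entries only - real values would come from the fieldwork export.
  { gridRows: 5,  totalMs: 40000 },
  { gridRows: 20, totalMs: 90000 }
];

var buckets = {};   // grid size -> { sum of seconds-per-row, respondent count }
records.forEach(function (r) {
  var bucket = buckets[r.gridRows] || (buckets[r.gridRows] = { sum: 0, n: 0 });
  bucket.sum += (r.totalMs / 1000) / r.gridRows;   // seconds per row for this respondent
  bucket.n += 1;
});

Object.keys(buckets).forEach(function (rows) {
  var avg = buckets[rows].sum / buckets[rows].n;
  console.log(rows + '-row grid: ' + avg.toFixed(1) + ' seconds per row on average');
});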

Using meta-data to diagnose fieldwork problems

Identifying ‘speeding’ and ‘straight-lining’ is a staple of quality control and relies on analysing meta-data (a simple sketch of such checks appears at the end of this article), but we can also use meta-data to identify emerging problem areas. For example, meta-data can attribute an unexpectedly high dropout rate not only to a particular question, but to a specific aspect of that question, enabling swift and effective resolution.

Panel companies know that analysing the response patterns of individual panel members over time can weed out poor-quality respondents, but the very same meta-data could also contribute to a behavioural profile that in turn might be used to improve future sample targeting and dispatch – who wants a sample that is full of known quick-responders?

To repeat, much meta-data is already captured automatically, and capturing more of it is not especially difficult programming-wise. By nature meta-data is non-intrusive and does not infringe upon any ‘above the line’ research tasks that respondents are asked to undertake as part of an online survey. The added value of meta-data lies in analysing it alongside the main research findings. I’m confident that readers will be surprised at how easy it is to capture and use, and how much it can add to their interpretations and conclusions.
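As a flavour of how simple some of these quality checks become once timing meta-data is to hand, here is a minimal sketch of speeding and straight-lining flags; the field names and thresholds are assumptions for illustration rather than recommendations.

// Flag two common quality-control problems from meta-data: 'speeding'
// (implausibly fast completion) and 'straight-lining' (the same scale point
// chosen all the way down a grid).
function flagRespondent(interview) {
  var flags = [];

  // Speeding: total completion time far below the survey's median.
  if (interview.totalSeconds < 0.4 * interview.medianSeconds) {
    flags.push('speeding');
  }

  // Straight-lining: an identical answer given to every row of a grid.
  interview.grids.forEach(function (grid) {
    var allSame = grid.answers.every(function (a) { return a === grid.answers[0]; });
    if (grid.answers.length > 3 && allSame) {
      flags.push('straight-lining on ' + grid.name);
    }
  });

  return flags;
}

// Example call (made-up values):
// flagRespondent({ totalSeconds: 150, medianSeconds: 600,
//                  grids: [{ name: 'Q7', answers: [3, 3, 3, 3, 3, 3] }] });
// -> ['speeding', 'straight-lining on Q7']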

John Aitchison, Managing Director, First Line Research

18th March 2015


