If you are a user, a buyer, or a practitioner of healthcare market research, how would you reply to the following ...
How often do you request (or propose) a sample of n=50 or n=100 on a healthcare quantitative study?
How often is that 'n=' sample number chosen automatically, or based on 'gut feel'?
How often do you purchase (or sell) the 'n=' sample number that a statistical test would suggest?
Do you understand by how much a change in 'n=' changes confidence in the resultant data?
Do you know how to find your optimal 'n=' sample size?
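If those last two questions sting a little, take heart: the arithmetic is simpler than it looks. As a minimal illustrative sketch (my own, not a prescribed method), the standard margin-of-error formula for a reported percentage is e = z × √(p(1−p)/n); in Python, assuming 95% confidence (z = 1.96) and the worst-case p = 0.5:

import math

Z = 1.96   # z-score for a 95% confidence level
p = 0.5    # worst-case proportion - gives the widest possible interval

for n in (50, 87, 100, 200, 500):
    e = Z * math.sqrt(p * (1 - p) / n)
    print(f"n={n:>3}: margin of error = +/-{e * 100:.1f} points")

# n= 50: +/-13.9    n= 87: +/-10.5    n=100: +/-9.8
# n=200: +/-6.9     n=500: +/-4.4

Note how n=100 and n=87 sit less than a point apart, and how quickly the returns diminish: because the margin shrinks with the square root of n, quadrupling the sample only halves the error.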
We are all drawn, trance-like, towards certain seemingly 'magic' numbers - decimal milestones like 25, 50, 75, 100, 200, 500 and so on. Researchers and marketers fall under their spell more than most, as we go about our everyday work of setting sample sizes and analysing data. What 'magic' is this?
Well, the sort that can instantly gratify our deep-rooted desire for
safety and security: the thought of n=100 makes us feel nice and comfortable,
and somehow beyond criticism in a way that n=87 just doesn't.
Besides, our colleagues recommend 'magic' number sample sizes all the
time - so even if we should be doing things differently (which
is unlikely, surely?) we can always point to massive precedent. I must admit that in
over 20 years of healthcare quantitative research I have never (yes, never)
quoted or requested anything other than n= some
'magic' number. Sure, we have sometimes ended up with n=51
instead of n=50, or n=93 when we struggled to achieve n=100, but I have never
set out to achieve such apparently oddball sample sizes.
Nor have I previously challenged their cultural orthodoxy in any serious way.

In the UK,
the reflex when researching a general topic amongst GPs will be
to ask for n=100, or n=200. If budget is tight then perhaps n=75, and n=50
if all we want is a so-called sanity check. But how much
confidence can we have in the outcomes produced? Is this something
we consider at the proposal, or briefing, stage of a project? I think
not. Being drawn to 'magic' numbers seems to be our
hard-wired sample-size heuristic. But how come?
I think in large part it is because we are a bit fearful of...

…Statistics - perhaps we assume that calculating optimal sample size is either beyond us ("I'm not really a stats person") or involves appreciable additional work (...and no way do we have the time).

…Criticism (possibly even ridicule) if we suggest n= something else, and people laugh. Or think us incompetent.
At a time when the evidence for declining participation rates is crystallising, our established practice of requesting n=100 when n=87 would do just as well isn't helping - it burns through sample unnecessarily. On other occasions we must be reporting many of our findings with unwarranted over-confidence because, for example, we have used a base of n=50 when we really needed n=108.
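For the curious, the textbook formula behind numbers like these is n = z²p(1−p)/e², and it runs in a few lines of Python. This is a sketch under worst-case assumptions (p = 0.5, 95% confidence), not the exact calculation behind my n=87 and n=108 examples, which would also depend on the expected proportion and any population correction:

import math

def required_n(margin, z=1.96, p=0.5):
    """Smallest n whose margin of error is at most `margin` (a fraction)."""
    return math.ceil(z * z * p * (1 - p) / margin ** 2)

for e in (0.05, 0.10, 0.15):
    print(f"margin +/-{e:.0%}: n >= {required_n(e)}")

# margin +/-5%: n >= 385
# margin +/-10%: n >= 97
# margin +/-15%: n >= 43

The point is that apparently oddball targets fall out of explicit inputs - tolerable error, confidence level, expected proportion - rather than out of thin air.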
Once you start to
investigate the issue, it quickly becomes clear that the Internet has made the
process of determining optimal sample size a lot less scary and a lot more
accessible. And - to me at least - it now feels rather embarrassing that most
of us are not running these simple checks as standard!
Online sample-size calculators (e.g. flres.uk/samplecalc) make it easy to check what your sample size really should be, given what you know (or can go and find out) and a bit of faith in statistics! Even entering best guesses into such a tool has to be better than defaulting to a 'magic number', doesn't it?
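What do such tools actually compute? I can't speak for any one calculator's internals, but the usual ingredient beyond the formula above is a finite population correction, which matters when the pool of eligible respondents (a narrow medical specialty, say) is genuinely small. A sketch of that standard correction:

import math

def adjusted_n(n0, population):
    """Shrink an infinite-population sample size n0 for a finite population."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# n0 = 97 (a +/-10% margin at 95% confidence, from the sketch above)
for pop in (500, 1000, 10000):
    print(f"population {pop:>5}: n = {adjusted_n(97, pop)}")

# population   500: n = 82
# population  1000: n = 89
# population 10000: n = 97

With only 500 people in the population, 82 completes buy you roughly the same precision that 97 would from an effectively unlimited pool.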
Psychologists Amos Tversky and Daniel Kahneman would have had a great deal to say about the spell
cast by 'magic' numbers. They illustrate brilliantly how our behaviour
is driven by psychological heuristics and biases, powerful social norms,
and a desire to avoid additional work! Yet it is so tempting
to uphold the orthodoxy, even once we know how to make the
improvement... but I am resolved at least to experiment a bit on my
clients, and see what happens when I advocate n=83, rather than n=100!
Detractors of market research have long tried to pin a "pseudo" science label on our methods. And whilst we might strongly contest such a tag, how convincingly can we argue that we actively embrace science?