Wednesday, 19 June 2013
Noisy Politics
As elections approach in Germany, the number of surveys on voting intentions increases. Every week you can read wild speculation about why party X lost a point while another gained one. If you are tired of journalists interpreting nothing but noise in the data, you might want to have a look at the diagrams below. They show the long-term trend in voting intentions for each major party as a moving average of individual survey results (including surveys from all major research agencies). Confidence intervals for every survey estimate are also shown. The data are up to date: the last survey included was published on 18 June.
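For readers curious about the mechanics, here is a minimal sketch of the kind of computation such diagrams rest on: a 95% confidence interval per survey (normal approximation for a proportion) and a simple moving average across surveys as the trend line. The column names (date, share, sample_size), the window of ten surveys, and the use of pandas are illustrative assumptions, not the code behind the charts above.

```python
# Illustrative sketch only -- not the original analysis behind the diagrams.
# Assumed input: one row per survey with columns 'date', 'share' (party share
# as a proportion, 0..1) and 'sample_size'. All names are assumptions.
import numpy as np
import pandas as pd

def poll_trend(surveys: pd.DataFrame, window: int = 10) -> pd.DataFrame:
    """Add a 95% confidence interval per survey and a moving-average trend."""
    df = surveys.sort_values("date").reset_index(drop=True)
    p = df["share"]
    n = df["sample_size"]
    se = np.sqrt(p * (1 - p) / n)      # standard error of a proportion
    df["ci_low"] = p - 1.96 * se       # normal-approximation 95% interval
    df["ci_high"] = p + 1.96 * se
    # trend line: simple moving average over the last `window` surveys
    df["trend"] = p.rolling(window, min_periods=1).mean()
    return df
```

Plotting the trend column against the survey dates, with ci_low and ci_high as error bands around the individual estimates, produces a chart of the kind described above.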
Monday, 10 June 2013
The “yes we can” fallacy of market research
The plenitude of tools and specialisations that agencies offer to potential clients on the research market is stunning. If you listen to the marketing pitches of the research industry, you will be tempted to believe that any question can be answered by research, at any time, anywhere, for any target population, and at reasonable cost. You want to know whether the packaging of your product should be slightly bluer? You want to measure the long-term return on investment of hiring external headhunters with absolute precision? You want to estimate customer satisfaction with a service that does not even exist yet? You might even have more esoteric issues that I cannot imagine right now? Just ask a research agency about the feasibility of your research project. The answer you are most likely to get is: yes we can. This
might be the consequence of a competitive market. If one agency says “no we can’t”, a potential client can easily go to another one. However, this does not
mean that the conclusions drawn from overstretched projects are any better than
speculations. But naïve trust in numbers makes dubious research a profitable
undertaking for some agencies. Actually, many clients do not buy knowledge and
insights but ‘numbers’ - no matter how they came about. To some people
‘numbers’ have the aura of precision and truth, at least as long as they
broadly fit into one’s own presumptions on an issue. This is surely a poor
understanding of what research is good for, but a popular one nonetheless.
Basic methodological issues like sampling, statistical inference or
psychometric considerations in research design are ignored way too often in
practice. Ignorance rules in many businesses which buy research services. So,
if you are on the client side and really want to gain insights, then focus on methodologically justifiable projects and give up on projects that are likely to produce nothing but useless ‘numbers’. And if you really are in need of ‘numbers’, just roll some dice. That will deliver equally useless ‘numbers’, but it is significantly cheaper than commissioning research.
Friday, 31 May 2013
New census data from Germany: 1.5 million fewer people than expected
Press release from the Federal Statistical Office of Germany:
https://www.destatis.de/DE/PresseService/Presse/Pressemitteilungen/2013/05/PD13_188_121.html
Tuesday, 28 May 2013
Business Insider: “31 Charts That Will Restore Your Faith In Humanity”
Despite the
attention bad news gets, humanity is making progress in many areas. Some charts to brighten your day!
Saturday, 18 May 2013
On integrity and the politics of research business
“I need some numbers which prove that I’m
right.” As a market or social researcher you will probably have heard this
phrase in one version or another. Regardless of where you work – in a private
research agency, a public research or evaluation body, or as an in-house
researcher for a company – you most likely provide a service to an end-user.
And often when research is asked for, difficult decisions are to be made by
those end-users, whether they are CEOs, marketers, policy-makers or others. The good news is that the more that is at stake, the more important your research is. At the same time, stakeholders are more likely to expect specific results.
Surely not everyone working in the analytics, research or evaluation business has internalised a philosophy of providing solid empirical evidence. To some researchers, providing evidence and advice is nothing more than a service. And making the client happy might be considered more important than delivering an accurate picture of reality (or at least attempting to). In this case the client’s prejudices (and one’s own, of course) are the benchmark by which findings are judged.
The situations in which you might find yourself confronted with pressures are manifold. So are the possible consequences. For
external research agencies the picture is clearest. They depend on their
clients and are paid for what they deliver. This creates dependency. However,
not being part of the organisation you consult can make you freer in pointing out
difficulties and weaknesses. Additionally, dependency can be reduced by having a large portfolio of different clients. As an in-house researcher you
are closer to the end-users of research. This puts you in the opposite situation: you are economically more independent than an external service provider. However, career prospects, personal ties or identification with your organisation can make it harder to resist demands to make up findings or misinterpret them.
Regardless of whether you stand inside or
outside an organisation, it becomes really nasty when conflicts within the
organisation arise. For example, the marketing department might commission
research in order to use it for internal battles with the board. Becoming
subject to such politics probably puts you in the worst possible position,
especially when research findings do not match the expectations of those who
commissioned it. This holds true for the private sector as much as for the public or non-profit sector. Non-profit research needs funding, which often comes with
strings attached. Where research is used for political campaigning, conscious
self-censorship and unconscious self-deception might also play a role due to
one’s own attachment to the cause.
It is also not common to openly admit that you
are subject to subliminal or outspoken pressures. As a matter of self-protection, every researcher had better maintain the image of relying on professional ethics rather than being a mercenary. The manipulation of results
is only worth it to the end-user if the image of accuracy and rigorous
application of methods is preserved. This makes it even harder to withstand
pressures when other seemingly upright researchers do not.
The very nature of research is to
generate evidence that holds true, independently of one’s own presumptions.
Hence, an external benchmark, a test against reality governed by the protocols of rigorous methodology, is what gives research its credibility over belief.
If you take Karl Popper’s epistemology of critical rationalism seriously, research is more an endeavour of testing and rejecting claims than of picking up pieces of evidence that fit one’s own presumptions. This philosophy of scepticism, uncertainty of knowledge, and rejection is at odds with the philosophy of hoping for proof and certainty, and therefore with the philosophy of most end-users of research.
A good and popular institutional answer to this
problem is to let research agencies and research departments be certified and
regularly audited by external auditors, as required, for example, by ISO 20252. Giving researchers a strong institutional incentive to keep such a certificate helps them stand up against dubious pressures. This is because losing
such a certificate would mean losing credibility. And only researchers and
agencies with an image of credibility are of use to those who want to make up
results. However, pressures normally arise below the radar of any certificate
or auditor. They are typically tied to a specific project and a specific
researcher. Here the only thing that helps is personal integrity and the
courage to stand up and defend it.
Wednesday, 1 May 2013
An “Experimental Agency” for testing policies
Jim Manzi's book on real-world experimentation for policy making and
business brings methodology back to where it belongs: the centre of decision-making processes. Ideas worth considering and a book worth reading!
Sunday, 28 April 2013
Big data, politics, and more
Kenneth
Cukier on Big Data and Obama’s micro-targeting strategy:
“I think politics is
forever changed because of this”