Two recent stories make me despair at the inability of our journos to handle anything mathematical with basic competence ...


First up was a story in The People by Political Editor Nigel Nelson which claimed "EXCLUSIVE: TORIES 325 LABOUR 272, 2009 ELECTION RESULT ANNOUNCED EARLY". No, Dodgy Dave hasn't managed to stage a coup d'état (really, he should master the organisation of piss-ups in breweries first and then move on to more complex tasks) -- what we have here is a story written in breathless (but definitely not deathless) prose about "an amazing new computer programme" that predicts the outcome of the next election from opinion poll data and "features on a website that has caused a sensation in Westminster". MPs are stated to be "studying this avidly as if our jobs depend on it -- which of course they do".

The really depressing things about the article were the sentences "It is the brainchild of City whizkid Martin Baxter, who used a fiendishly complex mathematical system called Calculus" and "Maths geniuses can use it to make their own forecast".

WTF? We can presume that our Nigel was so overwhelmed by the introduction of Big Scary Maths that he didn't do any complex research of his own into the story, such as asking his kids what 'calculus' was -- it's so fiendishly complex my maths class started to learn it at fourteen. Worse, you don't even need to know calculus to work things out using the formula quoted in the article:

A(i, k) = C(i, k) × P(i) / E(i)

A and C are the predicted and previous election votes for party i in constituency k, P is party i's current national opinion poll rating, and E is party i's national share of the vote at the previous election. In other words, it's a simple plug-in-the-numbers formula that any halfway competent GCSE student should be able to handle in their sleep. Moreover, all it does is adjust the previous totals in the light of new national polling data, just like the BBC were doing last night -- and thus its actual predictive capacity is limited by the fact that it's only as good as the last rolling average of polls, which often has little to do with what the state of the parties will be come next Election Day.
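For the morbidly curious, here's what that "fiendishly complex mathematical system" amounts to in a few lines of Python -- a minimal sketch of my reading of the quoted formula, with all the numbers invented for illustration (this is not Baxter's actual code):

def predicted_votes(prev_votes, poll_rating, prev_national_share):
    # A(i, k) = C(i, k) × P(i) / E(i): scale the party's previous
    # constituency vote by its national swing since the last election.
    return prev_votes * poll_rating / prev_national_share

# Hypothetical seat: a party took 20,000 votes there last time, won 36%
# of the national vote then, and is polling at 40% now.
print(predicted_votes(20000, 40, 36))   # ~22,222 -- no calculus required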


Next was a front-page headline in the Daily Mail claiming that GPs 'refuse to sign abortion forms' -- "Almost a quarter of GPs [local practice doctors] are refusing to sign abortion referral forms, a survey reveals. According to the poll by the doctors' newspaper Pulse, nearly one in five GPs do not believe abortion should be legal".

These figures struck me as so high as to be extremely fishy, especially in a story from this strongly conservative source. This being the British tabloid press, it's necessary to deconstruct such a politically convenient story in order to decide what, if anything, is actually going on. You usually have a sporting chance of finding the raw data buried in the last two paragraphs, but here all we got at that point was a dissenting quote from one Ann Furedi, CEO of "a sexual healthcare charity specialising in abortion services", who "does not believe the survey accurately reflected GPs' opinions as it polled less than 1% of the UK's 40,000 GPs" because "Pulse's findings differ from weighted, representative UK public opinion poll results".

Should pro-choice people start to panic? I don't think so, because Ms Furedi was right, even if her view was presented as merely the Other Side Of The Story. If you actually go and look up the survey in Pulse magazine (note -- you have to create an account to look at it), you find the expected magic phrase: "Nearly one in five of more than 300 GPs who responded to a survey on medical ethics said they did not believe abortion should be legal, with 24% saying they would not sign abortion referral forms".

Well, what do you know. Can you say 'self-selected sample'?

This is basic statistics, and basic psychology. If you rely on figures obtained from any kind of questionnaire where the participants are chosen on the basis of whoever wants to fill it in, you will very likely get skewed results, because those who feel strongly about the issue are much more likely to respond than those who don't. To borrow an example from Darrell Huff's classic How To Lie With Statistics (highly recommended), suppose you mailed out a questionnaire that included the question "Do you like to fill out questionnaires?". The results would probably allow you to claim that practically everybody liked to do so, but of course the vast majority of people who didn't would have selected themselves out of your sample by flinging the questionnaire in the nearest bin.
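If you want to watch self-selection do its work, here's a toy simulation in Python (the "true" opposition rate and the response propensities are pure invention, chosen only to make the mechanism visible):

import random

random.seed(42)                 # reproducible toy example

POPULATION = 40_000             # roughly the number of UK GPs
TRUE_OPPOSED = 0.02             # pretend only 2% genuinely oppose (invented)

responses = opposed_responses = 0
for _ in range(POPULATION):
    opposed = random.random() < TRUE_OPPOSED
    # The strongly opinionated are far likelier to bother filling it in:
    respond_prob = 0.10 if opposed else 0.005
    if random.random() < respond_prob:
        responses += 1
        opposed_responses += opposed

print(f"{responses} responses, "
      f"{100 * opposed_responses / responses:.0f}% opposed")
# Roughly 280 responses, ~30% of them opposed -- from a population where
# only 2% actually are. Sound familiar?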

So what we have is approximately 75 GPs, out of however many were asked (presumably limited to the circulation of Pulse magazine -- figure not given, but up to 40,000), who thought abortion should be illegal. As a counterpoint, suppose the rest stayed silent simply because they didn't think so and didn't consider it worth writing in to say so; then, instead of a figure of 24%, we have about 0.2% -- far less headline-worthy. As Huff says in showing the unreliability of a similar survey, "our crude methods have produced a very dubious figure, but it is at least as worthy of trust as the one that was published nationally".
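Spelling out that arithmetic (taking ~310 responders as a stand-in for Pulse's "more than 300" -- the exact figure isn't given):

responders = 310                        # "more than 300" -- assumed for illustration
refuseniks = round(0.24 * responders)   # ≈ 74 GPs who said they'd refuse to sign
all_gps = 40_000                        # upper bound on those who could have replied
print(f"{100 * refuseniks / all_gps:.2f}%")   # ≈ 0.19% -- the "about 0.2%" above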

Of course, we can assume the Mail have an axe to grind here, but what do you bet that this kind of smell test never even occurred to the headline writers?