The stupid, it burns. Do you feel the heat?

We certainly need to be able to reach that audience. The Wellcome Trust has a standing policy of disregarding journal names and, by association, impact factors. But they are the only funding organisation that does so. We need to shame the others into action.

I need to thank David Colquhoun for inadvertently inspiring this post.

It’s time to declare war on statistical illiteracy.

Not yet… But thanks for the inspiration (and example).

This crowd of which I speak certainly does exist. From it come all the people who give their time freely to do pre-publication peer review. I am not suggesting that there should be a formal post-publication procedure, but that we find a means to capture the activity and interest that is sparked following publication. This already happens informally at conferences; we just need to find a way to apply it to all publications.

Great article Stephen, as usual.

Regarding “sifting”, two points. First, PubMed. Second, evaluating the quality of extant data is one of the main jobs of a scientist. People who want to farm this out to professional editors or anyone not in their laboratory astonish me.

I hope you sent a copy to the , University of London, and every “research manager” in the country.

“A smear campaign? Welcome aboard….”

Can someone please explain to me why this is a problem? The reason given here – that most papers have fewer citations than the arithmetic mean – is only a problem if you think a mean is a median.
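For anyone who wants to see that point concretely, here is a minimal sketch with made-up citation counts; the numbers are hypothetical, but they show how a couple of heavily cited papers drag the mean well above the median:

```python
# Minimal sketch: a skewed citation distribution pulls the mean
# above the median. Citation counts here are hypothetical.
import statistics

citations = [0, 1, 1, 2, 2, 3, 4, 5, 40, 120]  # one or two "hits" dominate

mean = statistics.mean(citations)      # 17.8 -- what an impact factor resembles
median = statistics.median(citations)  # 2.5  -- what a "typical" paper gets

print(f"mean = {mean}, median = {median}")
# Most papers sit well below the mean. That is a property of skewness,
# not a flaw -- unless you read the mean as if it were the median.
```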

Has someone published an age-correction chart for h-index?
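Not a chart that I know of, but Hirsch's own suggestion was the m-quotient: h divided by the number of years since one's first publication. A rough sketch (the citation counts and career length below are made up for illustration):

```python
# Minimal sketch: h-index plus Hirsch's m-quotient (h divided by
# years since first publication), the usual age correction.
# All numbers here are assumed, purely for illustration.

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    # Condition c >= rank holds for the first h ranks, then fails for good.
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

citations = [50, 18, 12, 9, 7, 6, 3, 2, 1, 0]
years_active = 8  # years since first publication (assumed)

h = h_index(citations)
print(f"h-index = {h}, m-quotient = {h / years_active:.2f}")
# h = 6 here: six papers with at least six citations each.
```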

The long tails of barely referenced papers in the citation distributions of all journals — even those of high rank — are evidence enough that pre-publication peer review is an unreliable determinant of ultimate worth.

It will be interesting to see when this will change.

I also wonder: is it the impact factor that’s preventing OA, or is it the drive to publish in journals that are perceived as better quality? I suspect it’s the latter – I’m sure most ecologists would rather publish in Am. Nat. than in , despite the latter having a higher impact factor (and a stunningly brilliant executive editor), because Am. Nat. has a higher reputation.

There’s nothing intrinsically ‘wrong’ with the Impact Factor

To be fair, Jason is at least fostering the development of alternatives, though I agree that, ultimately, we need to be very careful about how we deploy them. It seems reasonable to suppose that having a broader range of indicators should be helpful.

Sheesh – you’ll be trashing the h-index next… 🙂

@steve
That’s very disappointing. Perhaps your head of faculty got that job because he/she wasn’t much good at research. It seems to be quite common for people who get high on the administrative ladder to lose (or never to have had) critical faculties.

Will have to leave your other points for later – gotta run.

If there is a problem with skewed data, it’s at the other end, with the few papers that are cited a lot. I think this is only a problem if a journal publishes just a handful of papers every year (e.g. the Bulletin of the AMNH); once the output grows, the effect of the upper tail becomes diluted fairly quickly, as the sketch below illustrates.
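A toy simulation makes the dilution point. The lognormal distribution and every number here are assumptions for illustration, not real citation data:

```python
# Toy simulation: how much the lucky draws from the upper tail move
# the mean citation rate of a small vs a large journal.
# Distribution and parameters are assumptions, not real data.
import random
import statistics

random.seed(1)

def simulated_mean(n_papers):
    """Mean citations for a journal publishing n_papers drawn from
    a heavy-tailed (lognormal) distribution."""
    citations = [random.lognormvariate(1.0, 1.5) for _ in range(n_papers)]
    return statistics.mean(citations)

for n in (20, 2000):
    means = [simulated_mean(n) for _ in range(1000)]
    print(f"{n:5d} papers/yr: mean IF-like score ranges "
          f"{min(means):.1f} to {max(means):.1f} over 1000 trials")
# The small journal's mean bounces around with each hit paper;
# the large journal's mean is comparatively stable.
```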