This post is about a citation analysis that didn’t quite work out. I liked this blackboard project by Manuel Théry, which looked at the influence of each paper authored by David Pellman’s lab on the lab’s own future directions.
Over the past week this tweet was doing the rounds. I’m not sure where it came from or precisely what its original context was, but it appeared in my feed via folks in various student-analytics and big-data crowds. The intended message seemed to be “measurement looks complicated until you pin it down”. What I took from it, though, was something a bit different. Once upon a time, the idea of temperature was a complex thing.
Shallow Impact. ’Tis the season. In case people didn’t know, the world of scientific publishing has seasons: there is Inundation season, which starts in November as authors rush to submit their papers before the end of the year. Then there is Recovery season, beginning in January as editors come back from their holidays to tackle the glut.
I have written previously about Journal Impact Factors (here and here). The response to these articles has been great, and earlier this year I was asked to write something about JIFs and citation distributions for one of my favourite journals. I agreed and set to work. Things started off so well: a title came straight to mind.
I was interested in the analysis by Frontiers on the lack of correlation between the rejection rate of a journal and its “impact” (as measured by the JIF). There’s a nice follow-up here at Science Open.
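For anyone wanting to try this kind of comparison themselves, here is a minimal sketch in Python. It assumes you have assembled your own table of journals with a rejection rate and a JIF for each; the file name and column names (`journals.csv`, `rejection_rate`, `jif`) are placeholders, not the Frontiers dataset itself.

```python
# Sketch: test for a correlation between journal rejection rate and JIF.
# Assumes a hypothetical CSV with one row per journal.
import pandas as pd
from scipy import stats

df = pd.read_csv("journals.csv")  # placeholder file: journal, rejection_rate, jif

# Spearman (rank-based) is a sensible choice here, because JIFs are heavily
# skewed and a few very high-impact journals would dominate a Pearson r.
rho, p = stats.spearmanr(df["rejection_rate"], df["jif"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```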
There have been calls for journals to publish the distribution of citations to the papers they publish (1, 2, 3). The idea is to turn the focus away from just one number – the Journal Impact Factor (JIF) – and to look at all the data. Some journals have responded by publishing the data that underlie the JIF (EMBO J, Peer J, Royal Soc, Nature Chem). It would be great if more journals did this.
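To make the point concrete, here is a sketch of what such a distribution looks like when plotted. It assumes you have per-paper citation counts for a journal over the two-year JIF window; the file and column names are hypothetical. The mean of this distribution approximates the JIF, and it typically sits well above the median because of the skew.

```python
# Sketch: plot a journal's citation distribution for the JIF window,
# marking the mean (~JIF) and the median. Data source is hypothetical.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.read_csv("journal_citations.csv")  # placeholder: one row per paper
counts = df["citations"]

fig, ax = plt.subplots()
ax.hist(counts, bins=np.arange(0, counts.max() + 2) - 0.5, color="grey")
ax.axvline(counts.mean(), ls="--", label=f"mean (~JIF) = {counts.mean():.1f}")
ax.axvline(counts.median(), ls=":", label=f"median = {counts.median():.0f}")
ax.set_xlabel("Citations per paper (JIF window)")
ax.set_ylabel("Number of papers")
ax.legend()
plt.show()
```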
A few days ago, Retraction Watch published the top ten most-cited retracted papers. I saw this post with a bar chart visualising these citations, but it didn’t quite capture what effect (if any) a retraction has on citations. I thought I’d quickly plot this out for the number one article on the list. The plot is pretty depressing: the retraction has had no effect on citations.
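A quick sketch of the kind of plot described above: citations per year to a single paper, with the retraction year marked. The numbers below are placeholders for illustration, not the actual counts for the paper in question.

```python
# Sketch: citations per year to one paper, with the retraction marked.
# All values here are hypothetical.
import matplotlib.pyplot as plt

years = list(range(2000, 2016))
cites = [5, 12, 20, 28, 30, 33, 35, 31, 34, 30, 32, 29, 31, 28, 30, 27]
retraction_year = 2005  # placeholder

fig, ax = plt.subplots()
ax.bar(years, cites, color="grey")
ax.axvline(retraction_year, color="red", ls="--", label="retraction")
ax.set_xlabel("Year")
ax.set_ylabel("Citations")
ax.legend()
plt.show()
```

If the retraction were working as intended, the bars to the right of the dashed line should collapse towards zero; in the case described above, they don’t.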
It’s been a while since I did some navel-gazing about who reads this blog and where they come from. This week, quantixed is closing in on 25K views, and there was a burst of people viewing an old post, which made me look again at the visitor statistics. Where do the readers of quantixed come from? Geographically, they come from all around the world.
A couple of years ago, a colleague sent me this picture* to say “who put J Cell Biol on a diet?”. I joked that maybe they publish too many autophagy papers and didn’t think much more of it. Recently, Ron Vale put up this very interesting piece on bioRxiv discussing what it takes to publish a paper in the field of cell biology these days. In the main, he questions whether this is now out of reach of many trainees in our labs.
I was talking to a speaker visiting our department recently. While discussing his postdoc work from years ago, he told me about the identification of the sperm factor that causes calcium oscillations in the egg at fertilisation. It was an interesting tale because the group who eventually identified the factor – now widely accepted to be PLCzeta – had earlier misidentified it, naming it oscillin.