Published November 18, 2024 | https://doi.org/10.59350/nsgwq-1mb74

Web of Science can't handle innovation


So I read Jeffrey Brainard's piece in Science magazine on Clarivate's decision to punish eLife for innovating – by stripping eLife of the proprietary Journal Impact Factor™ number that Clarivate itself awards (sidenote: to be clear, I see no value in Journal Impact Factors, which are statistically illiterate, irreproducible, and easily gameable, amongst many other issues that have long been documented). With the greatest of respect to Jeffrey, I don't think the piece quite covers what's going on here. It's Clarivate that has the problem, not eLife.

From my perspective, the tension between innovation in research publishing and Clarivate's wish to demonstrate some kind of arbitrary 'selectivity' has been bubbling along for over a decade now. Researchers & research funders are getting to grips with the affordances of digitality, trying new processes and discovering that some processes & workflows result in better outcomes in terms of rigour, financial cost, transparency, and/or a reduction in research waste. I'm thinking particularly about transparency of process and timing of process. Digitality allows for the traditional pre-digital publishing process to be decoupled and re-ordered.

In 2024, whilst not yet "the default", it is not uncommon in some disciplines to find research being published before peer-review, and then sent for some kind of peer-review after being published online. This is sometimes referred to as the "Publish->Review->Curate" model. ASAPbio have a nice explainer of the PRC model.

Clarivate / Web of Science doesn't seem to like the PRC model. I don't base this on wild supposition; I base it on what I can observe from the growing fleet of journals that now operate a PRC-only model and are excluded from Clarivate indexing:

  • F1000Research (est. 2012), not in Clarivate's "Master Journal List"
  • Journal of Open Source Software (est. 2016), not in the "Master Journal List"
  • Wellcome Open Research (est. 2016), not in the "Master Journal List"
  • MedEdPublish (est. 2016), not in the "Master Journal List"
  • Gates Open Research (est. 2017), not in the "Master Journal List"
  • The Journal of Open Source Education (est. 2018), not in the "Master Journal List"
  • HRB Open Research (est. 2018), not in the "Master Journal List"
  • The Open Journal of Astrophysics (est. 2019), not in the "Master Journal List"
  • Health Open Research (est. 2019), not in the "Master Journal List"
  • Open Research Europe (est. 2021), not in the "Master Journal List"
  • NIHR Open Research (est. 2021), not in the "Master Journal List"
  • Digital Twin (est. 2021), not in the "Master Journal List"
  • Open Research Africa (est. 2022), not in the "Master Journal List"
  • Routledge Open Research (est. 2022), not in the "Master Journal List"
  • Stosunki Międzynarodowe (est. 2022), not in the "Master Journal List"

Some would say: 'ah – Open Research Europe never applied to Clarivate to join Web of Science and be included in the "Master Journal List"'. This is correct, but it doesn't defeat my point. Others in the list above definitely _have_ applied, and have been actively rejected by Clarivate more than once – e.g. the Journal of Open Source Software.

The fleet of PRC journals isn't the only problem that Clarivate faces. Clarivate also seems to struggle to recognise the excellence of many diamond open access journals & communities.

Clarivate also under-indexes good journals from Latin America, Asia, and Africa – a well-documented Anglophone / colonial bias.

If I could boil down the thinking behind this post into a 'graphical abstract', it would be this (below). Clarivate no longer indexes the best research publishing venues, as far as I'm concerned. The research community has moved beyond the narrow confines of Clarivate's rigid tunnel-vision for how research should be published.

* You may notice I've quoted "Master Journal List" throughout this post. That's deliberate. I don't have a PhD in post-colonial theory, but even I'm surprised that Clarivate continues to choose language like "Master" when selecting which journals it deems the 'best' and 'most superior' to others. Yikes.

References:

Brainard, Jeffrey (2024) "Open-access journal eLife will lose its 'impact factor' over controversial publishing model" Science. doi: 10.1126/science.zycyo78

Curry, Stephen (2012) "Sick of Impact Factors" Occam's Typewriter. https://occamstypewriter.org/scurry/2012/08/13/sick-of-impact-factors/

Ioannidis, J. P. A. & Thombs, B. D. (2019) "A user's guide to inflated and manipulated impact factors" European Journal of Clinical Investigation 49: e13151. doi: 10.1111/eci.13151

Khanna, S.; Ball, J.; Alperin, J. P.; Willinsky, J. (2022) "Recalibrating the scope of scholarly publishing: A modest step in a vast decolonization process" Quantitative Science Studies 3(4): 912–930. doi: 10.1162/qss_a_00228

Mills, D. & Asubiaro, T. (2024) "Does the African academy need its own citation index?" Global Africa (7): 115–125. doi: 10.57832/18yw-xv96

Priem, Jason & Hemminger, Bradley M. (2012) "Decoupling the scholarly journal" Frontiers in Computational Neuroscience. doi: 10.3389/fncom.2012.00019

Royle, S. (2016) "The Great Curve II: Citation distributions and reverse engineering the JIF" quantixed. doi: 10.59350/yfcja-a4x31
