Archive for November, 2023

Today in The Hill: Science is littered with zombie studies. Here’s how to stop their spread.

November 26th, 2023

My newest piece is in The Hill today: Science is littered with zombie studies. Here’s how to stop their spread.

Many people think of science as complete and objective. But the truth is, science continues to evolve and is full of mistakes. Since 1980, more than 40,000 scientific publications have been retracted. They either contained errors, were based on outdated knowledge or were outright frauds. 

Identifying these inaccuracies is how science is supposed to work. …Yet these zombie publications continue to be cited and used, unwittingly, to support new arguments. 

Why? Almost always it’s because nobody noticed they had been retracted. 

Science is littered with zombie studies. Here’s how to stop their spread. (Jodi Schneider in The Hill)

Thanks to The OpEd Project, the Illinois Public Voices Fellowship, and my coach Luis Carrasco. Editorial writing is part of my NSF CAREER project, Using Network Analysis to Assess Confidence in Research Synthesis. The Alfred P. Sloan Foundation funds my retraction research in Reducing the Inadvertent Spread of Retracted Science, including the NISO Communication of Retractions, Removals, and Expressions of Concern (CREC) Working Group.

Posted in information ecosystem, scholarly communication | Comments (0)

Last call for public comments: NISO RP-45-202X, Communication of Retractions, Removals, and Expressions of Concern

November 26th, 2023

I’m pleased that the draft Recommended Practice, NISO RP-45-202X, Communication of Retractions, Removals, and Expressions of Concern (CREC), is open for public comment through December 2, 2023. I’m a member of the NISO Working Group, which is funded in part by the Alfred P. Sloan Foundation in collaboration with my Reducing the Inadvertent Spread of Retracted Science project.

The NISO CREC Recommended Practice will address the dissemination of retraction information (metadata and display) to support consistent, timely transmission of that information to the reader (machine or human), directly or through citing publications. It will address requirements both of the retracted publication and of the retraction notice or expression of concern. It will not address the questions of what a retraction is or why an object is retracted.
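
The Recommended Practice itself defines the metadata and display practices; as a rough illustration of how retraction status can already be checked by machine, here is a minimal sketch against the Crossref REST API’s `updates` filter and `update-to` field. The example DOI is a placeholder, and this is my illustration, not part of CREC:

```python
# Minimal sketch: ask Crossref for notices (retractions, expressions of
# concern, ...) that declare themselves updates to a given DOI.
# Illustrative only -- not the NISO CREC Recommended Practice itself,
# and the example DOI below is a placeholder.
import requests

def updates_to(doi: str) -> list[tuple[str, str]]:
    """Return (notice DOI, update type) pairs for works updating `doi`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}", "rows": 20},
        timeout=30,
    )
    resp.raise_for_status()
    pairs = []
    for item in resp.json()["message"]["items"]:
        for update in item.get("update-to", []):
            if update.get("DOI", "").lower() == doi.lower():
                # update["type"] is e.g. "retraction", "correction",
                # or "expression_of_concern"
                pairs.append((item.get("DOI", ""), update.get("type", "")))
    return pairs

if __name__ == "__main__":
    for notice_doi, update_type in updates_to("10.1234/example-doi"):
        print(update_type, notice_doi)
```

Consistent, machine-readable signals like these are exactly the kind of transmission the Recommended Practice aims to make reliable across publishers and databases.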

NISO CREC

Posted in future of publishing, information ecosystem, Information Quality Lab news, library and information science, scholarly communication | Comments (0)

Towards a better process for scoping review updates: Empirical Retraction Lit

November 19th, 2023

Part 1 of an occasional series on the Empirical Retraction Lit bibliography

In 2020, my team released the first version of the Empirical Retraction Lit bibliography, which we have updated a number of times since. The most recent updates were July 2021 (content), September 2021 (taxonomy), and December 2022 (JSON/web design giving access to the taxonomy).

The bibliography is part of my Alfred P. Sloan-funded project, Reducing the Inadvertent Spread of Retracted Science, and it has also been an avenue for me to experiment with literature review automation and bibliography web tools. Since August, members of my lab have been writing up a review on post-retraction citation, building on the work a number of people have contributed to the review over the past several years. To update the content, we’re also working on a systematic search and screening process.

I expect a substantial number of new items. In July 2021 we had 385 items. Since then I’d been estimating perhaps 7 new papers a month, which would mean ~175 new items from July 2021 to August 2023 (our systematic search ran September 5, 2023). That ballpark number seems plausible now that I’m in the middle of full-text screening. Two-plus years is a very long time in retraction research, especially given the attention retraction has received in the past few years!
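
For transparency, here is the arithmetic behind that ballpark as a trivial sketch; the ~7 papers/month rate is my own assumption from earlier updates, not a measurement:

```python
# Back-of-the-envelope estimate of new items since the last content update.
# The rate of ~7 new papers/month is an assumed figure.
PAPERS_PER_MONTH = 7
months = (2023 - 2021) * 12 + (8 - 7)  # July 2021 through August 2023: 25 months
print(months * PAPERS_PER_MONTH)       # -> 175 expected new items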

A number of questions arise in trying to optimize a scoping review update process. Here are just a few:

  • Is the search I used last time adequate? Should it be updated or changed in any way?
    • Is date-based truncation appropriate for my search?
    • Is it appropriate to exclude certain items from the initial search (e.g., data, preprints)?
    • Is there a high-precision way to exclude retraction notices and retracted publications when the database indexing is insufficient?
    • Could citation-based searching in one or several sources replace multi-database searching on this topic? What are its precision and recall?
    • Are there additional databases that should be added to the search?
    • Is additional author-based searching relevant?
  • What is the most painless and effective way to deduplicate items? (Complicated in my case by retraction notices, retracted publications, and non-English-language items with multiple translations; see the deduplication sketch after this list.)
  • Which items without abstracts may be relevant in this topic?
  • What is the shortest item that can make a research contribution in this topic?
  • Is original, “empirical” research a clear and appropriate scope?
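
On the deduplication question above, here is a minimal sketch of one obvious first pass: key-based deduplication on normalized DOIs, falling back to normalized titles. The field names and normalization rules are illustrative assumptions, not our settled process:

```python
# Minimal sketch: key-based deduplication of bibliography records.
# Field names ("doi", "title") and normalization rules are illustrative
# assumptions, not a settled process.
import re
import unicodedata

def normalized_key(record: dict) -> str:
    """Prefer a normalized DOI; fall back to a normalized title."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return re.sub(r"^https?://(dx\.)?doi\.org/", "", doi)
    title = record.get("title") or ""
    title = unicodedata.normalize("NFKD", title).casefold()
    return re.sub(r"[^a-z0-9]+", " ", title).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for record in records:
        key = normalized_key(record)
        if key and key not in seen:
            seen.add(key)
            unique.append(record)
    return unique
```

A key-based pass like this will not, by itself, pair retraction notices with their retracted publications or match translations of non-English items; those are exactly the cases that make the question hard.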

Ideally the Empirical Retraction Lit bibliography will become a living systematic review that relies on as much automation and as little human effort as appropriate, with an eye towards monthly updates. My experimentation with existing automation tools makes this plausible. Routinely looking at a few papers a month seems feasible as well, especially since I could repurpose the time I spend in ad hoc tracking of the literature, which has missed a number of items compared to systematic searching (even some high-profile items in English!).
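
As one hedged illustration of what routine monthly updating could look like, the sketch below polls PubMed via NCBI E-utilities for records added since the last search date. The query string and dates are placeholders, and a single-database poll would supplement, not replace, the multi-database search:

```python
# Minimal sketch: fetch PubMed records added since the last search date,
# via NCBI E-utilities. The query string and dates are placeholders.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def new_pmids(query: str, since: str, until: str) -> list[str]:
    """Return PMIDs for `query` added to PubMed between two dates."""
    resp = requests.get(
        ESEARCH,
        params={
            "db": "pubmed",
            "term": query,
            "datetype": "edat",  # Entrez date: when the record was added
            "mindate": since,    # e.g. "2023/09/05"
            "maxdate": until,    # e.g. "2023/10/05"
            "retmax": 500,
            "retmode": "json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]
```

A monthly run of a saved query like this could drive the triage queue, leaving human effort for screening rather than searching.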

Automation is also becoming more palatable now that I’ve found errors in the laborious human-led review: at least 2 older, in-scope items that were not included in the July 2021 version of the review, presumably because they were poorly indexed at the time of previous searches; and an informally published item that appears to have been erroneously excluded, presumably due to confusion over our rule of excluding data and preprints only when the bibliography included a related item.

Of course, for the website, there are a number of open questions:

  • How can the bibliography website be made useful for authors, reviewers, and editors? Awareness of related publications becomes even more important because the retraction literature is very scattered and diffuse!
  • How can the complex taxonomy of topics be made clear?
  • Would “suggest a publication” be a useful addition?

My aims in writing are:

  • To share my experience about the tools and processes I’m using.
  • To document the errors I make and the problems I have. This:
    • will remind me in the future (so I make different mistakes or try different tools).
    • can inform tool development.
    • can inspire systematic analysis of pragmatic information retrieval.
  • To identify possible collaborators for the Empirical Retraction Lit bibliography scoping review and website; for finalizing the post-retraction citation review; and for writing other reviews from the scoping review.
  • To solicit feedback, more generally, from the various communities working on pragmatic information retrieval, systematic review automation, retraction, library technology, scholarly publishing metadata workflows, literature review methodology, publication ethics, science of science, and more.

Posted in Empirical Retraction Lit, literature reviewing | Comments (0)

QOTD: What policymakers need…

November 17th, 2023

What policymakers need are things that we [researchers] don’t value so much. Meta-analyses. What do we know about a given topic. If we survey 1000 papers on a given topic, what does the preponderance of the evidence say about a thing? Obviously the incentive structures in academia are about smaller mechanisms, or about making those smaller distinctions in the body of knowledge and that’s how knowledge advances and research advances, but for the policymaking space, you need to be able to translate that, like hundreds of years, half a century, decades of what we know about education, about inequality, about the STEM fields, about the research ecosystem, into that.

Alondra Nelson, discussion after the 2023 Sage-CASBS Award Lecture at the Center for Advanced Study in the Behavioral Sciences, November 16, 2023 [video]

Posted in policymaking, scholarly communication | Comments (0)

Information Quality Lab at the 2023 iSchool Research Showcase

November 14th, 2023

My Information Quality Lab presents 14 posters as part of the iSchool Research Showcase 2023, Wednesday from noon to 4:30 PM in the Illini Union. View posters from noon to 1 PM, during the break between presentation sessions from 2:15 to 2:45 PM, and from 3:30 to 4:30 PM.

Visualizing Race in Medicine
Chris Wiley

Three-Dimensional Archiving of Native American Artifacts at the Spurlock Museum
David Eby

Harold Baron Digital Archival Research and Publication Project
Divya Pathak

Disinformation Tactics and Designing to Deceive
Emily Wegrzyn

Who Needs a Main Entry, Anyway?
Liliana Giusti Serra, José Fernando Modesto da Silva

Epistemological Responsibility in Law and Science: Sharing the burden
Ted Ledford

How Computable is Scientific Knowledge?
Yuanxi Fu

Unified Framework for Evaluating Confidence in Research Synthesis
Hannah Smith, Yuanxi Fu, Jodi Schneider

Using argument graphs to audit reasoning in empirical scientific publications
Heng Zheng, Yuanxi Fu, Jodi Schneider

Activist Organizations and Their Strategies to Influence the Legalization of Medical Cannabis in Brazil
Janaynne Carvalho do Amaral, Jodi Schneider

Assessing Citation Integrity in Biomedical Publications: Annotation and NLP Models
Janina Sarol, Shufan Ming, Jodi Schneider, Halil Kilicoglu

Can ChatGPT Augment PDF-to-Text Conversion Errors in Scientific Publications?
Janina Sarol, Xuhan Zhang, Tanisha Roman, Jodi Schneider

Analyzing Retraction Indexing Quality in Subject-Specific and Multidisciplinary Databases
Malik Salami, Jodi Schneider

How Knowledge Intermediaries Gather and Make Sense of COVID-19 Information: An Interview Study
Togzhan Seilkhanova, Jodi Schneider

[Updated: 14!]

Posted in Information Quality Lab news | Comments (0)