How to Become an Integrity Sleuth in the Library
Open access agreement management creates an opportunity for library staff to help identify publishing integrity problems. Here are some techniques to try.
Note: this is the second of a three-part series about how libraries can respond to publishing integrity problems in the context of open access agreement management. Part One describes the role libraries could play and discusses strategies for managing OA agreements in light of these problems. Part Two drills down into specific practices through which libraries can identify and respond to publishing integrity problems and explores the significant challenges ahead. Part Three will review several online tools being developed to assist such efforts.
If a history of publishing integrity in this era is ever published, it will tell the story of the science integrity sleuths who have worked tirelessly and often without remuneration to draw attention to the damage that fake papers are doing to the scientific record: Anna Abalkina, digging into paper mill connections. The eagle eyes of Elisabeth Bik. The visionary infrastructure of PubPeer and the Retraction Watch Database. Guillaume Cabanac’s Problematic Paper Screener. The unflinching snark of Leonid Schneider and the pseudonymous Smut Clyde blogging on For Better Science. The analytic insights of Alexander Magazinov, Nick Wise, Csaba Szabo, Sholto David, and the many anonymous sleuths reading, posting, sending emails, and raising alarms to institutions, publishers, and editors.
Much of this work requires specialized technical skills or scientific knowledge not often found among library staff. But academic libraries do cultivate expertise in nearly every aspect of scholarly communication, much of which is clearly transferable to publishing integrity efforts.
In this article, we’ll describe indicators of publishing integrity issues that can be identified using tools and abilities already at hand in the library. Rather than attempt to provide a comprehensive list of methods currently in use, we’ll detail some methods with particular relevance to this new direction in library work.
When article submissions come to the library for open access funding approval under our OA publishing agreements, staff typically receive limited information from the publisher: corresponding author name, email address, a claim of affiliation, names of co-authors, article title, journal title, and a date. But even this minimal metadata can create opportunities to spot potential misconduct.
The red flags for research misconduct that we describe below have been collected from a variety of sources, discussions, and collaborations. We have benefited from conversations with the research integrity staff at major publishers, interactions with our institution’s research integrity officer, presentations by vendors of new research tools, the Fostering Accountability for the Integrity of Research Studies (FAIRS) Conference held at St. John’s College, Oxford, in April 2025, PubPeer threads and Retraction Watch articles, and the hands-on experience of investigating retracted papers and networks of authors. This work by library staff to identify indicators of research misconduct is running to catch up with work that has come before, like that of Abalkina (2023), Parker, Boughton, Bero, & Byrne (2024), and Matusz, Abalkina, & Bishop (2025), among others. The excellent analysis of paper mills and mapping of evidence in these papers shed light on misconduct that would otherwise remain invisible by design.
Author Checks
Within open access agreement workflows, verifying the corresponding author’s affiliation is perhaps the simplest means of protecting the integrity of our OA agreements. The affiliation check is also a point in the OA workflow where we can glean useful information.
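As a minimal sketch of what such a check might look like in practice (the domain map and function names here are illustrative assumptions, not part of any real workflow tool), one routine step is comparing the corresponding author’s email domain against domains known for the claimed institution:

```python
# Hypothetical sketch: flag submissions whose corresponding-author email
# domain does not match the claimed institutional affiliation.
# The domain map below is illustrative, not a real authority file.
KNOWN_DOMAINS = {
    "Example University": {"example.edu", "mail.example.edu"},
}

def affiliation_mismatch(email: str, claimed_affiliation: str) -> bool:
    """Return True when the email domain is not a known domain for the
    claimed affiliation -- a prompt for closer review, not proof of misconduct."""
    domain = email.rsplit("@", 1)[-1].lower()
    known = KNOWN_DOMAINS.get(claimed_affiliation, set())
    return domain not in known

# A personal webmail address paired with an institutional claim warrants a look:
affiliation_mismatch("a.author@gmail.com", "Example University")    # True
affiliation_mismatch("a.author@example.edu", "Example University")  # False
```

A mismatch is common and often innocent (many researchers use personal email), which is why the function signals a prompt for review rather than a verdict.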
Co-Author Checks
When an anomaly is spotted, it can also be revealing to ask who else could be benefitting. Who are the co-authors?
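One simple, hedged way to operationalize that question (the data shapes and threshold here are illustrative assumptions) is to count how often the same pair of names recurs as co-authors across otherwise unrelated submissions, a pattern the paper-mill literature describes:

```python
from collections import Counter
from itertools import combinations

# Hypothetical sketch: surface co-author pairs that recur across many
# submissions. Recurring pairs are leads for review, not verdicts.
def recurring_pairs(submissions: list[list[str]], threshold: int = 3) -> list[tuple[str, str]]:
    """Each submission is a list of author names; return pairs that
    co-occur in at least `threshold` submissions."""
    counts = Counter()
    for authors in submissions:
        for pair in combinations(sorted(set(authors)), 2):
            counts[pair] += 1
    return [pair for pair, n in counts.items() if n >= threshold]
```

Run over a quarter’s worth of funding requests, such a tally can suggest which author networks deserve a closer look in PubPeer or the Retraction Watch Database.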
Bibliographic Anomalies
Librarians are especially well-situated to evaluate bibliographies, which they can examine regardless of their particular expertise in the article’s subject matter. Excessive self-citation, coerced citation, and citation boosting all leave marks on the bibliography.
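A crude self-citation rate is one of the easier marks to quantify. The sketch below (the function name and input format are illustrative assumptions) computes the share of cited works that list at least one of the submitting paper’s authors:

```python
# Hypothetical sketch: share of references that include one of the
# paper's own authors. A high rate is a prompt for closer reading,
# not evidence of misconduct by itself.
def self_citation_rate(paper_authors: list[str], references: list[list[str]]) -> float:
    """`references` holds the author list of each cited work."""
    if not references:
        return 0.0
    authors = {a.lower() for a in paper_authors}
    hits = sum(1 for ref in references if authors & {a.lower() for a in ref})
    return hits / len(references)

self_citation_rate(["Smith, J."], [["Smith, J."], ["Doe, A."]])  # 0.5
```

Name matching on strings alone is fragile (common surnames, transliteration); in practice identifiers such as ORCID iDs make this comparison far more reliable.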
Textual Indicators
Of course, there is always the strategy of reading the article in question after it has been published. Close reading should always be part of a librarian’s toolbox.
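Even a close reading can be primed by a quick mechanical scan. One well-documented textual indicator is the “tortured phrase,” a mangled paraphrase of a standard term that tools like Cabanac’s Problematic Paper Screener track at scale. The sketch below uses a tiny illustrative sample of documented tortured phrases, not the screener’s actual fingerprint list:

```python
# Toy sketch: scan text for "tortured phrases" -- garbled paraphrases of
# standard technical terms. The list is a small illustrative sample.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "colossal information": "big data",
    "bosom peril": "breast cancer",
}

def tortured_hits(text: str) -> list[str]:
    """Return the tortured phrases found in the text."""
    lower = text.lower()
    return [phrase for phrase in TORTURED_PHRASES if phrase in lower]
```

A hit or two does not prove fraud, but it is exactly the kind of anomaly worth noting when deciding whether a paper merits a fuller read.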
Strategies Summary
Detection of papermilled papers is a cat-and-mouse game; new ways to identify fraudulent papers are always made less effective by clever fraudsters. AI’s ability to rapidly create natural-sounding text is likely to add frantic energy to this process. It may be that fraudulent papers will become even more difficult to identify and that fabricated data will require even more expert evaluation. But human networks of co-authors will continue to be traceable, and humans will continue leaving evidence in the public record.
While library staff have a real opportunity and at least some degree of responsibility to be more involved in promoting publishing integrity, this is undeniably a challenging endeavor. One of those challenges is money. With budgets often flat or declining, research grant funding under threat, and AI raising uncomfortable questions, very few libraries are searching for broad new mandates. But as with open access itself, not to mention AI, the historical moment makes demands of its own, and libraries have always risen to meet them. The current challenges to scholarly communication provide, whether we are ready for it or not, just such a moment.
While we can imagine some well-resourced libraries hiring a full-time “publishing integrity librarian,” the investment in staff time at most other institutions can start small and eventually scale up in proportion to the library’s open access investments, as a complement to work that is already being done, ensuring the value of investments that are already being made. At the very least, academic library leadership should consider how to cultivate greater awareness and engagement with these issues among their staff.
Even so, the path ahead for this work is not well-trod or signposted, and it is important to be honest about the challenges it brings. First of all, the librarian who conducts all of the checks suggested above will discover very few smoking guns. Gathering and interpreting evidence is an unavoidably subjective process. Deciding what is sufficient evidence to justify conducting a deeper investigation or raising an alarm requires careful consideration, for the sake both of the people involved and of the time and resources of the library.
When it comes to publishing integrity, publishers, authors, and universities are not the only entities with reputations on the line. Libraries entering the fray could have unexpected repercussions. Not everyone will apply the same hierarchy of values when ranking protection of the scientific record against institutional esteem, career progression, and profit. Is one image duplication worth marring the record of a star researcher? Two? Does this calculus change at all relative to a researcher’s level of grant funding? These questions can be fraught.
Contributing to these difficulties is the novelty of this approach. Universities have particular conceptions of the role of their libraries that are notoriously hard to budge. While the scale of the problem surely merits “all hands on deck,” libraries’ involvement may not be welcome in all quarters.
It is therefore important to be clear within the library and with our campus partners about the boundaries of our role, which is not judge, jury, or counsel. It is, rather, to gather evidence to be passed on to others without a verdict. These others—in particular publishers and our own institutions’ research integrity officers—can conduct fuller investigations. This division of labor is an important safeguard on the objectivity of these processes. Library staff must maintain a genuine disinterest in the outcomes of these investigations.
A variety of pressures encourage silence on these issues; the threshold for speaking up can feel quite high. The great value of PubPeer, a post-publication peer review platform we’ll discuss in Part Three, is that it lowers this threshold by making it easy to point out anomalies anonymously to a wide audience. But while PubPeer (and the sleuthing community more broadly) have their place, library staff have two more direct alternatives: direct lines to the research integrity staff of our institutions and to our publishing partners. We should intentionally cultivate these connections, building relationships of trust that naturally lower the bar for raising issues and establish the firm expectation of confidentiality and due consideration. It would be beneficial for these communications to become more routine, even formalized. Science, after all, thrives in an environment of continued testing and inquiry, not fear of asking questions.
References
Abalkina, A. (2023). Publication and collaboration anomalies in academic papers originating from a paper mill: Evidence from a Russia-based paper mill. Learned Publishing, 36(4), 689–702. https://doi.org/10.1002/leap.1574
Ansede, M. (2024, December 5). Dozens of the world’s most cited scientists stop falsely claiming to work in Saudi Arabia. El País. https://english.elpais.com/science-tech/2024-12-05/dozens-of-the-worlds-most-cited-scientists-stop-falsely-claiming-to-work-in-saudi-arabia.html
Catanzaro, M. (2023, May 5). Saudi universities entice top scientists to switch affiliations—sometimes with cash. Nature, 617, 446–447. https://doi.org/10.1038/d41586-023-01523-x
De Castro, P., Herb, U., Rothfritz, L., Schmal, W. B., & Schöpfel, J. (2024). Galvanising the open access community: A study on the impact of Plan S. Zenodo. https://doi.org/10.5281/zenodo.13738478
Magazinov, A. (2023, July 31). The Vickers curse: secret revealed! For Better Science. https://forbetterscience.com/2023/07/31/the-vickers-curse-secret-revealed/
Mallapaty, S. (2025, March 4). China’s supreme court calls for crack down on paper mills. Nature, 639, 285–286. https://doi.org/10.1038/d41586-025-00612-3
Matusz, P., Abalkina, A., & Bishop, D. V. M. (2025). The threat of paper mills to biomedical and social science journals: The case of the Tanu.pro paper mill in Mind, Brain, and Education. Mind, Brain, and Education. https://doi.org/10.1111/mbe.12436
Meho, L. I., & Akl, E. A. (2025). Using bibliometrics to detect questionable authorship and affiliation practices and their impact on global research metrics: A case study of 14 universities. Quantitative Science Studies, 6, 63–98. https://doi.org/10.1162/qss_a_00339
Parker, L., Boughton, S., Bero, L., & Byrne, J. A. (2024). Paper mill challenges: past, present, and future. Journal of Clinical Epidemiology 176, 111549. https://doi.org/10.1016/j.jclinepi.2024.111549
Porter, S. J., & McIntosh, L. D. (2024). Identifying fabricated networks within authorship-for-sale enterprises. Scientific Reports, 14, 29569. https://doi.org/10.1038/s41598-024-71230-8
Richardson, R., Hong, S., Nunes Amaral, L. A. (2024, October 1). Hidden hydras: Uncovering the massive footprint of one paper mill’s operations. Retraction Watch. https://retractionwatch.com/2024/10/01/hidden-hydras-uncovering-the-massive-footprint-of-one-paper-mills-operations/
Schneider, L. (2024, August 5). Retraction blackmail – new service by Iranian papermills. For Better Science. https://forbetterscience.com/2024/08/05/retraction-blackmail-new-service-by-iranian-papermills/
DOI: 10.1146/katina-07102025-1