To Build Trust in the Open Knowledge Era, Think Accountability, Not Disclosure
In today’s scholarly communication ecosystem, transparency risks becoming exposure. It’s time to rethink our approach to openness in the face of integrity threats.
In scholarly publishing, transparency has long been our instinctive response to crisis. When misconduct surfaces, when retractions multiply, or when peer review is compromised, the community calls for openness. Disclose everything. Explain the process. Show the evidence. Transparency is assumed to produce accountability, and accountability to restore trust.
Yet as scholarly communication has evolved into a global, digitally accelerated ecosystem, a deeper tension has emerged, particularly within the open knowledge landscape. What if transparency alone is no longer sufficient to sustain trust? And what if, in certain contexts, full operational openness may even weaken the systems it is meant to protect?
Over the past decade, scholarly publishing has expanded in scale, geography, and participation. Open access publishing, repository networks, preprint platforms, and open data infrastructures have collectively broadened access to knowledge and diversified who can produce and engage with research (Fyfe et al., 2017; Legge, 2025). This expansion represents one of the most significant shifts in the history of scholarly communication.
Yet scale brings complexity. Systems originally designed for slow, localized scholarly exchange now operate within high-volume, digitally networked environments. Editorial and peer review workflows face industrial submission pressure. Incentive structures tied to publication output intensify demand. In such an ecosystem, trust remains foundational but increasingly difficult to operationalize.
Integrity threats have scaled alongside participation. Coordinated fraud networks, commercial authorship markets, and industrialized misconduct operations have exposed vulnerabilities that were not designed for adversarial exploitation (Kamel & Barghi, 2025). Detection systems have had to evolve rapidly in response. Publishers now rely on layered screening processes, cross-journal intelligence sharing, authorship pattern analysis, and AI-assisted anomaly detection.
But this evolution introduces a new tension. The more openly detection systems are described, the more easily they can be studied and evaded.
Peer review systems illustrate this dilemma clearly. Evidence suggests that even robust review procedures face structural limits in identifying sophisticated manipulation, particularly when misconduct networks operate across journals and publishers (Horbach & Halffman, 2019). Disclosure of detection triggers or reviewer verification methods may reassure stakeholders, but it can also create operational vulnerabilities. Transparency, in this context, risks becoming exposure.
Recent disruptions across the publishing ecosystem have made this tension visible. Coordinated manipulation of editorial systems and large-scale integrity investigations have revealed strain on verification infrastructures operating at global scale. Yet from the outside, stakeholders often see only outcomes such as retractions, investigations, or journal actions, without visibility into the protective work behind them. This perception gap is particularly acute within the open knowledge community.
Librarians, as stewards of access and credibility, now operate at the intersection of openness and accountability. As negotiators of transformative agreements, managers of institutional repositories, and advocates for equitable dissemination, they are increasingly asked to assess not only access models but integrity safeguards. Open knowledge discourse has consistently positioned librarians as active agents in shaping the systems that sustain scholarly trust, not merely service providers within them (Legge, 2025).
When transparency around publishing governance is limited, librarians face a dual challenge. They must defend institutional investments in open access while responding to concerns about reputational risk. They become intermediaries in a trust conversation they do not fully control.
Editors experience similar tensions. Large-scale integrity investigations may unfold at the publisher level, leaving journal leadership with limited visibility into processes affecting their own titles. Researchers, meanwhile, may interpret institutional discretion as inaction rather than protection.
Compounding this perception gap is a deeper, longer-standing erosion of trust in the publishing system itself. Skepticism toward publishers is not rooted solely in integrity failures. It is also shaped by concerns around pricing structures, licensing restrictions, legal aggressiveness, and perceived asymmetries of power within scholarly communication. Historical analyses of academic publishing have shown how commercial interests and prestige economies influence credibility perceptions and governance expectations (Fyfe et al., 2017).
Transparency, therefore, operates within a confidence deficit.
The conversation must evolve. The issue is no longer simply how much to disclose, but how trust is governed.
Transparency in the form of disclosure, policy statements, workflow descriptions, and post hoc reporting cannot alone guarantee accountability. Adjusting the degree of openness does not resolve the structural question of who verifies that integrity systems are effective, and by what standards.
In other words, sustaining trust in the open knowledge era requires a shift from transparency as disclosure to transparency as accountability.
Accountability implies measurable standards, independent verification, and visible consequence structures (Resnik & Dinse, 2013). It moves beyond declarations toward demonstrable assurance. Integrity frameworks and correction systems have long emphasized the importance of transparent misconduct handling, but the scale and coordination of contemporary threats demand more systemic governance responses (Resnik & Dinse, 2013).
This raises an important institutional question. Can publishers credibly certify their own integrity systems in a climate of structural distrust?
Collaborative initiatives led by publishers are essential. Cross-industry intelligence sharing, detection tool development, and coordinated investigations represent meaningful progress. Yet where confidence gaps persist, self-regulation alone may not fully resolve trust deficits. One pathway forward may lie in independent, multi-stakeholder oversight. An independently governed framework, inclusive of publishers, librarians, editors, funders, and research institutions, could define baseline integrity standards and certify compliance across the scholarly ecosystem.
Such oversight would not make public investigative methodologies or compromise detection systems. Instead, it would shift assurance from internal declaration to external validation. Certification of integrity infrastructure, much like accreditation in higher education or compliance auditing in other sectors, could provide credible reassurance without operational exposure.
This is not a call for surveillance, but for shared governance. Universities, funders, scholarly societies, editors, and librarians all participate in the trust architecture of scholarly communication. Accountability must therefore be collective.
This shared responsibility extends across open knowledge infrastructures as well. Repositories, preprint platforms, data archives, and community-led publishing initiatives are equally embedded within the trust ecosystem. As openness expands, integrity assurance must extend across the full lifecycle of knowledge creation and dissemination.
Here again, librarians and infrastructure stewards play a critical role. Their work in metadata curation, repository screening, preservation, and access governance positions them as essential partners in designing scalable accountability systems for open scholarship.

Technology has further reshaped this landscape. Artificial intelligence now supports manuscript screening, image forensics, reviewer network mapping, and anomaly detection. These tools strengthen oversight capacity but also heighten the stakes of operational transparency. The more openly such systems are described, the more strategically they may be evaded.
Protecting the scholarly record now requires a calibrated balance between openness and discretion.
Transparency alone is not the cure. Sometimes discretion is part of protection. Yet discretion must not become opacity. The same confidentiality that safeguards investigative systems can erode confidence if not paired with visible accountability.
In an era where knowledge flows more openly and misconduct adapts more rapidly, trust must be structurally governed, collectively assured, and independently validated. Long-term confidence in open knowledge will require demonstrable integrity infrastructures capable of protecting the credibility of the scholarly record.
Openness remains a foundational scholarly value. Preserving it requires systems designed not only to share knowledge, but to defend its trustworthiness at scale.
Fyfe, A., Coate, K., Curry, S., Lawson, S., Moxham, N., & Røstvik, C. M. (2017). Untangling academic publishing: A history of the relationship between commercial interests, academic prestige and the circulation of research. Zenodo. https://doi.org/10.5281/zenodo.546100
Horbach, S.P.J.M., & Halffman, W. (2019). The ability of different peer review procedures to flag problematic publications. Scientometrics, 118, 339–373. https://doi.org/10.1007/s11192-018-2969-2
Kamel, R., & Barghi, N. G. (2025). Paper mills in scholarly publishing: A systematic review of their prevalence, detection, and impact on research integrity. Trends in Scholarly Publishing, 4(1), 81–87. https://doi.org/10.21124/tsp.2025.81.87
Legge, M. (2025). To complete the open access transition, first ask the right questions. Katina Magazine. https://katinamagazine.org/content/article/open-knowledge/2025/open-access-transition-right-questions
Resnik, D. B., & Dinse, G. E. (2013). Scientific retractions and corrections related to misconduct findings. Journal of Medical Ethics, 39(1), 46–50. https://doi.org/10.1136/medethics-2012-100766
Copyright © 2025 by the author(s).
This work is licensed under a Creative Commons Attribution Noncommercial 4.0 International License, which permits use, distribution, and reproduction in any medium for noncommercial purposes, provided the original author and source are credited. See credit lines of images or other third-party material in this article for license information.