AI Tools Are Changing Academic Publishing. How Can We Adopt Them Responsibly?
In scholarly publishing, AI has transformative potential, but also entails real risks. Balancing the two will require a fundamental cultural shift.
Scholarly publishing is at a crossroads. The rapid integration of artificial intelligence into scholarly workflows promises unprecedented efficiency yet threatens to erode the very qualities that make research compelling: originality, human voice, and authenticity. With AI tools everywhere in research environments, the critical question is not whether to adopt them, but how to do so without sacrificing the human elements that give scholarship its power.
In September 2025, an online roundtable hosted by SciFlow, of which one of us—Carsten Borchert—is founder and CEO, convened researchers, authors, and innovators in the research ecosystem to explore the ethics, integrity, and interoperability challenges of AI’s rapid adoption in publishing and to ask how we can preserve the human voice in an AI-enhanced publishing world. The participants’ consensus was clear: the time to establish robust, transparent, human-centered frameworks is now.
So far, academic publishing’s response to AI has been defensive. Publishers invest in detection technologies. Journals implement disclosure requirements. Universities draft guidelines that often amount to prohibition. But this approach misses the point.
The reality is that AI tools are already embedded in researchers’ workflows, from grammar checkers using machine learning to generative models that restructure arguments and suggest literature connections. In this context, the challenge is not whether to engage with AI but how to do so responsibly. That challenge is prompting the academic community to ask deeper questions about originality, voice, and authenticity as human judgment evolves alongside increasingly capable technologies.
This shift from policing to partnership would open the door to AI’s transformative potential. But with that promise comes significant peril.
AI tools can address long-standing barriers in scholarly communication. For researchers fluent in ideas but not in English, these technologies can be transformative. As Hannah-Sophie Braun, cofounder of ReportAssistant, noted, “You no longer need to be a great writer to publish. You can get help with grammar and structure.”
This democratizing potential is significant. AI-powered language enhancement, automated citation management, and AI-generated templates for manuscript preparation free researchers to focus on their core contributions rather than the technical mechanics of writing.
But this efficiency comes with risks. Nausikaä El-Mecky, associate professor of the history of art and visual culture at Universitat Pompeu Fabra, Spain, cautioned, “If you see papers as a conveyor belt for information, generative AI is fantastic when used correctly. My worry is that academic writing becomes a vessel of information like a user manual.” She is particularly concerned about humanities scholars. “The special thing about essays in the humanities is the form and the voice,” she said.
The tension is real. When AI suggests similar phrasings across thousands of papers, its use threatens homogenization. Furthermore, as Braun pointed out, “LLMs tend to give you the bias of what is expected, not real innovation.”
The solution lies not in rejecting the technology, but in using it intentionally and intelligently. “AI can be a sparring partner,” said Braun. “You stay in the driver’s seat.”
The challenges facing today’s researchers are multifaceted: language barriers, publication pressure, time constraints, and gaps in technical writing skills. During the roundtable, Manish Kumar, professor of biophysics at the University of Delhi (South Campus), highlighted a fundamental divide: “There are two kinds of students. Some can do experiments. Some can write. Very few can do both.”
AI tools can bridge this gap, but only when used judiciously. To preserve the centrality of their voices and ideas, researchers should write their own first drafts, using AI only for editing. Early-career researchers especially need guidance on this distinction: use AI to support, not supplant, your writing; always review and add personal insight; and maintain transparency about your methods.
While researchers navigate these personal challenges, publishers face systemic pressures that complicate the picture even further. At the panel, Nikesh Gosalia, chief partnership officer at Cactus Communications, described the “surge in submissions” and “rise in questionable manuscripts” publishers have contended with in recent years. “Open access pushed the industry into a volume game,” he said.
AI offers efficiency and scalability but creates new risks. As Gosalia warned: “Readability does not mean robustness.” A smoothly written paper may lack scientific rigor, and publishers must maintain human oversight for final decisions on quality, relevance, and ethics.
The path forward requires cultural change. Gosalia urged the publishing community to “normalize responsible AI use and disclose it.” Transparency builds trust, while human judgment ensures standards are met.
This raises a fundamental question: what types of AI edits are useful or appropriate? The answer likely lies in finding the right balance for different contexts and disciplines.
But this balancing act will require reshaping academic culture.
If routine aspects of academic writing can be streamlined, researchers will have more time to focus on what really matters: significance, originality, and impact. Tools that reduce the burden of formatting, language polishing, and administrative detail should free researchers to spend more energy on framing meaningful questions, interpreting results, and developing original arguments.
In response to these shifts, co-author Carsten Borchert proposes reframing the long-standing mantra of “publish or perish” as “publish and cherish.” Rather than treating publication as a high-pressure endpoint driven by metrics and volume, the concept emphasizes care for scholarship, authorship, and integrity, as well as the value of thoughtful engagement, human voice, and long-term impact in an era when AI increasingly shapes how research is produced and communicated.
Time savings alone won’t fix the problem. In our “publish or perish” culture, greater efficiency risks reinforcing the very dynamics it aims to disrupt. If we continue to measure success largely by output volume, then AI-enabled efficiency may simply allow more papers to be produced faster, intensifying competition rather than improving quality.
This is why reframing “publish or perish” as “publish and cherish” is not merely aspirational, but a necessary correction. AI does not magically transform academic culture toward impact; instead, it makes the limitations of existing incentive structures more visible and more urgent.
When the technical barriers to producing text are removed, we can more easily focus on the questions of what we publish and why.
To realize the positive potential of AI integration, we need systemic change. Institutions and funders must reward depth, relevance, and contribution over sheer productivity. Publishers need to resist equating growth with submission volume and instead prioritize significance, clarity, and editorial judgment. And researchers need to be supported and motivated to use efficiency gains not to publish more but to publish better.
The stakes are high. Without cultural transformation, AI risks accelerating a system already under strain. With it, we could build something worth having: a scholarly ecosystem that values depth over breadth, quality over quantity, and meaningful contribution over productivity.
The roundtable participants were clear: human judgment, creativity, and ethical oversight must remain central to scholarly publishing. El-Mecky’s emphasis on form and voice in humanities writing highlights what’s at stake. But even in the sciences, human expertise is irreplaceable. As Kumar put it, “The research is not the paper. The reality is the data.” AI cannot evaluate whether an argument is scientifically sound, whether methodology is appropriate, or whether data warrant conclusions. These judgments require domain expertise that no computational power can replicate.
To safeguard these irreplaceable qualities, we need clear principles and practical rules for responsible AI integration.
A framework for responsible AI integration in academic publishing should include several key principles: transparency about where and how AI is used; human oversight of final decisions on quality, relevance, and ethics; authors retaining responsibility for their own ideas and first drafts; and policies calibrated to disciplinary context.
At the same time, we should abandon ineffective practices, including blanket bans on AI use, overreliance on detection as the primary solution, publication quantity as the main metric, language privilege that perpetuates inequality, and one-size-fits-all AI policies.
Implementation of this framework will require action from all stakeholders. Institutions should develop clear educational AI policies and provide training. Publishers must set transparent disclosure policies and invest in editorial training. Researchers need to engage critically with AI tools and advocate responsible AI use. The broader academic community should create shared resources, conduct research on AI’s impact, and establish forums for ongoing discussion.
The skills of critical, transparent engagement with AI should be integrated into research training at all levels, ensuring the next generation of scholars can navigate AI tools thoughtfully and effectively. Ultimately, these skills will form the foundation for a publishing ecosystem where technology amplifies human creativity rather than diminishing it.
The integration of AI into academic publishing represents a pivotal moment.
Our goal should be neither to ban nor blindly embrace AI, but to harness it in service of what makes scholarship valuable—originality, rigor, and authentic human insight—connecting authors, institutions, and publishers in an ecosystem that enhances academic writing while keeping human judgment central.
Originality in the AI age means more than producing something new. It means applying unique perspectives and authentic voices to important questions. AI can help some researchers express their insights more clearly and make the task of writing more accessible, breaking down traditional barriers in scholarly communication.
But technology will only enhance scholarship if used thoughtfully, guided by clear values and strong community norms. The question is not whether AI will shape academic publishing, but what kind of scholarly community we will use it to build.
The roundtable made clear that creating this future will require active participation from all stakeholders. Researchers, publishers, institutions, and technology providers must work together to establish practices that preserve what’s valuable in scholarly communication while embracing tools that can make it more accessible and effective.
The challenge ahead is substantial, but so is the opportunity. By maintaining focus on human judgment, creativity, and ethical oversight while leveraging AI’s capabilities, the academic community can create a publishing ecosystem that is both more efficient and more authentic than what came before.
10.1146/katina-012126-1
Copyright © 2025 by the author(s).
This work is licensed under a Creative Commons Attribution Noncommercial 4.0 International License, which permits use, distribution, and reproduction in any medium for noncommercial purposes, provided the original author and source are credited. See credit lines of images or other third-party material in this article for license information.