[Image: Man standing in front of a giant book with letters falling down]

How Should We Evaluate Open Access Book Publishing?

We held a workshop to discuss the possibilities and potential tensions that emerge when setting criteria for open access book publishing. Here’s what we learned.

A wide range of stakeholders in the open access (OA) book publishing ecosystem—including funders, university libraries, and OA service providers—has been considering the criteria for evaluating OA publishers and their books. Part of the context for this inquiry is skepticism about the quality and sustainability of OA books, voiced by authors, libraries, funders, and aggregators alike. In response, some of these stakeholders are developing a range of criteria for evaluating scholarly books and their publishers, on the assumption that such criteria can help set standards and address legitimate concerns.

In this context, we, on behalf of the Directory of Open Access Books (DOAB) and the Open Book Collective (OBC), co-convened an online workshop in April 2024 titled “Evaluating Evaluation Criteria for OA Book Publishers.”

Staging a Conversation on Criteria-Setting Practices

In our roles as managing director of the Open Book Collective (Joe Deville) and managing director and project manager at OAPEN (Niels Stern and Jordy Findanis, respectively), we weren’t familiar with any similar attempts to address the nuances of criteria-setting practices within OA book publishing. As such, we invited a mix of organizations already involved in such work to participate in the workshop. Alongside our colleagues involved in Copim’s Open Book Futures project (funded by Arcadia and the Research England Development Fund), including representatives from Opening the Future and Thoth Open Metadata, we invited partners of the DOAB Trusted Platform Network—including the African Platform for Open Scholarship, Fulcrum, JSTOR, OpenEdition, Project MUSE, and SciELO Books—as well as a selection of other participants, including the AG Universitätsverlage and AUPresses. These initiatives are all involved in different forms of criteria-setting.

The workshop had a number of aims. One was to understand how different initiatives address the challenges that criteria setting creates. Another was to explore the practicalities and politics of criteria setting—including amongst publishers, platforms, and libraries—in order to reflect on how criteria setting relates to the aims and ambitions of different initiatives. A third was to examine similarities and differences in criteria-setting practices across national and regional contexts.

To help structure conversations, we asked invitees to respond to prompt questions, including how they developed their criteria, whether they consulted criteria developed elsewhere, and to what extent, if at all, discrepancies between their own criteria and those of the partners they worked with gave rise to challenges. The workshop was designed to be exploratory and didn’t impose a rigid format; rather, we encouraged participants to engage with the topic through open and frank discussion. To support this, we proposed that any resulting report would focus on the overall themes that emerged from the workshop, rather than on the practices of individual participating organizations.

A Diversity of Criteria-Setting Practices

Participants were particularly interested in discussing the procedures for reviewing publishers, focusing on admissibility criteria and recommendations for acceptance. The discussion of admissibility criteria touched on peer review, open licensing and copyright, editorial processes and policies, archiving, and output (for example, setting a minimum number of published titles and/or a minimum proportion of OA titles). Some participants also mentioned other requirements they impose which, while not strictly criteria, would be easier for some publishers to meet than others. For example, some attendees raised the question of transparency, such as what type of information publishers are expected or advised to provide on their websites.
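
To make the shape of such a checklist concrete, here is a minimal sketch of how admissibility criteria of this kind might be encoded. It is purely illustrative: the field names and thresholds are hypothetical and do not reflect the actual rules of DOAB, the OBC, or any platform represented at the workshop.

```python
# Hypothetical sketch: encoding admissibility criteria as a simple checklist.
# The criteria names and thresholds below are illustrative only; they are not
# the actual rules of any platform discussed in the workshop.

from dataclasses import dataclass

@dataclass
class PublisherProfile:
    name: str
    has_peer_review_policy: bool
    uses_open_licenses: bool
    has_archiving_arrangement: bool
    published_titles: int
    oa_titles: int

def check_admissibility(p: PublisherProfile,
                        min_titles: int = 5,
                        min_oa_share: float = 0.5) -> list[str]:
    """Return the list of unmet criteria (an empty list means admissible)."""
    unmet = []
    if not p.has_peer_review_policy:
        unmet.append("no stated peer-review policy")
    if not p.uses_open_licenses:
        unmet.append("titles not released under open licenses")
    if not p.has_archiving_arrangement:
        unmet.append("no long-term archiving arrangement")
    if p.published_titles < min_titles:
        unmet.append(f"fewer than {min_titles} published titles")
    if p.published_titles and p.oa_titles / p.published_titles < min_oa_share:
        unmet.append(f"OA share below {min_oa_share:.0%}")
    return unmet

# Example: a small publisher that meets most, but not all, of the criteria.
publisher = PublisherProfile("Example Press", True, True, False, 12, 9)
print(check_admissibility(publisher))  # ['no long-term archiving arrangement']
```

Even this toy version shows why fixed thresholds matter: a single hard-coded requirement can turn away a publisher that satisfies everything else, a tension that recurs throughout the discussion below.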

Amongst platforms that work directly with publishers, criteria vary in how strictly they treat OA publications. While some initiatives seek to develop relatively simple criteria, others have to navigate guidelines anchored within, for example, a national framework or multilayered bodies of institutional governance. Take, for instance, criteria around peer review. Some platforms have explicit, detailed criteria for editorial processes and peer-review practices (for example, the selection of reviewers and the handling of conflicts of interest), while others allow for some flexibility and leeway. Participants also noted significant differences in assessment procedures: while one platform would assess a new applicant at the level of the publisher as a whole, another would assess at the level of individual books (while recognizing the challenges this brings in terms of scalability) or at the level of book series.

Many participants recognized the potential tensions that disparities in criteria-setting practices can give rise to. For instance, peer review has long been seen as a key mechanism for building trust in scholarship and for upholding standards. While questionable or even “predatory” publishing practices are less common in academic book publishing than in journals, ensuring high standards remains a top priority. Publishers sometimes struggle to balance peer-review requirements with the recognition that reviews come in many forms, many of which can equally support high-quality scholarly publishing. This is a challenge that DOAB is taking on via the Peer Review Information Service for Monographs (PRISM), a service that aims to build transparency around peer-review practices by enabling publishers to describe what kind of review has been performed for a particular book or book series (for example, open, single- or double-blind, or post-publication review). The service has been designed to acknowledge the many varieties of peer review, which in turn reflect the rich diversity of subject-specific practices and research cultures around the world.
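
As a rough illustration of the kind of information such a service records, here is a minimal sketch of a peer-review metadata record. The field names and vocabulary are hypothetical and are not drawn from PRISM’s actual schema.

```python
# Illustrative sketch of peer-review metadata for a book or book series.
# Field names and the ReviewType vocabulary are hypothetical; they are not
# PRISM's actual data model.

from dataclasses import dataclass
from enum import Enum

class ReviewType(Enum):
    OPEN = "open"
    SINGLE_BLIND = "single-blind"
    DOUBLE_BLIND = "double-blind"
    POST_PUBLICATION = "post-publication"

@dataclass
class PeerReviewRecord:
    title: str                      # the book or book series described
    review_types: list[ReviewType]  # a work may combine several practices
    reviewer_selection: str = ""    # e.g., how reviewers were chosen
    notes: str = ""                 # free-text description of the process

record = PeerReviewRecord(
    title="Example Monograph",
    review_types=[ReviewType.DOUBLE_BLIND, ReviewType.POST_PUBLICATION],
    reviewer_selection="two external reviewers chosen by the series editor",
)
print([rt.value for rt in record.review_types])
# prints: ['double-blind', 'post-publication']
```

The design point such a record illustrates is descriptive rather than prescriptive: instead of ranking one form of review above another, it simply makes visible which practices were used.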

Criteria-Setting and Diversity, Equity, and Inclusion

In addition to the many practical aspects of evaluation, the workshop encouraged a conversation about how criteria setting within higher education systems more widely informs discussions about diversity, equity, and inclusion. Participants agreed that it is vitally important that those involved in developing and deploying criteria recognize this wider context before criteria are uncritically put to work.

Contributors pointed out that the assumption that criteria are essential can stifle reflection about those criteria: who is driving them, why, and with what effects? What happens, for example, when criteria are imposed, or reluctantly adopted, broadly across a particular community to the exclusion of many authors and publishers? Participants noted that this has happened in certain Global South contexts. Moreover, criteria are often developed in the Global North and were originally designed to support the work of commercial publishing. This heritage can mean that certain criteria exclude particular cultural and contextual nuances.

One such issue concerns how criteria intersect with tenure, promotion, and reward systems. Many universities still require researchers to demonstrate a record of publishing “internationally,” which inherently favors legacy publishers and particular languages, while causing scholarship from beyond this publishing hegemony to be viewed with reservation, if not suspicion. Some criteria and assessment practices rest on metrics developed within this hegemony, excluding authors and publishers who are unable to access or be included on indexing platforms, which in turn fail to capture the impact of research produced by publishers in many regions and sectors. The lack of recognition and valuing of different forms of research output within the Global South, for instance, stifles the growth of local scholarship and reinforces and perpetuates inequities in the wider OA landscape.

Participants unanimously agreed that responding to concerns about criteria-setting practices is vital to building more equitable book publishing futures, as well as to supporting bibliodiversity and inclusivity in publishing—whether in terms of subject areas, languages, formats, or editorial and peer-review practices. Practically, this could involve continuously revisiting criteria to ensure they adapt to changing contexts and norms, or remaining open to complementary types of research outputs and formats. Criteria could also be actively adapted to the needs and technical requirements of local research communities. They could, for example, change depending on the type of output: an open educational resource, a textbook, or an experimental publication might need to be assessed in different ways.

The group recommended fostering a collaborative approach to developing a clearer and more systematic understanding of how evaluative practices affect OA book publishing. Attendees stressed that stakeholders should be wary of letting book publishing slip into the same monopolized trajectory that journal publishing has followed. In this context, stakeholders must continue to advocate for the variegated and inherently diverse nature of longform book publishing, while at the same time clearly articulating the core values underpinning the work being done by OA book publishers and service providers.

Similarly, participants discussed the need for stakeholders within this community to share experience, expertise, and good practices. Rather than simply imposing criteria to which applicants and potential collaborators must adhere, OA book publishers could advise smaller or newer publishers, pointing them to providers and resources. Workshop participants agreed it is vital that criteria do not become arbitrary barriers for publishers and initiatives that have the potential to perform vital functions within the global scholarly community.

Criteria-Setting and Library Evaluation

The workshop also touched on the tensions and challenges that arise when different criteria-setting practices encounter each other. A key example is differences between the criteria used by publishing initiatives and those used by libraries, especially in evaluating Diamond OA initiatives.

Given the diverse range of criteria showcased across the relatively few initiatives featured in our workshop, it is not difficult to imagine the complexities that emerge when multiple stakeholders and multiple evaluation practices start to interact. This heterogeneity is further exacerbated by the acceleration of Diamond OA initiatives around the globe. Scholar- and/or community-led Diamond OA initiatives represented within the Open Book Futures (OBF) project—the Open Book Collective, Open Book Publishers, punctum books, and Thoth Open Metadata—work to secure direct funding for their publishing work from libraries. For OA book publishers, such funding is more sustainable than the common alternative of levying book processing charges. For open infrastructure providers, it is a funding option more achievable than the vanishingly rare opportunities for grant funding, and more in line with their values than seeking commercial investment. However, in a rapidly shifting landscape, there is a risk that a library adopts a set of criteria that is inadvertently exclusionary, with adverse consequences for the publisher or initiative trying to solicit library support.

An example could be a small, born-open academic publisher that meets almost all of a particular library’s criteria but falls short on a fixed technical, licensing, or archiving requirement. For Diamond OA initiatives working to win support from a diversity of libraries, this can add significant extra hurdles, as there may be considerable disparities in criteria amongst libraries even within a relatively homogeneous research area. Of course, with the proliferation of OA models, libraries, particularly those that are understaffed or under-resourced, can be hard-pressed to keep abreast of developments or to make the necessary adjustments, for example by regularly revisiting their own criteria.

Toward a Community of Practice

Throughout the workshop, participants generously shared their practices and articulated the possibilities and potential tensions that emerge in the interstices of criteria-setting work. They recognized that as stakeholders in OA publishing seek to promote and safeguard scholarship, they must also address the clear need to foster diversity, equity, and inclusion. Participants expressed a genuine interest in exploring concrete steps to take the conversation forward, but also stressed the importance of developing a coherent and inclusive approach to criteria setting that could be presented to the community at large.

The workshop’s participants support, in principle, the idea of developing a collaborative, ideally global, approach to tackling such challenges, though such a conversation has yet to be organized. The creation of a coordinated community that upholds the value of difference and diversity in evaluation practices would be a welcome step, so long as those practices support high-quality scholarship. Moreover, participants agreed it is possible to develop shared objectives, principles, and values that could underpin a diversity of evaluation practices. Indeed, attendees made a strong case for the necessity of diverse evaluation criteria—for transparent and fair approaches to evaluation that recognize the equally diverse nature of research outputs, languages, and disciplinary traditions.

To help deliver a fair, equitable, and inclusive OA book publishing landscape operating across diverse regions and disciplines, we and our colleagues at DOAB, the Open Book Collective, and Copim will be exploring opportunities to pursue this work further with a wider community of researchers, publishers, platforms, libraries, funders, and others. In our view, it is only through developing such a community of practice that we can hope to create criteria-setting approaches fully capable of responding to the complexities of a globally diverse, but still all too unequal, scholarly system.
