Searching for a Suitable Journal? Check Here First.
Cabell’s Journalytics and Predatory Reports helps researchers—and the librarians who support them—find appropriate venues for publication and avoid those known to exhibit predatory behaviors.
Researchers need details about journals in order to select the most appropriate venues for submitting manuscripts. While a wide variety of tools exist to provide such information, perhaps none curates in one place as many details as Cabell’s Journalytics and Predatory Reports.
Cabell’s presents researchers—and the librarians who support them—with a user-friendly interface to discover journals publishing in specific disciplines and topics; compare characteristics such as scope, peer review model, acceptance rate, average timeframes for review and publication, and metrics of engagement; and avoid journals known to exhibit predatory behaviors.
In addition to basic details about a journal’s publisher, scope, and audience, Cabell’s provides details about submission requirements; peer review processes; acceptance rate; speed of review and publishing; open access options; citation; engagement beyond citation; and, where relevant, additional details such as alignment with the United Nations Sustainable Development Goals. In order to help authors avoid predatory publishing venues, Cabell’s also uses specific criteria to flag journals as predatory or concerning.
To search, a user generally begins by entering a specific journal title or subject keywords; for example, I searched for “Journal of Academic Librarianship.” Searching by ISSN or publisher name is also supported.
If an exact title match is found, it will be pinned to the top of the search results under the Exact Matches heading, followed by the full list of all search results (Figure 1).
The search crawls all areas of a journal’s card, including the journal description, which means irrelevant results often appear unless quotation marks are used to search for an exact title or a keyword phrase. Cabell’s does not support the use of Boolean operators such as AND and OR. Search results may include both credible journals and those flagged as likely predatory.
The integration of credible and predatory journals in search results is a recent change in Cabell’s approach—part of a 2023 platform release (Bisaccio, 2023). Previously, the two categories of journals had to be searched separately, which meant a user had to search for a given title twice to determine whether it was listed as credible, listed as predatory, or simply not listed at all. Users who searched only the credible category might mistakenly conclude that a title was not indexed and miss the predatory listing. Now that search results from both categories are combined, users can feel more confident of a journal’s status in the database after a single search.
In the search results, each journal appears as a “card” displaying a few characteristics, including acceptance rate, frequency of publication, and open-access publication model. Some of the data points displayed—such as the Scite index score and Altmetric media mentions score—may not offer valuable information for most users, particularly since they represent an oversimplification of more robust data into a single “score” that lacks context. The card can be expanded for more detail, including additional context for these two scores (Figure 2).
The expanded card opens on the Dashboard tab, the first of several tabs in a category I’ll call descriptive. The Dashboard tab indicates the journal’s publisher, sponsoring organization (if applicable), year of launch, ISSNs, disciplines, and—perhaps most crucially—a text block labeled Publisher Notes, which summarizes the journal’s scope, mission, and publishing interests. Here a researcher can verify not only that a journal embraces the subject of their research, but also the journal’s intended audience; whether the journal seeks theoretical or applied research; and what other types of submissions it accepts in addition to empirical research articles, such as case studies, review articles, critiques, or book reviews.
The descriptive category also includes the Submissions tab, which informs the researcher of a maximum submission length (if applicable), required style manual, acceptance rate, average length of time until publication, and any submission, review, or publication fees that the journal charges. Although it often requires scrolling down to find, this tab also includes a convenient link to the journal’s submission portal.
The next descriptive tab, Peer Review, reveals the journal’s review models—such as Double Anonymized, Single Anonymized, or Open—how many internal and external reviewers generally read a submission, whether manuscripts are submitted to a plagiarism checking tool, and the average length of time from submission until a review decision is made.
Finally, the Open Access tab indicates a journal’s model of open access (OA) publishing (none/Subscription, Hybrid, Gold, Bronze, or Diamond). This tab also indicates when Green OA is supported as an additional option, although a researcher would still want to reference the journal’s website or a tool such as the Jisc Open Policy Finder (formerly Sherpa) to understand the exact criteria for Green archival.
A second group of tabs falls into a category I’ll call metrics. The metrics group includes the Disciplines and Topics tabs, which both present “influence” scores based on how often the journal is cited by other journals. A higher percentage, representing more frequent citation, suggests that the journal has a higher influence in that specific discipline or topic, while a lower percentage suggests less citation and thus less influence. Note that frequency of citation as a proxy for a journal’s influence or impact is a perennially debated topic. This metric may be less appropriate for research that primarily targets audiences such as practitioners or policymakers, since a journal’s influence on those groups would not be fairly represented by citations in scholarly journals.
The next tab in the metrics category, Smart Citations, incorporates data from Scite.ai, which, although also citation-based, differs from the influence scores by classifying the types of citations a journal receives. By seeing how often citations are Supporting, Disputing, or merely Mentioning the work in a given journal, a researcher can interpret the nature—rather than simply the frequency—of a journal’s contributions to scholarly conversation.
The Attention tab incorporates data from Altmetric, which records other kinds of engagement with the journal’s contents: references on social media and in traditional news outlets, blogs, policy documents, and Wikipedia, as well as how many Mendeley users have added content from that journal to their personal reference libraries. While the appropriateness or usefulness of any one of these engagement channels will no doubt vary by discipline, the breadth of the data provided means a given researcher is likely to find some useful information.
Finally, for journals with relevant data available, there is a Sustainable Development Goals (SDGII) tab. When present for a given journal, this tab identifies the top three sustainable development goals that the journal addresses and rates the journal content’s alignment with the SDGs on a scale of 0–5 SDG “wheel” icons.
After a user searches by title or keyword, they can click on a button to filter the results by disciplines, topics, open access publishing models or fees, acceptance rate, and various other metrics and average timeframes.
From the search results, they can also select multiple journals to compare, a useful tool for a researcher choosing a venue for manuscript submission. Selected journals appear in an overlay pane at the bottom of the screen; when the user clicks Compare These, a table is generated comparing the journals’ key characteristics (see Figure 3).
Unfortunately, there is no simple mechanism to download or export the comparison table or individual journal cards.
The Cabell’s website details the specific criteria used to evaluate and qualify journals for inclusion in Journalytics—ranging from audience, which should include “academics, administrators, businesspersons, counselors, librarians, practicing teachers, and/or practitioners,” and relevance to Cabell’s scope to sponsorship, peer review model, publication practices, and key signs of integrity (Journal Selection Policy, n.d.). Cabell’s requests additional information from journals when necessary to inform a review and performs annual audits on all included journals to confirm they continue to meet the criteria. Similarly, Cabell’s transparently publishes the specific violation criteria it uses to flag potentially predatory journals. These criteria evolve over time; all changes are documented, and previous versions of the criteria are maintained for public reference (Predatory Reports Criteria, 2019).
Cabell’s website does not contain any details pertaining to accessibility, an accessibility roadmap, or a Voluntary Product Accessibility Template (VPAT). I used the Accessibility Cloud Lite bookmarklet to check a results page for compliance with WCAG 2.2 accessibility criteria, and it found several issues, including missing landmarks and improperly skipped heading levels (see Figure 4). Of course, this free tool may have limitations.
Cabell’s offers institutional pricing; individual subscriptions are not available (Cabell’s, n.d.). In response to my email inquiry, Simon Linacre, chief commercial officer of Cabell’s, explained that the company’s exact pricing levels are proprietary, but he shared that they offer four tiers based on FTE (full-time equivalent, a common enrollment metric in higher education) and that they “offer discounts to customers to make access to our data as affordable as possible” (S. Linacre, personal communication, May 12, 2025).
Cabell’s supports authentication via IP address range, OpenAthens, and Auth0 Passwordless Authentication, with which users enter their institutional email address and receive by email a one-time login code each time they wish to access the database (Logging In, n.d.).
Ulrichsweb and MLA International Bibliography perhaps come the closest to being competitors for Cabell’s in the sense that they provide indexes of published journals with an overview of publication details, audience, scope, and a selection of other characteristics. Ulrichsweb omits the many quantitative metrics found in Cabell’s but notes which databases index each journal and, for select titles, includes a qualitative review. Meanwhile, MLA International Bibliography, while not containing as many detailed metrics as Cabell’s, distinguishes itself by covering arts and humanities journals, which are decidedly absent in Cabell’s.
Additionally, several tools overlap with specific elements of Cabell’s data—for example, Cabell’s displays the same Altmetric engagement metrics that are found in Altmetric and Dimensions, and Cabell’s Discipline Influence score can be seen as a competitor for other citation-based metrics, such as Clarivate’s Journal Impact Factor, Elsevier’s CiteScore, and SCImago’s SJR score. But I am unaware of any other tool that compiles as robust and diverse an array of data points about individual journals.
It is essential for users to understand that some “good” journals aren’t included in Cabell’s, and that journals that are of lesser quality yet not predatory will not necessarily be excluded. A title’s exclusion should not be interpreted as negative, but its inclusion doesn’t guarantee anything beyond minimum standards of credible publishing. Even among journals flagged in Predatory Reports, the violations vary in quantity and severity; a small number of minor infractions might indicate a journal that is immature but not ill-intentioned. The database should be treated simply as one source of information that the researcher can interpret and apply according to the contexts of their own discipline and career stage.
Additionally, while Cabell’s focuses on questionable publisher practices, research misconduct can take other forms (for example, see Katina’s recent series on publishing integrity, or Kent Anderson’s article on “gaslight journals”). Even “legitimate,” non-predatory journals can be exploited by paper mills or research fraud by authors or editors. Cabell’s does not include data on article retraction rates or information about journals that may be compromised in ways other than predatory fees; incorporating these additional types of research and publishing integrity issues could increase the tool’s value for researchers and librarians.
I recommend Cabell’s Journalytics and Predatory Reports for researchers in health sciences, business, computer science, and the social sciences who need to discover, evaluate, or choose between scholarly journals in their discipline for publication submissions—as well as the librarians who provide researchers with guidance in scholarly publishing. I have also seen this tool prove valuable for faculty whose departments, during a pre-tenure review, have asked them to justify their selection of a particular publication venue.
Anderson, K. (2025, Jun 12). Welcome to “gaslight journals.” The Geyser. https://www.the-geyser.com/welcome-to-gaslight-journals/
Bisaccio, M. (2023, Aug 11). Introducing the all-new Journalytics Academic & Predatory Reports. The Source. https://blog.cabells.com/2023/08/11/introducing-the-all-new-journalytics-academic-predatory-reports/
Bisaccio, M. (2025, Jan 9). Cabells integration with LibKey now live. The Source. https://web.archive.org/web/20250122034953/https://blog.cabells.com/2025/01/09/cabells-integration-with-libkey-now-live/
Cabell’s. (n.d.). Cabell’s. https://cabells.com/
Data Sources. (n.d.). Cabell’s. https://learn.cabells.com/external/manual/journalytics-academic/article/data-sources?p=45bf0b37eb068f380076713fd7aa06dea36e1a2edf6ee9e822b762f49749e903
FAQs. (n.d.). Cabell’s. https://learn.cabells.com/external/manual/journalytics-academic/article/faqs?p=45bf0b37eb068f380076713fd7aa06dea36e1a2edf6ee9e822b762f49749e903
Goddard, M., & Motts, Z. (2025). Series: publishing integrity. Katina Magazine. https://katinamagazine.org/content/collection/series-publishing-integrity
Journal Quality Metrics. (n.d.). Cabell’s. https://cabells.com/metrics
Journal Selection Policy. (n.d.). Cabell’s. https://cabells.com/journal-selection
Logging In. (n.d.). Cabell’s. https://learn.cabells.com/external/manual/journalytics-academic/article/logging-in?p=45bf0b37eb068f380076713fd7aa06dea36e1a2edf6ee9e822b762f49749e903
Predatory Reports Criteria, v1.1. (2019, Mar 13). Cabell’s. https://cabells.com/predatory-criteria-v1.1
Who We Are. (n.d.). Cabell’s. https://cabells.com/about-us
https://doi.org/10.1146/katina-082625-1