Open access (OA) is a mechanism by which research outputs are distributed online, free of cost or other access barriers. With open access strictly defined (according to the 2001 definition), or libre open access, barriers to copying or reuse are also reduced or removed by applying an open license for copyright.
The main focus of the open access movement is "peer reviewed research literature." Historically, this has centered mainly on print-based academic journals. Conventional (non-open access) journals cover publishing costs through access tolls such as subscriptions, site licenses or pay-per-view charges. Open access can be applied to all forms of published research output, including peer-reviewed and non-peer-reviewed academic journal articles, conference papers, theses, book chapters, and monographs.
Various studies have investigated the extent of open access. A study published in 2010 showed that roughly 20% of the total number of peer-reviewed articles published in 2008 could be found openly accessible. Another study found that by 2010, 7.9% of all academic journals with impact factors were gold open access journals, and showed a broad distribution of gold open access journals throughout academic disciplines. A 2013 study of random journals from the citation indexes AHSCI, SCI and SSCI found that 88% of the journals were closed access and 12% were open access. In August 2013, a study done for the European Commission reported that 50% of a random sample of all articles published in 2011, as indexed by Scopus, were freely accessible online by the end of 2012. A 2017 study by the Max Planck Society put the share of gold access articles in pure open access journals at around 13 percent of total research papers.
In 2009, there were approximately 4,800 active open access journals, publishing around 190,000 articles. As of February 2019, over 12,500 open access journals are listed in the Directory of Open Access Journals.
Walt Crawford's report on Gold Open Access 2013-2018 (GOA4) found that in 2018 over 700,000 articles were published in gold open access worldwide, 42% of which were in journals with no author-paid fees. The figure varies significantly depending on region and kind of publisher: 75% if university-run, over 80% in Latin America, but less than 25% in Western Europe. However, Crawford's study did not count open access articles published in "hybrid" journals (subscription journals that allow authors to make their individual articles open in return for payment of a fee). More comprehensive analyses of the scholarly literature suggest that this resulted in a significant underestimation of the prevalence of author-fee-funded OA publications in the literature. Crawford's study also found that although a minority of open access journals impose charges on authors, a growing majority of open access articles are published under this arrangement, particularly in the science disciplines, thanks to the enormous output of open access "mega journals", each of which may publish tens of thousands of articles in a year and is invariably funded by author-side charges (see Figure 10.1 in GOA4).
The Registry of Open Access Repositories (ROAR) indexes the creation, location and growth of open access repositories and their contents. As of February 2019, over 4,500 institutional and cross-institutional repositories have been registered in ROAR.
There are a number of variants of open access publishing and different publishers may use one or more of these variants.
Different open access types are commonly described using a colour system. The most widely recognised names are "green", "gold", and "hybrid" open access; however, a number of other terms are also used for additional models.
The gold OA model provides full open access by publishers, in exchange for a per-article publication fee from the authors or their institutions. The publisher makes all articles and related content available for free immediately on the journal's website. In such publications, articles are licensed for sharing and reuse via Creative Commons licenses or similar.
Self-archiving by authors is permitted under green OA. The author posts the work to a website controlled by the author, the research institution that funded or hosted the work, or to an independent central open repository.
If the author posts the near-final version of their work after peer review by a journal, the archived version is called a "postprint". This can be the accepted manuscript as returned by the journal to the author after successful peer review.
Hybrid open access journals contain a mixture of open access and closed access articles. A publisher following this model is partially funded by subscriptions, and provides open access only for those individual articles for which the authors (or research sponsor) pay a publication fee.
Delayed open-access journals publish articles initially as subscription-only, then release them as free to read (but not to reuse, adapt or share, so not open access in the strict sense), typically after an embargo period varying from months to years. In this way, subscribers receive early access to content, which remains unlicensed for reuse.
The journals which publish open access without charging authors article processing charges are sometimes referred to as platinum or diamond OA. Since they do not charge either readers or authors directly, such publishers often require funding from external sources such as academic institutions, learned societies, philanthropists or government grants.
The growth of digital piracy through large-scale copyright infringement has enabled free access to paywalled literature. In some ways this is a large-scale technical implementation of pre-existing practice, whereby those with access to paywalled literature would share copies with their contacts. However, the increased ease and scale of this practice from 2010 onwards have changed how many people treat subscription publications.
Similar to the free content definition, the terms 'gratis' and 'libre' were used in the BOAI definition to distinguish between free to read versus free to reuse. Gratis open access refers to online access free of charge ("free as in beer"), and libre open access refers to online access free of charge plus some additional re-use rights ("free as in freedom"). Libre open access covers the kinds of open access defined in the Budapest Open Access Initiative, the Bethesda Statement on Open Access Publishing and the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities. The re-use rights of libre OA are often specified by various specific Creative Commons licenses; almost all of these require attribution of authorship to the original authors. In 2012, the number of works under libre open access was considered to have been rapidly increasing for a few years, though most open access mandates did not enforce any copyright license and it was difficult to publish libre gold OA in legacy journals. However, there are no costs nor restrictions for green libre OA as preprints can be freely self-deposited with a free license, and most open access repositories use Creative Commons licenses to allow reuse.
FAIR is an acronym for 'Findable, Accessible, Interoperable and Reusable', intended to more clearly define what is meant by the term 'open access' and make the concept easier to discuss. Initially proposed in March 2016, it has subsequently been endorsed by organisations such as the European Commission and the G20.
Scholarly publishing provokes a range of positions and passions. For example, authors may spend hours struggling with diverse article submission systems, often converting document formatting between a multitude of journal and conference styles, and sometimes waiting months for peer review results. The drawn-out and often contentious societal and technological transition to open access and open science/open research, particularly across North America and Europe (Latin America has widely adopted "Acceso Abierto" since before 2000), has led to increasingly entrenched positions and much debate.
The area of (open) scholarly practices increasingly sees a role for policy-makers and research funders, who give focus to issues such as career incentives, research evaluation and business models for publicly funded research. Plan S and AmeliCA (Open Knowledge for Latin America) caused a wave of debate in scholarly communication around 2019.
The licenses most commonly used in open access publishing are Creative Commons licenses. The widely used CC BY license is one of the most permissive, requiring only attribution for the material to be used, and permitting derivative works and commercial use. A range of more restrictive Creative Commons licenses are also used. More rarely, some smaller academic journals use custom open access licenses.
Since open access publication does not charge readers, there are many financial models used to cover costs by other means. Open access can be provided by commercial publishers, who may publish open access as well as subscription-based journals, or dedicated open-access publishers such as Public Library of Science (PLOS) and BioMed Central.
Advantages and disadvantages of open access have generated considerable discussion amongst researchers, academics, librarians, university administrators, funding agencies, government officials, commercial publishers, editorial staff and society publishers. Reactions of existing publishers to open access journal publishing have ranged from moving with enthusiasm to a new open access business model, to experiments with providing as much free or open access as possible, to active lobbying against open access proposals. There are many publishers that started up as open access-only publishers, such as PLOS, Hindawi Publishing Corporation, Frontiers in... journals, MDPI and BioMed Central.
Some open access journals (under the gold, and hybrid models) generate revenue by charging publication fees in order to make the work openly available at the time of publication. The money might come from the author but more often comes from the author's research grant or employer. While the payments are typically incurred per article published (e.g. BMC or PLOS journals), some journals apply them per manuscript submitted (e.g. Atmospheric Chemistry and Physics until recently) or per author (e.g. PeerJ).
Charges typically range from $1,000 to $2,000 but can be under $10 or over $5,000. APCs vary greatly depending on subject and region, and are most common in scientific and medical journals (43% and 47% respectively) and least common in arts and humanities journals (0% and 4% respectively). APCs can also depend on a journal's impact factor. Some publishers (e.g. eLife and Ubiquity Press) have released estimates of the direct and indirect costs that set their APCs. Hybrid OA generally costs more than gold OA and can offer a lower quality of service.
By comparison, journal subscriptions equate to $3,500-$4,000 per article published by an institution, though this is highly variable by publisher (and some charge page fees separately). This has led to the assessment that there is enough money "within the system" to enable a full transition to OA. However, there is ongoing discussion about whether the change-over offers an opportunity to become more cost-effective or promotes more equitable participation in publication. Concern has been noted that increasing subscription journal prices will be mirrored by rising APCs, creating a barrier for less financially privileged authors. Some gold OA publishers will waive all or part of the fee for authors from less developed economies. Steps are normally taken to ensure that peer reviewers do not know whether authors have requested, or been granted, fee waivers, or to ensure that every paper is approved by an independent editor with no financial stake in the journal. The main argument against requiring authors to pay a fee is the risk to the peer review system, diminishing the overall quality of scientific journal publishing.
No-fee open access journals, also known as "platinum" or "diamond", do not charge either readers or authors. These journals use a variety of business models including subsidies, advertising, membership dues, endowments, or volunteer labour. Subsidising sources range from universities, libraries and museums to foundations, societies or government agencies. Some publishers may cross-subsidise from other publications or auxiliary services and products. For example, most APC-free journals in Latin America are funded by higher education institutions and do not condition publication on institutional affiliation. Conversely, Knowledge Unlatched crowdsources funding in order to make monographs available open access.
Estimates of prevalence vary, but approximately 10,000 journals without APC are listed in DOAJ and the Free Journal Network. APC-free journals tend to be smaller and more local-regional in scope. Some also require submitting authors to have a particular institutional affiliation.
The "green" route to OA refers to author self-archiving, in which a version of the article (often the peer-reviewed version before editorial typesetting, called "postprint") is posted online to an institutional and/or subject repository. This route is often dependent on journal or publisher policies,[note 1] which can be more restrictive and complicated than respective "gold" policies regarding deposit location, license, and embargo requirements. Some publishers require an embargo period before deposition in public repositories, arguing that immediate self-archiving risks loss of subscription income.
Currently used embargo times (often 6-12 months in STEM and over 12 months in social sciences and humanities), however, do not seem to be based on empirical evidence of the effect of embargoes on journal subscriptions. In 2013 the UK House of Commons Select Committee on Business, Innovation and Skills concluded that "there is no available evidence base to indicate that short or even zero embargoes cause cancellation of subscriptions".[note 2]
There are some data available[note 3] on the median "usage half life" (the median time it takes for scholarly articles to reach half of their total downloads) and the difference therein across disciplines, but this in itself does not prove that embargo length will affect subscriptions.[note 4]
The argument that immediate self-archiving risks subscription revenue is seen as ironic where archiving of postprints is concerned. If the value publishers add to the publication process beyond peer review (e.g. in typesetting, dissemination and archiving) were worth the price asked, people would still be willing to pay for the journal even if the unformatted postprint were available elsewhere. An embargo can thus be seen as a statement that the prices levied for individual articles through subscriptions are not commensurate with the value added to a publication beyond organizing the peer review process.
Publishers have, in the past, lifted embargo periods for specific research topics in times of humanitarian crises, or have been asked to do so (e.g. during outbreaks of Zika and Ebola[note 5]). While scholars consider this commendable in itself, it is also seen as an implicit acknowledgement that embargoes stifle the progress of science and the potential application of scientific research, particularly when it comes to life-threatening pandemics. While not all research is potentially critical for saving lives, it is arguably hard to imagine a discipline in which fellow researchers and societal partners would not benefit from un-embargoed access to research findings.
Evidence suggests that traditional journals can peacefully coexist with zero-embargo self-archiving policies, and the relative benefits to both publishers and authors via increased dissemination and citations outweigh any putative negative impacts. For publishers, the fact that most preprint repositories encourage authors to link to or upload the published version of record (VOR) is effectively free marketing for the respective journal and publisher.
Plan S includes zero-length embargoes on self-archiving as one of its key principles. Where publishers have already implemented such policies, such as the Royal Society, Sage, and Emerald,[note 6] there has been no documented impact on their finances so far. In a reaction to Plan S, Highwire noted that three of their society publishers make all author manuscripts freely available upon submission and stated that they do not believe this practice has contributed to subscription decline.[note 7] This suggests there is little evidence or justification supporting the need for embargo periods.
A "preprint" is typically a version of a research paper that is shared on an online platform prior to, or during, a formal peer review process. Preprint platforms have become popular due to the increasing drive towards open access publishing and can be publisher- or community-led. A range of discipline-specific or cross-domain platforms now exist.
A persistent concern surrounding preprints is that work may be at risk of being plagiarised or "scooped" (meaning that the same or similar research will be published by others without proper attribution to the original source) if it is publicly available but not yet associated with a stamp of approval from peer reviewers and traditional journals. These concerns are often amplified as competition increases for academic jobs and funding, and are perceived to be particularly problematic for early-career researchers and other higher-risk demographics within academia.
However, preprints in fact protect against scooping. Considering the differences between traditional peer-review based publishing models and deposition of an article on a preprint server, "scooping" is less likely for manuscripts first submitted as preprints. In a traditional publishing scenario, the time from manuscript submission to acceptance and to final publication can range from a few weeks to years, and go through several rounds of revision and resubmission before final publication. During this time, the same work will have been extensively discussed with external collaborators, presented at conferences, and been read by editors and reviewers in related areas of research. Yet, there is no official open record of that process (e.g., peer reviewers are normally anonymous, reports remain largely unpublished), and if an identical or very similar paper were to be published while the original was still under review, it would be impossible to establish provenance.
Preprints provide a time-stamp at the time of publication, which helps to establish the "priority of discovery" for scientific claims (Vale and Hyman 2016). This means that a preprint can act as proof of provenance for research ideas, data, code, models, and results. The fact that the majority of preprints come with a form of permanent identifier, usually a Digital Object Identifier (DOI), also makes them easy to cite and track. Thus, if one were to be "scooped" without adequate acknowledgement, this would be a case of academic misconduct and plagiarism, and could be pursued as such.
There is no evidence that "scooping" of research via preprints exists, not even in communities that have broadly adopted the use of the arXiv server for sharing preprints since 1991. If the unlikely case of scooping emerges as the growth of the preprint system continues, it can be dealt with as academic malpractice. ASAPbio includes a series of hypothetical scooping scenarios as part of its preprint FAQ, finding that the overall benefits of using preprints vastly outweigh any potential issues around scooping.[note 8] Indeed, the benefits of preprints, especially for early-career researchers, seem to outweigh any perceived risk: rapid sharing of academic research, open access without author-facing charges, establishing priority of discoveries, receiving wider feedback in parallel with or before peer review, and facilitating wider collaborations.
Open access (mostly green and gratis) began to be sought and provided worldwide by researchers as soon as the possibility was opened by the advent of the Internet and the World Wide Web. The momentum was further increased by a growing movement for academic journal publishing reform, and with it gold and libre OA.
The premises behind open access publishing are that there are viable funding models to maintain traditional peer review standards of quality while also making the following changes:
An obvious advantage of open access journals is free access to scientific papers regardless of affiliation with a subscribing library, and improved access for the general public; this is especially true in developing countries. Lower costs for research in academia and industry have been claimed in the Budapest Open Access Initiative, although others have argued that OA may raise the total cost of publication and further increase economic incentives for exploitation in academic publishing. The open access movement is motivated by the problems of social inequality caused by restricting access to academic research, which favors large and wealthy institutions with the financial means to purchase access to many journals, as well as by the economic challenges and perceived unsustainability of academic publishing.
The intended audience of research articles is usually other researchers. Open access helps researchers as readers by opening up access to articles that their libraries do not subscribe to. One of the great beneficiaries of open access may be users in developing countries, where currently some universities find it difficult to pay for subscriptions required to access the most recent journals. Some schemes exist for providing subscription scientific publications to those affiliated with institutions in developing countries at little or no cost. All researchers benefit from open access, as no library can afford to subscribe to every scientific journal and most can only afford a small fraction of them; this is known as the "serials crisis".
Open access extends the reach of research beyond its immediate academic circle. An open access article can be read by anyone - a professional in the field, a researcher in another field, a journalist, a politician or civil servant, or an interested layperson. Indeed, a 2008 study revealed that mental health professionals are roughly twice as likely to read a relevant article if it is freely available.
Research funding agencies and universities want to ensure that the research they fund and support in various ways has the greatest possible research impact. As a means of achieving this, research funders are beginning to expect open access to the research they support. Many of them (including all UK Research Councils) have already adopted open access mandates, and others are in the process of doing so (see ROARMAP).
In the US, the NIH Public Access Policy, an open access mandate, was put into law in 2008; it requires that research papers describing research funded by the National Institutes of Health be made freely available to the public through PubMed Central (PMC) within 12 months of publication.
A growing number of universities are providing institutional repositories in which their researchers can deposit their published articles. Some open access advocates believe that institutional repositories will play a very important role in responding to open access mandates from funders.
In May 2005, 16 major Dutch universities cooperatively launched DAREnet, the Digital Academic Repositories, making over 47,000 research papers available. From 2 June 2008, DAREnet has been incorporated into the scholarly portal NARCIS. By 2019, NARCIS provided access to 360,000 open access publications from all Dutch universities, KNAW, NWO and a number of scientific institutes.
In 2011, a group of universities in North America formed the Coalition of Open Access Policy Institutions (COAPI). Starting with 21 institutions where the faculty had either established an open access policy or were in the process of implementing one, COAPI now has nearly 50 members. These institutions' administrators, faculty, librarians, and staff support the Coalition's international awareness-raising and advocacy work for open access.
In 2012, the Harvard Open Access Project released its guide to good practices for university open-access policies, focusing on rights-retention policies that allow universities to distribute faculty research without seeking permission from publishers. Rights retention is currently being explored in the UK by UKSCL.
In 2013 a group of nine Australian universities formed the Australian Open Access Support Group (AOASG) to advocate, collaborate, raise awareness, and lead and build capacity in the open access space in Australia. In 2015, the group expanded to include all eight New Zealand universities and was renamed the Australasian Open Access Support Group. It was then renamed the Australasian Open Access Strategy Group, highlighting its emphasis on strategy. The awareness raising activities of the AOASG include presentations, workshops, blogs, and a webinar series on open access issues.
As information professionals, librarians are often vocal and active advocates of open access. These librarians believe that open access promises to remove both the price barriers and the permission barriers that undermine library efforts to provide access to the scholarly record, as well as helping to address the serials crisis. Many library associations have either signed major open access declarations or created their own. For example, IFLA has produced a Statement on Open Access.
Librarians also lead education and outreach initiatives to faculty, administrators, and others about the benefits of open access. For example, the Association of College and Research Libraries of the American Library Association has developed a Scholarly Communications Toolkit. The Association of Research Libraries has documented the need for increased access to scholarly information, and was a leading founder of the Scholarly Publishing and Academic Resources Coalition (SPARC).
At most universities, the library manages the institutional repository, which provides free access to scholarly work by the university's faculty. The Canadian Association of Research Libraries has a program to develop institutional repositories at all Canadian university libraries.
In 2013, open access activist Aaron Swartz was posthumously awarded the American Library Association's James Madison Award for being an "outspoken advocate for public participation in government and unrestricted access to peer-reviewed scholarly articles". In March 2013, the entire editorial board and the editor-in-chief of the Journal of Library Administration resigned en masse, citing a dispute with the journal's publisher. One board member wrote of a "crisis of conscience about publishing in a journal that was not open access" after the death of Aaron Swartz.
The pioneer of the open access movement in France and one of the first librarians to advocate the self-archiving approach to open access worldwide is Hélène Bosc. Her work is described in her "15-year retrospective".
Open access to scholarly research is argued to be important to the public for a number of reasons. One of the arguments for public access to the scholarly literature is that most of the research is paid for by taxpayers through government grants, who therefore have a right to access the results of what they have funded. This is one of the primary reasons for the creation of advocacy groups such as The Alliance for Taxpayer Access in the US. Examples of people who might wish to read scholarly literature include individuals with medical conditions (or family members of such individuals) and serious hobbyists or 'amateur' scholars who may be interested in specialized scientific literature (e.g. amateur astronomers). Additionally, professionals in many fields may be interested in continuing education in the research literature of their field, and many businesses and academic institutions cannot afford to purchase articles from or subscriptions to much of the research literature that is published under a toll access model.
Even those who do not read scholarly articles benefit indirectly from open access. For example, patients benefit when their doctor and other health care professionals have access to the latest research. As argued by open access advocates, open access speeds research progress, productivity, and knowledge translation. Every researcher in the world can read an article, not just those whose library can afford to subscribe to the particular journal in which it appears. Faster discoveries benefit everyone. High school and junior college students can gain the information literacy skills critical for the knowledge age. Critics of the various open access initiatives claim that there is little evidence that a significant amount of scientific literature is currently unavailable to those who would benefit from it. While no library has subscriptions to every journal that might be of benefit, virtually all published research can be acquired via interlibrary loan, though this may take from a day to several weeks depending on the lending library and whether it will scan and email or mail the article. Open access online, by contrast, is faster, often immediate, making it more suitable than interlibrary loan for fast-paced research.
In developing nations, open access archiving and publishing acquires a unique importance. Scientists, health care professionals, and institutions in developing nations often do not have the capital necessary to access scholarly literature, although schemes exist to give them access for little or no cost. Among the most important is HINARI, the Health InterNetwork Access to Research Initiative, sponsored by the World Health Organization. HINARI, however, also has restrictions. For example, individual researchers may not register as users unless their institution has access, and several countries that one might expect to have access do not have access at all, not even "low-cost" access (e.g. South Africa).
Many open access projects involve international collaboration. For example, the SciELO (Scientific Electronic Library Online), is a comprehensive approach to full open access journal publishing, involving a number of Latin American countries. Bioline International, a non-profit organization dedicated to helping publishers in developing countries is a collaboration of people in the UK, Canada, and Brazil; the Bioline International Software is used around the world. Research Papers in Economics (RePEc), is a collaborative effort of over 100 volunteers in 45 countries. The Public Knowledge Project in Canada developed the open-source publishing software Open Journal Systems (OJS), which is now in use around the world, for example by the African Journals Online group, and one of the most active development groups is Portuguese. This international perspective has resulted in advocacy for the development of open-source appropriate technology and the necessary open access to relevant information for sustainable development.
There is increasing frustration amongst OA advocates with what is perceived as resistance to change on the part of many established scholarly publishers. Publishers are often accused of capturing and monetising publicly funded research, using free academic labour for peer review, and then selling the resulting publications back to academia at inflated profits. Such frustrations sometimes spill over into hyperbole, of which "publishers add no value" is one of the most common examples.
However, scholarly publishing is not a simple process, and publishers do add value to scholarly communication as it is currently designed. Kent Anderson maintains a list of things that journal publishers do; it currently contains 102 items and has yet to be formally contested by anyone who challenges the value of publishers.[note 9] Many items on the list could be argued to be of value primarily to the publishers themselves, e.g. "Make money and remain a constant in the system of scholarly output". Others, however, provide direct value to researchers and research by stewarding the academic literature. This includes arbitrating disputes (e.g. over ethics or authorship), maintaining the scholarly record, copy-editing, proofreading, type-setting, styling of materials, linking articles to open and accessible datasets, and (perhaps most importantly) arranging and managing scholarly peer review. The latter task should not be underestimated, as it effectively entails coercing busy people into giving their time to improve someone else's work and maintain the quality of the literature. To this are added the standard management processes of large enterprises, including infrastructure, people, security, and marketing. All of these factors contribute in one way or another to maintaining the scholarly record.
It can be questioned, though, whether these functions are actually necessary to the core aim of scholarly communication: the dissemination of research to researchers and other stakeholders such as policy makers, economic, biomedical and industrial practitioners, and the general public. The necessity of the current infrastructure for peer review, for example, is open to question, as is whether a scholar-led, crowdsourced alternative might be preferable. In addition, one of the biggest tensions in this space is whether for-profit companies (or the private sector) should be in charge of the management and dissemination of academic output while serving, for the most part, their own interests. This question is usually considered alongside the value added by such companies, and the two are closely linked as part of broader questions about the appropriate expenditure of public funds, the role of commercial entities in the public sector, and the privatisation of scholarly knowledge.
Publishing could certainly be done at a lower cost than is common at present. There are significant researcher-facing inefficiencies in the system, including the common scenario of multiple rounds of rejection and resubmission to various venues, and the fact that some publishers profit beyond reasonable scale. What is missing most from the current publishing market is transparency about the nature and quality of the services publishers offer. Such transparency would allow authors to make informed choices, rather than decisions based on indicators that are unrelated to research quality, such as the JIF. All of the above questions are being investigated, and alternatives could be considered and explored. Yet in the current system publishers still play a role in managing quality assurance and the interlinking and findability of research. As the role of scholarly publishers within the knowledge communication industry continues to evolve, it is seen as necessary that they justify their operation on the basis of the intrinsic value they add, and combat the perception that they add no value to the process.
The main reason authors make their articles openly accessible is to maximize their research impact. A study in 2001 first reported an open access citation advantage, and there have since been many claims of higher citation rates for open access articles. One analysis found overall citation rates over a two-year period (2010–2011) to be 30% higher for subscription journals, but after controlling for discipline, journal age and publisher location the differences largely disappeared in most subcategories, except for journals launched prior to 1996. While there is some debate around the impact of open access, most studies show increased citations for open access publications.
Two major studies dispute the claim that open access articles lead to more citations. A randomized controlled trial of open access publishing involving 36 participating journals in the sciences, social sciences, and humanities found that open access articles (n=712) received significantly more downloads and reached a broader audience within the first year, yet were cited no more frequently, nor earlier, than subscription-access control articles (n=2533) within 3 years.
Many other studies, both major and minor and with varying degrees of methodological rigor, find that an open access article is more likely to be used and cited than one behind subscription barriers.
For example, a 2006 study in PLOS Biology found that articles published as immediate open access in PNAS were three times more likely to be cited than non-open access papers, and were also cited more than PNAS articles that were only self-archived. This result has been challenged as an artifact of authors self-selectively paying to publish their higher-quality articles in hybrid open access journals. However, a 2010 study of 27,197 articles in 1,984 journals used institutionally mandated open access, rather than self-selected open access, to control for any bias on the part of authors toward making their better (hence more citable) articles open access. The result was a replication of the repeatedly reported open access citation advantage, with the advantage equal in size and significance whether the open access was self-selected or mandated.
A 2016 study reported that the odds of an open access journal being referenced on the English Wikipedia are 47% higher than for paywalled journals, and suggested that this constitutes a significant "amplifier" effect for science published on such platforms.
Scholars are paid by research funders and/or their universities to do research; the published article is the report of the work they have done, rather than an item for commercial gain. The more the article is used, cited, applied and built upon, the better for research as well as for the researcher's career. Open access can reduce publication delays, an obstacle which led some research fields such as high-energy physics to adopt widespread preprint access.
Some professional organizations have encouraged use of open access: in 2001, the International Mathematical Union communicated to its members that "Open access to the mathematical literature is an important goal" and encouraged them to "[make] available electronically as much of our own work as feasible" to "[enlarge] the reservoir of freely available primary mathematical material, particularly helping scientists working without adequate library access".
The journal impact factor (JIF) was originally designed by Eugene Garfield as a metric to help librarians decide which journals were worth subscribing to: the JIF aggregates the number of citations to articles published in each journal. Since then, the JIF has come to be treated as a mark of journal "quality" and has gained widespread use in the evaluation of research and researchers, even at the institutional level. It thus has a significant impact on steering research practices and behaviours.
However, critics of the JIF point out that use of the arithmetic mean in its calculation is problematic because the pattern of citation distribution is skewed. Citation distributions for selected journals, plotted alongside their JIFs and the percentage of citable items cited less often than the JIF, are clearly skewed, making the arithmetic mean an inappropriate statistic for saying anything about individual papers within the distribution. More informative and readily available article-level metrics can be used instead, such as citation counts or "altmetrics", along with other qualitative and quantitative measures of research "impact".
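The skew that critics point to can be illustrated with a small computation. The citation counts below are invented for illustration (real distributions are typically even more extreme), but they show how a few highly cited papers pull the mean well above the median:

```python
import statistics

# Hypothetical citation counts for articles a journal published over a
# two-year window: heavily skewed, as real citation data tend to be.
citations = [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 8, 40, 120]

# The JIF is essentially an arithmetic mean: total citations divided
# by the number of citable items.
jif_like = sum(citations) / len(citations)
median = statistics.median(citations)

print(f"mean (JIF-like): {jif_like:.1f}")  # 12.7
print(f"median:          {median}")        # 2

# Most articles are cited less often than the mean when a few highly
# cited papers dominate the distribution.
below_mean = sum(c < jif_like for c in citations)
print(f"{below_mean} of {len(citations)} articles fall below the mean")
```

Here 13 of 15 hypothetical articles are cited less often than the mean, which is why the mean says little about any individual paper in the journal.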
As early as 2010, national and international research funding institutions pointed out that numerical indicators such as the JIF should not be used as a measure of quality.[note 10] Indeed, the JIF is a highly manipulated metric, and the justification for its continued widespread use beyond its original narrow purpose seems to lie in its simplicity (an easily calculated and compared number) rather than in any actual relationship to research quality.
Empirical evidence shows that the misuse of the JIF, and journal ranking metrics in general, has a number of negative consequences for the scholarly communication system. These include conflation of the reach of a journal with the quality of individual papers, and insufficient coverage of the social sciences and humanities as well as of research outputs from Latin America, Africa, and South-East Asia. Additional drawbacks include the marginalization of research in vernacular languages and on locally relevant topics, the inducement of unethical authorship and citation practices, and, more generally, the fostering of a reputation economy in academia based on publishers' prestige rather than actual research qualities such as rigorous methods, replicability and social impact. Using journal prestige and the JIF to cultivate a competition regime in academia has been shown to have deleterious effects on research quality.
Despite these shortcomings, JIFs are still regularly used to evaluate research in many countries. Outstanding issues also remain around the opacity of the metric and the fact that it is often negotiated by publishers, yet these integrity problems appear to have done little to curb its widespread misuse.
A number of regional focal points and initiatives now provide and suggest alternative research assessment systems, including key documents such as the Leiden Manifesto[note 11] and the San Francisco Declaration on Research Assessment (DORA). Recent developments around 'Plan S' call for broader adoption and implementation of such initiatives, alongside fundamental changes in the scholarly communication system.[note 12] There is thus little basis for the popular simplification which equates JIFs with quality, and the ongoing inappropriate association of the two will continue to have deleterious effects. As appropriate measures of quality for authors and research, concepts of research excellence should be remodelled around transparent workflows and accessible research results.
Researchers have peer reviewed manuscripts prior to publishing them, in a variety of ways, since the 18th century. The main goal of this practice is to improve the relevance and accuracy of scientific discussions. Even though experts often criticize peer review for a number of reasons, the process is still often considered the "gold standard" of science. Occasionally, however, peer review approves studies that are later found to be wrong, and deceptive or fraudulent results are rarely discovered prior to publication. There thus seems to be an element of discord between the ideology behind peer review and its practice. By failing to effectively communicate that peer review is imperfect, the message conveyed to the wider public is that studies published in peer-reviewed journals are "true" and that peer review protects the literature from flawed science. A number of well-established criticisms exist of many elements of peer review. The following describes cases of the wider impact that inappropriate peer review can have on public understanding of the scientific literature.
Multiple examples across several areas of science find that scientists elevated the importance of peer review for research that was questionable or corrupted. For example, climate change skeptics have published studies in the Energy and Environment journal, attempting to undermine the body of research that shows how human activity impacts the Earth's climate. Politicians in the United States downplaying the science of climate change have then cited this journal on several occasions in speeches and reports.[note 13]
At times, peer review has been exposed as a process that was orchestrated for a preconceived outcome. The New York Times gained access to confidential peer review documents for studies sponsored by the National Football League (NFL) that were cited as scientific evidence that brain injuries do not cause long-term harm to its players.[note 14] During the peer review process, the authors of the study stated that all NFL players were part of a study, a claim that the reporters found to be false by examining the database used for the research. Furthermore, The Times noted that the NFL sought to legitimize the studies' methods and conclusions by citing a "rigorous, confidential peer-review process", despite evidence that some peer reviewers seemed "desperate" to stop their publication. Recent research has also demonstrated that widespread industry funding for published medical research often goes undeclared, and that such conflicts of interest are not appropriately addressed by peer review.
Another problem that peer review fails to catch is ghostwriting, a process by which companies draft articles for academics who then publish them in journals, sometimes with little or no changes. These studies can then be used for political, regulatory and marketing purposes. In 2010, the US Senate Finance Committee released a report that found this practice was widespread, that it corrupted the scientific literature and increased prescription rates.[note 15] Ghostwritten articles have appeared in dozens of journals, involving professors at several universities.[note 16] Recent court documents have found that Monsanto ghost-wrote articles to counter government assessment of the carcinogenicity of the pesticide glyphosate and to attack the International Agency for Research on Cancer.[note 17]
Just as experts in a particular field have a better understanding of the value of papers published in their area, scientists are considered to have a better grasp of the value of published papers than the general public, and to see peer review as a human process, with human failings: "despite its limitations, we need it. It is all we have, and it is hard to imagine how we would get along without it". These subtleties are lost on the general public, who may be misled into thinking that publication in a peer-reviewed journal is a "gold standard" and may erroneously equate published research with the truth. More care must therefore be taken over how peer review, and the results of peer-reviewed research, are communicated to non-specialist audiences, particularly at a time when a range of technical changes and a deeper appreciation of the complexities of peer review are emerging. This will be needed as the scholarly publishing system confronts wider issues such as retractions and the replication or reproducibility "crisis".
Peer review is often considered integral to scientific discourse in one form or another. Its gatekeeping role is supposed to be necessary to maintain the quality of the scientific literature and avoid a risk of unreliable results, inability to separate signal from noise, and slow scientific progress.
Shortcomings of peer review have been met with calls for even stronger filtering and more gatekeeping. A common argument in favor of such initiatives is the belief that this filter is needed to maintain the integrity of the scientific literature.
Calls for more oversight have at least two implications that run counter to what is known about genuine scholarship.
Others argue that authors most of all have a vested interest in the quality of a particular piece of work. Only the authors could have, as Feynman (1974)[note 18] puts it, the "extra type of integrity that is beyond not lying, but bending over backwards to show how you're maybe wrong, that you ought to have when acting as a scientist." If anything, the current peer review process and academic system could penalize, or at least fail to incentivize, such integrity.
Instead, the credibility conferred by the "peer-reviewed" label could diminish what Feynman calls the culture of doubt necessary for science to operate as a self-correcting, truth-seeking process. The effects of this can be seen in the ongoing replication crisis, hoaxes, and widespread outrage over the inefficacy of the current system. It is common to think that more oversight is the answer, but peer reviewers are not at all lacking in skepticism. The issue is not the skepticism shared by the select few who determine whether an article passes through the filter; it is the validation, and accompanying lack of skepticism, that comes afterwards.[note 19] Here again, more oversight only adds to the impression that peer review ensures quality, thereby further diminishing the culture of doubt and counteracting the spirit of scientific inquiry.[note 20]
Quality research, including some of the most fundamental scientific discoveries, dates back centuries, long before peer review took its current form. Whatever peer review existed centuries ago took a different form than it does in modern times, without the influence of large commercial publishing companies or a pervasive culture of publish or perish. Though in its initial conception it was often a laborious and time-consuming task, researchers took peer review on nonetheless, not out of obligation but out of a duty to uphold the integrity of their own scholarship. They managed to do so, for the most part, without the aid of centralised journals, editors, or any formalised or institutionalised process whatsoever. Supporters of modern technology argue that it makes it possible to communicate instantaneously with scholars around the globe, makes such scholarly exchanges easier, and could restore peer review to a purer scholarly form: a discourse in which researchers engage with one another to better clarify, understand, and communicate their insights.
Such modern technology includes posting results to preprint servers, preregistration of studies, open peer review, and other open science practices. In all these initiatives the role of gatekeeping remains prominent, as if it were a necessary feature of all scholarly communication, but critics argue that a proper, real-world implementation could test and disprove this assumption; demonstrate researchers' desire for more than traditional journals can offer; and show that researchers can be entrusted to perform their own quality control independent of journal-coupled review. Jon Tennant also argues that the outcry over the inefficiencies of traditional journals centers on their inability to provide rigorous enough scrutiny, and on the outsourcing of critical thinking to a concealed and poorly understood process. The assumption that journals and peer review are required to protect scientific integrity thus seems to undermine the very foundations of scholarly inquiry.
To test the hypothesis that filtering is indeed unnecessary to quality control, many of the traditional publication practices would need to be redesigned, editorial boards repurposed if not disbanded, and authors granted control over the peer review of their own work. Putting authors in charge of their own peer review is seen as serving a dual purpose. On one hand, it removes the conferral of quality within the traditional system, thus eliminating the prestige associated with the simple act of publishing. Perhaps paradoxically, the removal of this barrier might actually result in an increase of the quality of published work, as it eliminates the cachet of publishing for its own sake. On the other hand, readers know that there is no filter so they must interpret anything they read with a healthy dose of skepticism, thereby naturally restoring the culture of doubt to scientific practice.
In addition to concerns about the quality of work produced by well-meaning researchers, there are concerns that a truly open system would allow the literature to be populated with junk and propaganda by those with a vested interest in certain issues. A counterargument is that the conventional model of peer review diminishes the healthy skepticism that is a hallmark of scientific inquiry, and thus confers credibility upon subversive attempts to infiltrate the literature. Allowing such "junk" to be published could make individual articles less reliable but render the overall literature more robust by fostering a "culture of doubt".
One initiative experimenting in this area is Researchers.One, a non-profit peer review publication platform featuring a novel author-driven peer review process. Other similar examples include the Self-Journal of Science, PRElights, and The Winnower, which do not yet seem to have greatly disrupted the traditional peer review workflow. Supporters conclude that researchers are more than responsible and competent enough to ensure their own quality control; they just need the means and the authority to do so.
Predatory publishing does not refer to a homogeneous category of practices. The term itself was coined by American librarian Jeffrey Beall, who created a list of "deceptive and fraudulent" open access (OA) publishers which was used as a reference until it was withdrawn in 2017. The term has since been reused for a new for-profit database by Cabell's International. On the one hand, Beall's list and Cabell's database do include truly fraudulent and deceptive OA publishers, which pretend to provide services (in particular quality peer review) they do not implement, display fictive editorial boards and/or ISSN numbers, use dubious marketing and spamming techniques, or even hijack known titles. On the other hand, they also list journals with merely subpar standards of peer review and linguistic correction. The number of predatory journals thus defined has grown exponentially since 2010. The demonstration of unethical practices in the OA publishing industry has also attracted considerable media attention.
Nevertheless, papers published by predatory publishers still represent only a small proportion of all papers published in OA journals. Most OA publishers signal their quality by registering their titles in the DOAJ (Directory of Open Access Journals) and complying with a standardised set of conditions.[note 21] A recent study has shown that Beall's criteria for "predatory" publishing are in no way limited to OA publishers: applying them to both OA and non-OA journals in the field of library and information science, even top-tier non-OA journals could be qualified as predatory; similar difficulties in demarcating predatory from non-predatory journals have been reported in biomedicine. If a causative connection is to be made, it is thus not between predatory practices and OA as such, but between predatory publishing and the unethical use of one of the many OA business models adopted by a minority of DOAJ-registered journals: the author-facing article-processing charge (APC) model, in which authors are charged to publish rather than to read. Such a model may indeed provide conflicting incentives to publish quantity rather than quality, particularly when combined with the often unlimited text space available online. APCs have gained popularity over the last two decades as a business model for OA because of the guaranteed revenue streams they offer, and because a lack of competitive pricing within the OA market allows vendors full control over how much they charge. However, subscription-based systems can also carry an incentive to publish more papers and use this as a justification for raising subscription prices, as demonstrated by Elsevier's statement on "double-dipping".[note 22] Ultimately, quality control is not related to the number of papers published, but to editorial policies and standards and their enforcement.
In this regard, it is also important to note the emergence of journals and platforms that select purely on (peer-reviewed) methodological quality, often enabled by the APC-model and the lack of space restrictions in online publishing. In this way, OA also allows more high-quality papers to be published.
Predatory OA publishers, and the authors who publish with them, appear to be based mainly in Asia and Africa, but are also found in Europe and the Americas. It has been argued that authors who publish in predatory journals may do so unwittingly and without unethical intent, owing to concerns that North American and European journals might be prejudiced against scholars from non-Western countries, to high publication pressure, or to a lack of research proficiency. Predatory publishing hence also raises questions about the geopolitical and commercial context of scholarly knowledge production. Nigerian researchers, for example, publish in predatory journals because of the pressure to publish internationally while having little or no access to Western international journals, or because of the often higher APCs charged by mainstream OA journals. More generally, the criteria adopted by high-JIF journals, including the quality of the English language, the composition of the editorial board and the rigour of the peer review process itself, tend to favour familiar content from the "centre" rather than the "periphery". It is thus important to distinguish between exploitative publishers and journals, whether OA or not, and legitimate OA initiatives with varying standards in digital publishing which may nonetheless improve and disseminate epistemic content. In Latin America, a highly successful system of free-of-charge OA publishing has been in place for more than two decades, thanks to organisations such as SciELO and REDALYC.[note 23]
Published and openly accessible review reports are one simple means of allowing any reader or potential author to assess directly both the quality and efficiency of a journal's review system, and the value for money of the requested APCs, and thus whether or not a journal operates "deceptive" or predatory practices. Associating OA with predatory publishing is therefore misleading: the real issue with predatory publishing lies in a particular business practice, and can largely be resolved through more transparency in the peer review and publication process.
Multiple databases exist for open access articles, journals and datasets. These databases overlap; however, each has different inclusion criteria, which typically include extensive vetting of journal publication practices, editorial boards and ethics statements. The main databases of open access articles and journals are DOAJ and PMC. In the case of DOAJ, only fully gold open access journals are included, whereas PMC also hosts articles from hybrid journals.
There are also a number of preprint servers, which host openly accessible copies of articles that have not yet been peer reviewed. These articles are subsequently submitted for peer review by either open access or subscription journals, but the preprint always remains openly accessible. A list of preprint servers is maintained at ResearchPreprints.
For articles published in closed access journals, some authors will deposit a postprint copy in an open access repository, where it can be accessed for free. Most subscription journals place restrictions on which version of the work may be shared and/or require an embargo period following the original date of publication. What is deposited can therefore vary: either a preprint or the peer-reviewed postprint; either the author's refereed and revised final draft or the publisher's version of record; and either deposited immediately or after several years. Repositories may be specific to an institution, a discipline (e.g. arXiv), a scholarly society (e.g. the MLA's CORE Repository), or a funder (e.g. PMC). Although the practice was first formally proposed in 1994, self-archiving was already being practiced by some computer scientists in local FTP archives in the 1980s (later harvested by CiteSeer). The SHERPA/RoMEO site maintains a list of publishers' copyright and self-archiving policies, and the ROAR database hosts an index of the repositories themselves.
Like self-archived green open access articles, most gold open access journal articles are distributed via the World Wide Web, owing to low distribution costs, increasing reach, speed, and the increasing importance of the web for scholarly communication. Open source software is sometimes used for open access repositories, open access journal websites, and other aspects of open access provision and publishing.
Access to online content requires Internet access, and this distributional consideration presents physical and sometimes financial barriers to access.
There are various open access aggregators that list open access journals or articles. ROAD (the Directory of Open Access scholarly Resources) synthesizes information about open access journals and is a subset of the ISSN register. SHERPA/RoMEO lists international publishers that allow the published version of articles to be deposited in institutional repositories. The Directory of Open Access Journals (DOAJ) contains over 12,500 peer-reviewed open access journals for searching and browsing.
Open access articles can be found with a web search, using any general search engine or those specialized for the scholarly and scientific literature, such as Google Scholar, OAIster, openaccess.xyz, base-search.net, and CORE. Many open-access repositories offer a programmable interface to query their content. Some use a generic protocol, such as OAI-PMH (e.g., base-search.net). In addition, some repositories offer a specific API, such as the arXiv API, the Dissemin API, the Unpaywall/oadoi API, or the base-search API.
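Several of these services can be queried with plain HTTP. As an illustrative sketch (the endpoint shape follows Unpaywall's public v2 REST API; the DOI and email address below are placeholders, and the network call is left commented out):

```python
import json
import urllib.request

def unpaywall_url(doi: str, email: str) -> str:
    """Build the Unpaywall v2 query URL for a DOI.
    Unpaywall requires an email address as a query parameter."""
    return f"https://api.unpaywall.org/v2/{doi}?email={email}"

def is_open_access(record: dict) -> bool:
    """Unpaywall responses include a boolean `is_oa` field indicating
    whether a free-to-read copy of the article was found."""
    return bool(record.get("is_oa"))

# Example usage (placeholder DOI and email; uncomment to perform the request):
# url = unpaywall_url("10.1371/journal.pone.0000308", "you@example.org")
# with urllib.request.urlopen(url) as resp:
#     record = json.load(resp)
#     print(is_open_access(record), record.get("best_oa_location"))
```

OAI-PMH harvesting works similarly, via verb-based query parameters (e.g. `?verb=ListRecords`) against a repository's base URL.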
In 1998, several universities founded the Public Knowledge Project to foster open access, and developed the open-source journal publishing system Open Journal Systems, among other scholarly software projects. As of 2010, it was being used by approximately 5,000 journals worldwide.
Several initiatives provide an alternative to the English language dominance of existing publication indexing systems, including Index Copernicus (Polish), SciELO (Portuguese, Spanish) and Redalyc (Spanish).
Clarivate Analytics' Web of Science (WoS) and Elsevier's Scopus databases are synonymous with data on international research, and are considered the two most trusted or authoritative sources of bibliometric data for peer-reviewed global research across disciplines. Both are also used widely for researcher evaluation and promotion, assessments of institutional impact (for example, the role of WoS in the UK Research Excellence Framework 2021[note 24]), and international league tables (bibliographic data from Scopus represents more than 36% of the assessment criteria in the THE rankings[note 25]). But while these databases are generally agreed to contain rigorously assessed, high-quality research, they do not represent the sum of current global research knowledge.
It is often stated in popular science articles that the research output of researchers in South America, Asia, and Africa is disappointingly low. Sub-Saharan Africa, for example, is often singled out for having "13.5% of the global population but less than 1% of global research output".[note 26] This oft-quoted figure is based on a World Bank/Elsevier report from 2012 which relies on data from Scopus.[note 27] Similarly, many others have analysed "global" or international collaborations and mobility using the even more selective WoS database. "Research output" in this context refers specifically to papers published in peer-reviewed journals indexed in Scopus or WoS.
Both WoS and Scopus are highly selective. Both are commercial enterprises whose standards and assessment criteria are controlled mostly by gatekeepers in North America and Western Europe. More comprehensive databases show the gap: Ulrich's Web lists as many as 70,000 journals, of which Scopus indexes fewer than half and WoS fewer than a quarter. While Scopus is larger and geographically broader than WoS, it still covers only a fraction of journal publishing outside North America and Europe. For example, it reports coverage of over 2,000 journals in Asia ("230% more than the nearest competitor"),[note 28] yet Indonesia alone has more than 7,000 journals listed on the government's Garuda portal[note 29] (of which more than 1,300 are currently listed in DOAJ),[note 30] and at least 2,500 Japanese journals are listed on the J-Stage platform.[note 31] Similarly, Scopus claims about 700 journals from Latin America, compared with SciELO's 1,285 active journals[note 32] and more than 1,300 DOAJ-listed journals in Brazil alone.[note 33] Furthermore, the editorial boards of journals in the WoS and Scopus databases are dominated by researchers from Western Europe and North America. In the journal Human Geography, for example, 41% of editorial board members are from the United States and 37.8% from the UK. Similarly, a study of ten leading marketing journals in the WoS and Scopus databases concluded that 85.3% of their editorial board members are based in the United States. Unsurprisingly, the research published in these journals tends to reflect the worldview of their editorial boards.
Comparison with subject-specific indexes further reveals geographical and topical bias. For example, Ciarli compared the coverage of rice research in CAB Abstracts (an agriculture and global health database) with that of WoS and Scopus, and found that the latter "may strongly under-represent the scientific production by developing countries, and over-represent that by industrialised countries"; this is likely to apply to other fields of agriculture. Such under-representation of applied research in Africa, Asia, and South America may additionally distort research strategies and policy development in these countries. Emphasis on these databases also diminishes the role of "local" and "regional" journals for researchers who want to publish and read locally relevant content. Some researchers deliberately bypass "high impact" journals in favour of outlets that reach their key audience more quickly, or that allow them to publish in their native language.
Furthermore, the odds are stacked against researchers for whom English is a foreign language. Some 95% of WoS journals are published in English, a practice some scholars describe as hegemonic and unreflective. Non-native speakers must spend part of their budget on translation and correction, and invest significant time and effort on subsequent revisions, making publishing in English a burden. A far-reaching consequence of English as the lingua franca of science lies in knowledge production, because its use benefits the "worldviews, social, cultural, and political interests of the English-speaking center" (p. 123).
The small proportion of research from South East Asia, Africa, and Latin America that appears in WoS and Scopus journals is not attributable to a lack of effort or quality, but to hidden epistemic and structural barriers (Chan 2019[note 34]). These reflect "deeper historical and structural power that had positioned former colonial masters as the centers of knowledge production, while relegating former colonies to peripheral roles" (Chan 2018[note 35]). Many North American and European journals demonstrate conscious and unconscious bias against researchers from other parts of the world.[note 36] Many call themselves "international" yet represent interests, authors, and references only in their own languages.[note 37] Researchers outside Europe and North America are therefore commonly rejected on the grounds that their research is "not internationally significant" or only of "local interest" (the wrong "local"), reflecting a concept of "international" limited to a Euro/Anglophone-centric mode of knowledge production. In other words, "the ongoing internationalisation has not meant academic interaction and exchange of knowledge, but the dominance of the leading Anglophone journals in which international debates occur and gain recognition" (p. 8).
Clarivate Analytics have taken some steps to broaden the scope of WoS, integrating the SciELO citation index (a move not without criticism[note 38]) and creating the Emerging Sources Index (ESI), which has given many more international titles database access. However, much work remains to recognise and amplify the growing body of research literature generated outside North America and Europe. The Royal Society have previously identified that "traditional metrics do not fully capture the dynamics of the emerging global science landscape", and that academia needs more sophisticated data and impact measures to provide a richer understanding of the global scientific knowledge available.
Academia has not yet built digital infrastructures that are equal, comprehensive, and multilingual, and that allow fair participation in knowledge creation. One way to bridge this gap is with discipline- and region-specific preprint repositories such as AfricArXiv and InarXiv. Open access advocates recommend remaining critical of "global" research databases built in Europe or North America, and being wary of claims that these products represent the global sum of human scholarly knowledge. They also point to the geopolitical impact of such systematic discrimination on knowledge production and on the inclusion and representation of marginalised research demographics within the global research landscape.
Many universities, research institutions and research funders have adopted mandates requiring their researchers to make their research publications open access. For example, Research Councils UK spent nearly £60m on supporting their open access mandate between 2013 and 2016.
The idea of mandating self-archiving was raised at least as early as 1998. Since 2003 efforts have been focused on open access mandating by the funders of research: governments, research funding agencies, and universities. Some publishers and publisher associations have lobbied against introducing mandates.
In 2002, the University of Southampton's School of Electronics & Computer Science became one of the first schools to implement a meaningful mandatory open access policy, in which authors had to contribute copies of their articles to the school's repository. More institutions followed suit in the following years. In 2007, Ukraine became the first country to create a national policy on open access, followed by Spain in 2009. Argentina, Brazil, and Poland are currently in the process of developing open access policies. Making master's and doctoral theses open access is an increasingly popular mandate by many educational institutions.
In order to chart which organisations have open access mandates, the Registry of Open Access Repository Mandates and Policies (ROARMAP) provides a searchable international database. As of February 2019, mandates have been registered by over 700 universities (including Harvard, MIT, Stanford, University College London, and University of Edinburgh) and over 100 research funders worldwide.
As these sorts of mandates and policies increase in prevalence, researchers may be affected by multiple policies. New tools, such as SWORD (protocol), are being developed to help authors manage sharing between repositories. UNESCO's policy document says, "In response to increasing incidents of this type, technical development work has been carried out to provide tools that enable the author to deposit an article once and for it to be copied into other repositories." There is a push to make more specific policy about allowed embargoes, rather than leaving it up to publishers.
Compliance rates with voluntary open access policies remain low. According to UNESCO's Policy guidelines for the development and promotion of open access, "Evidence has unequivocally demonstrated that to have real effect policies must be mandatory, whether institutional or funder policies. Mandatory policies at institutions succeed in accumulating content in their repositories, averaging 60% of total output after a couple of years of the policy being in place."
Traditional methods of scholarly publishing require complete and exclusive copyright transfer from authors to the publisher, typically as a precondition for publication. This process transfers control and ownership over dissemination and reproduction from authors as creators to publishers as disseminators, with the latter then able to monetise the process. The transfer and ownership of copyright represents a delicate tension between protecting the rights of authors, and the interests - financial as well as reputational - of publishers and institutes. With OA publishing, typically authors retain copyright to their work, and articles and other outputs are granted a variety of licenses depending on the type.
The timing of the process of rights transfer is in itself problematic for several reasons. Firstly, copyright transfer usually being conditional for publication means that it is rarely freely transferred or acquired without pressure. Secondly, it becomes very difficult for an author to not sign a copyright transfer agreement, due to the association of publication with career progression (publish or perish/publication pressure), and the time potentially wasted should the review and publication process have to be started afresh. There are power dynamics at play that do not benefit authors, and instead often compromise certain academic freedoms. This might in part explain why authors in scientific research, in contrast to all other industries where original creators get honoraria or royalties, typically do not receive any payments from publishers at all. It also explains why many authors seem to continue to sign away their rights while simultaneously disagreeing with the rationale behind doing so.
It remains unclear whether such copyright transfer is generally permissible. Research funders or institutes, public museums, or art galleries may have overriding policies stating that copyright over research, content, or intellectual property that they employ or fund may not be transferred to third parties, commercial or otherwise. Usually a single author signs on behalf of all authors, possibly without their awareness or permission. Fully understanding copyright transfer agreements requires a firm grasp of "legal speak" and copyright law in an increasingly complex licensing and copyright landscape,[note 39][note 40] with a steep learning curve for librarians and researchers. Thus, in many cases, authors may not even have the legal right to transfer full rights to publishers, or agreements may have been amended to make full texts available on repositories or archives regardless of the subsequent publishing contract.
This amounts to a fundamental discord between the purpose of copyright (to grant full choice to an author/creator over the dissemination of works) and its application, because authors lose these rights during copyright transfer. Such conceptual violations are underscored by the popular use of sites such as ResearchGate and Sci-Hub for illicit file sharing by academics and the wider public. Proponents argue that widespread, unrestricted sharing advances science faster than paywalled articles, and thus that copyright transfer does a fundamental disservice to the entire research enterprise. It is also counter-intuitive when learned societies such as the American Psychological Association actively monitor and remove copyrighted content they publish on behalf of authors,[note 41] as this is seen as serving neither authors nor the reusability of published research, and as a sign that the system of copyright transfer is counterproductive (since original creators lose all control over, and rights to, their own works).
Some commercial publishers, such as Elsevier, engage in "nominal copyright" where they require full and exclusive rights transfer from authors to the publisher for OA articles, while the copyright in name stays with the authors. The assumption that this practice is a condition for publication is misleading, since even works that are in the public domain can be repurposed, printed, and disseminated by publishers. Authors can instead grant a simple non-exclusive license to publish that fulfils the same criteria. However, according to a survey from Taylor and Francis in 2013, almost half of researchers surveyed answered that they would still be content with copyright transfer for OA articles.
Critics therefore argue that in scientific research copyright is largely ineffective in its proposed use, wrongfully acquired in many cases, and practically contrary to its fundamental purpose of protecting authors and furthering scientific research. Plan S requires that authors and their institutes retain copyright to articles without transferring it to publishers, a position also supported by OA2020.[note 42] Researchers have found no evidence that copyright transfer is required for publication, nor any case in which a publisher has exercised copyright in the best interest of the authors. Although one argument publishers make in favour of copyright transfer is that it enables them to defend authors against copyright infringement,[note 43] publishers can take on this responsibility even when copyright stays with the author, as is the policy of the Royal Society.[note 44]