“Today, Representatives Eddie Bernice Johnson (D-TX) and Jim Sensenbrenner (R-Wis.) sent the following letter to the Government Accountability Office, asking it to evaluate the status, effectiveness, and benefits of current federal public access policies. This letter builds upon previous legislative efforts between these Members to ensure taxpayers, who are footing the bill for federal research, have adequate access to the published results free of charge….Increased access and increased use of technology to enable and promote discovery across the body of scientific literature will advance the frontiers of science, medicine, and innovation across all sectors of our economy….Understanding how federal agencies create and implement their guidelines for covered works of publicly funded research is essential to improving and modernizing our public access policies. We made progress with the previous administration, and I look forward to working with our federal agencies, as well as…fellow congressional colleagues to continue moving forward on this effort….”
Monthly Archives: March 2017
Novel processes and metrics for a scientific evaluation: preliminary reflections
Reflections on Michaël Bon, Michael Taylor, Gary S. McDowell. “Novel processes and metrics for a scientific evaluation rooted in the principles of science – Version 1”. SJS (26 Jan. 2017)
<http://www.sjscience.org/article?id=580>
Following are my initial reflections on what I would describe as a ground-breaking effort toward articulating a radical transformation of scholarly communication, a transformation that I regard as much needed and highly timely: the current system is optimized for the technology of the 17th century (the printing press and postal system) and is far from taking full advantage of the potential of the internet.
The basic idea described by the authors is to replace the existing practices for evaluating scholarly work with a more collaborative and open system they call the Self-Journals of Science.
Comments
The title Self-journals of science: I recommend coming up with a new name. The current name is likely to give the impression of vanity publishing, which is not what the authors are suggesting; their proposal appears to be more along the lines of a new, collaborative form of organizing peer review.
Section 1, Introduction: the inherent shortcomings of an asymmetric evaluation system, attempts to describe how scientific communication works, its purpose, and a critique, with citations, in just a few pages. This is sufficient to tell the reader where the authors are coming from, but too broad in scope to have much depth or accuracy. I am not sure that it makes sense to spend a lot of time further developing this section. For example, the second paragraph refers to scientific recognition as artificially turned into a resource of predetermined scarcity. I am pretty sure that further research could easily yield evidence to back up this statement – e.g. Garfield’s idea of identifying the “core journals” so as to eliminate the journals one needn’t bother buying or reading, and the apparently de facto assumption that a good journal is one with a high rejection rate. On page 3, first paragraph, four citations are given for one statement. A quick glance at the reference list suggests that this may be stretching what the authors of the cited works have said. For example, at face value it seems unlikely that reference 4, with the title “Double-blind review favours increased representation of female authors”, actually supports the authors’ assertion that “Since peer-trials necessarily involve a degree of confidentiality and secrecy…many errors, biases and conflicts of interest may arise without the possibility of correction”. The authors of the cited article appear to be making exactly the opposite argument: that semi-open review results in bias. If I were doing a thorough review, I would look up at least a few of the cited works, and if the arguments are not justified by those works, I would hand the task of reading the works cited and citing them appropriately back to the authors.
The arguments presented are provocative and appropriate for initiating an important scholarly discussion. Like any provocative work, the arguments may be relatively strong for the task of initiating needed discussion but somewhat weak due to the lack of counter-argument. For example, the point of Section 1.4 is that “scientific conservatism is placing a brake on the pace of change”. Whether anything is placing a brake on the pace of change in 2017 is, I believe, arguable. The authors also do not address the potential benefits of scientific conservatism, even though arguments made elsewhere in the article – e.g. “The validity of an article is established by a process of open and objective debate by the whole community” – are themselves arguments for scientific conservatism (or so I argue). For example, one needs to understand this conservative tendency of science to fully appreciate the current consensus on climate change.
Section 2 defines scientific value as validity and importance
There are some interesting ideas here; however, the authors conflate methodological soundness with validity. A research study can reflect the very best practices of today’s methodology and present logical conclusions based on existing knowledge while still being incorrect or invalid (lacking external validity), for such reasons as limitations on our collective knowledge. A logical argument based on a premise incorrectly perceived to be true can lead to logical but incorrect conclusions.
The authors state that “the validity of an article is established by a process of open and objective debate by the whole community”. This is one instance of what I see as an overstatement of both current and potential future practice. Only in a very small scholarly community would it be possible for every member to read every article, never mind have an open and objective debate about each one. I think the authors have a valid point here, but direct it at the wrong level. This kind of debate occurs with big-picture paradigmatic issues such as climate change, not at the level of the individual article.
Perceived importance of an article is given, along with validity, as the other measure for evaluating an article. This argument needs work and critique. I agree with the authors (and Kuhn) about the tendency towards scientific conservatism, and I think we should be aware of bias in any new system, especially one based on open review. People are likely to perceive articles that advance work within an existing paradigm, or a new one that is gaining traction, as more important than truly pioneering work. With open review, I expect that authors with existing high status are more likely to be perceived as writing important work, while new, unknown, female authors or those from minority groups are more likely to have their work perceived as unimportant.
I do not wish to dismiss the idea of importance; rather, I would like to suggest that it needs quite a bit of work to define appropriately. For example, if I understand correctly, replication of previous experiments is perceived as a lesser contribution than original work. This is a disincentive to replication that seems likely to increase the likelihood of perpetuating error. Assuming this is correct, and we wish to change the situation, what is needed is something like a consensus that replication should be more highly valued; otherwise, if we rely on perceived importance, this work is likely to continue to be undervalued.
Section 2.2 Assessing validity by open peer review
This section presents some very specific suggestions for a review system. One comment I have is that this approach reflects a particular style. The idea of embedded reviews likely appeals more to some people than to others. Journals often provide reviewers with a variety of options depending on their preferred style: a written review like this, or going through the article with tracked changes. The + / – vote system for reviews strikes me as a tool very likely to reflect the personal popularity of reviewers and/or of particular viewpoints rather than adding to the quality of reviews. There are advantages and disadvantages to authors being able to select the reviews that they feel are of most value to them. The disadvantage is that authors with a blind spot or a conscious bias are free to ignore reviews that a really good editor would force them to address before a work could be published.
Section 3 Benefits of this evaluation system
Here the authors argue that this evaluation system can be turned into metrics, both for validity (the number of scholars engaged in peer review, and the fraction who consider the article up to scientific standards) and for importance (the number of scholars who have curated the article in their self-journals). Like the authors, I think we need to move away from publication in high impact factor journals as a surrogate for quality. However, I argue against metrics-based evaluation, period. This is a topic that I will be writing about more over the coming months. For now, suffice it to say that quickly moving to new metrics-based evaluation systems appears to me likely to create worse problems than such a move is meant to solve. For example, if we assume that scientific conservatism is real and is a problem, isn’t a system where people are evaluated based on the number of reviewers who find their work up to standards likely to increase conservatism?
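To make the proposed metrics concrete, here is a minimal sketch in Python (my own illustration under assumed data structures, not the authors’ implementation; all names and numbers are hypothetical) of how the two measures described above might be computed.

```python
# A minimal sketch (my own, not the authors' code) of the two metrics
# described in Section 3: a validity measure derived from open peer review
# and an importance measure derived from self-journal curation.
# All names and numbers here are hypothetical illustrations.

def validity_metric(reviewers_engaged: int, reviewers_approving: int):
    """Return the number of scholars engaged in review and the fraction
    of them who consider the article up to scientific standards."""
    if reviewers_engaged == 0:
        return 0, 0.0
    return reviewers_engaged, reviewers_approving / reviewers_engaged

def importance_metric(curating_self_journals: list) -> int:
    """Return the number of scholars who have curated the article
    in their self-journals."""
    return len(curating_self_journals)

# Hypothetical article: 12 reviewers engaged, 9 of whom judge it sound,
# and 5 scholars have curated it in their self-journals.
engaged, fraction = validity_metric(12, 9)
print(engaged, fraction)                                            # 12 0.75
print(importance_metric(["sj-a", "sj-b", "sj-c", "sj-d", "sj-e"]))  # 5
```

Note that both numbers are trivially easy to compute, which is precisely why I worry they would also be trivially easy to game or to turn into new targets.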
Some strengths of the article:
- recognizing the need for change and hopefully kick-starting an important discussion
- starting with the idea that we scholars can lead the transformation ourselves
- focus on collaboration rather than competition
To think about from my perspective:
- researcher time: realism is needed. An article reviewed by two or three people who are qualified to judge the soundness of the method, the logic of the arguments and the clarity of the writing should be enough. It isn’t a good use of researchers’ time to have a great many people checking whether a particular statistical test was appropriate or not.
- this is work for scholarly communities, not individuals. The conclusion speaks to the experience of arXiv. arXiv is a service shared by a large community and supported by a partnership of libraries that has staff and hosting support.
- the Self-Journals of Science uses the CC-BY license as a default. Many OA advocates regard this license as the best option for OA; however, I argue that this is a major strategic error for the OA movement. My arguments on the overlap between open licensing and open access are complex and can be found in my series Creative Commons and Open Access Critique. To me this is a central issue that the OA movement must deal with, so I raise it here, and I continue to avoid participating in services that require me to use this license for my work.
Key take-aways I hope people will get out of this review:
- forget metrics – rather than coming up with a replacement for the impact factor, let’s get out of metrics-based evaluation altogether
- look for good models, like arXiv, because communities are complicated. What works?
- let’s talk – some of us may want immediate radical transformation of scholarly communication, but doing this well is going to take time: time to figure out the issues, come up with potential solutions, let people try things and see what works and what doesn’t, and do the research too
- be realistic about time and style – researchers have limited time, and people have different preferred styles. New approaches need to take this into account.
For more on this topic, watch for my keynote at the What do rankings tell us about higher education? roundtable at UBC this May.
Self-Archiving Journal Articles: A Case Study of Faculty Practice and Missed Opportunity
Abstract: Carnegie Mellon faculty Web pages and publisher policies were examined to understand self-archiving practice. The breadth of adoption and depth of commitment are not directly correlated within the disciplines. Determining when self-archiving has become a habit is difficult. The opportunity to self-archive far exceeds the practice, and much of what is self-archived is not aligned with publisher policy. Policy appears to influence neither the decision to self-archive nor the article version that is self-archived. Because of the potential legal ramifications, faculty must be convinced that copyright law and publisher policy are important and persuaded to act on that conviction.
Automatically Determining Versions of Scholarly Articles | Rothchild | Scholarly and Research Communication
Abstract: Background: Repositories of scholarly articles should provide authoritative information about the materials they distribute and should distribute those materials in keeping with pertinent laws. To do so, it is important to have accurate information about the versions of articles in a collection.
Analysis: This article presents a simple statistical model to classify articles as author manuscripts or versions of record, with parameters trained on a collection of articles that have been hand-annotated for version. The algorithm achieves about 94 percent accuracy on average (cross-validated).
Conclusion and implications: The average pairwise annotator agreement among a group of experts was 94 percent, showing that the method developed in this article displays performance competitive with human experts.
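As an illustration of the general approach described in this abstract (my own sketch, not the authors’ actual model or features; the toy training texts and feature choices are hypothetical), a simple statistical classifier separating author manuscripts from versions of record might look like this:

```python
# A minimal sketch (NOT the model from the article) of classifying articles
# as author manuscripts (AM) vs. versions of record (VoR) using a simple
# statistical model trained on hand-annotated examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical hand-annotated examples: words extracted from article PDFs.
texts = [
    "accepted manuscript double spaced line numbers under review",  # AM
    "author version not yet copyedited submitted to journal",       # AM
    "final typeset two column layout doi publisher copyright",      # VoR
    "version of record published journal volume issue pages doi",   # VoR
]
labels = ["AM", "AM", "VoR", "VoR"]

# Bag-of-words features feeding a naive Bayes classifier; in practice the
# hard work is assembling a large, accurately hand-annotated training set.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["typeset pdf with doi and publisher copyright line"]))
# -> ['VoR'] on this toy data; real accuracy depends on the training corpus.
```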
NMC Horizon Report: 2017 Library Edition
“Six key trends, six significant challenges, and six developments in technology profiled in this report are poised to impact library strategies, operations, and services….These top 10 highlights capture the big picture themes of organizational change that underpin the 18 topics: …[Highlight 3:] In the face of financial constraints, open access is a potential solution. Open resources and publishing models can combat the rising costs of paid journal subscriptions and expand research accessibility. Although this idea is not new, current approaches and implementations have not yet achieved peak efficacy….”
Biodiversity Heritage Library – Expanding Access to Biodiversity Literature
“Expanding Access to Biodiversity Literature, which is funded generously by the Institute of Museum and Library Services (IMLS), will significantly increase online access to biodiversity material by positioning BHL as an on-ramp for biodiversity content providers that would like to contribute to the national digital library infrastructure through the Digital Public Library of America (DPLA). The grant proposes to address challenges facing content providers—including insufficient amounts of content, indexing of scientific names, and metadata creation—and make necessary digital infrastructure enhancements by creating an innovative model for open access to data and to support collaboration among these institutions. The project would meet the goals of the IMLS National Leadership Grants for Libraries Program by increasing access to digital services, expanding the range and types of digital content available, improving discoverability, and supporting open access….The project runs from October 1, 2015 to September 30, 2017 and will be conducted by the New York Botanical Garden in partnership with Harvard University, the Missouri Botanical Garden, and the Smithsonian Institution Libraries….”
The transformation of scientific journal publishing: Open access after the Berlin 12 Conference – IOS Press
“In the last 10–15 years, Open Access has become a shared vision of many if not most of the world’s national and international research councils. Open Access as a principle is very well established in the international discourse on research policies; however, Open Access as a practice has yet to transform the traditional subscription-based publishing system, which is as vigorous and prosperous as ever, despite its inherent restrictions on access and usage and its remarkable detachment from the potentials of a 21st century web-based publishing system. OA2020 is a transformative initiative trying to bring a new approach to the transactional side of the publishing system and the ways in which its cash flow is organized. Publishing and financial data are brought together in a way to demonstrate that such a switch would indeed be feasible. OA2020 lays out the path for how this transformation could happen so that Open Access to research results would finally be a reality from the moment of their publication.”
Crossing the Field Boundaries: Open Science, Open Data & Open Education | Open World
I’ve worked in open education technology for a long time now, but like most education technologists my background is not originally in either education or technology. In my case I started out as an archaeologist. I studied archaeology at the University of Glasgow and after working there as a field worker and material sciences technician for a number of years, I decided to cross over into another field, and by rather circuitous routes I found myself working in open education technology. Over the intervening years I’ve developed a strong personal commitment to openness in education, and I firmly believe that we have a moral and ethical responsibility to open access to the outputs of publicly funded education, research and science.
KU Libraries publish first open textbook | Libraries
“KU Libraries and the Shulenburger Office of Scholarly Communication & Copyright continue their commitment to open educational resources (OER) by publishing Dr. Razi Ahmad’s open textbook entitled, ‘Tajik Persian: Readings in history, culture and society.’ This book is available through KU ScholarWorks, KU’s open access digital repository, and is also indexed in the Open Textbook Library, a free online collection of more than 360 openly licensed textbooks curated by the University of Minnesota based Open Textbook Network.
‘It has been a great experience working with all the library faculty and staff who helped me on this project,’ said Ahmad. ‘Tajik is one of the critical languages for which there exist very limited pedagogical materials for elementary and intermediate-level and virtually negligible for the advanced-level students. ‘Tajik Persian: Readings in history, culture and society’ is a modest attempt to provide instructors and students free of cost advanced-level textbook that can be used as the primary or supplementary material in classrooms.’
Currently, ‘Tajik Persian: Readings in history, culture and society’ has more than 141 views from nine countries.”
New digital resource to reveal the hidden possibilities for library collections | Jisc
“[M]uch digitisation of special and archival collections has been carried out by academic libraries and heritage organisations with the support of public funding, making content available for everybody to enjoy. However, sustainability of digitisation is still a big problem, especially in the context of providing open access….In creating sustainable digital content, there is a solution that can help bring specialist research to life, one collection at a time; and this is how Reveal Digital have approached the challenge. The support for digitisation of materials through an innovative library crowdfunding model is already underway on the other side of the pond, with collections such as Independent Voices achieving wide popularity and support….Hosted on the Reveal Digital platform, over 100 pledging libraries to date have controlled access until the collection moves to open access (in 2019) following a two-year embargo period, as per its cost recovery-open access model. The platform provides page image-based access with full-text searching, hit-term highlighting, searchable title and issue-level metadata and browsing by series, title and issue….”
Gates, Wellcome Award Nearly $8 Million for Faster TB Diagnosis | News | PND
“The University of Oxford has announced grants totaling nearly $8 million from the Bill & Melinda Gates Foundation, the Wellcome Trust, and others in support of efforts to speed up diagnosis of drug-resistant tuberculosis….The grants will enable Oxford researchers to expand that library [of genome sequences] by collecting and analyzing a hundred thousand additional samples from around the world….The Oxford team will then assemble the results into a single open-access database….”
Amending Published Articles: Time To Rethink Retractions And Corrections? | bioRxiv
Abstract: Academic publishing is evolving and our current system of correcting research post-publication is failing, both ideologically and practically. It does not encourage researchers to engage in consistent post-publication changes. Worse yet, post-publication “updates” are misconstrued as punishments or admissions of guilt. We propose a different model that publishers of research can apply to the content they publish, ensuring that any post-publication amendments are seamless, transparent and propagated to all the countless places online where descriptions of research appear. At the center, the neutral term “amendment” describes all forms of post-publication change to an article. We lay out a straightforward and consistent process that applies to each of the three types of amendments: insubstantial, substantial, and complete. This proposed system supports the dynamic nature of the research process itself as researchers continue to refine or extend the work, removing the emotive climate particularly associated with retractions and corrections to published work. It allows researchers to cite and share the correct versions of articles with certainty, and for decision makers to have access to the most up to date information.
Open access campaigners toughen stance towards publishers | THE News
“Open access advocates want universities to be prepared to “pull the plug” on their subscription deals with big publishers, in a sign of an escalation in tactics to open up more research.
As the German academy remains locked in a dispute with Dutch publishing giant Elsevier, those campaigning for open access struck a combative tone at a conference in Berlin, which also heard frustrations that the move away from closed journals was not proceeding fast enough.
Gerard Meijer, director of the Fritz Haber Institute in Berlin, who led Dutch universities in their protracted negotiations with Elsevier in 2015, told delegates that in order not to be “held hostage” by publishers during talks, “complete opting-out of the contracts had to be a realistic option. And we are prepared for that.”
The aim was to give publishers two options: “either to go along in the transformation [to open access] or to face cancellation of the contract”, he told delegates at OA2020 on 22 March.
Rationale for Requiring Immediate Deposit Upon Acceptance – Open Access Archivangelism
There are multiple reasons for depositing the AAM (Author Accepted Manuscript) immediately upon acceptance:
1. The date of acceptance is known; the date of publication is not. Publication is often long after acceptance, and often does not even correspond to the calendar date of the journal.
2. It is when research is refereed and accepted that it should become accessible to all potential users.
3. The delay between the date of acceptance and the date of publication can be anywhere from six months to a year or more.
4. Publishers are already trying to embargo OA for a year from the date of publication. The gratuitous delay from acceptance could double that.
5. The date of acceptance is the natural date-stamp for deposit and the natural point in the author’s work-flow for deposit.
6. The AAM at the date of acceptance is the version with the fewest publisher restrictions on it: many publishers endorse making the AAM OA immediately, but not the PV (Publisher’s Version).
7. Having deposited the AAM, the author can update it if and when they wish, to incorporate any copy-editing and corrections (including the PV).
8. If the author elects to embargo the deposit, the copy-request button is available to authorize the immediate automatic sending of individual copies on request. Authors can make the deposit OA when they choose. (They can also decline to send the AAM until the copy-edited version has been deposited – but most authors will not want to delay compliance with copy requests: refereed AAMs that have not yet been copy-edited can be clearly marked as such.)
9. The acceptance letter provides the means of verifying timely compliance with the deposit mandate (see the sketch after this list). It is the key to making the immediate-deposit policy timely, verifiable and effective, and it is the simplest and most natural way to integrate deposit into the author’s year-long workflow.
10. The above timing and compliance considerations apply to all refereed research, including research published in Gold OA journals.
11. Of the 853 OA policies registered in ROARMAP, 96 of the 515 policies that require (rather than just request or recommend) deposit have adopted the immediate-deposit-upon-acceptance requirement.
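Point 9 notes that the acceptance letter makes timely compliance verifiable. As a minimal sketch of what such a check could look like (my own illustration, not any repository’s actual code; the seven-day grace period is an assumed policy parameter):

```python
# A minimal sketch (hypothetical, not any repository's actual code) of
# verifying timely deposit against the date of acceptance.
from datetime import date, timedelta

# Assumed policy parameter: the AAM must be deposited within 7 days
# of the acceptance date stated in the acceptance letter.
GRACE_PERIOD = timedelta(days=7)

def deposit_is_compliant(accepted: date, deposited: date) -> bool:
    """True if the deposit happened within the grace period after acceptance."""
    return accepted <= deposited <= accepted + GRACE_PERIOD

print(deposit_is_compliant(date(2017, 3, 1), date(2017, 3, 5)))   # True
print(deposit_is_compliant(date(2017, 3, 1), date(2017, 4, 20)))  # False
```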
Below are references to some articles that have spelled out the rationale and advantages of the immediate-deposit requirement.
Vincent-Lamarre, Philippe, Boivin, Jade, Gargouri, Yassine, Larivière, Vincent and Harnad, Stevan (2016) Estimating Open Access Mandate Effectiveness: The MELIBEA Score. Journal of the Association for Information Science and Technology (JASIST) 67(11), 2815-2828.
Swan, Alma, Gargouri, Yassine, Hunt, Megan and Harnad, Stevan (2015) Open Access Policy: Numbers, Analysis, Effectiveness. Pasteur4OA Workpackage 3 Report.
Harnad, Stevan (2015) Open Access: What, Where, When, How and Why. In: Ethics, Science, Technology, and Engineering: An International Resource, eds. J. Britt Holbrook & Carl Mitcham (2nd edition of Encyclopedia of Science, Technology, and Ethics, Farmington Hills MI: MacMillan Reference).
Harnad, Stevan (2015) Optimizing Open Access Policy. The Serials Librarian, 69(2), 133-141.
Sale, A., Couture, M., Rodrigues, E., Carr, L. and Harnad, S. (2014) Open Access Mandates and the “Fair Dealing” Button. In: Dynamic Fair Dealing: Creating Canadian Culture Online (Rosemary J. Coombe & Darren Wershler, Eds.).