Open Access Publishing: A Study of UC Berkeley Faculty Views and Practices – Association of Research Libraries

Abstract:  This project focused on open access (OA) publishing, which enhances researcher productivity and impact by increasing dissemination of, and access to, research. The study looked at the relationship between faculty’s attitudes toward OA and their OA publishing practices, including the roles of funding availability and discipline. The project team compared University of California Berkeley (Berkeley) faculty’s answers to questions related to OA from the 2018 Ithaka Faculty Survey with the faculty’s scholarly output in the Scopus database. Faculty Survey data showed that 71% of Berkeley faculty, compared to 64% of faculty nationwide, support a transition to OA publishing. However, when selecting a journal to publish in, faculty indicated that a journal having no cost to publish in was more important than having no cost to read. After joining faculty’s survey responses and their publication output, the data sample included 4,413 articles published by 479 Berkeley faculty from 2016 to 2019. With considerable disciplinary differences, the OA publication output for this sample, using data from Unpaywall, represented 72% of the total publication output. The study focused on Gold OA articles, which usually require authors to pay Article Processing Charges (APCs) and which accounted for 18% of the publications. Overall, the study found a positive correlation between publishing Gold OA and the faculty’s support for OA (no cost to read). In contrast, the correlation between publishing Gold OA and the faculty’s concern about publishing cost was weak. Publishing costs concerned faculty in all subject areas, whether or not their articles reported research funding. Thus, Berkeley Library’s efforts to pursue transformative publishing agreements and prioritize funding for a program subsidizing publishing fees seem like effective strategies to increase OA. 
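
The OA classification behind these figures comes from Unpaywall, which exposes the same per-article status through a free REST API. As a minimal sketch, not the project team's actual pipeline, here is how a DOI list could be tallied by OA status (the contact email and the DOI list are placeholders):

```python
# Sketch: tally OA status for a DOI list via the Unpaywall REST API, which
# reports oa_status as gold, hybrid, green, bronze, or closed.
from collections import Counter

import requests

EMAIL = "you@example.edu"  # Unpaywall asks for a contact email on each request
dois = ["10.1371/journal.pone.0213090"]  # placeholder for the faculty DOI sample

counts = Counter()
for doi in dois:
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}", params={"email": EMAIL})
    if resp.ok:
        counts[resp.json().get("oa_status", "unknown")] += 1

total = sum(counts.values())
for status, n in counts.most_common():
    print(f"{status:>8}: {n}/{total} ({n / total:.0%})")
```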

Correlating article citedness and journal impact: an empirical investigation by field on a large-scale dataset | SpringerLink

Abstract:  In spite of previous research demonstrating the risks involved, and counsel against the practice as early as 1997, some research evaluations continue to use journal impact alone as a surrogate for the citations of the articles a journal hosts in order to assess those articles' impact. Such usage has also been taken up by research administrators and policy-makers, with very serious implications. The aim of this work is to investigate the correlation between the citedness of a publication and the impact of the host journal. We extend the analyses of previous literature to all STEM fields, and we also assess whether this correlation varies across fields and is stronger for highly cited authors than for lowly cited ones. Our dataset consists of almost one million authorships of 2010–2019 publications authored by about 28,000 professors in 230 research fields. Results show a low correlation between the two indicators, more so for lowly cited authors as compared to highly cited ones, although differences occur across fields.
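
For readers who want to try this kind of analysis on their own data, the core computation is a within-field rank correlation between article citations and journal impact. A minimal sketch with made-up numbers follows; Spearman's rho is an assumption here, as the abstract does not name the exact estimator:

```python
# Sketch: rank-correlate per-article citation counts with host-journal impact
# within each field. All numbers are hypothetical stand-ins for the paper's
# authorship dataset.
from scipy.stats import spearmanr

fields = {
    # field -> (article citation counts, matching host-journal impact scores)
    "physics": ([12, 3, 45, 7, 19, 2], [4.1, 2.0, 6.3, 2.5, 3.8, 1.9]),
    "biology": ([30, 8, 2, 55, 14, 9], [8.2, 3.1, 2.7, 9.9, 4.4, 3.0]),
}

for field, (citations, impact) in fields.items():
    rho, p = spearmanr(citations, impact)
    print(f"{field}: rho = {rho:.2f} (p = {p:.3f})")
```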

 

Open Communication Platforms – Google Docs

“The purpose of this document is to organise ideas about open communication platforms for big team science and open research coordination. It can serve as a primer for those looking to set up a platform, or for ideas when developing new platforms. Please add yourself to the Document Contributors list if you make a contribution (feel free to edit anything). …”

Does it pay to pay? A comparison of the benefits of open-access publishing across various sub-fields in Biology | bioRxiv

Abstract:  Authors are often faced with the decision of whether to maximize impact or minimize costs when publishing the results of their research. For example, to potentially improve impact via increased accessibility, many subscription-based journals now offer the option of paying a fee to publish open access (i.e., hybrid journals), but this solution excludes authors who lack the capacity to pay to make their research accessible. Here, we tested if paying to publish open access in a subscription-based journal benefited authors by conferring more citations relative to closed access articles. We identified 146,415 articles published in 152 hybrid journals in the field of biology from 2013 to 2018 to compare the number of citations between various types of open access and closed access articles. In a simple generalized linear model analysis of our full dataset, we found that publishing open access in hybrid journals that offer the option confers an average citation advantage to authors of 17.8 citations compared to closed access articles in similar journals. After taking into account the number of authors, journal impact, year of publication, and subject area, we still found that open access generated significantly more citations than closed access (p < 0.0001). However, results were complex, with exact differences in citation rates among access types impacted by these other variables. This citation advantage based on access type was even similar when comparing open and closed access articles published in the same issue of a journal (p < 0.0001). However, by examining articles where the authors paid an article processing charge, we found that cost itself was not predictive of citation rates (p = 0.14). Based on our findings of access type and other model parameters, we suggest that, in most cases, paying for open access does confer a citation advantage. For authors with limited budgets, we recommend pursuing open access alternatives that do not require paying a fee, as they still yielded more citations than closed access. For authors who are considering where to submit their next article, we offer additional suggestions on how to balance exposure via citations with publishing costs.
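
The "simple generalized linear model analysis" the abstract mentions can be sketched as a count-data regression of citations on access type plus the listed covariates. The data frame below is hypothetical and the Poisson family is an assumption (a common default for citation counts); the authors' exact specification may differ:

```python
# Sketch: GLM of citation counts on access type with covariates.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "citations":   [10, 3, 25, 7, 0, 14, 2, 31],
    "access_type": ["open", "closed", "open", "closed",
                    "closed", "open", "closed", "open"],
    "n_authors":   [4, 2, 7, 3, 1, 5, 2, 9],
    "journal_if":  [3.2, 3.2, 6.1, 6.1, 1.8, 1.8, 4.0, 4.0],
    "year":        [2014, 2014, 2016, 2016, 2013, 2013, 2018, 2018],
})

model = smf.glm(
    "citations ~ C(access_type) + n_authors + journal_if + year",
    data=df,
    family=sm.families.Poisson(),  # negative binomial is common if overdispersed
).fit()
print(model.summary())
```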

 

Comparison of Clinical Study Results Reported in medRxiv Preprints vs Peer-reviewed Journal Articles | Medical Journals and Publishing | JAMA Network Open | JAMA Network

Abstract:  Importance: Preprints have been widely adopted to enhance the timely dissemination of research across many scientific fields. Concerns remain that early, public access to preliminary medical research has the potential to propagate misleading or faulty research that has been conducted or interpreted in error.

Objective: To evaluate the concordance among study characteristics, results, and interpretations described in preprints of clinical studies posted to medRxiv that are subsequently published in peer-reviewed journals (preprint-journal article pairs).

Design, Setting, and Participants: This cross-sectional study assessed all preprints describing clinical studies that were initially posted to medRxiv in September 2020 and subsequently published in a peer-reviewed journal as of September 15, 2022.

Main Outcomes and Measures: For preprint-journal article pairs describing clinical trials, observational studies, and meta-analyses that measured health-related outcomes, the sample size, primary end points, corresponding results, and overarching conclusions were abstracted and compared. Sample size and results from primary end points were considered concordant if they had exact numerical equivalence.

Results: Among 1399 preprints first posted on medRxiv in September 2020, a total of 1077 (77.0%) had been published as of September 15, 2022, a median of 6 months (IQR, 3-8 months) after preprint posting. Of the 547 preprint-journal article pairs describing clinical trials, observational studies, or meta-analyses, 293 (53.6%) were related to COVID-19. Of the 535 pairs reporting sample sizes in both sources, 462 (86.4%) were concordant; 43 (58.9%) of the 73 pairs with discordant sample sizes had larger samples in the journal publication. There were 534 pairs (97.6%) with concordant and 13 pairs (2.4%) with discordant primary end points. Of the 535 pairs with numerical results for the primary end points, 434 (81.1%) had concordant primary end point results; 66 of the 101 discordant pairs (65.3%) had effect estimates that were in the same direction and were statistically consistent. Overall, 526 pairs (96.2%) had concordant study interpretations, including 82 of the 101 pairs (81.2%) with discordant primary end point results.

Conclusions and Relevance: Most clinical studies posted as preprints on medRxiv and subsequently published in peer-reviewed journals had concordant study characteristics, results, and final interpretations. With more than three-fourths of preprints published in journals within 24 months, these results may suggest that many preprints report findings that are consistent with the final peer-reviewed publications.
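
The study's concordance criterion is strict exact numerical equivalence, which makes the core comparison easy to illustrate. A toy sketch over hypothetical preprint-journal pairs:

```python
# Toy sketch of the study's concordance rule: a preprint-journal pair counts as
# concordant only if the two reported values are exactly equal. Pairs are made up.
pairs = [
    (120, 120),   # (preprint sample size, journal sample size)
    (98, 104),    # discordant; the journal sample is larger
    (450, 450),
]

concordant = sum(1 for pre, pub in pairs if pre == pub)
print(f"{concordant}/{len(pairs)} pairs concordant ({concordant / len(pairs):.0%})")

larger_in_journal = sum(1 for pre, pub in pairs if pub > pre)
print(f"{larger_in_journal} discordant pair(s) had the larger sample in the journal")
```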

The Collection Management System Collection · BLOG Progress Process

“It seems like every couple of months, I get asked for advice on picking a Collection Management System (or maybe referred to as a digital repository, or something else) for use in an archive, special collection library, museum, or another small “GLAMorous” institution. The acronym is CMS, which is not to be confused with Content Management System (which is for your blog). This can be for collection management, digital asset management, collection description, digital preservation, public access and request support, or combinations of all of the above. And these things have to fit into an existing workflow/system, or maybe replace an old system and require a data migration component. And on top of that, there are so many options out there! This can be overwhelming!

What factors do you use in making a decision? I tried to put together some crucial components to consider, while keeping it as simple as possible (if 19 columns can be considered simple). I also want to be able to answer questions with a strong yes/no, to avoid getting bogged down in “well, kinda…” For example, I had a “Price” category and a “Handles complex media?” category but I took them away because it was too subjective of an issue to be able to give an easy answer. A lot of these are still going to be “well, kinda” and in that case, we should make a generalization. (Ah, this is where the “simple” part comes in!)

In the end, though, it is really going to depend on the unique needs of your institution, so the answer is always going to be “well, kinda?” But I hope this spreadsheet can be used as a starting point for those preparing to make a decision, or those who need to jog their memory with “Can this thing do that?”…”
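
As a rough illustration of how such a yes/no spreadsheet answers "Can this thing do that?", here is a tiny sketch of querying a feature matrix; the systems and columns are invented, not the post's actual 19 columns:

```python
# Sketch: query a yes/no comparison matrix like the post's spreadsheet.
matrix = {
    "SystemA": {"digital_preservation": True, "public_access": True, "open_source": False},
    "SystemB": {"digital_preservation": False, "public_access": True, "open_source": True},
    "SystemC": {"digital_preservation": True, "public_access": False, "open_source": True},
}

def can_do(requirements):
    """Return the systems that answer 'yes' to every required feature."""
    return [name for name, feats in matrix.items()
            if all(feats.get(req, False) for req in requirements)]

print(can_do(["digital_preservation", "public_access"]))  # -> ['SystemA']
```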

The challenge of preprints for public health

“Despite disagreements over whether this form of publication is actually beneficial or not, there is a high degree of convergence among advocates and detractors about its advantages and problems. On the one hand, preprints are beneficial because they are a quicker way to disseminate scientific content with open access for everyone; on the other hand, the lack of adequate vetting, especially peer review, increases the risk of disseminating bad science and can lead to several problems 2. The dissent lies in considering to what extent the possible risks outweigh the possible benefits (or vice versa).

 

The argument about rapid dissemination has strong supporting evidence. A study on preprint publication showed that preprints are published on average 14 months earlier than peer-reviewed journal articles 1. This is expected, considering that the time-intensive process of peer review and manuscript revision is bypassed entirely. However, in this strength lies its very fragility: how can we ensure that this shorter process does not compromise the quality of the publication?

 

ASAPbio (Accelerating Science and Publication in Biology) 3 is a group of biology researchers that promotes preprint publication and has produced a number of studies attempting to allay concerns about quality, claiming, for example, that articles previously submitted to a preprint server did not change substantially upon journal publication 4. Authors from this group have argued that current approaches to evaluating research and researchers hold back wider adoption of preprints 5, which would explain their relatively small share of the overall landscape of scientific publication.

 

Despite claims to the contrary, however, there are examples of poor studies published as preprints that caused undesirable consequences in public health. For example, two methodologically flawed studies about a protective effect of tobacco smoking against COVID-19 (one of which had an author with known connections to the tobacco industry) increased the commercialization of tobacco products in France and Iran 6, and a virology study that erroneously claimed the SARS-CoV-2 virus contained “HIV insertions” fueled conspiracy theories that the virus was a bioweapon, which lingered even after the preprint was removed from the server due to its egregious errors 7. Studies have found that much of the public discussion, and even policy, was driven by what was published in preprints rather than in scientific journals 7,8,9,10; thus, quality issues are a major cause for concern.

 

On the other hand, similar errors have been observed within traditional publishing; the publication of a poor-quality paper with undisclosed conflicts of interest in one of the most prestigious medical journals, The Lancet, which became the trigger for the contemporary wave of anti-vaccine activism, is a major, and regrettable, example. Understanding to what extent this problem is likely to occur with or without gatekeeping mechanisms is necessary….”

Indicators of research quality, quantity, openness and responsibility in institutional review, promotion and tenure policies across seven countries

The need to reform research assessment processes related to career advancement at research institutions has become increasingly recognised in recent years, especially to better foster open and responsible research practices. Current assessment criteria are believed to focus too heavily on inappropriate measures of productivity and quantity, as opposed to quality, collaborative open research practices, and the socio-economic impact of research. However, evidence of the extent of these issues is urgently needed to inform actions for reform. We analyse current practices as revealed by documentation on institutional review, promotion and tenure (RPT) processes in seven countries (Austria, Brazil, Germany, India, Portugal, United Kingdom and United States of America). Through systematic coding and analysis of 143 RPT policy documents from 107 institutions for the prevalence of 17 criteria (including those related to qualitative or quantitative assessment of research, service to the institution or profession, and open and responsible research practices), we compare assessment practices across a range of international institutions to significantly broaden this evidence base. Although the prevalence of indicators varies considerably between countries, overall we find that open and responsible research practices are currently minimally rewarded and that problematic practices of quantification continue to dominate.
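
The study's method, coding each policy document for the presence of criteria and then comparing prevalence across countries, reduces to a simple tally. A minimal sketch with hypothetical documents and criterion codes:

```python
# Sketch: code each policy document for the criteria it mentions, then compute
# per-country prevalence. Documents and criterion codes are hypothetical.
from collections import defaultdict

docs = [
    {"country": "UK",     "criteria": {"open_access", "citations", "service"}},
    {"country": "UK",     "criteria": {"citations", "grant_income"}},
    {"country": "Brazil", "criteria": {"open_access", "service"}},
]

counts = defaultdict(lambda: defaultdict(int))
totals = defaultdict(int)
for doc in docs:
    totals[doc["country"]] += 1
    for criterion in doc["criteria"]:
        counts[doc["country"]][criterion] += 1

for country in counts:
    for criterion, n in sorted(counts[country].items()):
        print(f"{country}: '{criterion}' appears in {n}/{totals[country]} documents")
```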

Clinical Trial Data-sharing Policies Among Journals, Funding Agencies, Foundations, and Other Professional Organizations: A Scoping Review – Journal of Clinical Epidemiology

Abstract:  Objectives

To identify the similarities and differences in data-sharing policies for clinical trial data that are endorsed by biomedical journals, funding agencies, and other professional organizations. Additionally, to determine the beliefs and opinions regarding data-sharing policies for clinical trials discussed in articles published in biomedical journals.

 

Study Design

Two searches were conducted: first, a bibliographic search for published articles that present beliefs, opinions, similarities, and differences regarding policies governing the sharing of clinical trial data; second, a gray literature search (of non-peer-reviewed publications) to identify important data-sharing policies in selected biomedical journals, foundations, funding agencies, and other professional organizations.

 

Results

A total of 471 articles were included after database searching and screening: 45 from the bibliographic search and 426 from the gray literature search. A total of 424 data-sharing policies were included. Fourteen of the 45 published articles from the bibliographic search (31.1%) discussed only advantages specific to data-sharing policies, 27 (60%) discussed both advantages and disadvantages, and 4 (8.9%) discussed only disadvantages. A total of 216 journals (of 270; 80%) specified a data-sharing policy provided by the journal itself. One hundred industry data-sharing policies were included, and 32 (32%) referenced a data-sharing policy on their website. One hundred and thirty-six (42%) of 327 organizations specified a data-sharing policy.

 

Conclusion

We found many similarities listed as advantages of data-sharing, and fewer disadvantages were discussed within the literature. Additionally, we found a wide variety of commonalities and differences in the data-sharing policies endorsed by biomedical journals, funding agencies, and other professional organizations, such as the lack of standardization between policies and inadequately addressed details regarding the accessibility of research data. Our study may not include information on all data-sharing policies, and our data are limited to the entities' descriptions of each policy.

Elsevier absent from journal cost comparison | Times Higher Education (THE)

“Of the 2,070 titles whose information will become accessible under the JCS [Journal Comparison Service], although not directly to researchers, 1,000 belong to the US academic publishing giant Wiley, while another 219 journals owned by Hindawi, which was bought by Wiley last year, also appear on the list.

Several other fully open access publishers will also participate in the comparison site, including PLOS, the Open Library of Humanities, and F1000, while learned society presses and university publishers, including the Royal Society, Rockefeller University Press, and the International Union of Crystallography, are also part of the scheme.

Other notable participants include the prestigious life sciences publisher eLife, EMBO Press and the rapidly growing open access publisher, Frontiers.

However, two of the world’s largest scholarly publishers – Elsevier and Springer Nature, whose most prestigious titles charge about £8,000 for APCs – are not part of the scheme….

Under the Plan S agreement, scholarly journals are obliged to become ‘transformative journals’ and gradually increase the proportion of non-paywalled content over a number of years. Those titles that do not make their papers free at the point of publication will drop out of the Plan S scheme, meaning authors cannot use funds provided by any of the 17 funding agencies and six foundations now signed up to Plan S. There are, however, no immediate consequences for a publisher who decides not to share their price and service data through the JCS.  …”

Checklist for Artificial Intelligence in Medical Imaging Reporting Adherence in Peer-Reviewed and Preprint Manuscripts With the Highest Altmetric Attention Scores: A Meta-Research Study – Umaseh Sivanesan, Kay Wu, Matthew D. F. McInnes, Kiret Dhindsa, Fateme Salehi, Christian B. van der Pol, 2022

Abstract:  Purpose: To establish reporting adherence to the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) in diagnostic accuracy AI studies with the highest Altmetric Attention Scores (AAS), and to compare completeness of reporting between peer-reviewed manuscripts and preprints. Methods: MEDLINE, EMBASE, arXiv, bioRxiv, and medRxiv were retrospectively searched for 100 diagnostic accuracy medical imaging AI studies in peer-reviewed journals and preprint platforms with the highest AAS since the release of CLAIM to June 24, 2021. Studies were evaluated for adherence to the 42-item CLAIM checklist with comparison between peer-reviewed manuscripts and preprints. The impact of additional factors was explored including body region, models on COVID-19 diagnosis and journal impact factor. Results: Median CLAIM adherence was 48% (20/42). The median CLAIM score of manuscripts published in peer-reviewed journals was higher than preprints, 57% (24/42) vs 40% (16/42), P < .0001. Chest radiology was the body region with the least complete reporting (P = .0352), with manuscripts on COVID-19 less complete than others (43% vs 54%, P = .0002). For studies published in peer-reviewed journals with an impact factor, the CLAIM score correlated with impact factor, rho = 0.43, P = .0040. Completeness of reporting based on CLAIM score had a positive correlation with a study’s AAS, rho = 0.68, P < .0001. Conclusions: Overall reporting adherence to CLAIM is low in imaging diagnostic accuracy AI studies with the highest AAS, with preprints reporting fewer study details than peer-reviewed manuscripts. Improved CLAIM adherence could promote adoption of AI into clinical practice and facilitate investigators building upon prior works.

Scientific paper recommendation systems: a literature review of recent publications | SpringerLink

Abstract:  Scientific writing builds upon already published papers. Manual identification of publications to read, cite, or consider as related work relies on a researcher’s ability to identify fitting keywords or initial papers from which a literature search can be started. The rapidly increasing number of papers has called for automatic measures to find the desired relevant publications, so-called paper recommendation systems. As the number of publications increases, so does the number of paper recommendation systems. Former literature reviews focused on discussing the general landscape of approaches throughout the years and highlighting the main directions. We refrain from this perspective; instead, we consider only a comparatively small time frame but analyse it fully. In this literature review we discuss the methods, datasets, evaluations, and open challenges encountered in all works first released between January 2019 and October 2021. The goal of this survey is to provide a comprehensive and complete overview of current paper recommendation systems.
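
For orientation, the simplest class of system the review covers can be sketched as a content-based recommender: TF-IDF vectors over paper text, ranked by cosine similarity to a query. The corpus and query below are hypothetical, and the systems surveyed in the paper are far more elaborate:

```python
# Sketch: a minimal content-based paper recommender using TF-IDF vectors and
# cosine similarity. The corpus and query are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = [
    "open access publishing and article processing charges",
    "citation analysis of gold open access journals",
    "deep learning for medical image segmentation",
]
query = "article processing charges in open access journals"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vecs = vectorizer.fit_transform(papers)   # fit vocabulary on the corpus
query_vec = vectorizer.transform([query])     # embed the query in that space

scores = cosine_similarity(query_vec, doc_vecs).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. score={scores[idx]:.2f}  {papers[idx]}")
```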