Cross-Sectional Evaluation of Open Science Practices at Imaging Journals: A Meta-Research Study – Mohammed Kashif Al-Ghita, Kelly Cobey, David Moher, Mariska M.G. Leeflang, Sanam Ebrahimzadeh, Eric Lam, Paul Rooprai, Ahmed Al Khalil, Nabil Islam, Hamza Algodi, Haben Dawit, Robert Adamo, Mahdi Zeghal, Matthew D.F. McInnes, 2023

Abstract: Objective: To evaluate the open science policies of imaging journals and compliance with these policies in published articles. Methods: From the imaging journals identified, we extracted open science policy details: protocol registration, reporting guidelines, funding, ethics and conflicts of interest (COI), data sharing, and open access publishing. The 10 most recently published studies from each journal were assessed to determine adherence to these policies. We aggregated the proportion of open science practices met into an Open Science Score (OSS) for all journals and articles, and evaluated relationships between OSS and journal/article-level variables. Results: 82 journals/820 articles were included. The OSS of journals and articles was 58.3% and 31.8%, respectively. Of the journals, 65.9% had registration policies and 78.1% had reporting guideline policies. 79.3% of journals were members of COPE, 81.7% had plagiarism policies, 100% required disclosure of funding, and 97.6% required disclosure of COI and ethics approval. 81.7% had data sharing policies and 15.9% were fully open access. 7.8% of articles had a registered protocol, 8.4% followed a reporting guideline, 77.4% disclosed funding, 88.7% disclosed COI, and 85.6% reported ethics approval. 12.3% of articles shared their data. 51% of articles were available through open access or as a preprint. OSS was higher for journals with DOAJ membership (80% vs 54.2%; P < .0001). Impact factor was not correlated with journal OSS. Knowledge synthesis articles had higher OSS scores (44.5%) than prospective (32.6%) or retrospective (30.0%) studies (P < .0001). Conclusion: Imaging journals endorsed just over half of the open science practices considered; however, adherence to these practices at the article level was lower.
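In practice, the OSS described here reduces to the percentage of assessed practices that a journal or article satisfies. A minimal Python sketch of such a score, with illustrative practice names drawn from the items listed in the abstract (an assumption-laden illustration, not the authors' scoring code):

```python
# Hypothetical sketch of an Open Science Score (OSS): the percentage of
# assessed open science practices met. Practice names are illustrative,
# taken from the items the abstract says were evaluated.

def open_science_score(practices: dict) -> float:
    """Return the proportion of practices met, as a percentage."""
    if not practices:
        raise ValueError("no practices assessed")
    return 100 * sum(practices.values()) / len(practices)

article = {
    "protocol_registered": False,
    "reporting_guideline_followed": False,
    "funding_disclosed": True,
    "coi_disclosed": True,
    "ethics_approval_reported": True,
    "data_shared": False,
    "open_access_or_preprint": True,
}
print(f"OSS = {open_science_score(article):.1f}%")  # OSS = 57.1%
```

Averaging such per-article percentages across the 820 sampled articles would yield the kind of aggregate figure (31.8%) the abstract reports.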


Transparency, openness and reproducible research practices are frequently underused in health economic evaluations

Objective: To investigate the extent to which articles reporting economic evaluations of healthcare interventions indexed in MEDLINE incorporate research practices that promote transparency, openness, and reproducibility. Study design and setting: We evaluated a random sample of health economic evaluations indexed in MEDLINE during 2019. We included articles written in English that reported an incremental cost-effectiveness ratio (ICER) in terms of costs per life year gained, quality-adjusted life year, and/or disability-adjusted life year. Data on reproducible research practices, openness, and transparency were extracted from each article in duplicate. We explored whether reproducible research practices were associated with self-reported use of a guideline. Results: We included 200 studies published in 147 journals. Almost half were published as open access articles (n=93; 47%). Most studies (n=150; 75%) were model-based economic evaluations. In 109 (55%) studies, authors self-reported using a guideline (e.g., for study conduct or reporting). Few studies (n=31; 16%) reported working from a protocol. In 112 (56%) studies, authors reported the data needed to recreate the ICER for the base case analysis. This percentage was higher in studies using a guideline than in studies not using one (72/109 [66%] with a guideline vs. 40/91 [44%] without; risk ratio 1.50, 95% confidence interval 1.15–1.97). Only 10 (5%) studies mentioned access to raw data and analytic code for reanalysis. Conclusion: Transparency, openness, and reproducible research practices are frequently underused in health economic evaluations. This study provides baseline data against which to compare future progress in the field.
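The reported risk ratio and confidence interval can be reproduced from the counts given in the abstract using the standard log-transform method. A short sketch (not the authors' analysis code; only the four counts come from the paper):

```python
# Reproduce the abstract's risk ratio: 72/109 recreatable ICERs with a
# guideline vs. 40/91 without, with a 95% CI via the log-transform method.
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of a/n1 vs. b/n2 with an approximate 95% CI."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

rr, lo, hi = risk_ratio_ci(72, 109, 40, 91)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # RR = 1.50, 95% CI 1.15-1.97
```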

Scrutiny for thee but not for me: When open science research isn’t open | Steve Haroz’s blog

“A fundamental property of scientific research is that it is scrutinizable. And facilitating that scrutiny by eliminating barriers that delay or prevent access to research data and replication materials is the major goal for transparent research advocates. So when a paper that actually studies open research practices hides its data, it should raise eyebrows. A recent paper about openness and transparency in Human-Computer Interaction did exactly that.

The paper is titled “Changes in Research Ethics, Openness, and Transparency in Empirical Studies between CHI 2017 and CHI 2022”. It looked at various open practices of papers sampled from the ACM CHI proceedings in 2017 and 2022. Then it compared how practices like open data, open experiment materials, and open access changed between those years. Sounds like a substantial effort that’s potentially very useful to the field! But it doesn’t live up to the very standards it’s researching and advocating for.

The paper’s data and many other replication/reproducibility materials are posted to OSF, which is a good sign! But the data does not state which papers were actually sampled. No titles. No DOIs. Just “paper1”, “paper2”, etc. So the scrutinizability and follow-up research that transparent research is supposed to facilitate is largely absent.

If you want to scrutinize which papers were sampled or excluded, you can’t.
If you want to scrutinize the results by checking a couple papers’ openness yourself, you can’t.
If you want to build upon the research by looking only at papers that weren’t sampled, you can’t.

I contacted the authors over 6 weeks ago to ask about the availability of that data. Over the course of a dozen emails, they revealed that they would only share which papers were studied if I received IRB (ethics) approval to study it….”

Why are these publications missing? Uncovering the reasons behind the exclusion of documents in free-access scholarly databases

Abstract: This study analyses the coverage of seven free-access bibliographic databases (Crossref, Dimensions—non-subscription version, Google Scholar, Lens, Microsoft Academic, Scilit, and Semantic Scholar) to identify the potential reasons that might cause the exclusion of scholarly documents and how they could influence coverage. To do this, 116k randomly selected bibliographic records from Crossref were used as a baseline. API endpoints and web scraping were used to query each database. The results show that coverage differences are mainly caused by the way each service builds its database. While classic bibliographic databases ingest almost exactly the same content from Crossref (Lens and Scilit miss 0.1% and 0.2% of the records, respectively), academic search engines miss more records (Google Scholar: 9.8%, Semantic Scholar: 10%, Microsoft Academic: 12%). These coverage differences are mainly attributed to external factors, such as web accessibility and robot exclusion policies (39.2%–46%), and to internal requirements that exclude secondary content (6.5%–11.6%). In the case of Dimensions, the classic bibliographic database with the lowest coverage (7.6% of records missing), internal selection criteria such as the indexation of full books instead of book chapters (65%) and the exclusion of secondary content (15%) are the main reasons for missing publications.
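The baseline-and-query approach the abstract describes can be illustrated with a minimal sketch against two of the services studied, using the public Crossref and Semantic Scholar APIs. The sample size, timeouts, and missing-record criterion (an HTTP 404) are assumptions for illustration; the study's actual pipeline also used web scraping and covered all seven databases:

```python
# Illustrative coverage check: sample records from Crossref (the baseline)
# and test whether Semantic Scholar indexes each DOI. Not the study's code.
import requests

# Crossref's "sample" parameter returns random works (max 100 per call).
sample = requests.get(
    "https://api.crossref.org/works", params={"sample": 20}, timeout=30
).json()["message"]["items"]

missing = []
for item in sample:
    doi = item["DOI"]
    # The Semantic Scholar Graph API returns 404 for DOIs it does not cover.
    # A real run would need throttling: the public API rate-limits clients.
    r = requests.get(
        f"https://api.semanticscholar.org/graph/v1/paper/DOI:{doi}", timeout=30
    )
    if r.status_code == 404:
        missing.append(doi)

print(f"{len(missing)}/{len(sample)} sampled DOIs not found in Semantic Scholar")
```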

OpenCorporates.com · datasets · Discussion #386 · GitHub

“OpenCorporates.com, founded by Chris Taggart, was once a shining exemplar for Open Data – and especially for an open data business. That no longer seems to be the case as of 2023.

NB: in theory the database is available under an Open Database License (see terms of use).

However, there is no way to actually access the data in bulk without paying – which contradicts the open definition and renders the value of the openness in practice an empty offer….”

Up front and open, shrouded in secrecy, or somewhere in between? A Meta Research Systematic Review of Open Science Practices in Sport Medicine Research | Journal of Orthopaedic & Sports Physical Therapy

Abstract: OBJECTIVE: To investigate open science practices in research published in the top five sports medicine journals between 01 May 2022 and 01 October 2022.

DESIGN: A meta-research systematic review

LITERATURE SEARCH: Studies were identified through a search of MEDLINE.

STUDY SELECTION CRITERIA: We included original scientific research published in one of the identified top-five sports medicine journals in 2022 as ranked by Clarivate ((1) British Journal of Sports Medicine, (2) Journal of Sport and Health Science, (3) American Journal of Sports Medicine, (4) Medicine & Science in Sports & Exercise, and (5) Sports Medicine-Open). Studies were excluded if they were systematic reviews, qualitative research, grey literature, or animal or cadaver models.

DATA SYNTHESIS: Open science practices were extracted in accordance with the Transparency and Openness Promotion (TOP) guidelines and patient and public involvement (PPI).

RESULTS: 243 studies were included. The median number of open science practices in each study was 2, out of a maximum of 12 (range: 0-8; IQR: 2). 234 studies (96%, 95% CI: 94-99%) provided an author conflict of interest statement and 163 (67%, 95% CI: 62-73%) reported funding. 21 studies (9%, 95% CI: 5-12%) provided open access data. Fifty-four studies (22%, 95% CI: 17-27%) included a data availability statement and 3 (1%, 95% CI: 0-3%) made code available. Seventy-six studies (32%, 95% CI: 25-37%) had transparent materials and 30 (12%, 95% CI: 8-16%) used a reporting guideline. Twenty-eight studies (12%, 95% CI: 8-16%) were pre-registered. Six studies (3%, 95% CI: 1-4%) published a protocol. Four studies (2%, 95% CI: 0-3%) reported an analysis plan a priori. Seven studies (3%, 95% CI: 1-5%) reported patient and public involvement.
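These interval estimates are consistent with a simple normal-approximation (Wald) confidence interval for a proportion. A brief sketch using two counts from the abstract (the counts are the paper's; the code is not):

```python
# Wald 95% CI for a proportion k/n, checked against two results reported
# in the abstract (n = 243 included studies).
import math

def wald_ci(k, n, z=1.96):
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

for label, k in [("COI statement", 234), ("open access data", 21)]:
    p, lo, hi = wald_ci(k, 243)
    print(f"{label}: {p:.0%} (95% CI {lo:.0%}-{hi:.0%})")
# COI statement: 96% (95% CI 94%-99%)
# open access data: 9% (95% CI 5%-12%)
```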

CONCLUSION: Open science practices in the sports medicine field are extremely limited. The least followed practices were sharing code, data, and analysis plans.

Evaluation of Transparency and Openness Guidelines in Physical Therapy Journals | Physical Therapy | Oxford Academic

Abstract: Objective: The goals of this study were to evaluate the extent to which physical therapy journals support open science research practices by adhering to the Transparency and Openness Promotion (TOP) guidelines, and to assess the relationship between journal scores and their respective journal impact factors. Methods: Scimago, mapping studies, the National Library of Medicine, and journal author guidelines were searched to identify physical therapy journals for inclusion. Journals were graded on 10 standards (29 available points in total) related to transparency with data, code, research materials, study design and analysis, preregistration of studies and statistical analyses, replication, and open science badges. The relationship between journal transparency and openness scores and journal impact factor was determined. Results: Thirty-five journals' author guidelines were assigned transparency and openness factor scores. The median score (interquartile range) across journals was 3.00 (3.00) out of 29 points (scores across all journals ranged from 0 to 8). The 2 standards with the highest degree of implementation were design and analysis transparency (reporting guidelines) and study preregistration. No journals reported on code transparency, materials transparency, replication, or open science badges. Transparency and openness promotion factor scores were a significant predictor of journal impact factor scores. Conclusion: There is low implementation of the transparency and openness promotion standards by physical therapy journals. Transparency and openness promotion factor scores demonstrated predictive ability for journal impact factor scores. Journal policies must improve to make open science practices the standard in research. Journals are in an influential position to guide practices that can improve the rigor of publication, which ultimately enhances the evidence-based information used by physical therapists. Impact: Transparent, open, and reproducible research will move the profession forward by improving the quality of research and increasing confidence in results for implementation in clinical care.

Disappearing repositories — taking an infrastructure perspective on the long-term availability of research data

Abstract: Currently, there is limited research investigating the phenomenon of research data repositories being shut down, and the impact this has on the long-term availability of data. This paper takes an infrastructure perspective on the preservation of research data by using a registry to identify 191 research data repositories that have been closed, and presents information on the shutdown process. The results show that 6.2% of the research data repositories indexed in the registry have been shut down. The risks resulting in repository shutdown are varied. The median age of a repository at shutdown is 12 years. Strategies to prevent data loss at the infrastructure level are pursued to varying extents: 44% of the repositories in the sample migrated data to another repository, and 12% maintain limited access to their data collections. However, neither strategy is a permanent solution. Finally, the general lack of information on repository shutdown events, as well as the effects on the findability of data and the permanence of the scholarly record, are discussed.


Publisher Wants $2,500 To Allow Academics To Post Their Own Manuscript To Their Own Repository

As Walled Culture explained back in 2021, open access (OA) to published academic research comes in two main varieties. “Gold” open access papers are freely available to the public because the researchers’ institutions pay “article-processing charges” to a publisher. “Green” OA papers are available because the authors self-archive their work on a personal Web site or institutional repository that is publicly accessible.

The self-archived copies are generally the accepted manuscripts, rather than the final published version, largely because academics foolishly assign copyright to the publishers. This gives the latter the power to refuse to allow members of the public to read published research they have paid for with their taxes, unless they pay again with a subscription to the journal, or on a per-article basis.

You might think that is unfair and inconvenient, but easy to circumvent, because the public will be able to download copies of the peer-reviewed manuscripts that the researchers self-archive as green OA. But many publishers have a problem with the idea that people can access for free the papers in any form, and demand that public access to the green OA versions should be embargoed, typically for 12 months. There is no reason for academics to agree to this other than habit and a certain deference on their part. It’s also partly the fault of the funding agencies. The open access expert and campaigner, Peter Suber, explained in 2005 why they are to blame:

Researchers sign funding contracts with the research councils long before they sign copyright transfer agreements with publishers. Funders have a right to dictate terms, such as mandated open access, precisely because they are upstream from publishers. If one condition of the funding contract is that the grantee will deposit the peer-reviewed version of any resulting publication in an open-access repository [immediately], then publishers have no right to intervene.

Accepting embargoes on green OA at all was perhaps the biggest blunder made by the open access movement and their funders. Even today, nearly 20 years after Suber pointed out the folly of letting publishers tell academics what they can do with their own manuscripts, many publishers still demand – and get – embargoes. Against this background, ACS Publications, the publishing wing of the American Chemical Society, has come up with what it calls “Zero-Embargo Green Open Access” (pointed out by Richard Poynder):

A number of funders and institutions require authors to retain the right to post their accepted manuscripts immediately upon acceptance for publication in a journal, sometimes referred to as zero-embargo green open access (OA). More than 90% of ACS authors under these mandates have a simple and funded pathway to publish gold OA in ACS journals.

For those not covered by an institutional read and publish agreement or through other types of funding, ACS offers the option to post their accepted manuscripts with a CC BY license in open access repositories immediately upon acceptance. This option expands this small subset of authors’ choices beyond the existing option to wait 12 months to post at no cost.

Great news? Well, no, because a hefty new fee must be paid:

The article development charge (ADC) is a flat fee of $2,500 USD and is payable once the manuscript is sent for peer review. The ADC covers the cost of ACS’ pre-acceptance publishing services, from initial submission through to the final editorial decision.

That is, if academics publish a paper with the ACS, their institution must pay $2,500 for the privilege of being allowed to post the accepted manuscript version immediately on their own institutional server – something that should have been a matter of course, but was weakly given up in the early days of open access, as Suber pointed out. There is a feeble attempt to justify the cost, on the basis that the $2,500 is for “pre-acceptance publishing services”. But this apparently refers to things like peer review, which is generally conducted by fellow academics for free, and decisions by journal editors, who are often unpaid too. In general, the costs involved in “pre-acceptance publishing” are negligible.

“Zero-Embargo Green Open Access” sounds so promising. But it turns out to be yet another example of the copyright industry’s limitless sense of entitlement. The publishing industry is constantly finding new ways to extract money from hard-pressed academic institutions – money that could be used for more research, or simply for paying underfunded researchers better.

This is a personal issue for me. In 2013, I spoke at a conference celebrating the tenth anniversary of the Berlin declaration on open access. More formally, the “Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities” is one of three seminal formulations of the open access idea; the other two are the Bethesda Statement (2003) and the original Budapest Open Access Initiative (2002) (all discussed in the book Walled Culture, free digital versions of which are available). I entitled my speech “Half a Revolution”, and the slides I used can be freely downloaded from SlideShare, along with many more of my presentations.

My Berlin talk concluded with a call to action under the slogan “Zero Embargo Now” (ZEN). Back then, I looked forward to a world where all academic papers would routinely be available under green OA immediately, without any embargo. I’m still waiting.

Follow me @glynmoody on Mastodon. This post originally appeared on Walled Culture.

ACS Publications provides a new option to support zero-embargo green open access – American Chemical Society

“Beginning Oct. 1, 2023, the Publications Division of the American Chemical Society (ACS) will provide authors with a new option to satisfy funder requirements for zero-embargo green open access. Through this pathway, authors will be able to post accepted manuscripts with a CC BY license in open access repositories immediately upon acceptance.

To ensure a sustainable model of delivering services from submission to final editorial decision, ACS Publications is introducing an article development charge (ADC) as part of this new zero-embargo green open access option. The ADC covers the cost of ACS’ publishing services through the final editorial decision….”

Zero-Embargo Green Open Access – ACS Open Science

“A number of funders and institutions require authors to retain the right to post their accepted manuscripts immediately upon acceptance for publication in a journal, sometimes referred to as zero-embargo green open access (OA). More than 90% of ACS authors under these mandates have a simple and funded pathway to publish gold OA in ACS journals.

For those not covered by an institutional read and publish agreement or through other types of funding, ACS offers the option to post their accepted manuscripts with a CC BY license in open access repositories immediately upon acceptance. This option expands this small subset of authors’ choices beyond the existing option to wait 12 months to post at no cost.


An article development charge (ADC) will be applied if the zero-embargo green OA route is requested and the manuscript is recommended to be sent out for peer review. The ADC covers the cost of ACS’ publishing services through the final editorial decision….”

You do not receive enough recognition for your influential science | bioRxiv

Abstract:  During career advancement and funding allocation decisions in biomedicine, reviewers have traditionally depended on journal-level measures of scientific influence like the impact factor. Prestigious journals are thought to pursue a reputation of exclusivity by rejecting large quantities of papers, many of which may be meritorious. It is possible that this process could create a system whereby some influential articles are prospectively identified and recognized by journal brands but most influential articles are overlooked. Here, we measure the degree to which journal prestige hierarchies capture or overlook influential science. We quantify the fraction of scientists’ articles that would receive recognition because (a) they are published in journals above a chosen impact factor threshold, or (b) are at least as well-cited as articles appearing in such journals. We find that the number of papers cited at least as well as those appearing in high-impact factor journals vastly exceeds the number of papers published in such venues. At the investigator level, this phenomenon extends across gender, racial, and career stage groupings of scientists. We also find that approximately half of researchers never publish in a venue with an impact factor above 15, which under journal-level evaluation regimes may exclude them from consideration for opportunities. Many of these researchers publish equally influential work, however, raising the possibility that the traditionally chosen journal-level measures that are routinely considered under decision-making norms, policy, or law, may recognize as little as 10-20% of the work that warrants recognition.
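The two recognition criteria in the abstract lend themselves to a simple sketch. Here the benchmark for criterion (b) is taken, for brevity, as the median citation count of the same scientist's papers in high-impact-factor venues; the data and that benchmark choice are illustrative assumptions, not the study's exact method:

```python
# Compare (a) papers published above an impact-factor threshold with
# (b) papers cited at least as well as those appearing in such venues.
from statistics import median

papers = [
    # (journal impact factor, citation count) for one hypothetical scientist
    (32.0, 150), (4.1, 310), (2.8, 45), (15.3, 80), (3.5, 220),
]
THRESHOLD = 15  # the impact-factor cutoff discussed in the abstract

# Citation benchmark: median citations of papers in above-threshold venues.
benchmark = median(c for jif, c in papers if jif > THRESHOLD)

in_high_if_venue = sum(jif > THRESHOLD for jif, _ in papers)
as_well_cited = sum(c >= benchmark for _, c in papers)

print(f"(a) published above IF {THRESHOLD}: {in_high_if_venue}/{len(papers)}")
print(f"(b) cited at least as well: {as_well_cited}/{len(papers)}")
# (a) published above IF 15: 2/5
# (b) cited at least as well: 3/5
```

Even in this toy example, criterion (b) recognizes more papers than criterion (a), which is the pattern the study reports at scale.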


arXiv.org is experiencing a DDOS attack – arXiv blog

“arXiv users may be experiencing email disruption due to a DDOS attack. Over the past few days, a small number of users issued over a million email change requests.

These requests originated from over 200 IP addresses – almost all owned by an ISP for a particular province in China. The confirmation emails for this volume of requests overwhelmed our email service. As a result, many arXiv users may not have received their daily emails. And other users may not have received their confirmation emails for registering accounts, or legitimate email change requests.

We are taking measures to mitigate this attack, including temporarily blocking certain IP ranges. Unfortunately, this may mean some legitimate users will be unable to access arXiv until this issue is resolved. We will shortly be reaching out to the abuse desk of the affected ISP for assistance….”

Code sharing increases citations, but remains uncommon | Research Square

Abstract:  Biologists increasingly rely on computer code, reinforcing the importance of published code for transparency, reproducibility, training, and a basis for further work. Here we conduct a literature review examining temporal trends in code sharing in ecology and evolution publications since 2010, and test for an influence of code sharing on citation rate. We find that scientists are overwhelmingly (95%) failing to publish their code and that there has been no significant improvement over time, but we also find evidence that code sharing can considerably improve citations, particularly when combined with open access publication.