pubassistant.ch: consolidating publication profiles of researchers – PubMed

“Online accounts to keep track of scientific publications, such as Open Researcher and Contributor ID (ORCID) or Google Scholar, can be time-consuming to maintain and synchronize. Furthermore, the open access status of publications is often not easily accessible, hindering potential opening of closed publications. To lessen the burden of managing personal profiles, we developed an R Shiny app that allows publication lists from multiple platforms to be retrieved and consolidated, and that supports interactive exploration and comparison of publication profiles. A live version can be found at pubassistant.ch.”
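As an illustration of the kind of retrieval and consolidation such a tool performs, here is a minimal Python sketch (not the pubassistant.ch implementation, which is an R Shiny app): it pulls one researcher's works from the public ORCID API and merges publication lists by DOI. The endpoint, JSON field names, and deduplication rule are assumptions to verify against the ORCID v3.0 documentation.

```python
# Minimal sketch: fetch a researcher's works from the public ORCID API and
# consolidate several publication lists by DOI. Endpoint and field names
# reflect the public ORCID v3.0 API as commonly documented; verify before use.
import requests

ORCID_API = "https://pub.orcid.org/v3.0"

def fetch_orcid_works(orcid_id: str) -> list[dict]:
    """Return a list of {'title': ..., 'doi': ...} records for one ORCID iD."""
    resp = requests.get(
        f"{ORCID_API}/{orcid_id}/works",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    records = []
    for group in resp.json().get("group", []):
        summary = group["work-summary"][0]            # first summary per work group
        title = summary["title"]["title"]["value"]
        doi = next(
            (eid["external-id-value"].lower()
             for eid in summary.get("external-ids", {}).get("external-id", [])
             if eid["external-id-type"] == "doi"),
            None,
        )
        records.append({"title": title, "doi": doi})
    return records

def consolidate(*publication_lists: list[dict]) -> list[dict]:
    """Merge publication lists from several sources, keeping one entry per DOI."""
    seen, merged = set(), []
    for pubs in publication_lists:
        for pub in pubs:
            key = pub["doi"] or pub["title"].casefold()  # fall back to title if no DOI
            if key not in seen:
                seen.add(key)
                merged.append(pub)
    return merged

# Example usage with a placeholder ORCID iD and a (hypothetical) list obtained
# from another platform such as Google Scholar:
# merged = consolidate(fetch_orcid_works("0000-0002-1825-0097"), other_platform_list)
```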

Three Paths to Open Access Handout

“Three Paths to Open Access is a handout that can be shared with researchers to provide an overview of three common options for making their work open access. The content can be edited to better reflect your institution’s open access support services. For a more in-depth exploration of this topic, see our YouTube video, Three Routes to Open Access: https://www.youtube.com/watch?v=hkSLywLnS9c …”

Time to recognize authorship of open data

“The open data revolution won’t happen unless the research system values the sharing of data as much as authorship on papers….

Such a practice is neither new nor confined to a specific field. But the result tends to be the same: that authors of openly shared data sets are at risk of not being given credit in a way that counts towards promotion or tenure, whereas those who are named as authors on the publication are more likely to reap benefits that advance their careers.

Such a situation is understandable as long as authorship on a publication is the main way of getting credit for a scientific contribution. But if open data were formally recognized in the same way as research articles in evaluation, hiring and promotion processes, research groups would lose at least one incentive for keeping their data sets closed….”

Cite-seeing and Reviewing: A Study on Citation Bias in Peer Review

Citations play an important role in researchers’ careers as a key factor in the evaluation of scientific impact. Many anecdotes advise authors to exploit this fact and cite prospective reviewers in an attempt to obtain a more positive evaluation for their submission. In this work, we investigate whether such a citation bias actually exists: Does the citation of a reviewer’s own work in a submission cause them to be positively biased towards the submission? In conjunction with the review process of two flagship conferences in machine learning and algorithmic economics, we conduct an observational study to test for citation bias in peer review. In our analysis, we carefully account for various confounding factors such as paper quality and reviewer expertise, and apply different modeling techniques to alleviate concerns regarding model mismatch. Overall, our analysis involves 1,314 papers and 1,717 reviewers and detects citation bias in both venues we consider. In terms of the effect size, by citing a reviewer’s work, a submission has a non-trivial chance of getting a higher score from the reviewer: the expected increase in score is approximately 0.23 on a 5-point Likert item. For reference, a one-point increase in score from a single reviewer improves the position of a submission by 11% on average.
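As a rough back-of-envelope reading of those two effect sizes taken together (assuming, purely for illustration, that the 11%-per-point figure scales linearly with fractional score changes; this combination is not stated in the abstract):

\[ 0.23 \text{ points} \times \frac{11\%}{\text{point}} \approx 2.5\% \]

i.e., citing a reviewer would correspond, on average, to roughly a 2.5% improvement in the submission’s position.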

To ArXiv or not to ArXiv: A Study Quantifying Pros and Cons of Posting Preprints Online

Double-blind conferences have engaged in debates over whether to allow authors to post their papers online, on arXiv or elsewhere, during the review process. Independently, some authors face the dilemma of whether to put their papers on arXiv, weighing its pros and cons. We conduct a study to substantiate this debate and dilemma via quantitative measurements. Specifically, we surveyed reviewers in two top-tier double-blind computer science conferences: ICML 2021 (5,361 submissions and 4,699 reviewers) and EC 2021 (498 submissions and 190 reviewers). Our two main findings are as follows. First, more than a third of the reviewers self-report searching online for a paper they are assigned to review. Second, outside the review process, we find that preprints from better-ranked affiliations see weakly higher visibility, with a correlation of 0.06 in ICML and 0.05 in EC. In particular, papers associated with the top-10-ranked affiliations had a visibility of approximately 11% in ICML and 22% in EC, whereas the remaining papers had a visibility of 7% and 18%, respectively.

Which solutions best support sharing and reuse of code? – The Official PLOS Blog

“PLOS has released a preprint and supporting data on research conducted to understand the needs and habits of researchers in relation to code sharing and reuse as well as to gather feedback on prototype code notebooks and help determine strategies that publishers could use to increase code sharing.

Our previous research led us to implement a mandatory code sharing policy at PLOS Computational Biology in March 2021 to increase the amount of code shared alongside published articles. As well as exploring policy to support code sharing, we have been collaborating with NeuroLibre, an initiative of the Canadian Open Neuroscience Platform, to learn more about the potential role of technological solutions for enhancing code sharing. NeuroLibre is one of a growing number of interactive or executable technologies for sharing and publishing research, some of which have become integrated with publishers’ workflows….”

Preprint server removes ‘inflammatory’ papers in superconductor controversy | Science | AAAS

A debate over claims of room temperature superconductivity has now boiled over into the realm of scientific publishing. Administrators of arXiv, the widely used physics preprint server, recently removed or refused to post several papers from the opposing sides, saying their manuscripts include inflammatory content and unprofessional language. ArXiv has also banned one of the authors, Jorge Hirsch, a theoretical physicist at the University of California, San Diego (UCSD), from posting papers for 6 months.

Writing up your clinical trial report for a scientific journal: the REPORT trial guide for effective and transparent research reporting without spin | British Journal of Sports Medicine

Abstract:  The REPORT guide is a ‘How to’ guide to help you report your clinical research in an effective and transparent way. It is intended to supplement established first choice reporting tools, such as Consolidated Standards of Reporting Trials (CONSORT), by adding tacit knowledge (ie, learnt, informal or implicit knowledge) about reporting topics that we have struggled with as authors or see others struggle with as journal reviewers or editors. We focus on the randomised controlled trial, but the guide also applies to other study designs. Topics included in the REPORT guide cover reporting checklists, trial report structure, choice of title, writing style, trial registry and reporting consistency, spin or reporting bias, transparent data presentation (figures), open access considerations, data sharing and more.

Editorial misconduct: the case of online predatory journals

 

The number of publishers that offer academics, researchers, and postgraduate students the opportunity to publish articles and book chapters quickly and easily has been growing steadily in recent years. This can be ascribed to a variety of factors, e.g., increasing Internet use, the Open Access movement, academic pressure to publish, and the emergence of publishers with questionable interests that cast doubt on the reliability and the scientific rigor of the articles they publish.

All this has transformed the scholarly and scientific publishing scene and has opened the door to journals whose editorial procedures differ from those of legitimate journals. These publishers are called predatory because their manuscript publishing process deviates from the norm (very short publication times, non-existent or low-quality peer review, surprisingly low rejection rates, etc.).

The aim of this article is to spell out the editorial practices of these journals so that they are easier to spot, and thus to alert researchers who are unfamiliar with them. It therefore reviews and highlights the work of other authors who have for years been calling attention to how these journals operate, to their distinctive features and behaviors, and to the consequences of publishing in them.

The most relevant conclusions include the scant awareness of such journals (especially among less experienced researchers), the serious harm they cause to authors’ reputations, the harm done to researchers taking part in promotion or professional accreditation procedures, and the feelings of chagrin and helplessness that come from seeing one’s work printed in low-quality journals. Comprehensive future research on why authors decide to submit valuable articles to these journals is also needed.

This paper therefore discusses the scale of this phenomenon and how to distinguish these journals from ethical ones.

 

Should Indian researchers pay to get their work published?

Abstract:  Paying to publish is an ethical issue. During 2010–14, Indian researchers used 488 open access (OA) journals levying an article processing charge (APC), ranging from US$7.5 to US$5,000, to publish about 15,400 papers. Use of OA journals levying APCs increased from 242 journals and 2,557 papers in 2010 to 328 journals and 3,634 papers in 2014. We estimate that India is potentially spending about US$2.4 million annually on APCs paid to OA journals; the amount would be much higher if we added the APCs paid to make papers published in hybrid journals open access. It would be prudent for Indian authors to make their work freely available through interoperable repositories, a trend that is growing in Latin America and China, especially when funding is scarce. Scientists are willing to pay APCs as long as their institutions pay for them, and funding agencies are not ready to insist that research grants not be used to pay APCs.
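As a quick consistency check on these figures (the average charge below is inferred, not stated in the abstract): with 3,634 APC-levying papers in 2014, the US$2.4 million annual estimate implies an average APC of roughly

\[ \frac{\text{US\$}2{,}400{,}000}{3{,}634 \text{ papers}} \approx \text{US\$}660 \text{ per paper.} \]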

 

The giant plan to track diversity in research journals

 

In the next year, researchers should expect to face a sensitive set of questions whenever they send their papers to journals, and when they review or edit manuscripts. More than 50 publishers representing over 15,000 journals globally are preparing to ask scientists about their race or ethnicity — as well as their gender — in an initiative that’s part of a growing effort to analyse researcher diversity around the world. Publishers say that this information, gathered and stored securely, will help to analyse who is represented in journals, and to identify whether there are biases in editing or review that sway which findings get published. Pilot testing suggests that many scientists support the idea, although not all.