All the Research That’s Fit to Print: Open Access and the News Media

Abstract:  The goal of the open access (OA) movement is to help everyone access scholarly research, not just those who can afford it. However, most studies of whether OA has met this goal have focused on whether other scholars are making use of OA research. Few have considered how the broader public, including the news media, uses OA research. This study asked whether the news media mentions OA articles more or less often than paywalled articles, examining articles published from 2010 through 2018 in journals across all four quartiles of the Journal Impact Factor, using data obtained through Altmetric.com and the Web of Science. Gold, green, and hybrid OA status all correlated positively with the number of news mentions received. News mentions for OA articles did dip in 2018, although they remained higher than those for paywalled articles.
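
The comparison described can be sketched in a few lines of analysis code. This is a minimal illustration, not the authors' pipeline: the file name and columns (oa_status, news_mentions, year) are assumptions about how a merged Altmetric.com / Web of Science export might look.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical merged export; columns assumed: oa_status, news_mentions, year.
df = pd.read_csv("articles_2010_2018.csv")

oa = df.loc[df["oa_status"].isin(["gold", "green", "hybrid"]), "news_mentions"]
paywalled = df.loc[df["oa_status"] == "closed", "news_mentions"]

# News-mention counts are heavily skewed, so a rank-based test is safer
# than comparing raw means.
u, p = mannwhitneyu(oa, paywalled, alternative="greater")
print(f"U = {u:.0f}, p = {p:.4g}")

# Yearly medians by OA status would surface the 2018 dip noted above.
print(df.groupby(["year", "oa_status"])["news_mentions"].median())
```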


Assessing number and quality of urology open access journals… : Current Urology

Abstract:  Background/Aims: 

There is clear evidence that publishing research in an open access (OA) journal, or under an OA model, is associated with higher impact in terms of number of reads and citation rates. The development and quality of OA journals are poorly studied in the field of urology. In this study, we aim to assess the number of OA urology journals, their quality in terms of CiteScore, percent cited, and quartiles, and their scholarly production during the period from 2011 to 2018.

Methods: 

We obtained data for all Scopus-indexed journals from www.scopus.com for the period from 2011 to 2018 and filtered the list for urology journals. For each journal, we extracted the following indices: CiteScore, citations, scholarly output, and SCImago quartile. We then analyzed the differences in quality indices between OA and non-OA urology journals.

Results: 

The number of urology journals increased from 66 in 2011 to 99 in 2018. The number of OA urology journals increased from only 10 (15.2%) in 2011 to 33 (33.3%) in 2018. The number of quartile 1 (top 25%) journals increased from only 1 in 2011 to 5 in 2018. Non-OA urology journals had a significantly higher CiteScore than OA journals until 2015, after which the mean difference in CiteScore became smaller and statistically non-significant.

Conclusion: 

The number and quality of OA journals in the field of urology have increased over the last few years. Despite this increase, non-OA urology journals still show higher quality and output.
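
A hedged sketch of the per-year comparison the abstract reports. The CSV and column names are assumptions, and Welch's t-test stands in for whichever test the authors actually used:

```python
import pandas as pd
from scipy.stats import ttest_ind

# Assumed columns: year, is_oa (bool), citescore.
df = pd.read_csv("urology_journals_2011_2018.csv")

for year, grp in df.groupby("year"):
    oa = grp.loc[grp["is_oa"], "citescore"]
    non_oa = grp.loc[~grp["is_oa"], "citescore"]
    t, p = ttest_ind(non_oa, oa, equal_var=False)  # Welch's t-test
    print(f"{year}: mean difference = {non_oa.mean() - oa.mean():+.2f}, p = {p:.3f}")
```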

Optimizing the use of Twitter for research dissemination: The “Three Facts and a Story” Randomized-Controlled Trial – Journal of Hepatology

Abstract:  Background

Published research promoted on Twitter reaches more readers. Tweets with graphics are more engaging than those without. Data are limited, however, regarding how to optimize multimedia tweets for engagement.

Methods

The “Three Facts and a Story” trial is a randomized controlled trial comparing a tweet featuring a graphical abstract to paired tweets featuring the personal motivations behind the research and a summary of the findings. Fifty-four studies published by the Journal of Hepatology were randomized at the time of online publication. The primary endpoint was assessed 28 days after online publication, with the primary outcome being full-text downloads from the website. Secondary outcomes included page views and Twitter engagement, including impressions, likes, and retweets.

Results

Overall, 31 studies received standard tweets and 23 received story tweets. Five studies were randomized to story tweets but crossed over to standard tweets for lack of author participation. Most papers tweeted were original articles (94% standard, 91% story) and covered clinical topics (55% standard, 61% story). Story tweets were associated with a significant increase in the number of full-text downloads: 51 (34-71) versus 25 (13-41), p=0.002. There was also a non-significant increase in the number of page views. Story tweets generated an average of more than 1,000 additional impressions compared with standard tweets (5,388 vs. 4,280, p=0.002). Story tweets were associated with a similar number of retweets and a non-significant increase in the number of likes.

Conclusion

Tweets featuring the authors and their motivations may increase engagement with published research.
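
The primary endpoint above is reported as a median with an interquartile range, which suggests a rank-based comparison. The sketch below shows what such a test looks like on invented per-paper download counts (the trial's raw data are not in this excerpt):

```python
from scipy.stats import mannwhitneyu

story = [51, 64, 34, 71, 45, 58, 40]     # hypothetical 28-day downloads, story arm
standard = [25, 13, 41, 22, 30, 18, 27]  # hypothetical 28-day downloads, standard arm

u, p = mannwhitneyu(story, standard, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
```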

WILL PODCASTING AND SOCIAL MEDIA REPLACE JOURNALS AND TRADITIONAL SCIENCE COMMUNICATION? NO, BUT… | American Journal of Epidemiology | Oxford Academic

Abstract:  The digital world in which we live is changing rapidly. The changing media environment is having a direct impact on traditional forms of communication and knowledge translation in public health and epidemiology. Openly accessible digital media can be used to reach a broader and more diverse audience of trainees, scientists, and the lay public than traditional forms of scientific communication. The new digital landscape for delivering content is vast, and new platforms are continuously being added. We focus on several, including Twitter and podcasting, and discuss their relevance to epidemiology and science communication. We highlight three key reasons why we think epidemiologists should be engaging with these media: 1) science communication; 2) career advancement; and 3) development of a community and public service. Other positive and negative consequences of engaging with these forms of new media are also discussed. The authors of this commentary are all engaged in social media and podcasting for scientific communication, and in this manuscript we reflect on our experience with these media as tools to advance the field of epidemiology.


Comparison of subscription access and open access obstetrics and gynecology journals in the SCImago database | Özay | Ginekologia Polska

Abstract:  Objectives: The aim of this study is to compare the annual SJR and to evaluate other parameters that reflect the scientific impact of journals, comparing open access (OA) and subscription access (SA) journals in the field of obstetrics and gynecology according to the SCImago database. Material and methods: This study was conducted between September and December 2019 at Near East University. The SCImago Journal & Country Rank database was used to collect information about the journals. We evaluated and compared changes in the one-year SJR (SCImago Journal Rank) and journal impact factor (JIF) of OA and SA journals. Results: Data from 183 scientific journals in the field of obstetrics and gynecology from the period between 1999 and 2018 were evaluated; 140 of these journals were SA and 43 were OA. The average SJR of OA journals in 1999 was 0.17, versus 0.38 for SA journals. In 2018, these values were 0.31 and 0.78 for OA and SA journals, respectively. For JIF, the 1999 average for OA journals was 0.09, versus 0.66 for SA journals. In 2018, these values were 0.80 and 1.93 for OA and SA journals, respectively. Conclusions: Access to information has become easier due to technological developments, and this will continue to affect the access policies of journals. Despite the disadvantages of predatory journals, the rise of OA journals in both number and quality is likely to continue. Key words: open access journal; impact factor; subscription access journal; SCImago; obstetrics; gynecology.
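
The quoted averages can be turned into growth ratios directly; note that the OA journals' average JIF rose almost ninefold from a very low 1999 baseline, which is the arithmetic behind the conclusion that OA quality is catching up:

```python
# Growth ratios computed only from the averages quoted in the abstract.
sjr_oa, sjr_sa = 0.31 / 0.17, 0.78 / 0.38  # ~1.8x vs. ~2.1x
jif_oa, jif_sa = 0.80 / 0.09, 1.93 / 0.66  # ~8.9x vs. ~2.9x
print(f"SJR growth: OA {sjr_oa:.1f}x, SA {sjr_sa:.1f}x")
print(f"JIF growth: OA {jif_oa:.1f}x, SA {jif_sa:.1f}x")
```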

Recognition and rewards – Open Science – Universiteit Utrecht

“Open science means action. And the way we offer recognition and reward to academics and university staff is key in bringing about the transition that Utrecht University aims for. Over the course of the past year the working group on Recognition and Rewards, part of the Open Science Programme, has reflected on and thoroughly debated a novel approach to ensuring that we offer room for everyone’s talent, resulting in a new vision (pdf)….

In the current system, researchers and their research are judged by journal impact factors, publisher brands and H-indices, and not by actual quality, real use, real impact and openness characteristics….

Under those circumstances, at best open science practices are seen as posing an additional burden without rewards. At worst, they are seen as actively damaging chances of future funding and promotion & tenure. Early career researchers are perhaps the most dependent on traditional evaluation culture for career progression, a culture held in place by established researchers, as well as by institutional, national and international policies, including funder mandates….”


Utrecht University Recognition and Rewards Vision

“By embracing Open Science as one of its five core principles, Utrecht University aims to accelerate and improve science and scholarship and its societal impact. Open science calls for a full commitment to openness, based on a comprehensive vision regarding the relationship with society. This ongoing transition to Open Science requires us to reconsider the way in which we recognize and reward members of the academic community. It should value teamwork over individualism and calls for an open academic culture that promotes accountability, reproducibility, integrity and transparency, and where sharing (open access, FAIR data and software) and public engagement are normal daily practice. In this transition we closely align ourselves with the national VSNU program as well as developments on the international level….”

Journal Citation Indicator. Just Another Tool in Clarivate’s Metrics Toolbox? – The Scholarly Kitchen

“The JCI has several benefits when compared against the standard Journal Impact Factor (JIF): It is based on a journal’s citation performance across three full years of citation data rather than a single year’s snapshot of a journal’s performance across the previous two years. Clarivate also promises to provide the JCI score to all journals in its Core Collection, even those journals that do not currently receive a JIF score.

The JCI also avoids the numerator-denominator problem of the JIF, where ALL citations to a journal are counted in the numerator, but only “citable items” (Articles and Reviews) are counted in the denominator. The JCI focuses only on Articles and Reviews.

Finally, like a good indicator, the JCI is easy to interpret. Average performance is set to 1.0, so a journal that receives a JCI score of 2.5 performed two-and-a-half times better than average, while a journal with a score of 0.5 performed only half as well.

To me, JCI’s biggest weakness is Clarivate’s bold claim that it achieved normalization across disciplines….”
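
The interpretation rule quoted above follows from how field normalization is typically built: each paper's citations are divided by the expected count for its field, year, and document type, and the journal's score is the mean of those ratios. A toy calculation with invented numbers:

```python
# Toy field normalization: citations divided by the field/year/type
# expectation, then averaged. All values invented for illustration.
papers = [
    {"citations": 10, "expected": 5.0},  # 2.0x the field norm
    {"citations": 3,  "expected": 6.0},  # 0.5x the field norm
    {"citations": 8,  "expected": 4.0},  # 2.0x the field norm
]
score = sum(p["citations"] / p["expected"] for p in papers) / len(papers)
print(f"{score:.2f}")  # 1.50 -> 50% above the field average
```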

Social media platforms: a primer for researchers

Abstract:  Social media platforms play an increasingly important role in research, education, and clinical practice. As an inseparable part of open science, these platforms may increase the visibility of research outputs and facilitate scholarly networking. Editors who ethically moderate Twitter, Facebook, and other popular social media accounts for their journals may engage influential authors in post-publication communication and expand the societal implications of their publications. Several social media aggregators track and generate alternative metrics, which researchers can use to visualize trending articles in their fields. More and more publishers showcase their achievements by displaying such metrics alongside traditional citations. The Scopus database also tracks both kinds of metrics to offer comprehensive coverage of indexed articles’ impact.

Understanding the advantages and limitations of various social media channels is essential for actively contributing to post-publication communication, particularly in research-intensive fields such as rheumatology.


Introducing the Journal Citation Indicator: A new, field-normalized measurement of journal citation impact – Web of Science Group

“In a recent blog post we discussed refinements in this year’s forthcoming release of the Journal Citation Reports (JCR)™, describing the addition of new content and hinting at a new metric for measuring the citation impact of a journal’s recent publications.

I’m now pleased to fully introduce the Journal Citation Indicator. By normalizing for different fields of research and their widely varying rates of publication and citation, the Journal Citation Indicator provides a single journal-level metric that can be easily interpreted and compared across disciplines….”

Halt the h-index – Leiden Madtrics

“Sometimes, bringing home a message requires a more visual approach. That’s why recently, we teamed up with a graphic designer to create an infographic on the h-index – or rather, on the reasons why not to use the h-index.

In our experience with stakeholders in research evaluation, debates about the usefulness of the h-index keep popping up. This happens even in contexts that are more welcoming towards responsible research assessment. Of course, the h-index is well-known, as are its downsides. Still, the various issues around it do not yet seem to be common knowledge. At the same time, current developments in research evaluation propose more holistic approaches. Examples include the evaluative inquiry developed at our own centre as well as approaches to evaluate academic institutions in context. Scrutinizing the creation of indicators itself, better contextualization has been called for, demanding that indicators be derived “in the wild” and not in isolation.

Moving towards more comprehensive research assessment approaches that consider research in all its variants is supported by the larger community of research evaluators as well, making a compelling case to move away from single-indicator thinking.

Still, there is opposition to reconsidering the use of metrics. When we first introduced the infographic on Twitter, it evoked responses questioning whether the h-index is really misused in practice, disparaging more qualitative assessments, or simply shrugging off responsibility for taking action due to a perceived lack of alternatives. This shows there is indeed a need for taking another look at the h-index….”
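
For concreteness, the metric in question is easy to compute, which is part of both its appeal and its problem: a single landmark paper barely moves it. A short sketch:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c < rank:
            break
        h = rank
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([900, 2, 1]))       # 2: a highly cited paper barely registers
```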

Can altmetric mentions predict later citations? A test of validity on data from ResearchGate and three social media platforms | Emerald Insight

Abstract:  Purpose

The main purpose of this study is to explore and validate the question “whether altmetric mentions can predict citations to scholarly articles”. The paper attempts to explore the nature and degree of correlation between altmetrics (from ResearchGate and three social media platforms) and citations.

Design/methodology/approach

A large data sample of scholarly articles published from India in the year 2016 is obtained from the Web of Science database, and the corresponding altmetric data are obtained from ResearchGate and three social media platforms (Twitter, Facebook, and blogs, via the Altmetric.com aggregator). Correlations are computed between early altmetric mentions and later citation counts, with data grouped into different disciplinary groups.

Findings

Results show that the correlation between altmetric mentions and citation counts is positive but weak. Correlations are relatively higher for data from ResearchGate than for data from the three social media platforms. Further, significant disciplinary differences are observed in the degree of correlation between altmetrics and citations.

Research limitations/implications

The results support the idea that altmetrics do not necessarily reflect the same kind of impact as citations. However, articles that attract more altmetric attention early on may have a slight citation advantage. Further, altmetrics from academic social networks like ResearchGate are more strongly correlated with citations than those from social media platforms.

Originality/value

The paper is novel in two respects. First, it takes altmetric data for a window of about 1–1.5 years after article publication and citation counts for a longer window of about 3–4 years after publication. Second, it is one of the first studies to analyze data from ResearchGate, a popular academic social network, to understand the type and degree of these correlations.
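
As a rough illustration of the validity test described (a rank correlation is used here as a stand-in for the authors' choice of statistic, and the arrays are placeholders, not the study's data):

```python
from scipy.stats import spearmanr

early_mentions = [3, 0, 12, 1, 7, 0, 5]    # ~1-1.5 years post-publication
later_citations = [8, 2, 20, 3, 10, 1, 6]  # ~3-4 years post-publication

rho, p = spearmanr(early_mentions, later_citations)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```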

Triggle et al. (2021) Requiem for impact factors and high publication charges

Chris R Triggle, Ross MacDonald, David J. Triggle & Donald Grierson (2021) Requiem for impact factors and high publication charges, Accountability in Research, DOI: 10.1080/08989621.2021.1909481

Abstract: Journal impact factors, publication charges, and assessment of the quality and accuracy of scientific research are critical for researchers, managers, funders, policy makers, and society. Editors and publishers compete for impact factor rankings to demonstrate how important their journals are, and researchers strive to publish in perceived top journals despite high publication and access charges. This raises questions of how top journals are identified, whether assessments of impact are accurate, and whether the high publication charges borne by the research community are justified, bearing in mind that researchers also collectively provide free peer review to the publishers. Although traditional journals accelerated peer review and publication during the COVID-19 pandemic, preprint servers made a greater impact, with over 30,000 open access articles becoming available, accelerating a trend already seen in other fields of research. We review and comment on the advantages and disadvantages of a range of assessment methods and the ways in which they are used by researchers, managers, employers, and publishers. We argue that new approaches to assessment are required to provide a realistic and comprehensive measure of the value of research and journals, and we support open access publishing at a modest, affordable price to benefit research producers and consumers.

Are stakeholders measuring the publishing metrics that matter?: Putting research into context

“Perhaps the most fundamental aspect of compiling and implementing more meaningful research metrics that the NISO panelists discussed is the importance of putting data into context. And, as the speakers noted, there are multiple facets of context to consider, including:

The strengths and limitations of different metrics by discipline/subject matter (e.g., some metrics are better suited to certain types of research)
The intended uses and overall strengths and limitations of particular data points (e.g., altmetrics are “indicators” of impact, not measures of quality, and the JIF was never meant to measure the impact of individual articles or scholars)
The cultural context that a researcher is operating within and the opportunities, challenges, and biases they have experienced
How and where a research output fits within scholars’ other professional contributions (e.g., recognizing how individual research outputs are part of broader bodies of work and also measuring the impacts of scholarly outputs that do not fit within traditional publication-based assessment systems) …”