“Scientific articles that get downloaded from the scholarly piracy website Sci-Hub tend to receive more citations, according to a new study published in Scientometrics. The number of times an article was downloaded from Sci-Hub also turned out to be a robust predictor of future citations….”
Published research promoted on Twitter reaches more readers. Tweets with graphics are more engaging than those without. Data are limited, however, regarding how to optimize multimedia tweets for engagement.
The “Three facts and a Story” trial is a randomized controlled trial comparing a tweet featuring a graphical abstract to paired tweets featuring the personal motivations behind the research and a summary of the findings. Fifty-four studies published by the Journal of Hepatology were randomized at the time of online publication. The primary endpoint was assessed at 28 days from online publication, with a primary outcome of full-text downloads from the website. Secondary outcomes included page views and Twitter engagement, including impressions, likes, and retweets.
Overall, 31 studies received standard tweets and 23 received story tweets. Five studies were randomized to story tweets but crossed over to standard tweets for lack of author participation. Most papers tweeted were original articles (94% standard, 91% story) and clinical topics (55% standard, 61% story). Story tweets were associated with a significant increase in the number of full text downloads, 51 (34-71) versus 25 (13-41), p=0.002. There was also a non-significant increase in the number of page views. Story tweets generated an average of >1,000 more impressions than standard tweets (5,388 vs 4,280, p=0.002). Story tweets were associated with a similar number of retweets, and a non-significant increase in the number of likes.
Tweets featuring the authors and their motivations may increase engagement with published research.
“The University’s open access publisher, University of Westminster Press (UWP), has reached an impressive milestone of one million views and downloads of its published titles….
The UWP is an open access publisher of peer-reviewed academic books and journals. Launched in 2015, the publisher exists to provide global public access to academic work in multiple formats, including books, policy briefs and journals. Over one million views and downloads have been achieved by the publisher since publishing its first journal in September 2015….”
Abstract: In April 2020, the OAPEN Library moved to a new platform, based on DSpace 6. During the same period, IRUS-UK started working on the deployment of Release 5 of the COUNTER Code of Practice (R5). This is, therefore, a good moment to compare two widely used usage metrics – R5 and Google Analytics (GA). This article discusses the download data of close to 11,000 books and chapters from the OAPEN Library, from the period 15 April 2020 to 31 July 2020. When a book or chapter is downloaded, it is logged by GA and at the same time a signal is sent to IRUS-UK. This results in two datasets: the monthly downloads measured in GA and the usage reported by R5, also clustered by month. The number of downloads reported by GA is considerably larger than that reported by R5. The total number of downloads in GA for the period is over 3.6 million. In contrast, the amount reported by R5 is 1.5 million, around 400,000 downloads per month. Contrasting R5 and GA data on a country-by-country basis shows significant differences. GA lists more than five times the number of downloads for several countries, although the totals for other countries are about the same. When looking at individual titles, of the 500 highest ranked titles in GA that are also part of the 1,000 highest ranked titles in R5, only 6% of the titles are relatively close together. The choice of metric service has considerable consequences for what is reported. Thus, drawing conclusions about the results should be done with care. One metric is not better than the other, but we should be open about the choices made. After all, open access book metrics are complicated, and we can only benefit from clarity.
Abstract: An overview is presented of resources and web analytics strategies useful in setting up solutions for capturing usage statistics and assessing audiences for open access academic journals. A set of metrics complementary to citations is contemplated to help journal editors and managers to provide evidence of the performance of the journal as a whole, and of each article in particular, in the web environment. The measurements and indicators selected seek to generate added value for editorial management in order to ensure its sustainability. The proposal is based on three areas: counts of visits and downloads, optimization of the website along with campaigns to attract visitors, and preparation of a dashboard for strategic evaluation. It is concluded that, by creating web performance measurement plans based on the resources and proposals analysed, journals may be in a better position to plan data-driven web optimization in order to attract authors and readers and to offer the accountability that the actors involved in the editorial process need to assess their open access business model.
Abstract: In the era of digitization and Open Access, article-level metrics are increasingly employed to distinguish influential research works and adjust research management strategies. Tagging individual articles with digital object identifiers allows exposing them to numerous channels of scholarly communication and quantifying related activities. The aim of this article was to overview currently available article-level metrics and highlight their advantages and limitations. Article views and downloads, citations, and social media metrics are increasingly employed by publishers to move away from the dominance and inappropriate use of journal metrics. Quantitative article metrics are complementary to one another and often require qualitative expert evaluations. Expert evaluations may help to avoid manipulations with indiscriminate social media activities that artificially boost altmetrics. Values of article metrics should be interpreted in view of confounders such as patterns of citation and social media activities across countries and academic disciplines.
Abstract: Introduction. This study aimed to analyse the current use status of Korean scholarly papers accessible in the repository of the Korea Institute of Science and Technology Information in order to assess the economic validity of the maintenance and operation of the repository.
Method. This study used the modified historical cost method and performed regression analysis on the use of Korean scholarly papers by year and subject area.
Analysis. The development cost of the repository and the use volumes were analysed based on 1,154,549 Korean scholarly papers deposited in the Institute repository.
Results. Approximately 86% of the deposited papers were downloaded at least once and on average, a paper was downloaded over twenty-six times. Regression analysis showed that the ratio of use of currently deposited papers is likely to decrease by 7.6% annually, as new ones are added.
Conclusions. The currently deposited papers will need to be managed for at least thirteen years into the future, and the results provide empirical proof that the repository has contributed to Korean researchers conducting research and development in the fields of science and technology. The benefit-cost ratio was above nineteen, confirming the economic validity of the repository.
• The paper examines OA effect when a journal provides two types of link to the same subscription article: OA and paid content.
• OA links perform better than paid content links. When not indicating the OA status of a link, the performance drops greatly.
• OA benefits all countries, but its positive impact is slightly greater for developed countries.
• Combining social media dissemination with OA appears to enhance the reach of scientific information….”
“Over the last two-and-a-half years, we have been working as part of the EU-funded HIRMEOS (High Integration of Research Monographs in the European Open Science Infrastructure) project to create open source software and databases to collectively gather and host usage data from various platforms for multiple publishers. As part of this work, we have been thinking deeply about what the data we collect actually means. Open Access books are read on, and downloaded from, many different platforms – this availability is one of the benefits of making work available Open Access, after all – but each platform has a different way of counting up the number of times a book has been viewed or downloaded.
Some platforms count a group of visits made to a book by the same user within a continuous time frame (known as a session) as one ‘view’ – we measure usage in this way ourselves on our own website – but the length of a session might vary from platform to platform. For example, on our website we use Google Analytics, according to which one session (or ‘view’) lasts until there is thirty minutes of inactivity. But platforms that use COUNTER-compliant figures (the standard that libraries prefer) have a much shorter time-frame for a single session – and such a platform would record more ‘views’ than a platform that uses Google Analytics, even if it was measuring the exact same pattern of use.
Other platforms simply count each time a book is accessed (known as a visit) as one ‘view’. There might be multiple visits by the same user within a short time frame – which our site would count as one session, or one ‘view’ – but which a platform counting visits rather than sessions would record as multiple ‘views’.
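The difference between the two counting rules above can be sketched in a few lines of Python. This is an illustration only, not any platform's actual implementation; the 30-minute inactivity timeout follows the Google Analytics default mentioned earlier, and the event timestamps are invented:

```python
from datetime import datetime, timedelta

def count_visits(timestamps):
    """Visit-based counting: every access event is one 'view'."""
    return len(timestamps)

def count_sessions(timestamps, timeout_minutes=30):
    """Session-based counting (Google Analytics style): accesses by the
    same user separated by less than `timeout_minutes` of inactivity
    collapse into a single 'view'."""
    if not timestamps:
        return 0
    timestamps = sorted(timestamps)
    sessions = 1
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev >= timedelta(minutes=timeout_minutes):
            sessions += 1  # enough inactivity: a new session begins
    return sessions

# The exact same pattern of use, counted two ways:
accesses = [datetime(2020, 4, 15, 9, 0),
            datetime(2020, 4, 15, 9, 10),   # 10 min later: same 30-min session
            datetime(2020, 4, 15, 9, 50),   # 40 min gap: new session
            datetime(2020, 4, 15, 10, 5)]

print(count_visits(accesses))                        # → 4 'views' on a visit-counting platform
print(count_sessions(accesses, timeout_minutes=30))  # → 2 'views' under a GA-style rule
print(count_sessions(accesses, timeout_minutes=10))  # → 4 'views' with a shorter timeout
```

The last line shows why COUNTER-style platforms, with their shorter session windows, report more ‘views’ than Google Analytics for identical usage.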
Downloads (which we also used to include in the number of ‘views’) also present problems. For example, many sites only allow chapter downloads (e.g. JSTOR), others only whole book downloads (e.g. OAPEN), and some allow both (e.g. our own website). How do you combine these different types of data? Somebody who wants to read the whole book would need only one download from OAPEN, but as many downloads as there are chapters from JSTOR – thus inflating the number of downloads for a book that has many chapters.
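One hypothetical way to make such figures comparable is to convert chapter-level downloads into whole-book equivalents. The function and the normalization rule below are illustrative assumptions, not an established standard or any platform's practice:

```python
def book_equivalent_downloads(downloads, unit, chapters):
    """Hypothetical normalization: express chapter-level downloads
    as whole-book equivalents by dividing by the chapter count."""
    if unit == "chapter":
        return downloads / chapters
    return float(downloads)  # whole-book downloads pass through unchanged

# One reader reading the same 10-chapter book in full:
print(book_equivalent_downloads(1, "book", 10))      # → 1.0 (one OAPEN-style whole-book download)
print(book_equivalent_downloads(10, "chapter", 10))  # → 1.0 (ten JSTOR-style chapter downloads)
```

Even this simple rule is lossy: it cannot distinguish ten readers each downloading one chapter from one reader downloading all ten.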
So aggregating this data into a single figure for ‘views’ isn’t only comparing apples with oranges – it’s mixing apples, oranges, grapes, kiwi fruit and pears. It’s a fruit salad….”
“Journal articles downloaded from Sci-Hub, an illegal site of pirated materials, were cited nearly twice as many times as non-downloaded articles, reports a new paper published online in the journal, Scientometrics….
Correa and colleagues could have added either one of these sources of usage data to their model to verify whether the Sci-Hub indicator continued to independently predict future citations. That would have confirmed whether Sci-Hub was a cause of — instead of merely associated with — future citations. Without such a control, the authors may have fumbled both their analysis and conclusion.
Sci-Hub may indeed lead to more article citations, although it is impossible to reach that conclusion from this study….”
In January 2016, the three journals of the Association for Research in Vision and Ophthalmology (ARVO) transitioned to gold open access.
Increased author charges were introduced to partially offset the loss of subscription revenue.
Submissions to the two established journals initially dropped by almost 15% but have now stabilized.
The transition has not impacted acceptance rates and impact factors, and article pageviews and downloads may have increased as a result of open access….”
“Open research is fundamentally changing the way that researchers communicate and collaborate to advance the pace and quality of discovery. New and dynamic open research-driven workflows are emerging, thus increasing the findability, accessibility, and reusability of results. Distribution channels are changing too, enabling others — from patients to businesses, to teachers and policy makers — to increasingly benefit from new and critical insights. This in turn has dramatically increased the societal impact of open research. But what remains less clear is the exact nature and scope of this wider impact as well as the societal relevance of the underpinning research….”
“What impact does open research have on society and progressing global societal challenges? The latest results of research carried out by Springer Nature, the Association of Universities in the Netherlands (VSNU) and the Dutch University Libraries and the National Library consortium (UKB) illustrate a substantial advantage for content published via the Gold OA route, where research is immediately and freely accessible.
Since the UN’s Sustainable Development Goals (SDGs) were launched in 2015, researchers, their funders and other collaborative partnerships have sought to explore the impact and contribution of open research on SDG development. However – until now – it has been challenging to map, and therefore identify, emerging trends and best practice for the research and wider community. Through a bibliometric analysis of nearly 360,000 documents published in 2017 and a survey of nearly 6,000 readers on Springer Nature websites, the new white paper, Open for All: Exploring the Reach of Open Access Content to Non-Academic Audiences, shows not only the effects of content being published OA but more importantly who that research is reaching.”
“Preprint servers offer a means to disseminate research reports before they undergo peer review and are relatively new to clinical research.1-4 medRxiv is an independent, not-for-profit preprint server for clinical and health science researchers that was introduced in June 2019.4 A central question was whether there would be adoption of a new approach to dissemination of pre–peer-review science. Now, a year after its establishment, we report medRxiv’s submissions, posts, and downloads.”
We previously reported that random assignment of scientific articles to a social media exposure intervention did not have an effect on article downloads and citations. In this paper, we investigate whether longer observation time after exposure to a social media intervention has altered the previously reported results.
For articles published in the International Journal of Public Health between December 2012 and December 2014, we updated article download and citation data for a minimum of 24-month follow-up. We re-analysed the effect of social media exposure on article downloads and citations.
There was no difference between intervention and control group in terms of downloads (p = 0.72) and citations (p = 0.30) for all papers and when we stratified by open access status.
Longer observation time did not increase the relative differences in the numbers of downloads and citations between papers in the social media intervention group and papers in the control group. Traditional impact metrics based on citations, such as impact factor, may not capture the added value of social media for scientific publications.