Abstract: Software and data citation are emerging best practices in scholarly communication. This article provides structured guidance to the academic publishing community on how to implement software and data citation in publishing workflows. These best practices support the verifiability and reproducibility of academic and scientific results, the sharing and reuse of valuable data and software tools, and attribution to the creators of the software and data. While data citation is increasingly well established, software citation is rapidly maturing. Software is now recognized as a key research result and resource, requiring the same level of transparency, accessibility, and disclosure as data. Software and data that support academic or scientific results should be preserved and shared in scientific repositories that support these digital object types for discovery, transparency, and use by other researchers. These goals can be supported by citing these products in the Reference Section of articles and effectively associating them with the software and data preserved in scientific repositories. Publishers need to mark up these references in a specific way to enable downstream processes.
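The markup requirement in that last sentence is concrete enough to sketch. Below is a minimal, illustrative Python snippet that assembles a JATS-style reference entry for a software citation (an element-citation with publication-type="software", in line with common JATS4R usage); the author, tool name, version, and DOI are hypothetical placeholders, and publishers' production schemas will differ in detail.

```python
# Minimal sketch: building a JATS-style software reference with the
# standard library. Element names follow common JATS usage; production
# schemas vary by publisher, so treat this as illustrative only.
import xml.etree.ElementTree as ET

ref = ET.Element("ref", id="ref-software-1")
cit = ET.SubElement(ref, "element-citation", {"publication-type": "software"})

# Hypothetical software record used purely for illustration.
pg = ET.SubElement(cit, "person-group", {"person-group-type": "author"})
name = ET.SubElement(pg, "name")
ET.SubElement(name, "surname").text = "Doe"
ET.SubElement(name, "given-names").text = "J."
ET.SubElement(cit, "data-title").text = "ExampleTool"   # hypothetical tool
ET.SubElement(cit, "version").text = "1.2.0"
ET.SubElement(cit, "year").text = "2023"
ET.SubElement(cit, "pub-id", {"pub-id-type": "doi"}).text = "10.5281/zenodo.0000000"  # fake DOI

print(ET.tostring(ref, encoding="unicode"))
```

Marking the reference with an explicit publication type (rather than forcing software into a book or journal template) is what lets downstream indexers route citation credit to the archived software record.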
The Value of Articles Published in Journals Focused on the Scholarship of Teaching and Learning: A Use of Citations and Altmetrics as Indicators of Value | SpringerLink
Abstract: The value of articles published in journals devoted to the scholarship of teaching and learning (SOTL) constitutes a relatively unexplored topic of inquiry within the broader field of inquiry on the scholarship of teaching and learning. This article addresses this topic using citations and four types of altmetrics as indicators of value. We used a sample of 100 articles published in four SOTL-focused journals: two high-consensus journals (BioScience: Journal of College Biology Teaching and The Journal of Chemical Education) and two low-consensus journals (Teaching History and Teaching Sociology). In addition to the level of consensus of the discipline of these journals, we also measured the institutional type of each article's first author and the type of study the article reported. We advanced three conclusions from our data analysis, the first of which is of particular significance to SOTL work: the pattern of findings of this study cries out fairly loudly that articles published in SOTL-focused journals hold value for users of those articles, as expressed through citations of them as well as mentions of them through various altmetrics. Moreover, this value is of similar magnitude regardless of the institutional type of the article's first author and whether the article recommended a practice or recommended content. However, the value ascribed to articles differs according to the level of consensus of the field of the SOTL journal, as shown by differences in article views, Twitter mentions, and Mendeley uses.
Controlled experiment finds no detectable citation bump from Twitter promotion | bioRxiv
Abstract: Multiple studies across a variety of scientific disciplines have shown that the number of times that a paper is shared on Twitter (now called X) is correlated with the number of citations that paper receives. However, these studies were not designed to answer whether tweeting about scientific papers causes an increase in citations, or whether they were simply highlighting that some papers have higher relevance, importance or quality and are therefore both tweeted about more and cited more. We, the authors of this study, are leading science communicators on Twitter from several life science disciplines, with substantially higher follower counts than the average scientist, making us uniquely placed to address this question. We conducted a three-year-long controlled experiment, randomly selecting five articles published in the same month and journal, and randomly tweeting one while retaining the others as controls. This process was repeated for 10 articles from each of 11 journals, recording Altmetric scores, number of tweets, and citation counts before and after tweeting. Randomization tests revealed that tweeted articles were downloaded 2.6–3.9 times more often than controls immediately after tweeting, and retained significantly higher Altmetric scores (+81%) and numbers of tweets (+105%) three years after tweeting. However, while some tweeted papers were cited more than their respective control papers published in the same journal and month, the overall increase in citation counts after three years (+7% for Web of Science and +12% for Google Scholar) was not statistically significant (p > 0.15). Therefore, while discussing science on social media has many professional and societal benefits (and has been a lot of fun), increasing the citation rate of a scientist's papers is likely not among them.
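As a rough illustration of the randomization tests mentioned above, the sketch below runs a simple one-sided permutation test on hypothetical citation gains for tweeted articles versus their controls; the authors' actual test statistic, matching scheme, and data are in the paper itself.

```python
# Minimal sketch of a randomization (permutation) test comparing tweeted
# articles' citation gains against controls. Illustrates the general
# technique only; all numbers below are hypothetical.
import random

def permutation_test(tweeted, controls, n_perm=10_000, seed=0):
    """One-sided p-value for the mean difference between tweeted and control gains."""
    rng = random.Random(seed)
    pooled = tweeted + controls
    n = len(tweeted)
    observed = sum(tweeted) / n - sum(controls) / len(controls)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:n]) / n - sum(pooled[n:]) / len(pooled[n:])
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical citation gains: tweeted articles vs. matched controls.
p = permutation_test(tweeted=[9, 12, 7], controls=[6, 8, 5, 7, 6, 9])
print(f"one-sided p = {p:.3f}")
```

Shuffling the pooled observations and recomputing the difference builds the null distribution directly from the data, which is why such tests suit small, matched designs like this one.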
Citation Impact of Institutional Repositories in Selected Higher Learning Institutions in Tanzania | East African Journal of Science, Technology and Innovation
Abstract: The development of Institutional Repositories (IRs) in Tanzania has made publications readily available, accessible, and retrievable. IRs have increased the visibility of researchers and institutions and have contributed to university rankings. Several Higher Learning Institutions (HLIs) in Tanzania have developed IRs hosting institutional publications. This study assessed the citation impact of the IR contents of selected Tanzanian HLIs, evaluated using publications indexed in the Scopus database. Four HLIs were purposively selected, and a search was conducted using the Scopus advanced search-within-references feature. Publications indexed in Scopus that cited the selected IR contents from 2018 to 2022 were identified and extracted. Data analysis was carried out using Microsoft Excel and SPSS. The study findings indicated that the Tanzanian IR contents had a low citation impact. The study recommends that Tanzanian HLIs devise strategies for increasing IR content visibility, such as registering the IRs on online platforms and implementing the Handle System to improve the accessibility of IR content. Furthermore, the HLIs should raise awareness of research visibility, enabling researchers to publish in ways that increase their visibility.
You do not receive enough recognition for your influential science | bioRxiv
Abstract: During career advancement and funding allocation decisions in biomedicine, reviewers have traditionally depended on journal-level measures of scientific influence like the impact factor. Prestigious journals are thought to pursue a reputation of exclusivity by rejecting large quantities of papers, many of which may be meritorious. It is possible that this process could create a system whereby some influential articles are prospectively identified and recognized by journal brands but most influential articles are overlooked. Here, we measure the degree to which journal prestige hierarchies capture or overlook influential science. We quantify the fraction of scientists’ articles that would receive recognition because (a) they are published in journals above a chosen impact factor threshold, or (b) are at least as well-cited as articles appearing in such journals. We find that the number of papers cited at least as well as those appearing in high-impact factor journals vastly exceeds the number of papers published in such venues. At the investigator level, this phenomenon extends across gender, racial, and career stage groupings of scientists. We also find that approximately half of researchers never publish in a venue with an impact factor above 15, which under journal-level evaluation regimes may exclude them from consideration for opportunities. Many of these researchers publish equally influential work, however, raising the possibility that the traditionally chosen journal-level measures that are routinely considered under decision-making norms, policy, or law, may recognize as little as 10-20% of the work that warrants recognition.
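The core comparison (papers published above an impact factor threshold versus papers cited at least as well as those in such venues) can be sketched in a few lines. The toy data, threshold, and simple median benchmark below are all assumptions for illustration; the study's real analysis controls for field, year, and career stage.

```python
# Simplified sketch of the comparison described above: count papers cited
# at least as well as the median paper in journals above an impact factor
# threshold. All numbers are hypothetical toy data.
from statistics import median

papers = [  # (citations, journal_impact_factor)
    (120, 3.1), (85, 42.0), (240, 5.6), (15, 2.2), (310, 18.0), (95, 4.0),
]
THRESHOLD = 15.0

high_if_citations = [c for c, jif in papers if jif > THRESHOLD]
bar = median(high_if_citations)  # citation benchmark set by high-IF venues

published_high = sum(1 for _, jif in papers if jif > THRESHOLD)
cited_as_well = sum(1 for c, _ in papers if c >= bar)
print(f"in high-IF venues: {published_high}, cited at least as well: {cited_as_well}")
```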
Why article-level metrics are better than JIF if you value talent over privilege – The Ideophone
“The enormous difference in sheer volume means that an OA megajournal is likely to have quite a few papers with more cites than the Nature median — high impact work that we would miss entirely if we focused only on the JIF. The flip side is where we find the halo effect: there are, in any given year, hundreds of Nature papers that underperform quite a bit relative to the IF (indeed half of them underperform relative to the median). This —the skewed distributions for both the megajournal and the glamour journal— shows why it is a bad idea to ascribe properties to individual papers based on how other papers published under the same flag have been cited….”
Tracing data: A survey investigating disciplinary differences in data citation | Quantitative Science Studies | MIT Press
Abstract: Data citations, or citations in reference lists to data, are increasingly seen as an important means to trace data reuse and incentivize data sharing. Although disciplinary differences in data citation practices have been well documented via scientometric approaches, we do not yet know how representative these practices are within disciplines. Nor do we yet have insight into researchers’ motivations for citing – or not citing – data in their academic work. Here, we present the results of the largest known survey (n = 2,492) to explicitly investigate data citation practices, preferences, and motivations, using a representative sample of academic authors by discipline, as represented in the Web of Science (WoS). We present findings about researchers’ current practices and motivations for reusing and citing data and also examine their preferences for how they would like their own data to be cited. We conclude by discussing disciplinary patterns in two broad clusters, focusing on patterns in the social sciences and humanities, and consider the implications of our results for tracing and rewarding data sharing and reuse.
Expanding the Data Ark: an attempt to make the data from highly cited social science papers publicly available
Abstract: Access to scientific data can enable independent reuse and verification; however, most data are not available and become increasingly irrecoverable over time. This study aimed to retrieve and preserve important datasets from 160 of the most highly cited social science articles published in 2008–2013 and 2015–2018. We asked authors if they would share data in a public repository — the Data Ark — or provide reasons if data could not be shared. Of the 160 articles, data for 117 (73%, 95% CI [67% – 80%]) were not available and data for 7 (4%, 95% CI [0% – 12%]) were available with restrictions. Data for 36 (22%, 95% CI [16% – 30%]) articles were available in unrestricted form: 29 of these datasets were already available and 7 datasets were made available in the Data Ark. Most authors did not respond to our data requests and a minority shared reasons for not sharing, such as legal or ethical constraints. These findings highlight an unresolved need to preserve important scientific datasets and increase their accessibility to the scientific community.
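The proportions and intervals quoted above (e.g., 117/160 = 73%, 95% CI [67% – 80%]) can be approximated with a standard Wilson score interval; the paper does not state which interval method it used, so the sketch below is illustrative rather than a replication.

```python
# Sketch of a 95% Wilson score interval for a binomial proportion, applied
# to the reported 117/160 articles with unavailable data. The authors'
# exact CI method is unspecified; this is one standard choice.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(117, 160)
print(f"117/160 = {117/160:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
# ~[66%, 79%], close to the reported [67%, 80%]
```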
The Emergence of the Open Research University Through International Research Collaboration
Abstract: In higher education, international research collaboration functions as a visible mechanism of cooperation and competition, serving as a proxy for quality and academic excellence. Open universities use revolutionary education models but are not often associated with quality or academic excellence. To investigate the impact of international research collaboration by active researchers affiliated with open institutions, a bibliometric analysis was conducted of three open universities and nine traditional, comparative universities between 2000 and 2022. The results indicate that research outputs that are open access, sponsored and funded, and developed with international coauthors have positive and statistically significant effects on citation counts. Moreover, international research collaboration significantly affects all universities, not just open institutions. The results show that researchers affiliated with open universities are cited only 4.3% less than their comparative peers, a gap attributed to publication factors, research disciplines and subject areas, and journal characteristics. The findings imply a strategic shift in the institutional functions and outputs of open universities as collaborative conduits of knowledge production and dissemination.
PreprintResolver: Improving Citation Quality by Resolving Published Versions of ArXiv Preprints using Literature Databases
Abstract: The growing impact of preprint servers enables the rapid sharing of time-sensitive research. At the same time, it is becoming increasingly difficult to distinguish high-quality, peer-reviewed research from preprints. Although preprints are often later published in peer-reviewed journals, this information is often missing from preprint servers. To overcome this problem, the PreprintResolver was developed, which uses four literature databases (DBLP, SemanticScholar, OpenAlex, and CrossRef / CrossCite) to identify preprint-publication pairs for the arXiv preprint server. The target audience includes, but is not limited to, inexperienced researchers and students, especially from the field of computer science. The tool is based on fuzzy matching of author surnames, titles, and DOIs. Experiments were performed on a sample of 1,000 arXiv preprints from the research field of computer science that lacked any publication information. At 77.94%, computer science is highly affected by missing publication information in arXiv. The results show that the PreprintResolver was able to resolve 603 of the 1,000 (60.3%) sampled preprints. All four literature databases contributed to the final result. In a manual validation, a random sample of 100 resolved preprints was checked: for all of them, at least one result was plausible; for nine preprints, more than one result was identified, three of which were partially invalid. In conclusion, the PreprintResolver is suitable for individual, manually reviewed requests, but less suitable for bulk requests. The PreprintResolver tool (this https URL, available from 2023-08-01) and source code (this https URL, accessed 2023-07-19) are available online.
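The fuzzy title matching the abstract describes can be illustrated with the standard library alone. The normalization, similarity ratio, and threshold below are assumptions for illustration; PreprintResolver's real pipeline also matches author surnames and DOIs across the four databases.

```python
# Minimal sketch of fuzzy title matching for preprint/publication pairs.
# Shows the core idea only, not PreprintResolver's actual implementation.
from difflib import SequenceMatcher

def normalize(title: str) -> str:
    """Lowercase and collapse whitespace so trivial differences don't matter."""
    return " ".join(title.lower().split())

def titles_match(a: str, b: str, threshold: float = 0.9) -> bool:
    """True if two titles are similar enough to be a candidate pair."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

print(titles_match(
    "PreprintResolver: Improving Citation Quality by Resolving Published Versions",
    "PreprintResolver: improving citation quality by resolving published versions ",
))  # True
```

A similarity threshold trades precision for recall: titles often change slightly between preprint and journal version, so exact string matching alone would miss many true pairs.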
Influence of Publication Capacity on Journal Impact Factor for International Open Access Journals from China: Insights from Microeconomic Analysis
Abstract: The evolving landscape of open access (OA) journal publishing holds significant importance for policymakers and stakeholders who seek to make informed decisions and develop strategies that foster sustainable growth and advancements in open access initiatives within China. This study addressed the shortcomings of the current journal evaluation system and recognized the necessity of researching the elasticity of annual publication capacity (PUB) in relation to the Journal Impact Factor (JIF). By constructing an economic model of elasticity, a comparative analysis of the characteristics and dynamics of international OA journals from China and overseas was conducted. The analysis categorized OA journals based on their respective elasticity values and provided specific recommendations tailored to each category. These recommendations offer valuable insights into the development and growth potential of both OA journals from China and overseas. Moreover, the findings underscore the importance of strategic decision-making to strike a balance between quantity and quality in OA journal management. By comprehending the dynamic nature of elasticity, China can enhance its OA journal landscape, effectively meet the academic demand from domestic researchers, minimize the outflow of OA publications to overseas markets, and fortify its position within the global scholarly community.
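The elasticity at the heart of this analysis is the standard microeconomic quantity: the percentage change in JIF per percentage change in annual publication capacity (PUB). The sketch below computes a midpoint (arc) elasticity on hypothetical numbers; the paper's actual model specification is richer.

```python
# Sketch of the standard arc-elasticity definition such a model builds on:
# percentage change in JIF divided by percentage change in PUB.
# All numbers are hypothetical.
def arc_elasticity(pub_0, pub_1, jif_0, jif_1):
    """Midpoint-formula elasticity of JIF with respect to publication capacity."""
    d_jif = (jif_1 - jif_0) / ((jif_0 + jif_1) / 2)
    d_pub = (pub_1 - pub_0) / ((pub_0 + pub_1) / 2)
    return d_jif / d_pub

# A journal doubles output from 200 to 400 papers; JIF falls from 5.0 to 4.2.
e = arc_elasticity(200, 400, 5.0, 4.2)
print(f"elasticity = {e:.2f}")  # negative: more papers, lower JIF
```

Journals with elasticity near zero can expand output without hurting their JIF, while strongly negative values signal a quantity-quality trade-off, which is the distinction the recommendations by category turn on.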
Open Access Effects (OASE) – The influence of structural and author-specific factors on the impact of open access publications from various disciplines
Abstract: This study report describes the qualitative part of the project "Open Access Effects – The influence of structural and author-specific factors on the impact of open access publications from various disciplines" (OASE). The aim of the project was to describe the transformation from traditional to open access publishing with a bibliometric approach and to analyse existing (and, where applicable, future) publishing strategies and conflicts in the context of open access. Related questions were discussed in three focus group interviews conducted online with researchers from 8 different disciplines and 14 different countries around the world. Interviewees were recruited from participants in a previous survey (Fraser, Mayr & Peters, 2021) and from registrations for a workshop held the day before. A mixed sampling approach (convenience and theoretical sampling) was used to contrast views from researchers of different career statuses, disciplines, and countries of residence. Each group comprised 7-8 participants, and each interview lasted approximately two hours. The participants comprised PhD students (3), postdoctoral researchers (6), and professors (13); nine had a natural science background and 13 a social science background, and they were located in 14 different countries. Following the mixed sampling procedure, two groups were formed in which career status, field of study, and country of residence were contrasted, and one group in which senior researchers predominated.
Code sharing increases citations, but remains uncommon | Research Square
Abstract: Biologists increasingly rely on computer code, reinforcing the importance of published code for transparency, reproducibility, training, and a basis for further work. Here we conduct a literature review examining temporal trends in code sharing in ecology and evolution publications since 2010, and test for an influence of code sharing on citation rate. We find that scientists are overwhelmingly (95%) failing to publish their code and that there has been no significant improvement over time, but we also find evidence that code sharing can considerably improve citations, particularly when combined with open access publication.
How bibliometrics and school rankings reward unreliable science | The BMJ
“Metrics can reap huge rewards but, unfortunately, they’re also simple to game. And so, following Goodhart’s law—“When a measure becomes a target, it ceases to be a good measure”—citations are gamed,8 in increasingly cunning ways. Authors and editors create citation rings and cartels.9 Companies pounce on expired domains to hijack indexed journals10 and take their names, fooling unsuspecting researchers. Or researchers who are well aware of the game use this vulnerability to publish papers that cite their work.
Universities pay cash bonuses to faculty members who publish papers in highly ranked journals.11 Some institutions have reportedly even schemed to hire prominent academics who either add an affiliation to their papers or move employers outright.12 This means that those researchers’ papers—and citations—count toward the universities’ rankings. Researchers cite themselves, a lot.13 Journals have been found to encourage, or even require, authors to cite other work in the same periodical,14 and they fight over papers they think will be highly cited to win the impact factor arms race.15
Paper mills, which sell everything from authorship to complete articles, have proliferated,16 and while they’re not a new phenomenon, they have industrialised in recent years.17 They have figured out ways to ensure that authors peer review their own papers.18 In the United States, the “newest college admissions ploy” is “paying to make your teen a ‘peer-reviewed’ author.”19…”
Identification and Portraits of Open Access Journals Based on Open Impact Metrics Extracted from Social Activities | Journal of Scholarly Publishing
Abstract: This article focuses on open impact metrics extracted from social media activity and uses these alternative metrics to identify and portray open access journals. The research sample consists of open access journals from Scopus, with open impact metrics retrieved from Altmetric.com. The analysis established that an evaluation system based on altmetrics can reflect the portraits of open access journals better than traditional citation-based metrics. This study finds that open access journals strengthen international academic communication and cooperation, build cross-border and cross-regional knowledge-sharing projects, enable interdisciplinary knowledge sharing and exchange, and, most importantly, provide a one-stop service for readers. This research indicates that open impact metrics make it possible to identify the portraits of open access journals, providing a new method for constructing and reforming open access journal evaluation systems.