Abstract: Opening Up Scholarship in the Humanities: Digital Publishing, Knowledge Translation, and Public Engagement considers the concept of humanistic, open, social scholarship and argues for its value in the contemporary academy as both a set of socially oriented activities and an organizing framework for such activities. This endeavour spans the interrelated areas of knowledge creation, public engagement, and open access, and demonstrates the importance of considering this triad as critical for the pursuit of academic work moving forward—especially in the humanities. Under the umbrella of open social scholarship, I consider open access as a baseline for public engagement and argue for the vital importance of this sort of work. Moreover, I suggest that there is a strong connection between digital scholarship and social knowledge creation. I explore the knowledge translation lessons that other fields might have for the humanities and include a journalist–humanist case study to this end. I also argue for the value of producing research output in many different forms and formats. Finally, I propose that there are benefits to explicitly popularizing the humanities. In sum, this dissertation speculates on past, current, and future scholarly communication activities, and proposes that such activities might be opened up for wider engagement and, thus, social benefit.
“When SMRJ was started, the editors used email and Word docs to track peer review, and they published all articles in PDF format. However, with the journal continuing to expand, the editors realized they were in need of an easier way to track submissions and a new publishing system to improve the journal’s online reading experience and chances of being added to relevant indexes. As a result, Chief Editor William Corser and Assistant Editor Sam Wisniewski began searching for publishing tools and services, focused on three key areas: streamlining peer review, modernizing the journal’s website, and producing XML for all articles.
After considering different options, Corser and Wisniewski chose to use Scholastica’s peer review and open access publishing software, as well as Scholastica’s typesetting service to produce PDF, HTML, and XML article files. Since making the switch, they’ve found that peer review is smoother for editors and authors and they’re making strides towards reaching their article discovery and indexing goals….”
Abstract: During the previous Ebola and Zika outbreaks, researchers shared their data, allowing many published epidemiological studies to be produced solely from open research data and speeding up the investigation and control of these infections. This study aims to evaluate the dissemination of the COVID-19 research data underlying scientific publications. An analysis of COVID-19 publications from December 1, 2019, to April 30, 2020, was conducted through the PubMed Central repository to evaluate the research data made available as supplementary material or deposited in repositories. The PubMed Central search generated 5,905 records, of which 804 papers included complementary research data, especially as supplementary material (77.4%). The most productive journals were The New England Journal of Medicine, The Lancet and The Lancet Infectious Diseases, the most frequent keyword was pneumonia, and the most used repositories were GitHub and GenBank. An expected growth in the number of published articles following the course of the pandemic is confirmed in this work, while the underlying research data accompany only 13.6% of them. It can be deduced that data sharing is not a common practice, even in health emergencies such as the present one. High-impact generalist journals have accounted for a large share of global publishing. The topics most often covered are related to epidemiological and public health concepts, genetics, virology and respiratory diseases, such as pneumonia. However, it is essential to interpret these data with caution following the evolution of publications and their funding in the coming months.
From the body of the paper: “In global public health emergencies, it should be mandatory to disseminate any information that may be of value in fighting the crisis. For this to be done efficiently, there is a need to develop agreed global standards for sharing data and results for scientists, institutions and governments.”
“As more data is made openly accessible as a part of journal articles or federal funder requirements, the importance of data curation cannot be overemphasized. Data is not intrinsically useful. Furthermore, datasets do not simply become useful because they are publicly available. Data is useful only insofar as it meets the needs of the user. Likewise, more data does not mean more value (Binggeser, 2017). Data is of the highest value for those who collected it. Others who were not involved in the data collection and analysis efforts can find data less useful for their needs, especially if the data is not properly curated. Including as supplemental information a dataset that has not been properly prepared for public use reduces the usefulness of the data. Data must be cleaned and prepared properly for it to be useful. And this process does not happen by accident; it must be purposely conducted by someone trained in properly curating a dataset for public use (Johnston et al., 2018)….
What value does the curation process provide for data? The data curation steps formalized by the DCN in the C.U.R.A.T.E.D. acronym include the following: Check (the files for completeness and viability), Understand (the contents), Request (additional information), Augment (metadata), Transform (to open formats), Evaluate (for FAIRness), and Document (the curation process) (Johnston et al., 2018). …”
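As an illustration only, the seven C.U.R.A.T.E.D. steps above form an ordered checklist that a curator works through for each dataset. The following Python sketch models that checklist; the `CurationRecord` class and the example dataset name are hypothetical conveniences for this example, not part of the DCN's published materials.

```python
from dataclasses import dataclass, field

# The seven C.U.R.A.T.E.D. steps as formalized by the Data Curation Network
# (Johnston et al., 2018), in workflow order.
CURATED_STEPS = [
    ("Check", "the files for completeness and viability"),
    ("Understand", "the contents"),
    ("Request", "additional information from the depositor"),
    ("Augment", "the metadata"),
    ("Transform", "files to open formats"),
    ("Evaluate", "the dataset for FAIRness"),
    ("Document", "the curation process"),
]


@dataclass
class CurationRecord:
    """Tracks which C.U.R.A.T.E.D. steps have been completed for one dataset."""
    dataset: str
    completed: set = field(default_factory=set)

    def complete(self, step: str) -> None:
        """Mark a named step as done; reject anything outside the checklist."""
        if step not in {name for name, _ in CURATED_STEPS}:
            raise ValueError(f"Unknown curation step: {step}")
        self.completed.add(step)

    def remaining(self) -> list:
        """Return the steps not yet completed, in workflow order."""
        return [name for name, _ in CURATED_STEPS if name not in self.completed]


# Example: a curator has checked and understood a (hypothetical) deposit.
record = CurationRecord("survey_responses_2020.csv")
record.complete("Check")
record.complete("Understand")
print(record.remaining())  # the five steps still to be done
```

The point of tracking state per dataset is the one the quoted passage makes: curation is a deliberate, multi-step process carried out by a trained person, not a side effect of making files public.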
Abstract: Electronic theses and dissertations (ETDs) have traditionally taken the form of PDFs, and ETD programs and their submission and curation procedures have been built around this format. However, graduate students are increasingly creating non-PDF files during their research, and in some cases these files are just as or more important than the PDFs that must be submitted to satisfy degree requirements. As a result, both graduate students and ETD administrators need training and resources to support the handling of a wide variety of complex digital objects. The Educopia Institute’s ETDplus Toolkit provides a highly usable set of modules to address this need, openly licensed to allow for reuse and adaptation to a variety of potential use cases.
“10 years later I ended up working at Cold Spring Harbor myself, and continued my relationship with HighWire from a new perspective. The arXiv preprint server for physics had launched in 1991, and my colleague John Inglis and I had often talked about whether we could do something similar for biology. I remember saying we could put together some of HighWire’s existing components, adapt them in certain ways and build something that would function as a really effective preprint server—and that’s what we did, launching bioRxiv in 2013. It was great then to be able to take that experiment to HighWire meetings to report back on. Initially there was quite a bit of skepticism from the community, who thought there were cultural barriers that meant preprints wouldn’t work well for biology, but 7 years and almost 100,000 papers later it’s still there, and still being served very well by HighWire.
When we launched bioRxiv we made it very explicit that we would not take clinical work, or anything involving patients. But the exponential growth of submissions to bioRxiv demonstrated that there was a demand and a desire for this amongst the biomedical community, and people were beginning to suggest that a similar model be trialed for medicine. A tipping point for me was an OpEd in the New York Times (Don’t Delay News of Medical Breakthroughs, 2015) by Eric Topol (Scripps Research) and Harlan Krumholz (Yale University), who would go on to become a co-founder of medRxiv….”
“Data sharing was a core principle that led to the success of the Human Genome Project 20 years ago. Now scientists are struggling to keep information free….
So in 1996, the HGP [Human Genome Project] researchers got together to lay out what became known as the Bermuda Principles, with all parties agreeing to make the human genome sequences available in public databases, ideally within 24 hours — no delays, no exceptions.
Fast-forward two decades, and the field is bursting with genomic data, thanks to improved technology both for sequencing whole genomes and for genotyping them by sequencing a few million select spots to quickly capture the variation within. These efforts have produced genetic readouts for tens of millions of individuals, and they sit in data repositories around the globe. The principles laid out during the HGP, and later adopted by journals and funding agencies, meant that anyone should be able to access the data created for published genome studies and use them to power new discoveries….
The explosion of data led governments, funding agencies, research institutes and private research consortia to develop their own custom-built databases for handling the complex and sometimes sensitive data sets. And the patchwork of repositories, with various rules for access and no standard data formatting, has led to a “Tower of Babel” situation, says Haussler….”
“Marking the International Day of Persons with Disabilities on 3 December 2020, UNESCO has released a new publication aimed at assisting stakeholders in the preparation of documentary heritage in accessible formats for persons with disabilities.
The publication, Accessible Documentary Heritage, offers a set of guidelines for parties involved in the digitization of heritage documents, including librarians, archivists, museum workers, curators, and other stakeholders, in carefully planning digital platforms and contents with a view to incorporating disability and accessibility aspects….”
Abstract: This interactive panel brings together researchers, practitioners, and educators to explore ways of connecting theory, research, practice, and LIS education around the issue of information format. Despite a growing awareness of the importance of information format to information seeking, discovery, use, and creation, LIS has no sound, theoretically-informed basis for describing or discussing elements of format, with researchers and practitioners alike relying on know-it-when-they-see-it understandings of format types. The Researching Students’ Information Choices project has attempted to address this issue by developing the concept of containers, one element of format, and locating it within a descriptive taxonomy of other format elements based on well-established theories from the field of Rhetorical Genre Studies. This panel will discuss how this concept was developed and implemented in a multi-institutional, IMLS-grant-funded research project and how panelists are currently deploying and planning to deploy this concept in their own practice. Closing the loop in this way creates sustainable concepts that build a stronger field overall.
“ALPSP is delighted to announce that the winners of this year’s ALPSP Awards for Innovation in Publishing are Jus Mundi and WordToEPUB, with the Open Library of Humanities receiving Highly Commended….”