“Students, educators and learners of all ages are invited to interact with select items in the Library’s collections with the launch of Speculative Annotation, the latest experiment from LC Labs.
Created by artist and 2021 Innovator in Residence Courtney McClellan, Speculative Annotation is an open-source dynamic web application and public art project. The app presents a unique mini collection of free-to-use items from the Library for students, teachers and learners to annotate through captions, drawings and other types of mark-making. As a special feature for Speculative Annotation users, the app includes a collection of informative, engaging annotations from Library experts and resources on the Library’s website….”
“Today we’re announcing a coalition, Social Learning Across Content, of educational content creators, technology platforms, service providers, and stakeholder groups that are coming together in support of cross-platform social learning. Moving forward, this coalition will work together to establish user-friendly, interoperable best practices and solutions to bring social learning to all content….
Coalition members will work together to identify the technical challenges standing in the way of interoperability, and to propose and prototype solutions for those challenges. They’ll also work to ensure that solutions are accessible and remain so as different technologies are brought into contact with different content platforms. A set of technical recommendations characterizing the solution set will be published, including recommendations for how existing standards like Learning Tools Interoperability (LTI) could be extended if need be….”
Abstract: The number of scholarly publications grows steadily every year, and it is becoming harder to find, assess and compare scholarly knowledge effectively. Scholarly knowledge graphs have the potential to address these challenges. However, creating such graphs remains a complex task. We propose a method to crowdsource structured scholarly knowledge from paper authors with a web-based user interface supported by artificial intelligence. The interface enables authors to select key sentences for annotation. It integrates multiple machine learning algorithms to assist authors during the annotation, including class recommendation and key sentence highlighting. We envision the interface being integrated into paper submission processes, for which we define three main task requirements. We evaluated the interface with a user study in which participants were assigned the task of annotating one of their own articles. With the resulting data, we determined whether the participants were able to perform the task successfully. Furthermore, we evaluated the interface’s usability and the participants’ attitudes towards the interface with a survey. The results suggest that sentence annotation is a feasible task for researchers and that they do not object to annotating their articles during the submission process.
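The abstract above does not specify how key sentence highlighting works. Purely as a toy illustration (not the authors’ algorithm), candidate key sentences can be surfaced with a simple term-frequency heuristic: sentences whose words recur often across the document score higher.

```python
import re
from collections import Counter

def key_sentence_scores(text, top_k=2):
    """Rank sentences by mean document-level frequency of their words.

    A toy stand-in for a learned key-sentence highlighter: common
    stopwords are ignored, and each sentence is scored by the average
    frequency of its remaining words across the whole text.
    """
    stop = {"the", "a", "an", "of", "and", "to", "in", "is", "we", "for"}
    # Split on sentence-final punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop]
    freq = Counter(words)
    scored = []
    for s in sentences:
        toks = [w for w in re.findall(r"[a-z]+", s.lower()) if w not in stop]
        score = sum(freq[w] for w in toks) / max(len(toks), 1)
        scored.append((score, s))
    return [s for _, s in sorted(scored, key=lambda p: -p[0])[:top_k]]
```

In an interface like the one described, such scores would only pre-highlight suggestions; the author still confirms or rejects each sentence.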
“Open Context reviews, edits, annotates, publishes and archives research data and digital documentation. We publish your data and preserve it with leading digital libraries. We take steps beyond archiving to richly annotate and integrate your analyses, maps and media. This links your data to the wider world and broadens the impact of your ideas….”
Abstract: Maximizing the impact and value of scientific research requires efficient knowledge distribution, which increasingly depends on the integration of standardized published data into online databases. To make data integration more comprehensive and efficient for fission yeast research, PomBase has pioneered a community curation effort that engages publication authors directly in FAIR-sharing of data representing detailed biological knowledge from hypothesis-driven experiments. Canto, an intuitive online curation tool that enables biologists to describe their detailed functional data using shared ontologies, forms the core of PomBase’s system. With 8 years’ experience, and as the author response rate reaches 50%, we review community curation progress and the insights we have gained from the project. We highlight incentives and nudges we deploy to maximize participation, and summarize project outcomes, which include increased knowledge integration and dissemination as well as the unanticipated added value arising from co-curation by publication authors and professional curators.
“Given the growth of preprint servers and alternative platforms, it is increasingly important to describe their disciplinary scope and compare and contrast policies including governance, licensing, archiving strategies and the nature of any screening checks. These practices are important to both researchers and policymakers.
Here we present searchable information about preprint platforms relevant to life sciences, biomedical, and clinical research….”
“Hypothesis just reached its 10 millionth annotation. Half of those have happened in the last year.
This milestone is the achievement of a community: all the scientists, scholars, journalists, authors, publishers, fact-checkers, technologists and, now more than ever, teachers and students who have used and valued collaborative annotation over the years. Thank you all for reaching this momentous number with us, especially during this challenging time….”
“Research Square is a preprint platform that allows you to share your work early, gain feedback and improve your manuscript, and discover emerging science all in one place….
Research Square features all the characteristics of a traditional preprint server, but with some notable differences:
All preprints are displayed in HTML. The full text is indexed and machine-readable so that it is more discoverable by search engines.
Authors can demonstrate to the community that they meet established standards in scientific reporting by purchasing assessments in integrity, reproducibility, and statistical rigor. Badge icons are displayed on their article page for assessments they pass.
Video summaries can be added to the article page to communicate your research to a broader audience.
Readers can comment on a paper using our custom-built commenting system or the hypothes.is annotation tool.
Figures are rendered using a lightbox that allows for zooming and downloading. …”
“Over the past weeks, our contacts at schools, colleges, and universities have been writing to us asking about how they can use Hypothesis in response to campus closures and the move to online courses as a result of the COVID-19 crisis. We’d like to help.
Collaborative annotation can help connect students and teachers while they keep their distance to safeguard their health during the current crisis. Reading alongside and interacting with each other using Hypothesis is about as close to a seminar-style experience as they can have online.
To support the role that collaborative annotation can play in facilitating expanded online classes, Hypothesis is waiving all fees to educational institutions for the remainder of 2020, and will evaluate whether to extend this as the current situation develops. Existing partners can request a refund or apply any fees that they have already paid towards future costs….”
“Cold Spring Harbor Laboratory (CSHL) today announced a new pilot project—Transparent Review in Preprints (TRiP)—that enables journals and peer review services to post peer reviews of submitted manuscripts on CSHL’s preprint server bioRxiv.
“The new project is part of broader efforts by bioRxiv to work with other organizations to help the scholarly publishing ecosystem evolve,” said John Inglis, co-founder of bioRxiv at CSHL.
The project is powered by the web annotation tool Hypothesis and will allow participating organizations to post peer reviews in dedicated Hypothesis groups alongside relevant preprints on the bioRxiv website. Authors must opt in with the journal/service in advance. The use of restricted Hypothesis groups allows participating organizations to control the process and ensure that only reviews they approve are displayed. Readers will continue to be able to post their own reactions to individual preprints through bioRxiv’s dedicated comment section.
eLife and the EMBO Press journals, together with Peerage of Science and Review Commons, two journal-independent peer review initiatives, will be the first to participate. Several other groups plan to join the pilot later, including the American Society for Plant Biology and the Public Library of Science….”
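Because TRiP reviews live in ordinary Hypothesis groups, they can be retrieved through the public Hypothesis search API (`GET /api/search`). The sketch below shows the general shape of such a query, assuming a document URI and a group ID; restricted review groups may additionally require an API token, which is not shown here.

```python
import json
import urllib.parse
import urllib.request

API_ROOT = "https://api.hypothes.is/api"

def search_annotations(uri, group="__world__", limit=20):
    """Fetch annotations on a document via the Hypothesis search API.

    `group` defaults to the public group ("__world__"); for a journal's
    restricted review group you would pass its group ID instead, and
    access may require an Authorization header with an API token.
    """
    params = urllib.parse.urlencode({"uri": uri, "group": group, "limit": limit})
    with urllib.request.urlopen(f"{API_ROOT}/search?{params}") as resp:
        return json.load(resp)

def summarize(payload):
    """Reduce a search response to (total count, list of annotation bodies)."""
    return payload.get("total", 0), [row.get("text", "") for row in payload.get("rows", [])]
```

A display layer would call `search_annotations` with a preprint’s URL and the reviewing group’s ID, then use `summarize` to render the review count and texts alongside the preprint.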
Abstract: The digital format opens up new possibilities for interaction with monographic publications. In particular, annotation tools make it possible to broaden the discussion on the content of a book, to suggest new ideas, to report errors or inaccuracies, and to conduct open peer reviews. However, this requires the support of users who might not yet be familiar with the annotation of digital documents. This paper gives concrete examples and recommendations for exploiting the potential of annotation in academic research and teaching. After presenting the annotation tool Hypothesis, the article focuses on its use in the context of HIRMEOS (High Integration of Research Monographs in the European Open Science Infrastructure), a project aimed at improving the Open Access digital monograph. The general approach and aims of a post-peer-review experiment with the annotation tool, as well as its use in didactic activities concerning monographic publications, are presented and proposed as potential best practices for similar annotation activities.
Abstract: This paper describes the use of open Web annotation (OWA) for collaborative learning among online communities. OWA is defined by the open standards, principles, and practices associated with the open Web. Specifically, this case study examines collaborative learning mediated by the OWA technology Hypothesis, a standards-compliant and open-source technology that situates collaboration in texts-as-contexts. Hypothesis OWA supports a repertoire of six collaborative learning practices: Affording multimodal expression, establishing connections across contexts, archiving activity, visualizing expertise and cognition, contributing to open educational resources, and fostering open educational practices. The use of Hypothesis OWA is then described in three online communities associated with scientific research and communication, educator professional development, and Web literacy and fact-checking. The article concludes by advancing three broad questions and related research agendas regarding how OWA as collaborative learning attends to linkages among formal and informal learning environments, the growth of both open educational resources and practices, and the use of open data as learning analytics.
Projects like Hypothesis are extremely difficult to begin, grow and sustain over time. We were fortunate to have had early believers on Kickstarter, and then stalwart supporters over the last 8 years in foundations like Sloan, Mellon, Shuttleworth, Knight, Helmsley and Omidyar. However, this foundation support is still insufficient for the longer-term, larger funding required to bridge to a sustainable future for most open projects, including ours. Foundations tend to support early projects, but that support usually falls off with time. The kind of mezzanine funding that a for-profit technology company might find from venture groups in later stages is simply not available within the ecosystem of non-profit, open source projects.
The core problem is that the true consumers of scholarly infrastructure — namely the researchers, scholars and their institutions and agencies, who form the vast majority of users — have the means to sustain it, but lack the structure to do so. Libraries know of a few platforms that they need and provide them direct support, but there are hundreds of other projects with no visibility at the institutional level, because they’re still early, or because researchers rather than the institutions themselves depend on them directly. Projects like Hypothesis, like any technology infrastructure trying to scale over years to maturity, need ongoing funding until sustainability can be achieved.
What is needed is a coordinating system which can identify, track and assess open infrastructure across diverse categories and constituencies and make recommendations to funders who can pool their resources to sustain it. This coordinating system is exactly the idea behind IOI….”