A Coalition for Social Learning Across Content | Hypothesis

“Today we’re announcing a coalition, Social Learning Across Content, of educational content creators, technology platforms, service providers, and stakeholder groups that are coming together in support of cross-platform social learning. Moving forward, this coalition will work together to establish user-friendly, interoperable best practices and solutions to bring social learning to all content….

Coalition members will work together to identify the technical challenges standing in the way of interoperability, and to propose and prototype solutions for those challenges. They’ll also work to ensure that solutions are accessible and remain so as different technologies are brought into contact with different content platforms. A set of technical recommendations that characterize the solution set will be published, including any recommendations for how existing standards like Learning Tools Interoperability (LTI) could be extended if need be….”
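As rough background on the kind of extension point the coalition mentions (this is my own illustration, not a coalition proposal): an LTI 1.3 resource-link launch carries its context as namespaced JWT claims, and a cross-platform annotation layer could plausibly ride along as an additional claim. The claim URIs below are from the real LTI 1.3 specification; the `https://example.org/spec/annotation` claim is a hypothetical extension invented for this sketch.

```python
# Illustrative sketch only. The purl.imsglobal.org claim URIs are real LTI 1.3
# claims; the example.org annotation claim is hypothetical.
LTI_CLAIM = "https://purl.imsglobal.org/spec/lti/claim"

launch_claims = {
    f"{LTI_CLAIM}/message_type": "LtiResourceLinkRequest",
    f"{LTI_CLAIM}/version": "1.3.0",
    f"{LTI_CLAIM}/deployment_id": "deployment-1",
    f"{LTI_CLAIM}/target_link_uri": "https://tool.example.org/launch",
    f"{LTI_CLAIM}/resource_link": {"id": "resource-42"},
    # A social-learning extension could point the tool at a shared
    # annotation layer via an extra, namespaced claim:
    "https://example.org/spec/annotation": {
        "group": "course-101-readers",
        "document": "https://publisher.example.org/chapter-3",
    },
}

def is_resource_link_launch(claims: dict) -> bool:
    """True if the claim set describes an LTI 1.3 resource-link launch."""
    return claims.get(f"{LTI_CLAIM}/message_type") == "LtiResourceLinkRequest"
```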

Crowdsourcing Scholarly Discourse Annotations | 26th International Conference on Intelligent User Interfaces

Abstract:  The number of scholarly publications grows steadily every year, and it becomes harder to find, assess and compare scholarly knowledge effectively. Scholarly knowledge graphs have the potential to address these challenges. However, creating such graphs remains a complex task. We propose a method to crowdsource structured scholarly knowledge from paper authors with a web-based user interface supported by artificial intelligence. The interface enables authors to select key sentences for annotation. It integrates multiple machine learning algorithms to assist authors during the annotation, including class recommendation and key sentence highlighting. We envision that the interface is integrated in paper submission processes, for which we define three main task requirements. We evaluated the interface with a user study in which participants were assigned the task of annotating one of their own articles. With the resulting data, we determined whether the participants were successfully able to perform the task. Furthermore, we evaluated the interface’s usability and the participants’ attitudes towards the interface with a survey. The results suggest that sentence annotation is a feasible task for researchers and that they do not object to annotating their articles during the submission process.
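The abstract mentions key-sentence highlighting as one of the machine-learning assists. A minimal baseline for that idea (my own illustration, not the paper's algorithm) scores each sentence by how rare its words are across the article and surfaces the top-scoring ones:

```python
import math
import re
from collections import Counter

def suggest_key_sentences(sentences: list[str], top_k: int = 2) -> list[str]:
    """Rank sentences by average inverse sentence-frequency of their words.

    A crude stand-in for a learned highlighter: words that occur in few
    sentences are treated as more informative, so sentences dominated by
    such words rank highest.
    """
    tokenized = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    n = len(sentences)
    # In how many sentences does each word occur?
    sent_freq = Counter(word for toks in tokenized for word in set(toks))

    def score(toks: list[str]) -> float:
        if not toks:
            return 0.0
        return sum(math.log(n / sent_freq[w]) for w in toks) / len(toks)

    ranked = sorted(zip(sentences, tokenized),
                    key=lambda pair: score(pair[1]), reverse=True)
    return [sentence for sentence, _ in ranked[:top_k]]
```

In the interface described by the paper this signal would only be a suggestion; the author still selects the sentences to annotate.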


Open Context: Web-based research data publishing

“Open Context reviews, edits, annotates, publishes and archives research data and digital documentation. We publish your data and preserve it with leading digital libraries. We take steps beyond archiving to richly annotate and integrate your analyses, maps and media. This links your data to the wider world and broadens the impact of your ideas….”

ANN: A platform to annotate text with Wikidata IDs | Zenodo

Abstract:  Report of the work done by the Ann team at the eLife Sprint 2020. 

It describes the effort pursued towards a system for universal annotation of biomedical articles using the collaborative knowledge graph of Wikidata.  

The project is currently active at https://github.com/lubianat/ann. 
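ANN's actual pipeline lives in the linked repository; as general background, Wikidata exposes a public `wbsearchentities` endpoint that maps a free-text term to candidate item IDs (Q-identifiers), which is the core lookup an annotation tool like this needs. A sketch of building such a query (the network call is left to the caller):

```python
from urllib.parse import urlencode

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def entity_search_url(term: str, language: str = "en") -> str:
    """Build a wbsearchentities query URL for a free-text term.

    The endpoint returns candidate Wikidata items matching the term;
    mapping annotated text spans to their Q-identifiers is the central
    step in a Wikidata-annotation workflow.
    """
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": language,
        "format": "json",
        "type": "item",
    }
    return f"{WIKIDATA_API}?{urlencode(params)}"

# Usage, e.g.:
#   import json, urllib.request
#   hits = json.load(urllib.request.urlopen(entity_search_url("fission yeast")))["search"]
#   # each hit carries an "id" field holding a Q-identifier
```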

Community curation in PomBase: enabling fission yeast experts to provide detailed, standardized, sharable annotation from research publications | Database | Oxford Academic

Abstract:  Maximizing the impact and value of scientific research requires efficient knowledge distribution, which increasingly depends on the integration of standardized published data into online databases. To make data integration more comprehensive and efficient for fission yeast research, PomBase has pioneered a community curation effort that engages publication authors directly in FAIR-sharing of data representing detailed biological knowledge from hypothesis-driven experiments. Canto, an intuitive online curation tool that enables biologists to describe their detailed functional data using shared ontologies, forms the core of PomBase’s system. With 8 years’ experience, and as the author response rate reaches 50%, we review community curation progress and the insights we have gained from the project. We highlight incentives and nudges we deploy to maximize participation, and summarize project outcomes, which include increased knowledge integration and dissemination as well as the unanticipated added value arising from co-curation by publication authors and professional curators.


Directory of preprint server policies and practices – ASAPbio

“Given the growth of preprint servers and alternative platforms, it is increasingly important to describe their disciplinary scope and compare and contrast policies including governance, licensing, archiving strategies and the nature of any screening checks. These practices are important to both researchers and policymakers.

Here we present searchable information about preprint platforms relevant to life sciences, biomedical, and clinical research….”

Our 10 Millionth Annotation – Hypothesis

“Hypothesis just reached its 10 millionth annotation. Half of those have happened in the last year.

This milestone is the achievement of a community: all the scientists, scholars, journalists, authors, publishers, fact-checkers, technologists and, now more than ever, teachers and students who have used and valued collaborative annotation over the years. Thank you all for reaching this momentous number with us, especially during this challenging time….”

Research Square

“Research Square is a preprint platform that allows you to share your work early, gain feedback and improve your manuscript, and discover emerging science all in one place….

Research Square features all the characteristics of a traditional preprint server, but with some notable differences:

- All preprints are displayed in HTML. The full text is indexed and machine-readable so that it is more discoverable by search engines.
- Authors can demonstrate to the community that they meet established standards in scientific reporting by purchasing assessments in integrity, reproducibility, and statistical rigor. Badge icons are displayed on their article page for assessments they pass.
- Video summaries can be added to the article page to communicate your research to a broader audience.
- Readers can comment on a paper using our custom-built commenting system or the hypothes.is annotation tool.
- Figures are rendered using a lightbox that allows for zooming and downloading. …”
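Annotations made with Hypothesis on a preprint page are also retrievable programmatically through the public Hypothesis `/api/search` endpoint. A hedged sketch (the preprint URL in the usage note is a placeholder, not a real article):

```python
from urllib.parse import urlencode

HYPOTHESIS_API = "https://api.hypothes.is/api/search"

def annotation_search_url(document_uri: str, limit: int = 20) -> str:
    """Build a Hypothesis /api/search query for annotations on one document.

    The response is JSON with a "total" count and a "rows" list of
    annotation records (each carrying fields such as "user" and "text").
    """
    return f"{HYPOTHESIS_API}?{urlencode({'uri': document_uri, 'limit': limit})}"

# Usage (network call left to the caller):
#   import json, urllib.request
#   results = json.load(urllib.request.urlopen(
#       annotation_search_url("https://www.researchsquare.com/article/placeholder/v1")))
#   for row in results["rows"]:
#       print(row["user"], "-", row["text"][:60])
```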


Hypothesis for Instructional Continuity During COVID-19 – Hypothesis

“Over the past weeks, our contacts at schools, colleges, and universities have been writing to us asking about how they can use Hypothesis in response to campus closures and the move to online courses as a result of the COVID-19 crisis. We’d like to help.

Collaborative annotation can help connect students and teachers while they keep their distance to safeguard their health during the current crisis. Reading alongside and interacting with each other using Hypothesis is about as close to a seminar-style experience as they can have online.

To support the role that collaborative annotation can play in facilitating expanded online classes, Hypothesis is waiving all fees to educational institutions for the remainder of 2020, and will evaluate whether to extend this as the current situation develops. Existing partners can request a refund or apply any fees that they have already paid towards future costs….”