“We started running Twitter bots in 2017, when Liberate Science was only a side project. First we launched the PsyArxiv bot. Later, we launched bots for the MetaArxiv (2020) and EdArxiv (2021) preprint servers. Six years in, we are shutting down these Twitter bots. You may have already noticed they have not posted any new preprints since February 13th (previously 9th). Several things motivate us to stop the preprint bots’ operations: the exodus from Twitter overall; the recent announcement that Twitter API access is no longer free; and the fact that the community has taken it upon itself to offer replacement bots on Mastodon. We offered preprint bots for free all these years, but that does not mean they were free to run. We had to run a custom RSS feed service (based on Jeff Spies’ osfpreprints-feed; run on Glitch for $99/year). Automating a bot is free and easy when the volume is relatively low. For PsyArxiv especially, the number of preprints grew so rapidly that we had to upgrade our automation, and costs went up to ~$600 per year (using Zapier). This is also why the 1,500 free post limit proves too uncertain in the long run….”
Category Archives: oa.apis
Announcing ROR institutional identifier support for all Scholastica products
“Including persistent identifiers (a.k.a. PIDs) in article-level metadata is one of the best ways to improve journal archiving and discovery while promoting interoperability across scholarly communication systems. That’s why Scholastica is so focused on supporting the latest industry-standard PIDs — and this month, we have an exciting announcement! We’ve integrated our peer review system, production service, and OA publishing platform with ROR institutional identifiers.
Scholastica now automatically applies ROR IDs to institutions when authors input them into our peer review submission form and when editors add them to any articles they send to Scholastica’s production service or publish via our OA hosting platform….”
Some of my upcoming projects at Crossref | Martin Paul Eve | Professor of Literature, Technology and Publishing
“As I posted a while ago, from January 2023 I will be working at Crossref while retaining my university Professorship. I wanted, here, to outline a few of the projects that I hope to work on once I get started there. I should say upfront: I am afraid there is no time estimate on these and we can’t guarantee to prioritise any particular project. But if there is one that stands out to you, do let me know, as this serves as a useful community gauge….”
Home – Data Commons
“Publicly available data from open sources (census.gov, cdc.gov, data.gov, etc.) are vital resources for students and researchers in a variety of disciplines. Unfortunately, processing these datasets is often tedious and cumbersome. Organizations follow distinctive practices for codifying datasets. Combining data from different sources requires mapping common entities (city, county, etc.) and resolving different types of keys/identifiers. This process is time-consuming, tedious, and repeated over and over. Our goal with Data Commons is to address this problem.
Data Commons synthesizes a single graph from these different data sources. It links references to the same entities (such as cities, counties, organizations, etc.) across different datasets to nodes on the graph, so that users can access data about a particular entity aggregated from different sources without data cleaning or joining. We hope the data contained within Data Commons will be useful to students, researchers, and enthusiasts across different disciplines….
Data Commons can be accessed by anyone via the tools available on datacommons.org. Students, researchers and developers can use the REST, Python and Google Sheets APIs, all of which are free for educational, academic and journalistic research purposes….”
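As an illustration of the REST access mentioned above, here is a minimal sketch that builds a request URL for a statistical time series. The `/stat/series` endpoint and its `place`/`stat_var` parameter names are assumptions based on the Data Commons REST API as documented at the time of writing; verify them against docs.datacommons.org before relying on this.

```python
from urllib.parse import urlencode

# Sketch only: endpoint path and parameter names are assumptions taken
# from the Data Commons REST API docs -- check docs.datacommons.org.
BASE = "https://api.datacommons.org"

def stat_series_url(place_dcid: str, stat_var: str) -> str:
    """Build a URL requesting a statistical time series for one place.

    `place_dcid` is a Data Commons ID such as "geoId/06" (California);
    `stat_var` is a statistical variable such as "Count_Person".
    """
    query = urlencode({"place": place_dcid, "stat_var": stat_var})
    return f"{BASE}/stat/series?{query}"

if __name__ == "__main__":
    # Fetching is left to the caller, e.g. urllib.request.urlopen(url).
    print(stat_series_url("geoId/06", "Count_Person"))
```

The actual fetch is left out deliberately so the URL construction can be inspected on its own.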
August OpenCon Library Community Call on Using the OpenAlex API | August 9th, 2022
“Inspired by the ancient Library of Alexandria, OpenAlex indexes the world of scholarly research, including works, citations, authors, journals, and institutions. OpenAlex data is completely free and open to all via a web interface, API, and database snapshot. Join us to learn how to use the OpenAlex API for your scholcomm research needs. OpenAlex was created by OurResearch, a nonprofit that makes open scholarly infrastructure including Unpaywall (an index of the world’s Open Access research literature) and Unsub (a tool to help librarians eliminate toll-access journal subscriptions). …”
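To give a flavor of the API described above, here is a minimal sketch that builds the URL for fetching a single OpenAlex work by DOI. The endpoint shape and the `mailto` "polite pool" parameter follow the OpenAlex documentation; the DOI and email address are placeholders.

```python
from urllib.parse import quote

OPENALEX = "https://api.openalex.org"

def work_by_doi_url(doi: str, mailto: str) -> str:
    """Build the URL for one work, addressed by its DOI.

    The JSON response includes fields such as `title`,
    `publication_year`, and `open_access.oa_status`.
    """
    # quote() keeps "/" intact, so the DOI stays readable in the path.
    return f"{OPENALEX}/works/https://doi.org/{quote(doi)}?mailto={quote(mailto)}"

if __name__ == "__main__":
    # Fetch with e.g. urllib.request.urlopen(url) and json.load().
    print(work_by_doi_url("10.7717/peerj.4375", "team@example.org"))
```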
New Academic Graph Datasets Released From Semantic Scholar | by Semantic Scholar | May, 2022 | AI2 Blog
“Use of our Academic Graph API has accelerated since our upgrade release in 2021. Many users would like to fetch data for a large number of papers according to their own criteria, but making individual API calls for each paper is slow and expensive. We have encouraged users to use our full-corpus S2AG (Semantic Scholar Academic Graph) dataset, but this has previously not always been possible because of discrepancies between the dataset and the API. Our new datasets give users access to the full range of data exposed in our API for the entirety of our corpus….”
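The point about individual per-paper calls being slow can be seen against the batch endpoint the Graph API offers as a middle ground between single lookups and the full S2AG datasets. The sketch below builds one POST request covering several papers; the `/paper/batch` path and `fields` parameter follow the Semantic Scholar Graph API docs, and the per-request paper limit should be checked there.

```python
import json
from typing import List
from urllib.parse import urlencode
from urllib.request import Request

S2_BASE = "https://api.semanticscholar.org/graph/v1"

def batch_request(paper_ids: List[str], fields: List[str]) -> Request:
    """Build a POST fetching several papers in one round trip.

    IDs may be prefixed forms such as "DOI:10.1093/mind/lix.236.433"
    or raw Semantic Scholar paper IDs.
    """
    url = f"{S2_BASE}/paper/batch?" + urlencode({"fields": ",".join(fields)})
    body = json.dumps({"ids": paper_ids}).encode()
    return Request(url, data=body, headers={"Content-Type": "application/json"})

if __name__ == "__main__":
    # Send with urllib.request.urlopen(req) and parse the JSON list back.
    req = batch_request(["DOI:10.1093/mind/lix.236.433"], ["title", "year"])
```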
New OpenAlex API features! – OurResearch blog
“We’ve got a ton of great API improvements to report! If you’re an API user, there’s a good chance there’s something in here you’re gonna love.
You can now search both titles and abstracts. We’ve also implemented stemming, so a search for “frogs” now automatically gets you results mentioning “frog,” too. Thanks to these changes, searches for works now deliver around 10x more results. This can all be accessed using the new search query parameter.
New entity filters
We’ve added support for tons of new filters, which are documented here. You can now:
get all of a work’s outgoing citations (i.e., its references section) with a single query
search within each work’s raw affiliation data to find an arbitrary string (e.g., a specific department within an organization)
filter on whether or not an entity has a canonical external ID (works: has_doi; authors: has_orcid; etc.) ….”
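The features above combine into a single query string. The following is a minimal sketch, assuming the `search` and `filter` parameter names and the `has_doi`/`cited_by` filter keys as given in the OpenAlex docs; the work ID is the example ID used in their documentation.

```python
from urllib.parse import urlencode

OPENALEX_WORKS = "https://api.openalex.org/works"

def works_url(search=None, **filters):
    """Build a /works query combining full-text search and entity filters."""
    params = {}
    if search:
        params["search"] = search
    if filters:
        # Multiple filters are comma-joined into a single `filter` value.
        params["filter"] = ",".join(f"{k}:{v}" for k, v in filters.items())
    return f"{OPENALEX_WORKS}?{urlencode(params)}"

# Works with a DOI whose text mentions "frogs" (stemming matches "frog" too):
frog_url = works_url(search="frogs", has_doi="true")
# One work's outgoing citations, i.e. its reference list:
refs_url = works_url(cited_by="W2741809807")
```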
Usability and Accessibility of Publicly Available Patient Sa… : Journal of Patient Safety
The aims of the study were to identify publicly available patient safety report databases and to determine whether these databases support safety analyst and data scientist use to identify patterns and trends.
An Internet search was conducted to identify publicly available patient safety databases that contained patient safety reports. Each database was analyzed to identify features that enable patient safety analyst and data scientist use of these databases.
Seven databases (6 hosted by federal agencies, 1 hosted by a nonprofit organization) containing more than 28.3 million safety reports were identified. Some, but not all, databases contained features to support patient safety analyst use: 57.1% provided the ability to sort/compare/filter data, 42.9% provided data visualization, and 85.7% enabled free-text search. None of the databases provided regular updates or monitoring, and only one database suggested solutions to patient safety reports. Analysis of features to support data scientist use showed that only 42.9% provided an application programming interface, most (85.7%) provided batch downloading, all provided documentation about the database, and 71.4% provided a data dictionary. All databases provided open access. Only 28.6% provided a data diagram.
Patient safety databases should be improved to support patient safety analyst use by, at a minimum, allowing for data to be sorted/compared/filtered, providing data visualization, and enabling free-text search. Databases should also enable data scientist use by, at a minimum, providing an application programming interface, batch downloading, and a data dictionary.
CKAN – The open source data management system
“A fully-featured, mature, and 100% open source DMS [data management system].”
Analyzing Institutional Publishing Output-A Short Course – Google Docs
“This short course provides training materials about how to create a set of publication data, gather additional information about the data through an API (Application Programming Interface), clean the data, and analyze the data in various ways. The API that we’ll use is from Unpaywall and helps gather information related to the open access (OA) status of the item. This short course was created for the Scholarly Communication Notebook. If open access is new to you, we recommend checking out Peter Suber’s book Open Access. It’s concise and well written. Although things have changed since it was published in 2012, it’s a great place to start….”
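For readers who want to try the Unpaywall lookup the course builds on, here is a minimal sketch of the request it makes for each DOI. The `/v2/{doi}?email=` shape follows the public Unpaywall API documentation; the DOI and email address are placeholders.

```python
from urllib.parse import quote, urlencode

UNPAYWALL = "https://api.unpaywall.org/v2"

def oa_status_url(doi: str, email: str) -> str:
    """Build the lookup URL for one DOI.

    The JSON response carries OA-status fields such as `is_oa`,
    `oa_status`, and `best_oa_location`.
    """
    # quote() keeps "/" so DOIs like 10.1038/nature12373 stay readable.
    return f"{UNPAYWALL}/{quote(doi)}?" + urlencode({"email": email})

if __name__ == "__main__":
    # Fetch with urllib.request.urlopen(url) and read the JSON body.
    print(oa_status_url("10.1038/nature12373", "me@example.org"))
```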
Major update of CORE search – CORE
“CORE has just released a major update to its search engine, including a sleek new user interface and upgraded search functionality driven by the new CORE API V3.0.
CORE Search is the engine that researchers, librarians, scholars, and others turn to for open access research papers from around the world and for staying up to date on the latest scientific literature….”
Open Access Helper gets CORE API v3 boost – Research
“This time there is a release from our friends at the Open Access Helper. This is a tool that helps everyone discover a legal Open Access version of research outputs around the web.
What is new with this version is the application’s ability to deliver proactive notifications to researchers on their iPad and iPhone whenever they are browsing articles behind a paywall.
We are really excited about this release because it is integrating our brand new CORE API (v3). …”
Using R packages to populate IR
“Many institutions have reported that participation rates of article deposit in their IR are low regardless of their various efforts in outreach and engagement. Even when the deposit is mandated, the participation rate can still be quite low.
Once this hurdle is overcome, IR administrators face another challenge: ensuring that the version submitted by the researcher is the appropriate one. If it is not, administrators need to take additional steps to correspond with the researcher to obtain the appropriate version, increasing their administrative workload.
Therefore, some institutions have taken the proactive initiative to complete the deposit on behalf of their researchers. This is certainly not a small undertaking. However, there are openly available R packages (https://ropensci.org/) that can be used to automate some of the processes. In this page, I will summarize the steps to do that….”
Access the world’s research outputs through the CORE API – Research
“On Thursday 13th January 2022, Petr Knoth, Head of CORE and Matteo Cancellieri, Lead Developer, gave a webinar describing the new CORE APIv3 features. There were 72 attendees. In the first part, we introduced new features in the API, and the second part provided live coding examples followed by answering questions from the audience.
The CORE APIv3 has already been released into production, and we encourage existing and new users of CORE to move to it. At a glance, the new APIv3 offers:
An extended model of the CORE resources to link different versions of a paper.
Support for medium-size dataset collections.
Improved analytical tools.
Easier user management.
A gallery to kick start your journey with the API….”
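As one concrete way in, the sketch below builds an authenticated search request against APIv3. The `/v3/search/works` path and the bearer-token header are assumptions drawn from the CORE API documentation; confirm both, and the key registration process, at api.core.ac.uk before use.

```python
from urllib.parse import urlencode
from urllib.request import Request

CORE_V3 = "https://api.core.ac.uk/v3"

def search_works_request(query: str, api_key: str, limit: int = 10) -> Request:
    """Build an authenticated full-text search over CORE's works."""
    url = f"{CORE_V3}/search/works?" + urlencode({"q": query, "limit": limit})
    return Request(url, headers={"Authorization": f"Bearer {api_key}"})

if __name__ == "__main__":
    # Send with urllib.request.urlopen(req); results arrive as JSON.
    req = search_works_request("open access", "MY-KEY")
```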
A ROR-some update to our API – Crossref
“Earlier this year, Ginny posted an exciting update on Crossref’s progress with adopting ROR, the Research Organization Registry for affiliations, announcing that we’d started the collection of ROR identifiers in our metadata input schema.
The capacity to accept ROR IDs to help reliably identify institutions is really important, but the real value comes from their open availability alongside the other metadata registered with us, such as for publications like journal articles, book chapters, preprints, and for other objects such as grants. So today’s news is that ROR IDs are now connected in Crossref metadata and openly available via our APIs….
Now that this metadata is available, it helps confer the downstream benefits of ROR for different (and interconnected) groups:
It makes it easier for institutions to find and measure their research output by the articles their researchers have published, or perhaps make it easier to track the grants they’ve received.
Funders need to be able to discover and track the research and researchers they have supported.
Academic librarians need to easily find all of the publications associated with their campus.
Journals need to know where authors are affiliated so they can determine eligibility for institutionally sponsored publishing agreements.
Editors can use more accurate information on author and reviewer institutions during the peer review process, which can help avoid potential conflicts of interest….”
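To make the downstream use cases above concrete, here is a hedged sketch that pulls ROR IDs out of one Crossref work record (the `message` object returned by `/works/{doi}`). The nested affiliation `id` structure mirrors Crossref's schema announcement, and the record shown is a made-up example with a placeholder ROR ID; check real API output before depending on the exact JSON shape.

```python
from typing import Any, Dict, List

def ror_ids(work: Dict[str, Any]) -> List[str]:
    """Collect ROR IDs from the author affiliations of one Crossref
    work record (the `message` object returned by /works/{doi})."""
    found: List[str] = []
    for author in work.get("author", []):
        for aff in author.get("affiliation", []):
            # Affiliation identifiers are assumed to be a list of
            # objects carrying "id" and "id-type" keys.
            for ident in aff.get("id", []):
                if ident.get("id-type") == "ROR":
                    found.append(ident["id"])
    return found

# Made-up minimal record mirroring the documented shape
# ("https://ror.org/00example" is a placeholder, not a real ROR ID):
example = {
    "author": [{
        "given": "Ada", "family": "Lovelace",
        "affiliation": [{
            "name": "Example University",
            "id": [{"id": "https://ror.org/00example", "id-type": "ROR",
                    "asserted-by": "publisher"}],
        }],
    }],
}
```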