“Digital objects are inextricably dependent on their context: the infrastructure of people, processes, and technology that care for them. The FAIR Principles are at the heart of the data ecosystem, but they do not specify how digital objects are made FAIR or for how long they should be kept FAIR. The Trustworthy Digital Repository (TDR) requirements provide this perspective by defining long-term digital object preservation expectations. We are all doing something for someone, and to deliver an effective service at scale, we need a sense of the types of users we have and how we can meet their needs, now and in the future.
FAIRsFAIR, SSHOC, and EOSC Nordic are all supporting digital repositories in their journey to achieve TDR status. When sharing experiences, the project teams found that two fundamental TDR concepts are not always easy to understand: preservation and Designated Community. The draft working paper FAIR + Time: Preservation for a Designated Community was prepared in collaboration with the three projects. It seeks to present key concepts and expand on them to specify the standards and assessments required for an interoperable ecosystem of FAIR (findable, accessible, interoperable and reusable) data preserved for the long term in generalist and specialist FAIR-enabling trustworthy digital repositories (TDRs) for a defined designated community of users. It seeks to provide context and define these concepts for audiences familiar with research data and technical data management systems but with less direct experience of digital preservation and trustworthy digital repositories. This is intended to help clarify which organisations are potential candidates to receive CoreTrustSeal TDR status and to identify and support the types of organisations that may not be candidates but play a vital role in the data ecosystem. …”
“The good news was that we had more and more data: little by little, the internet was filling up with repositories, APIs and open databases to work with. The bad news was that, for that very reason, transferring these huge data sets was increasingly cumbersome, strenuous and expensive….
Cohen and Lo began to think about the problem and came to a conclusion that today may seem obvious: the best tool to transfer large files was BitTorrent. Why not develop a solution based on the world’s best-known p2p exchange protocol? Thus was born Academic Torrents….
The main obstacle was not technical but social. In these four years of work, the hardest part has been convincing researchers that a technology as demonized as torrents could have a legitimate scientific use. And once they had convinced the researchers, they faced an even tougher nut to crack: convincing the institutions….”
Abstract: With the passage of time, celluloid film degrades and valuable film history is lost, resulting in loss of cultural history which contributes to the shared sense of community, identity, and place at a local and national level. Despite the growth in digitised services for accessing cultural resources, to date no economic valuation has been performed on digital local history resources which are accessible online. Despite the recent emergence of online portals for digital cultural services in many countries (such as virtual tours of art galleries and digitisation of cultural archives), a shift which has accelerated in response to the Covid-19 pandemic, there remains a major literature gap around the value of digital culture. Failure to account for the value of digital archives risks sub-optimal allocation of resources to accessing and preserving these aspects of local cultural history. In response, we performed the first contingent valuation study to estimate willingness to pay for a free online film archive portal containing historical film footage for localities throughout the United Kingdom. Users were willing to pay an average hypothetical subscription for digital archive film services of £38.52/annum. Non-users in the general population were asked their willingness to pay a hypothetical annual donation to maintain free public access (£4.68/annum on average). The results suggest that positive social value is gained from online access to digital archive film, and from knowing that the cultural heritage continues to be digitally accessible by the public for current and future generations. We outline how this evidence aligns with a theoretical framework of use and non-use value for digital goods and services extending beyond those who currently use the portal, to those introduced to it, and those in the general public who have never directly experienced the online archive service.
We also report what we believe is the first application of Subjective Wellbeing analysis to engagement with a digital cultural service. The advantage of applying methods from economics to value cultural activities in monetary terms is that it makes emerging modes of digital cultural goods and services commensurable with other costs and benefits as applied to cultural policy and investment decisions, putting them on a level footing with physical cultural assets.
“Research systems connect is a fully managed, cloud-based service that joins up your existing institutional research systems (including your CRIS, repository and preservation systems) so you can save time on transferring data and metadata between your systems and free up staff time for other tasks. It also connects to external scholarly communications services, maximising impact with minimal effort….”
“We are pleased to see the U.S. Senate endorse language that strongly supports providing faster access to taxpayer-funded research results with today’s passage of the U.S. Innovation and Competition Act (S. 1260).
Section 2527 of the bill, formerly the Endless Frontier Act, (titled “Basic Research”) includes language originally written by Senator Wyden and supported by Senator Paul that directs federal agencies funding more than $100 million annually in research grants to develop a policy that provides for free online public access to federally-funded research “not later than 12 months after publication in peer-reviewed journals, preferably sooner.”
The bill also provides important guidance that will maximize the impact of federally-funded research by ensuring that final author manuscripts reporting on taxpayer funded research are:
Deposited into federally designated or maintained repositories;
Made available in open and machine readable formats;
Made available under licenses that enable productive reuse and computational analysis; and
Housed in repositories that ensure interoperability and long-term preservation. …”
“On April 5, 2021, the Supreme Court issued its opinion on the long-running litigation between Oracle and Google over the reuse of aspects of Oracle’s Java programming framework in Google’s Android mobile operating system. The majority opinion, written by Justice Breyer and joined by five of his fellow justices (Chief Justice Roberts, and Justices Kagan, Sotomayor, Kavanaugh, and Gorsuch), sided with Google, saying its use was lawful because it was protected by fair use. Justice Thomas wrote a dissent, joined only by Justice Alito, arguing that Google’s use was infringing. The newest Justice, Amy Coney Barrett, did not participate in the arguments or decision of the case as it predated her joining the Court. More background on the case can be found in my earlier blog post for SPN summarizing the oral arguments.
Justice Breyer’s opinion is already a landmark for the reasons I laid out there: it is the first Supreme Court opinion to address fair use in nearly thirty years—the last one was Campbell v. Acuff-Rose in 1994. And it is the first Supreme Court opinion to address copyright’s protection for software—ever. And now we know that the opinion will be a milestone for another reason: it is a confident, erudite treatment of the issue by a Justice who has been thinking about copyright and software for more than half a century. As a law professor, Stephen Breyer earned tenure at Harvard based on his 1970 article, “The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies, and Computer Programs.” The opinion is thus a very happy coincidence: a thorny and consequential issue confronted by a subtle and experienced thinker. The results are quite encouraging for software preservation and for cultural heritage institutions and fair users generally….”
“This Practical Guide provides guidance to ensure the long-term preservation and accessibility of research data, and supports organisations to provide a framework in which researchers can share their output in a sustainable way.
It includes three complementary maturity matrices for funders, performers, and data infrastructures. These allow them to evaluate the current status of their policies and practices, and to identify next steps towards sustainable data sharing and seeking alignment with other organisations in doing so….”
“Major publishers want to censor research-sharing resource Sci-Hub from the internet, but archivists are quickly responding to make that impossible.
More than half of academic publishing is controlled by only five publishers. This position is built on the premise that users should pay for access to scientific research, to compensate publishers for their investment in editing, curating, and publishing it. In reality, research is typically submitted and evaluated by scholars without compensation from the publisher. What this model is actually doing is profiting off of a restriction on article access using burdensome paywalls. One project in particular, Sci-Hub, has threatened to break down this barrier by sharing articles without restriction. As a result, publishers are going to every corner of the map to destroy the project and wipe it from the internet. Continuing the long tradition of internet hacktivism, however, redditors are mobilizing to create an uncensorable back-up of Sci-Hub….”
“Thu, 20 May 2021, 9:24 am. The Sci-Hub science platform, blocked since December 2020, is receiving support from a number of Reddit users. A group of Reddit users is protesting against the FBI’s attempts to pressure Alexandra Elbakyan, creator of the Sci-Hub website, which publishes scientific studies for free. The community is mobilizing around her vision: to create a digital library of scientific articles accessible for free. Sci-Hub, an illegal site that is in theory impossible to access in many regions, offers free access to scientific articles. To do this, the site bypasses the paid-access locks of research publishers. Since its launch on September 5, 2011, more than 85 million articles have been made available for free, while the average cost of a single article would be about 30 dollars. Its creator, Alexandra Elbakyan, a native of Kazakhstan, wanted to make scientific knowledge and insights accessible to others like her who could not access them because of the cost. Used by many students and researchers, the site has also been the target of the publishers of these journals, including the publishing company Elsevier, which since 2015 has been attempting, via lawsuits in the United States, France and India, to put the site out of business with the claim that it infringes their copyrights….”
“Sci-Hub hosts 85 million articles, and the Reddit community at /r/datahoarder wants to make sure they remain free and available to everyone forever by decentralizing the collection, in response to recent legal challenges for the site, which was sued by science publishing giant Elsevier and owes it millions.
“It’s time we sent Elsevier and the USDOJ a clearer message about the fate of Sci-Hub and open science: we are the library, we do not get silenced, we do not shut down our computers, and we are many,” said a post on the /r/datahoarder subreddit. …”
“Sci-Hub itself is currently frozen and has not downloaded any new articles since December 2020. This rescue mission is focused on seeding the article collection in order to prepare for a potential Sci-Hub shutdown….”
“Now, people are trying to rescue the site before it’s wiped off the web for good. A collection of data-hoarding redditors have banded together to personally torrent each of the 85 million articles currently housed within Sci-Hub’s walls. Ultimately, their goal is to make a fully open-source library that anyone can access, but nobody can take down….”
“Research Associate in Archiving and Preserving Open Access Books – Community-led Open Publication Infrastructures for Monographs (COPIM) Project.
1.0 FTE, fixed-term appointment for 12 months ending no later than 31 October 2022.
A full-time Research Associate (1.0 FTE) is required to contribute to the Research England and Arcadia Foundation funded COPIM project (Community-led Open Publication Infrastructures for Monographs), which comprises 10 main partners including universities, libraries, publishers, and infrastructure providers. The post is a fixed-term contract of 12 months at 1.0 FTE.
The Research Associate will lead Loughborough University’s contributions to the COPIM project and collaborate closely with project partners in identifying the metadata and other information required by preservation services as well as by the repository platforms used by libraries and universities; manage the creation of a Toolkit to assist authors, publishers, and librarians in archiving open access books; and build relationships with projects working in similar areas….”
“Archive Team, a self-described “loose collective of rogue archivists, programmers, writers and loudmouths dedicated to saving our digital heritage,” is a volunteer organization that monitors fading or at-risk sites before they’ve vanished completely. When Google announced the end of failed social network Google+, the collective saved 1.56 petabytes of its data in under four weeks.
Much of what Archive Team saves is then stored within the Internet Archive, which anyone can use to digitize whatever they feel is important. But the Wayback Machine uses bots to crawl the web and take snapshots as they go, while the Archive Team is laser focused on preserving endangered sites. It’s the difference between slowly amassing a huge library and trying to save every book from a specific collection that’s about to catch fire. To accomplish this, anyone can donate bandwidth and hard drive space to the “Warrior,” an archiving application that systematically downloads sites the group is worried about. Those downloads are then sent to the Archive Team’s servers before being moved to the safety of the Internet Archive. The Warrior’s current projects include the soon-to-shutter Freewebs, a hosting service that’s housed 55 million webpages since 2001, as well as certain subreddits that have been quarantined, often the first step discussion website Reddit takes before deleting an entire forum. The content of conversations within those communities might help researchers understand how, for example, extremist viewpoints spread online….”
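The quote describes the Warrior's pattern without showing its mechanics: volunteers' machines download pages from an endangered site, store them locally, and hand the captures off to a central collection point. As a rough illustration of that "capture, then hand off" idea, here is a minimal Python sketch; the function names and file-naming scheme are hypothetical, not the Warrior's actual code or protocol.

```python
import hashlib
import time
from pathlib import Path

def snapshot_name(url: str, ts: float) -> str:
    # Derive a stable, filesystem-safe name from the URL plus capture time,
    # so repeated grabs of the same page never collide.
    digest = hashlib.sha256(url.encode()).hexdigest()[:16]
    stamp = time.strftime("%Y%m%d%H%M%S", time.gmtime(ts))
    return f"{digest}-{stamp}.html"

def save_snapshot(url: str, body: bytes, out_dir: Path, ts: float) -> Path:
    # Write one capture to the local staging area; a Warrior-style client
    # would later upload this directory to the project's central servers.
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / snapshot_name(url, ts)
    path.write_bytes(body)
    return path
```

Keeping the page body as an argument (rather than fetching inside the function) separates the capture logic from the network layer, which is also what makes this kind of client easy to retry and to test.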