Postdoctoral researcher on digital commons

“Recruitment within the Horizon Europe project NGI Commons (Open Source and Internet Commons for Europe’s Digital Sovereignty) to ensure strategic alignment and integration of the Next Generation Internet (NGI) efforts within the broader European Digital / Internet Commons (DC/IC) context

The candidate will carry out this scientific research, prepare articles presenting his/her research at conferences and in journals, and contribute to the results of the project with the project team.

Activities

Map active communities in Europe based on interviews
Analyse existing governance models
Propose recommendations for the growth, maintenance, security and sustainability of the digital commons and strategies for the inclusion of the digital commons and the internet on the political agenda
Design policies to strengthen the development of free and open source software and commons contributing to European digital sovereignty and ethical technological development respectful of human rights
Organise and participate in meetings and conferences with communities, members of funding bodies and public digital infrastructure investment programmes, and political, governmental and European institutions….”

What Is A Repository For? – Building the Commons

“If you haven’t heard, in 2024 Humanities Commons will be launching a completely reimagined open-access repository. It’s currently under heavy construction. So we’ve been asking ourselves: Why does the Commons have a repository in the first place? At our heart we are a social network, a hub for scholarly exchange. Most of us don’t think “repository” when we think about social networks like Mastodon or Instagram or Facebook. So what exactly is a repository? And why will the new repository be so vital to the life of the Commons?…

How will the new Commons repository broadcast researchers’ work? Reaching an audience is partly about open access. This is not just a matter of letting visitors view the works on the repository site free-of-charge. It is also about letting other open access services and sites “re-broadcast” works from the Commons collection. So we will offer free access to the Commons repository in the formats that other tools and aggregators can use: a REST API, OAI-PMH streams, and (later on) the COAR Notify protocol. And we will embed data about each work in its repository page so that it is catalogued by services like Google Scholar. This extends the audience for members’ work far beyond the circle of people who visit the Commons….”
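
Since OAI-PMH is an open protocol, any aggregator will be able to pick up works from the new repository once that endpoint is live. As a rough sketch of what harvesting looks like from the consumer side (the base URL below is a placeholder, not a confirmed Commons endpoint):

```python
# Minimal OAI-PMH harvesting sketch. The base URL is a placeholder, not a
# confirmed endpoint for the new Commons repository.
import requests
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.org/oai"  # hypothetical endpoint

def list_records(metadata_prefix="oai_dc"):
    """Fetch one page of records using the standard ListRecords verb."""
    response = requests.get(
        BASE_URL,
        params={"verb": "ListRecords", "metadataPrefix": metadata_prefix},
        timeout=30,
    )
    response.raise_for_status()
    root = ET.fromstring(response.content)
    ns = {
        "oai": "http://www.openarchives.org/OAI/2.0/",
        "dc": "http://purl.org/dc/elements/1.1/",
    }
    for record in root.findall(".//oai:record", ns):
        identifier = record.find(".//oai:identifier", ns)
        title = record.find(".//dc:title", ns)
        yield (
            identifier.text if identifier is not None else None,
            title.text if title is not None else None,
        )

if __name__ == "__main__":
    for oai_id, title in list_records():
        print(oai_id, "-", title)
```

A real harvester would also follow resumption tokens to page through the full collection; this sketch only reads the first response.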

Culture Heritage and Structured Data: How DPLA became the biggest institution to contribute to Structured Data on Commons – Diff

“What would become of Wikipedia and its sister projects without images from museums, libraries, and archives? Pictures from these institutions are able to illustrate a range of different articles, in diverse fields and areas. However, in order to really accomplish that, images should not only be available, but also enriched with data that can make them more findable on the projects. 

And so, for the past few years, the Culture and Heritage team at the Wikimedia Foundation has been involved with Structured Data-related initiatives in order to engage heritage materials on the Wikimedia projects. Our objective, together with the Structured Data Across Wikimedia (SDAW) team, was to support and increase image usage across the projects, as well as to structure Wikimedia to help it reach communities globally.

One of the main projects we worked on together was the initiative with the Digital Public Library of America (DPLA). This institution became one of the biggest Wikimedia Commons contributors, with 3.7 million images available on the project, by not only being the main institution in the United States directly uploading files to the platform, but also because of its structured data activities. Since 2020, DPLA has worked on adding and modeling structured data and engaging in discussions around the topic, precisely to make its files (the files from the 300 institutions that contribute to the DPLA’s Wikimedia pipeline) more findable and used on Commons, on Wikipedia, and elsewhere. Currently, DPLA presents around 15 million edits to 50-100 million structured data on Commons statements….”
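
For readers unfamiliar with the mechanics: each Commons file can carry a MediaInfo entity whose statements use Wikidata properties such as P180 (“depicts”), and these statements are what DPLA has been adding at scale. The sketch below, offered purely as an illustration, reads the statements attached to one file through the public Commons API; the file title is a placeholder rather than a specific DPLA upload.

```python
# Illustrative sketch: read the structured data (MediaInfo) statements
# attached to a Wikimedia Commons file. The file title is a placeholder.
import requests

API = "https://commons.wikimedia.org/w/api.php"
FILE_TITLE = "File:Example.jpg"  # placeholder; any Commons file title works

def get_structured_data(title):
    """Return the structured data statements for one Commons file."""
    # 1. Look up the file's page id; the MediaInfo entity id is "M" + page id.
    pages = requests.get(
        API, params={"action": "query", "titles": title, "format": "json"}, timeout=30
    ).json()["query"]["pages"]
    page_id = next(iter(pages))
    # 2. Fetch the MediaInfo entity and return its statements.
    entities = requests.get(
        API,
        params={"action": "wbgetentities", "ids": f"M{page_id}", "format": "json"},
        timeout=30,
    ).json()["entities"]
    return entities[f"M{page_id}"].get("statements") or {}

if __name__ == "__main__":
    # Each key is a Wikidata property id, e.g. P180 ("depicts").
    for prop, claims in get_structured_data(FILE_TITLE).items():
        print(prop, "->", len(claims), "statement(s)")
```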

AI and the tyranny of the data commons | Opinions | Al Jazeera

“These ideas about the positive power of the commons, which became popular around the same time the internet was coming of age, greatly influenced the idealism behind the sharing economy. We were led to believe that there was no tragedy in this commons, that it was ok to give our data to corporations because data was a non-rival good. We were encouraged to spend as much of our lives as possible in this digital land of plenty, where all benefitted equally.

Unfortunately, this idea, despite its aspirational beauty, has not served us well. This is because while corporations have been publicly persuading us to believe in the data commons and encouraging us to contribute to it, behind closed doors they have been doing everything in their power to privatise and monetise it. That’s where the tyranny comes in….”

Reclaiming the Digital Commons: A Public Data Trust for Training Data

Abstract:  Democratization of AI means not only that people can freely use AI, but also that people can collectively decide how AI is to be used. In particular, collective decision-making power is required to redress the negative externalities from the development of increasingly advanced AI systems, including degradation of the digital commons and unemployment from automation. The rapid pace of AI development and deployment currently leaves little room for this power. Monopolized in the hands of private corporations, the development of the most capable foundation models has proceeded largely without public input. There is currently no implemented mechanism for ensuring that the economic value generated by such models is redistributed to account for their negative externalities. The citizens that have generated the data necessary to train models do not have input on how their data are to be used. In this work, we propose that a public data trust assert control over training data for foundation models. In particular, this trust should scrape the internet as a digital commons, to license to commercial model developers for a percentage cut of revenues from deployment. First, we argue in detail for the existence of such a trust. We also discuss feasibility and potential risks. Second, we detail a number of ways for a data trust to incentivize model developers to use training data only from the trust. We propose a mix of verification mechanisms, potential regulatory action, and positive incentives. We conclude by highlighting other potential benefits of our proposed data trust and connecting our work to ongoing efforts in data and compute governance.

Ten lessons for data sharing with a data commons | Scientific Data

“A data commons is a cloud-based data platform with a governance structure that allows a community to manage, analyze and share its data. Data commons provide a research community with the ability to manage and analyze large datasets using the elastic scalability provided by cloud computing and to share data securely and compliantly, and, in this way, accelerate the pace of research. Over the past decade, a number of data commons have been developed and we discuss some of the lessons learned from this effort.”

ResearchEquals Supporting Memberships

“Supporting memberships are a community-based approach to how ResearchEquals evolves. They come with membership dues (€79.99 per year or €9.99 per month) and together we build a network of people with one common denominator: To make all research work visible.

As a supporting member, you get front row in shaping ResearchEquals. Every member has equal voice regardless of whether you are a professor, junior researcher, citizen scientist, or even an institute. One member, one vote.

With a supporting membership you also get certain rights:

The right to request information
The right to petition for action or to desist action
The right to block third party acquisitions…”

Promoting Commons Presents and Futures Symposium | University of Essex

This one-day symposium will showcase COVER’s research and its impact on developing a more inclusive and egalitarian education and economy. The event will be structured as a series of presentations and discussions, bringing together stakeholders from across disciplines, areas of practice and geographies to discuss how COVER’s work can influence education and economic policy.

[…]

Balázs Bodó: ‘Digital commons are actually reproducing existing power inequalities’ – Open Knowledge Foundation blog

“OKFN: What does the process of chasing and taking down Z-Library mean for the concept of open knowledge?

Balázs Bodó: When I read the news that these two Russian individuals have been detained, I thought, well, history has come to a full circle. I don’t know these people, how old they are, I assume they are in their thirties. But certainly, their parents or their grandparents may have been or could have easily been detained by the Soviet authorities for sharing books that they were not supposed to share. And now, 30 years after the fall of the Berlin Wall, people are again detained for sharing books – for a different reason, but it’s the same threat, ‘You’re gonna lose your freedom if you share knowledge’. …”

A Federated Commons | Building the Commons

by Mike Thicke

Twitter’s recent troubles have catalyzed unprecedented attention on Mastodon as an alternative. In turn, this has introduced many to the Fediverse—a loose collection of services that, like Mastodon, use the ActivityPub protocol to communicate with each other.

At Humanities Commons, we have long considered ActivityPub to be the most promising way to expand from our current single-site structure to a network of associated Commonses. We have taken Mastodon as an inspiration and model for a new, federated Commons network.
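
To make the protocol concrete: ActivityPub servers federate by POSTing JSON-LD activities to one another’s inboxes. The sketch below shows a minimal “Create” activity of the kind a federated Commons node might emit when a member publishes something; every URL in it is invented for illustration and does not describe Humanities Commons’ actual implementation.

```python
# Sketch of a minimal ActivityPub "Create" activity. All URLs are invented
# for illustration and do not reflect Humanities Commons' implementation.
import json
import requests

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://commons.example.org/activities/1",
    "type": "Create",
    "actor": "https://commons.example.org/members/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "id": "https://commons.example.org/posts/1",
        "type": "Note",
        "attributedTo": "https://commons.example.org/members/alice",
        "content": "A new deposit is available in the Commons repository.",
    },
}

def deliver(activity, inbox_url):
    """POST the activity to a remote actor's inbox (federation delivery).
    Production servers also sign the request (HTTP Signatures); omitted here."""
    headers = {"Content-Type": "application/activity+json"}
    return requests.post(inbox_url, data=json.dumps(activity), headers=headers, timeout=30)

if __name__ == "__main__":
    print(json.dumps(activity, indent=2))
    # deliver(activity, "https://remote.example/users/bob/inbox")  # illustration only
```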

I hope to use this blog both to keep users at Humanities Commons informed of our plans and progress toward this goal of a renewed Commons and Commons network, and to have conversations with all of you about our direction, about how we can best serve your needs, and about how you can contribute to our journey.

In this post, I want to describe in general terms how the Commons functions as a pseudo-network now, some of the challenges we’ve experienced with that structure, and how a federated or decentralized Commons might address those problems. In future posts I will go into more detail about how different components of the site—such as profiles, groups, sites, and the repository—might function in a federated Commons, as well as discussions of how we plan to implement all of this.

[…]

Democratic research: Setting up a research commons for a qualitative, comparative, longitudinal interview study during the COVID-19 pandemic – ScienceDirect

Abstract:  The sudden and dramatic advent of the COVID-19 pandemic led to urgent demands for timely, relevant, yet rigorous research. This paper discusses the origin, design, and execution of the [PROJECT NAME] research commons, a large-scale, international, comparative, qualitative research project that sought to respond to the need for knowledge among researchers and policymakers in times of crisis. The form of organization as a research commons is characterized by an underlying solidaristic attitude of its members and its intrinsic organizational features in which research data and knowledge in the study is shared and jointly owned. As such, the project is peer-governed, rooted in (idealist) social values of academia, and aims at providing tools and benefits for its members. In this paper, we discuss challenges and solutions for qualitative studies that seek to operate as research commons.

Communities, Commoning, Open Access and the Humanities: An Interview with Martin Eve – ScienceOpen

Abstract:  Leading open access publishing advocate and pioneer Professor Martin Paul Eve considers several topics in an interview with WPCC special issue editor Andrew Lockett. These include the merits of considering publishing in the context of commons theory and commoning, digital platforms as creative and homogenous spaces, cosmolocalism, the work of intermediaries or boundary organisations and the differing needs of library communities. Eve is also asked to reflect on research culture, the academic prestige economy, the challenges facing the humanities, digital models in trade literature markets and current influences in terms of work in scholarly communications and recent academic literature. Central concerns that arise in the discussion are the importance of values and value for money in an environment shaped by increasing demands for policies determined by crude data monitoring that are less than fully thought through in terms of their impact and their implications for academics and their careers.

Sam Moore of The Radical Open Access Collective | Frontiers of Commoning podcast, with David Bollier

Open access is a term used to describe academic books, journals, and other research that can be freely copied and shared rather than tightly controlled by large commercial publishers as expensive, proprietary products. Over the past 20 years, however, this vision has fallen far short of its original ambitions, as large publishers have developed new regimes to control the circulation of scientific and scholarly knowledge and charge dearly for it. Since 2015, the Radical Open Access Collective has been championing experimental, noncommercial and commons-based alternatives. In this interview, Sam Moore, an organizer of the Collective, takes stock of the state of open access publishing.