Siems (2023) „Überwachen und Strafen“ – Tracking und Kontrolle des Forschungszyklus (“Surveil and Punish” – Tracking and Control of the Research Life Cycle) | ABI Technik

Siems, R. (2023). „Überwachen und Strafen“ – Tracking und Kontrolle des Forschungszyklus. ABI Technik, 43(2), 86-95. https://doi.org/10.1515/abitech-2023-0016

Abstract

Since the German Research Foundation’s 2021 information paper, data tracking in science has been widely discussed. However, comprehensive concepts for countering this threat to scientific autonomy and integrity, for example by strengthening digital sovereignty, have yet to emerge; nor has the grip of the relevant economic actors on the entire research cycle been adequately addressed. This paper looks at scientific workflows and their undermining by data-based business models.

 

Principles of Diamond Open Access Publishing: a draft proposal – the diamond papers

Introduction

The Action Plan for Diamond Open Access outlines a set of priorities to develop sustainable, community-driven, academic-led and -owned scholarly communication. Its goal is to create a global federation of Diamond Open Access (Diamond OA) journals and platforms around shared principles, guidelines, and quality standards while respecting their cultural, multilingual and disciplinary diversity. It proposes a definition of Diamond OA as a scholarly publication model in which journals and platforms do not charge fees to either authors or readers. Diamond OA is community-driven, academic-led and -owned, and serves a wide variety of generally small-scale, multilingual, and multicultural scholarly communities. 

Still, Diamond OA is often seen as a mere business model for scholarly publishing: no fees for authors or readers. It is better characterized, however, by a shared set of values and principles that go well beyond the business aspect and that distinguish Diamond OA communities from other approaches to scholarly publishing. It is therefore worthwhile to spell out these values and principles, so that they may serve as points of identification for Diamond OA communities.

The principles formulated below are intended as a first draft. They are not cast in stone; they are meant to inspire discussion and to evolve as a living document that will crystallize over the coming months. Many of these principles are not exclusive to Diamond OA communities. Some are borrowed or adapted from the more general 2019 Good Practice Principles for scholarly communication services defined by SPARC and COAR, or go back to the 2016 Vienna Principles. Others have been carefully worked out in more detail by the FOREST Framework for Values-Driven Scholarly Communication, in a self-assessment format for scholarly communities. Additional references can be added in the discussion.

The formulation of these principles has benefited from many conversations over the years with various members of the Diamond community now working together in the Action Plan for Diamond Open Access, cOAlition S, the CRAFT-OA and DIAMAS projects, the Fair Open Access Alliance (FOAA), Linguistics in Open Access (LingOA), the Open Library of Humanities, OPERAS, SciELO, Science Europe, and Redalyc-Amelica. This document attempts to embed these valuable contributions into principles defining the ethos of Diamond OA publishing.

 

The Scholarly Fingerprinting Industry

Abstract: Elsevier, Taylor & Francis, Springer Nature, Wiley, and SAGE: many researchers know that these five giant firms publish most of the world’s scholarship. Fifty years of acquisitions and journal launches have yielded a stunningly profitable oligopoly, built up from academics’ unpaid writing-and-editing labor. Their business is a form of IP rentiership: collections of title-by-title prestige monopolies that, in the case of Nature or The Lancet, underwrite a stable of spinoff journals on the logic of the Hollywood franchise. Less well known is that Elsevier and its peers are layering a second business on top of their legacy publishing operations, fueled by data extraction. They are packaging researcher behavior, gleaned from their digital platforms, into prediction products, which they sell back to universities and other clients. Their raw material is scholars’ citations, abstracts, downloads, and reading habits, repurposed into dashboard services that, for example, track researcher productivity. Elsevier and the other oligopolist firms are fast becoming, in other words, surveillance publishers. And they are using the windfall profits from their existing APC-and-subscription business to finance their moves into predictive analytics.

 

Should you trust Elsevier? | bjoern.brembs.blog

Data broker RELX is represented on Twitter by its Chief Communications Officer, Paul Abrahams. Because RELX subsidiary Elsevier is one of the largest publishers of academic journals, Dr. Abrahams frequently engages with academics on the social media platform. On its official pages, Elsevier tries to emphasize that it really, really can be trusted, honestly […]

 

Survey of US Higher Education Faculty 2023, Payment of Open Access Publication Fees

“The report gives highly detailed information on which faculty are receiving support from academic libraries, academic departments, foundations, and college or university administrative departments for the payment of open access publication fees. Separate data sets track payments by each source, enabling the report’s end users to compare support given by academic libraries to that given by academic or administrative departments. The study also helps define who is making personal payments for publication in open access journals.

This 114-page study is based on data from a survey of 725 higher education faculty randomly chosen from nearly 500 colleges and universities in the USA. Data is broken out by personal variables such as work title, gender, personal income level, academic discipline, and age, as well as by institutional indicators such as college or university type or Carnegie class, enrollment size, and public or private status. Readers can compare support received by faculty in medicine to that received in the social sciences, for example, or by business faculty. Likewise, support for associate professors can be compared to support for full professors, and support for men to that for women.

Just a few of this report’s many findings are that:

15.59% of faculty sampled have had their college library, administration or academic department pay a publication fee for them to enable open access publication of one of their works.
27.7% of faculty who consider themselves political conservatives sympathize with the goals of the open access movement.
Broken out by work title, assistant professors were the most likely to receive a subsidy from an academic library for the payment of an open access publication fee….”

Investigating the dimensions of students’ privacy concern in the collection, use, and sharing of data for learning analytics

Abstract: The datafication of learning has created vast amounts of digital data that may contribute to enhancing teaching and learning. While researchers have successfully used learning analytics, for instance, to improve student retention and learning design, the topic of privacy in learning analytics from students’ perspectives requires further investigation. Specifically, the literature offers mixed results as to whether students are concerned about privacy in learning analytics. Understanding students’ privacy concern, or lack of privacy concern, can contribute to the successful implementation of learning analytics applications in higher education institutions. This paper reports on a study carried out to understand whether students are concerned about the collection, use, and sharing of their data for learning analytics, and what contributes to their perspectives. Students in a laboratory session (n = 111) were shown vignettes describing data use in a university and in an e-commerce company. The aim was to determine students’ concern about their data being collected, used, and shared with third parties, and whether their concern differed between the two contexts. Students’ general privacy concerns and behaviours were also examined and compared to their privacy concern specific to learning analytics. We found that students in the study were more comfortable with the collection, use, and sharing of their data in the university context than in the e-commerce context. Furthermore, these students were more concerned about their data being shared with third parties in the e-commerce context than in the university context. The study findings thus deepen our understanding of what raises students’ privacy concern in the collection, use, and sharing of their data for learning analytics. We discuss the implications of these findings for research on, and the practice of, ethical learning analytics.

Publication and data surveillance in academia | Research Information

“It is becoming increasingly clear that the core functions of higher education are destined to be quantified and that this data will be harvested, curated, and repackaged through a variety of enterprise management platforms. All aspects of the academic lifecycle, such as research production, publication, distribution, impact determination, citation analysis, grant award trends, graduate student research topics, and more, can be sold, analysed, and gamed to an unhealthy degree. By unhealthy, we mean constricted and self-consuming, as the output we develop is directly contingent on the input we receive. Well-meaning tools, such as algorithmically derived research suggestions and citation analysis, create a shrinking and inequitable academic landscape that favours invisibly defined metrics of impact, reinforced through further citation, thereby limiting the scope and scale of research available….

As the shift to open access gains momentum, there is a danger of unintended consequences as enterprise platforms seek to maximise profit while the models shift from under their feet. As Alexander Grossmann and Björn Brembs discuss, the cost creep incurred by libraries reflects this pivot to a model of author-side costs, which are often supported by libraries, thereby moving charging from the back-end subscription model to the front-end pay-to-publish model. It is not surprising or controversial that for-profit enterprise, database, and academic platform vendors seek to turn a profit. We should remain vigilant, however, about academia’s willingness to take the easy and convenient solution without considering the longer-term effects of what they are selling. In a recent industry platform webinar, academic enterprise representatives discussed the “alchemy” of user-derived data and their ability to repackage and sell this data, with consent, to development companies, with their key takeaway being a drive towards increased revenue. More to the point, they had learned the lessons of the tech industry, and more specifically of the social media companies, in understanding that the data we generate can be used to target us, to sell to us, and to use us for further development. They discussed the ways in which the use of this data would become, like social media, intelligent and drive user behaviour – further cinching the knot on the closed loop as algorithmically based suggestions further constrain research and reinforce a status quo enabled by profit motive in the guise of engagement, use, and reuse….”

Book Talk: Data Cartels Tickets, Wed, Nov 30, 2022 at 10:00 AM | Eventbrite

“In our digital world, data is power. Information-hoarding businesses reign supreme, using intimidation, aggression, and force to maintain influence and control. Sarah Lamdan brings us into the unregulated underworld of these “data cartels”, demonstrating how the entities mining, commodifying, and selling our data and informational resources perpetuate social inequalities and threaten the democratic sharing of knowledge….”

‘All your data are belong to us’: the weaponisation of library usage data and what we can do about it | UKSG

By Caroline Ball – Academic Librarian, University of Derby, #ebookSOS campaigner
Twitter: @heroicendeavour, Mastodon: @heroicendeavour@mas.to

and Anthony Sinnott, Access and Procurement Development Manager, University of York; Twitter: @librarianth

What do 850 football players and their performance data have in common with academic libraries and online resources? More than you’d think! The connecting factor is data: how it is collected, how it is used, and for what purposes.

‘Project Red Card’ is demanding compensation for the use of footballers’ performance data by betting companies, video game manufacturers, scouts and others, arguing that players should have more control over how their personal data is collected and, particularly, how it is monetised and commercialised.

Similarly, libraries’ online resources, whether a single e-book or a vast database, produce enormous amounts of data, utilised by librarians to assist us in our vital functions: assessing usage and value, determining demand and relevance.

But are we the only ones using this data generated by our users? What other uses is this data being put to? We know for certain that vendors have access to more data than they provide to us via COUNTER statistics and the like, but we have no way of knowing how much, what types, or what is done with it.

Witness the recent controversy generated by Wiley’s removal of 1,379 e-books from Academic Complete. Publishers like Wiley determine high use by accessing statistics generated by our end-users via the various e-book platforms through which they access the content. This in itself is indicative of our end-user/library data being provided to third parties without our knowledge or consent, which is particularly concerning given that our licences are with vendors, not publishers. We are also not privy to what data-sharing agreements exist between vendors and publishers. Should we allow library usage data to be weaponised against us in this fashion? What recourse do we have to push back against this practice of ‘data extractivism’, to either withhold this data from publishers and vendors or prohibit them from using it for their own commercial gain?
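For readers unfamiliar with what COUNTER reporting actually delivers to libraries, the sketch below shows how a library might harvest its own COUNTER Release 5 Title Master Report over the COUNTER_SUSHI API and total Unique_Item_Requests per title. It is a minimal illustration in Python, assuming the requests library; the base URL, credentials, and date range are placeholders to be replaced from a vendor’s own SUSHI documentation, and while the JSON field names follow the Release 5 specification, individual vendor implementations vary. Anything else a platform logs about individual readers never appears in this aggregated feed, which is exactly the asymmetry described above.

import requests

# Hypothetical SUSHI endpoint and credentials: each vendor publishes its own
# COUNTER_SUSHI base URL, and the customer/requestor IDs come with the licence.
BASE_URL = "https://sushi.example-vendor.com/r5"
PARAMS = {
    "customer_id": "YOUR_CUSTOMER_ID",
    "requestor_id": "YOUR_REQUESTOR_ID",
    "api_key": "YOUR_API_KEY",
    "begin_date": "2023-01",
    "end_date": "2023-12",
}

def unique_item_requests_by_title(base_url, params):
    """Fetch a COUNTER R5 Title Master Report and sum Unique_Item_Requests per title."""
    response = requests.get(f"{base_url}/reports/tr", params=params, timeout=60)
    response.raise_for_status()
    report = response.json()

    totals = {}
    for item in report.get("Report_Items", []):
        title = item.get("Title", "Unknown title")
        for performance in item.get("Performance", []):
            for instance in performance.get("Instance", []):
                if instance.get("Metric_Type") == "Unique_Item_Requests":
                    totals[title] = totals.get(title, 0) + instance.get("Count", 0)
    return totals

if __name__ == "__main__":
    usage = unique_item_requests_by_title(BASE_URL, PARAMS)
    # Print the twenty most-used titles for the reporting period.
    for title, count in sorted(usage.items(), key=lambda kv: kv[1], reverse=True)[:20]:
        print(f"{count:8d}  {title}")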

 

The Quiet Invasion of ‘Big Information’ | WIRED

This story is adapted from Data Cartels: The Companies That Control and Monopolize Our Information, by Sarah Lamdan.

 

When people worry about their data privacy, they usually focus on the Big Five tech companies: Google, Apple, Facebook, Amazon, and Microsoft. Legislators have brought Facebook’s CEO to the Capitol to testify about the ways the company uses personal data. The FTC has sued Google for violating laws meant to protect children’s privacy. Each of the tech companies is followed by a bevy of reporters eager to investigate how it uses technology to surveil us. But when Congress got close to passing data privacy legislation, it wasn’t the Big Five that led the most urgent effort to prevent the law from passing; it was a company called RELX.

You might not be familiar with RELX, but it knows all about you. Reed Elsevier LexisNexis (RELX) is a Frankensteinian amalgam of publishers and data brokers, stitched together into a single information giant. There is one other company that compares to RELX—Thomson Reuters, which is also an amalgamation of hundreds of smaller publishers and data services. Together, the two companies have amassed thousands of academic publications and business profiles, millions of data dossiers containing our personal information, and the entire corpus of US law. These companies are a culmination of the kind of information market consolidation that’s happening across media industries, from music and newspapers to book publishing. However, RELX and Thomson Reuters are uniquely creepy as media companies that don’t just publish content but also sell our personal data.

 

Webinar: Dr. Chris Gilliard aka HyperVisible on Educational Surveillance. 20 Oct 2022, 6pm (EDT)| The Feminist and Accessible Publishing, Communications, + Tech Series @Eventbrite

Dr. Chris Gilliard (@hypervisible) is a leading critic of surveillance technology, digital privacy, and the problematic ways that tech intersects with race and social class. He will talk about the digital forms of surveillance that are coming into schools, colleges, and universities.

Dr. Chris Gilliard is a writer, professor and speaker. His scholarship concentrates on digital privacy and the intersections of race, class, and technology. He is an advocate for critical and equity-focused approaches to tech in education. His work has been featured in The Chronicle of Higher Ed, EDUCAUSE Review, Fast Company, Vice, and Real Life Magazine. He was recently a research fellow with the Technology and Social Change Research Project at Harvard Kennedy School’s Shorenstein Center.

This event is part of the 4th Season of the Feminist and Accessible Publishing and Communications Technologies Speaker and Workshop Series (https://www.feministandaccessiblepublishingandtechnology.com), organized by Dr. Alex Ketchum.

Our series was made possible thanks to our sponsors: SSHRC, the Institute for Gender, Sexuality, and Feminist Studies (IGSF), the DIGS Lab, Milieux, Initiative for Indigenous Futures, MILA, Dean of Arts Grant, ReQEF, and more (see our website!)

There is no fee required to attend this event. We will provide professional captions in English. This event will NOT be recorded and will NOT be made available on our website after the event. However, you can watch other past events at: https://www.feministandaccessiblepublishingandtechnology.com/p/videos.html

 

Navigating Risk in Vendor Data Privacy Practices: An Initial Analysis of Elsevier’s ScienceDirect

“As libraries spend more and more resources licensing platforms, the terms around how vendors treat user data have become complex and difficult to understand. This poses serious concerns given the ever-increasing incentive for vendors to monetize this data, many of them in ways fundamentally at odds with libraries’ commitment to privacy. Protecting user privacy is a challenge for libraries trying to navigate the vague and abstruse vendor contracts and policies. To assist libraries in this challenge, SPARC has partnered with Becky Yoose of LDH Consulting Services to analyze vendor contracts and privacy policies and give libraries a better understanding of the potential risks they pose to user privacy….”

Data Cartels and Surveillance Publishing

“Over the past few years, as the process of conducting research and scholarship has moved more and more online, it has become clear that user surveillance and data extraction have crept into academic infrastructure in multiple ways.

For those committed to preserving academic freedom and knowledge equity, it’s important to interrogate the practices and structures of the companies that are collecting and selling this data, and the impacts of this business model on academic infrastructure – and particularly on already marginalized and underfunded scholars and students. 

To help us understand this landscape and its implications, today we are in conversation with Sarah Lamdan, author of the forthcoming book  Data Cartels: The Companies That Control and Monopolize Our Information. …”

Mündiges Datensubjekt statt Laborratte: Rechtsschutz gegen Wissenschaftstracking (Empowered Data Subjects Instead of Lab Rats: Legal Protection Against Science Tracking) | Jahrbuch Technikphilosophie

by Felix Reda

The debate about science tracking has so far focused mainly on raising awareness of data protection. That is an important first step: only when researchers are aware that their research behaviour is monitored click by click and exploited commercially can they work to put a stop to this practice. But as so often with data protection issues, fatalism threatens to spread if the debate gets stuck at describing the problem.

Far too few universities proactively offer their researchers a privacy-conscious software infrastructure of their own that would enable collaborative scholarly work across institutions. Large parts of the scholarly literature are available only through the portals of the commercial academic publishers, which greet users with confusing cookie banners. Merely getting an overview of what data a corporation like Elsevier has stored about you is a laborious undertaking[1]. In the already stressful routine of everyday research, it is unrealistic to expect individual researchers to protect themselves from tracking by these companies by avoiding their products.