Why moving to diamond open access will not only save money, but also help to protect privacy – Walled Culture

“First, the routine assignment of copyright by academics to publishers like Elsevier is creating highly profitable businesses. Moreover, those profits allow big companies to become even bigger, notably by buying up smaller companies, to create what is effectively an oligopoly.

Secondly, the extent to which academic publishers control the dissemination of research, and increasingly are embedded in many other aspects of the academic world, means that they have a unique opportunity to gather, collate and exploit huge quantities of personal data.

A move to open access publishing on its own won’t address those problems. Academic publishers have proved adroit at subverting the original open access movement so as to cement their position as knowledge gatekeepers. The only way to stop this excessive profit-taking, and to cut the academic publishers down to size, is to move to diamond open access publishing, discussed on Walled Culture a few months back. An additional benefit is that doing so will help to reduce the constant surveillance and loss of privacy that the current academic publishing model is starting to put at its heart….”

Solving medicine’s data bottleneck: Nightingale Open Science | Nature Medicine

“Open datasets, curated around unsolved medical problems, are vital to the development of computational research in medicine, but remain in short supply. Nightingale Open Science, a non-profit computing platform, was founded to catalyse research in this nascent field….”

Gearing Up for 2023 Part II: Implementing the NIH Data Management and Sharing Policy – NIH Extramural Nexus

“NIH has a long history of developing consent language and, as such, our team worked across the agency – and with you! – to develop a new resource that shares best practices for developing informed consents to facilitate data/biospecimen storage and sharing for future use. It also provides modifiable sample language that investigators and IRBs can use to assist in the clear communication of potential risks and benefits associated with data/biospecimen storage and sharing. In developing this resource, we engaged with key federal partners, as well as scientific societies and associations. Importantly, we also considered the 102 comments from stakeholders in response to an RFI that we issued in 2021.

As for our second resource, we are requesting public comment on protecting the privacy of research participants when data is shared. I think I need to be upfront and acknowledge that we have issued many of these types of requests over the last several months, and NIH understands the effort that folks take to thoughtfully respond. With that said, we think the research community will greatly benefit from this resource and we want to hear your thoughts on whether it hits the mark or needs adjustment….”

Welcome to Hotel Elsevier: you can check-out any time you like … not – Eiko Fried

“Luckily, folks over at Elsevier “take your privacy and trust in [them] very seriously”, so we used the Elsevier Privacy Support Hub to start an “access to personal information” request. Being in the EU, we are legally entitled under the European General Data Protection Regulation (GDPR) to ask Elsevier what data they have on us, and submitting this request was easy and quick.

After a few weeks, we both received responses by email. We had been assigned numbers 0000034 and 0000272 respectively, perhaps implying that relatively few people have made use of this system yet. The emails contained several files with a wide range of our data, in different formats. One of the attached Excel files had over 700,000 cells of data, going back many years, exceeding 5 MB in file size. We want to talk you through a few examples of what Elsevier knows about us….

To start with, of course they have information we have provided them with in our interactions with Elsevier journals: full names, academic affiliations, university e-mail addresses, completed reviews and corresponding journals, times when we declined review requests, and so on.

Apart from this, there was a list of IP addresses. Checking these IP addresses identified one of us in the small city we live in, rather than where our university is located. We also found several personal user IDs, which is likely how Elsevier connects our data across platforms and accounts. We were also surprised to see multiple (correct) private mobile phone numbers and e-mail addresses included….

And there is more. Elsevier tracks which emails you open, the number of links per email clicked, and so on….

We also found our personal address and bank account details, probably because we had received a small payment for serving as a statistical reviewer. These €55 sure came with a privacy cost larger than anticipated.

Data called “Web Traffic via Adobe Analytics” appears to list which websites we visited, when, and from which IP address. “ScienceDirect Usage Data” contains information on when we looked at which papers, and what we did on the corresponding website. Elsevier appears to distinguish between downloading or looking at the full paper and other types of access, such as looking at a particular image (e.g. “ArticleURLrequestPage”, “MiamiImageURLrequestPage”, and “MiamiImageURLreadPDF”), although it’s not entirely clear from the data export. This leads to a general issue that will come up more often in this piece: while Elsevier shared what data they have on us, and while they know what the data mean, it was often unclear to us, navigating the data export, what the data mean. In that sense, the usefulness of the current data export is, at least in part, questionable. In the extreme, it’s a bit like asking Google what they know about you and they send you a file full of special characters that have no meaning to you….”
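
An export like the one described above is essentially a set of spreadsheets of logged events, and a first pass at seeing what was recorded takes only a few lines of Python. The file name, sheet names and "EventType" column below are hypothetical placeholders rather than Elsevier's actual export format; this is a minimal sketch of the approach, assuming the export arrives as a multi-sheet Excel file:

import pandas as pd

# Load every sheet of the (hypothetical) export into a dict of DataFrames.
sheets = pd.read_excel("elsevier_data_export.xlsx", sheet_name=None)

for name, df in sheets.items():
    print(f"Sheet '{name}': {df.shape[0]} rows x {df.shape[1]} columns")
    # If a sheet carries an event label column, tally it to see which
    # event types (page views, PDF reads, email clicks, ...) dominate.
    if "EventType" in df.columns:
        print(df["EventType"].value_counts().head(10))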


Is it time to share qualitative research data?

Abstract:  Policies by the National Institutes of Health and the National Science Foundation and scandals surrounding failures to reproduce the findings of key studies in psychology have generated increased calls for sharing research data. Most of these discussions have focused on quantitative, rather than qualitative, research data. This article examines scientific, ethical, and policy issues surrounding sharing qualitative research data. We consider advantages of sharing data, including enabling verification of findings, promoting new research in an economical manner, supporting research education, and fostering public trust in science. We then examine standard procedures for archiving and sharing data, such as anonymizing data and establishing data use agreements. Finally, we engage a series of concerns with sharing qualitative research data, such as the importance of relationships in interpreting data, the risk of reidentifying participants, issues surrounding consent and data ownership, and the burden of data documentation and depositing on researchers. For each concern, we identify options that enable data sharing or describe conditions under which select data might be withheld from a data repository. We conclude by suggesting that the default assumption should be that qualitative data will be shared unless concerns exist that cannot be addressed through standard data depositing practices such as anonymizing data or through data use agreements.


The importance of transparency and openness in research data to drive patient benefit—Examples from the United Kingdom

“Key points

• Openness in research is discussed in many guises and brings many benefits; there is a need to join up, share good practice and talk a common language to make maximum progress.

• Open publication is a key step to increasing trust and reducing waste in research.

• There is a need to be careful about the language used, and it is crucial that the right safeguards are in place to protect people’s personal data. Personal data should not be ‘open’, and discussing it in this way risks its availability and associated use.

• There is a need to start with trust and involvement of patients and the public to ensure maximum benefit can flow from data.”

Researchfish accused of ‘intimidating’ academics | Research Professional News

Research-tracking system’s online threats to report UK researchers to funders cause uproar

The Researchfish impact-tracking service has been accused of intimidation and bullying after saying it would report academics to their funders for criticising the system on Twitter.

The social media spat was sparked after Christopher Jackson, a geoscience professor at the University of Manchester, confessed on the platform that he had never heard of the database—which uses technology to track research and evidence impact—writing “What’s Researchfish?”

His message prompted dozens of replies from amused academics, many of whom are required to report their research outcomes to the service as part of their funders’ terms and conditions.

One user wrote, jokingly: “The Researchfish is a small, bright yellow fish, which can be placed in someone’s ear in order for them to be able to instantly forget any impact their work has achieved.”

[…]

Das Lesen der Anderen (The Reading of Others) | o-bib. Das offene Bibliotheksjournal

Siems, R. (2022). Das Lesen der Anderen: Die Auswirkungen von User Tracking auf Bibliotheken. O-Bib. Das Offene Bibliotheksjournal / Herausgeber VDB, 9(1), 1–25. https://doi.org/10.5282/o-bib/5797


English abstract (via deepl.com): In recent years, the major science publishers have moved away from being content publishers to becoming data analytics businesses. As platform companies, they achieve high margins and use this capital to buy up alternative offerings emerging from the science community and to expand into other business areas. The aim is to make themselves indispensable in all central processes of science management, so that, as in the information sector, one must speak of a vendor lock-in. To this end, publishers have equipped their platforms with tools for comprehensive user tracking. At the same time, they are trying to bring access authentication under their control in order to ensure personalised access to all users. Some publishers or their parent companies are also becoming entangled with the security industry and (semi-)state actors in opaque data deals that also bring university networks into view. The paper attempts to analyse this development and to formulate its consequences.



OSU, PSU and UO Libraries initiate negotiations with Elsevier | Libraries | Oregon State University

Oregon State University Libraries, Portland State University Library, and the University of Oregon Libraries are entering into contract negotiations with Elsevier for journal access in 2023, and for up to three years beyond that. For the sake of transparency, we want to reach out to our respective campuses to provide you with the goals we hope to achieve with this renewal cycle.

FBI Gains Access to Sci-Hub Founder’s Google Account Data – TorrentFreak

“Sci-Hub founder Alexandra Elbakyan says that following a legal process, the Federal Bureau of Investigation has gained access to data in her Google account. Google itself informed her of the data release this week noting that due to a court order, the company wasn’t allowed to inform her sooner….

In an email to Elbakyan dated March 2, 2022, Google advises that following a legal process issued by the FBI, Google was required to hand over data associated with Elbakyan’s account. Exactly what data was targeted isn’t made clear but according to Google, a court order required the company to keep the request a secret….”


Welcome to Surveillance University, where privacy no longer matters

By Sarah Craig

When Allemai Dagnatchew (SFS ’22) began her final semester of college, the last thing she wanted to worry about was digital privacy. But within the first few days of spring 2022 classes, she found out that one of her professors mandated use of Perusall, a program that allows instructors to see how many students are doing their class readings.

Dagnatchew’s distrust of Perusall mirrors a larger sentiment college students have felt toward proctoring software and its use during the pandemic. The use of online test proctoring and similar programs skyrocketed over the last two years, with students reporting discomfort towards proctoring programs since the beginning of virtual learning.

“[Professors] are excited about these programs, and they don’t think it’s weird at all,” she said. “And that’s what I feel like is odd—that they think this is normal.”

Perusall, for example, gives professors access to the amount of time a student spends on a reading and how many of the assigned pages they’ve viewed. Despite students feeling that this access compromises their privacy, and despite the return of most students to in-person learning, schools are still using proctoring and similar invasive technologies.

The use of virtual learning tools has been subject to the fluctuating pandemic and schools’ virtual status, with the Omicron variant causing many colleges to move online for final exams and the beginning of the spring semester. As COVID-19 continues, students have been increasingly subject to excessive monitoring technologies—whether proctoring exams or scanning files—such as Proctorio, ProctorU, and Perusall.

[…]

Full article: Accepting Free Content during the COVID-19 Pandemic: An Assessment

Abstract:  Deciding what risks are worth taking amidst a global pandemic poses quite specific challenges for Acquisitions librarians. For example, given that virtually all colleges and universities now offer classes electronically, demand for electronic library content has increased sharply. This challenging situation is magnified at smaller campuses, due to their smaller Acquisitions budgets and having to retain substantial print content. In response, vendors are offering free content to those affected and, given the usual limits on library funding, librarians may find such offers almost irresistible. But while there can be advantages to accepting such content, it can be a double-edged sword. In this paper, we examine how crisis situations, such as the current pandemic, affect librarian decision-making, in particular concerning accepting free content from vendors. How do we best navigate these new territories without losing our bearings amidst a pandemic? And how might these decisions and situations affect our patrons? We focus our research on three important issues, with both practical and ethical implications. First, the issue of patron privacy rights. The free content being offered by vendors poses substantial privacy risks for libraries and patrons, because it is not licensed and thus not governed by privacy agreements. Second, we examine the problem of ensuring accessibility for all users and the extent to which accessibility can be guaranteed with non-licensed content. Finally, we look at the likely impact on faculty-librarian relationships when free content will have to be relinquished and libraries cannot afford the same content. Such changes will likely cause tension between faculty and librarians and be especially frustrating for students. While vendors coming to the aid of the libraries during this time is potentially a generous gesture, it also implies pitfalls and negative impacts in its aftermath.


Full article: A Librarian’s Perspective on Sci-Hub’s Impact on Users and the Library

Abstract:  On December 19, 2019, The Washington Post reported that the U.S. Justice Department is investigating the founder and operator of Sci-Hub Alexandra Elbakyan on suspicion of working with Russian intelligence to steal U.S. military secrets from defense contractors. The article further discusses Sci-Hub’s methods for acquiring the login credentials of university students and faculty “to pilfer vast amounts of academic literature.” This has long been public knowledge. But the confirmation of Sci-Hub potentially working with Russian intelligence was major news. Both fronts of the Sci-Hub assault on stealing intellectual property are concerning. Since many academic researchers and their employers routinely receive defense contracts to perform sensitive research, the article helped posit that offering free access to academic research articles is perhaps a Trojan Horse strategy for Sci-Hub. To add to The Washington Post’s report, we sought out individuals at universities with a vantage point on Sci-Hub’s activities to see if there is independent evidence to support the report. We spoke to Dr. Jason Ensor who at the time of this interview was Manager, Engagement Strategy and Scholarly Communication, Library Systems at Western Sydney University Library in Australia. Ensor holds four degrees in related critical thinking fields and is an experienced business professional in software development, data scholarship and print publishing. He is also a distinguished speaker on digital humanities and linked fields, presenting regularly in national and international forums.


EU and US legislation seek to open up digital platform data

“Despite the potential societal benefits of granting independent researchers access to digital platform data, such as promotion of transparency and accountability, online platform companies have few legal obligations to do so and potentially stronger business incentives not to. Without legally binding mechanisms that provide greater clarity on what and how data can be shared with independent researchers in privacy-preserving ways, platforms are unlikely to share the breadth of data necessary for robust scientific inquiry and public oversight (1). Here, we discuss two notable legislative efforts aimed at opening up platform data: the Digital Services Act (DSA), recently approved by the European Parliament (2), and the Platform Accountability and Transparency Act (PATA), recently proposed by several US senators (3). Although the legislation could support researchers’ access to data, both could also fall short in many ways, highlighting the complex challenges in mandating data access for independent research and oversight….

Platforms’ hesitancy to share data with researchers is not wholly unwarranted. A Facebook-approved research partnership with Cambridge Analytica resulted in scandal, a $5 billion fine by the US Federal Trade Commission (FTC) for privacy violations, and new requirements in an FTC consent order to implement comprehensive data privacy and security safeguards (9)….”