Data for Good Can’t be a Casualty of Tech Restructuring  • CrisisReady

“Technology companies like Meta, Twitter and Amazon are laying off thousands of employees as part of corporate restructuring in an uncertain global economy. In addition to jobs, many internal programs deemed unnecessary or financially infeasible may be lost. Programs that fall under the rubric of “corporate social responsibility” (CSR) are generally the first casualties of restructuring. CSR efforts include “data for good” programs designed to translate anonymized corporate data into social good and may be seen in the current climate as a way that companies cater to employee values or enable friendlier regulatory environments; in other words, nice-to-haves rather than need-to-haves for the bottom line.  

We believe the platforms built to safely and ethically share corporate data to support public policy are not a luxury that companies should jettison or monetize. The data we produce in our daily lives has become integral to how public decisions are made when planning for public health or disaster response. Our 21st century public data ecosystem is increasingly reliant on novel private data streams that corporations own and currently share only conditionally and, increasingly, for profit….

We contend that the rapid sharing of aggregated and anonymized location data with disaster response and public health agencies should be automatic and free — though conditional on strict privacy protocols and time-limited — during acute emergencies….
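
As a concrete illustration of what "aggregated and anonymized" location data can mean in practice, here is a minimal sketch (not any company's actual pipeline) of one common privacy protocol: binning raw location pings into coarse spatial-temporal cells and suppressing cells with too few observations so that no small group is identifiable. The cell size and suppression threshold are illustrative assumptions.

```python
# Illustrative sketch of aggregation with small-count suppression, a
# typical privacy protocol for "data for good" mobility datasets.
# cell_deg and min_count are assumed parameters, not a published standard.
from collections import Counter

def aggregate_pings(pings, cell_deg=0.1, min_count=10):
    """Bin (lat, lon, hour) pings into coarse cells; drop any cell whose
    count falls below min_count so small groups cannot be singled out."""
    counts = Counter(
        (round(lat / cell_deg) * cell_deg,
         round(lon / cell_deg) * cell_deg,
         hour)
        for lat, lon, hour in pings
    )
    return {cell: n for cell, n in counts.items() if n >= min_count}

# Example: a cell with 12 pings is released; a cell with 3 is suppressed.
pings = [(40.71, -74.00, 9)] * 12 + [(34.05, -118.24, 9)] * 3
released = aggregate_pings(pings, cell_deg=0.1, min_count=10)
```

The suppression threshold is the key design choice: it trades spatial detail for the guarantee that no released count describes only a handful of people.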

While the challenges to realizing the full value of private data for public good are many, there is precedent for a path forward. Two decades ago, the International Charter on Space and Major Disasters was negotiated to facilitate access to satellite data from companies and governments for the sake of responding to major disasters. A similar approach guaranteeing access rights to privately held data for good during emergencies is more important now….”

Mehr Transparenz in der klinischen Forschung: Wie werden die neuen Transparenzvorschriften aus Sicht der pharmazeutischen Industrie bewertet? [More transparency in clinical research: how are the new transparency rules assessed from the perspective of the pharmaceutical industry?] | SpringerLink

[English-language abstract, article in German.]

Abstract:  The year 2014 was a turning point for transparency in clinical research. Two regulatory innovations comprehensively changed the rules in the EU: Regulation (EU) No. 536/2014 on clinical trials of medicinal products for human use (Clinical Trials Regulation – CTR) came into force, and Policy 0070 of the European Medicines Agency (EMA) on the publication of and access to clinical data was published. While the policy has shaped pharmaceutical industry practice since 2015, the requirements of the CTR came into effect at the end of January 2022.

The main innovation of the CTR is public access to the majority of documents and records that are created during the application process as well as during the course and after completion of a clinical trial. The special feature of Policy 0070 is the possibility for EU citizens to inspect the essential parts of a marketing authorisation application, such as the Clinical Study Report.

This contribution to the discussion describes the completely new transparency challenges that the pharmaceutical industry faces as a result of the new requirements. In principle, transparency is to be welcomed, as it serves the EU's goals for the development and availability of medicines and vaccines. However, the protection of the pharmaceutical industry's trade and business secrets could be jeopardised. In the worst case, this could lead to a decline in investment in research and development within the scope of this regulation and to an international shift of clinical trials towards developing or emerging countries. Germany could increasingly lose its leading role in conducting clinical trials in the EU.

FPF Releases “The Playbook: Data Sharing for Research” Report and Infographic

“Today, the Future of Privacy Forum (FPF) published “The Playbook: Data Sharing for Research,” a report on best practices for instituting research data-sharing programs between corporations and research institutions. FPF also developed a summary of recommendations from the full report….

Facilitating data sharing for research purposes between corporate data holders and academia can unlock new scientific insights and drive progress in public health, education, social science, and a myriad of other fields for the betterment of broader society. Academic researchers use this data to consider consumer, commercial, and scientific questions at a scale they cannot reach using conventional research data-gathering techniques alone. This data has also helped researchers answer questions on topics ranging from bias in targeted advertising and the influence of misinformation on election outcomes to early diagnosis of diseases through data collected by fitness and health apps.

The playbook addresses vital steps for data management, sharing, and program execution between companies and researchers. Creating a data-sharing ecosystem that positively advances scientific research requires a better understanding of the established risks, opportunities to address challenges, and the diverse stakeholders involved in data-sharing decisions. This report aims to encourage safe, responsible data-sharing between industries and researchers….”

Not all that shines is Diamond: why Open Access publication favours rich authors, prestigious universities and industry-funded research | A Blog of Trial and Error

by Marcel Hobma

In recent years, it has become increasingly common for researchers to publish their work in Open Access by paying article processing costs to the publisher [1, 2]. Before the digital revolution, academic publishing was mostly subscription-based: university libraries paid publishers at regular intervals for large bundles of journals. Every physical copy of a journal came with its own production and distribution costs, making Open Access an unrealistic pursuit. When academic research was digitalized and the costs of copying and disseminating research dropped dramatically, the Open Access movement gained momentum, and at least four modes of Open Access (OA) publishing joined the old subscription model [3]. Authors can now publish their studies in subscription-based Green OA journals, which allows them to republish their work on large preprint servers such as arXiv and in freely accessible institutional repositories managed by university libraries. A second option is to publish in full Open Access, peer-reviewed journals that rely on author-paid article processing costs to maintain a steady source of income. Diamond OA journals like the Journal of Trial and Error also publish in full Open Access but don't charge authors any costs. Lastly, there is the option to publish in commercial Hybrid journals that combine the subscription model with Open Access publishing.

Article processing costs allow researchers to publish Open Access articles in well-edited and prestigious journals, which is the main reason for authors and their funders to pay them. Open Access is often portrayed as essential to the transparent and cooperative nature of science; it also aims to circumvent the high paywalls raised by commercial publishers that limit access to research, and thereby to facilitate the dissemination of valuable knowledge [4-6]. However, the promises and advantages of the author-paid funding mechanism also come with a downside in the form of publication bias. Not every author or institution may be able or willing to pay article processing costs if they are too high, and this could lead to selective publishing practices that favour certain groups of researchers, institutions and research topics.

[…]

Chief Scientist plan for free research access for all

“The nation’s chief scientist will this year recommend to government a radical departure from the way research is distributed in Australia, proposing a world-first model that shakes up the multi-billion-dollar publishing business so Australian readers don’t pay a cent.

The proposed open access model would give every Australian – not just researchers – access to research without fee, with a new implementation body negotiating a deal with the publishers who have historically kept the work behind paywalls.

The model goes much further than open access schemes in the US and Europe by including existing research libraries and has been designed specifically for Australia’s own challenges.

After exploring the issue for decades, including the last 18 months working on a new national open access strategy, Dr Cathy Foley will recommend the new model to the Albanese government as a way to address key economic and social issues….

Dr Foley has instead opted for a “gold” open access model, where publishers maintain the functional role they play and are paid for it, but must permanently and freely make research literature available online for any Australian to read….

National agreements with publishers would cover both open access publishing costs, also called article processing charges or APCs, for all Australian-led research, and read access for all of Australia to each publisher’s entire catalogue.

In the proposed model, a central body will pool the money usually spent on research access to negotiate a better deal with collective bargaining because even some of Australia’s biggest research institutions pale in comparison to global publishing giants, Dr Foley said….”

A new open-access platform to bring greater oversight of deforestation risks – SPOTT.org | SPOTT.org

“ZSL [Zoological Society of London], as a sub-grantee alongside Global Canopy, will be launching a revolutionary platform in 2022 bringing together the best data available on corporate exposure to, and reporting on, deforestation and other related environmental, social and governance (ESG) issues.

The project aims to provide market-leading data to help financial institutions identify risks and find opportunities for sustainable investments to meet the growing demand for responsible financial products in light of the biodiversity and climate crises.

The database will be underpinned by the data collected through ZSL’s SPOTT assessments, Global Canopy’s Forest 500 assessments and the Stockholm Environment Institute, Global Canopy and Neural Alpha’s Trase Supply Chains and Trase Finance data, and will be aligned with the Accountability Framework Initiative and its guidance.

Supported by a five-year grant from the Norwegian government, the resulting data and metrics will provide a more comprehensive view of company performance on deforestation, conversion and associated human rights risks. The dataset will also provide broader coverage of the most exposed forest risk supply chains (in particular: palm oil, soy, timber, pulp, rubber and cattle products) and geographies where corporate performance data on these topics is currently missing. By mapping and integrating data from aligned initiatives and external datasets, more complete and in-depth coverage of corporate performance data will be available….”

Public use and public funding of science | Nature Human Behaviour

Abstract:  Knowledge of how science is consumed in public domains is essential for understanding the role of science in human society. Here we examine public use and public funding of science by linking tens of millions of scientific publications from all scientific fields to their upstream funding support and downstream public uses across three public domains—government documents, news media and marketplace invention. We find that different public domains draw from various scientific fields in specialized ways, showing diverse patterns of use. Yet, amidst these differences, we find two important forms of alignment. First, we find universal alignment between what the public consumes and what is highly impactful within science. Second, a field’s public funding is strikingly aligned with the field’s collective public use. Overall, public uses of science present a rich landscape of specialized consumption, yet, collectively, science and society interface with remarkable alignment between scientific use, public use and funding.

Early firm engagement, government research funding, and the privatization of public knowledge | SpringerLink

Abstract:  Early firm engagement in the scientific discovery process in public institutions is an important form of science-based innovation. However, early firm engagement may negatively affect the academic value of public papers due to firms’ impulse to privatize public knowledge. In this paper, we crawl all patent and paper text data of the Distinguished Young Scholars of the National Natural Science Foundation of China (NSFC) in the chemical and pharmaceutical field. We use semantic recognition techniques to establish the link between scientific discovery papers and patented technologies to explore the relationship between the quality of public knowledge production, government research funding, and early firm engagement in the science-based innovation process. The empirical results show that, first, there is a relatively smooth inverted U-shaped relationship between government research funding for scholars and the quality of their publications. An initial increase in government research funding positively drives the quality of public knowledge production, but the effect turns negative when research funding is excessive. Second, government research funding for scholars can act as a value signal, triggering the firm’s impulse to privatize high-value scientific discoveries. Hence, early firm engagement moderates the inverted U-shaped relationship such that at low levels of research funding, early firm engagement can improve the quality of public knowledge production, and at high levels of research funding, early firm engagement can further reduce the quality of public knowledge production.

OPEN FUTURE SALON #1: INTRODUCING THE PUBLIC DATA COMMONS. B2G DATA SHARING IN THE PUBLIC INTEREST

“The European Commission’s proposal for the Data Act has introduced a narrow business-to-government (B2G) data sharing mandate, limited only to situations of public emergency and exceptional need. While being a step in the right direction, it fails to deliver a “data sharing for the public good” framework. 

The policy vision for such a framework has been presented in the European strategy for data, and specific recommendations for a robust B2G data sharing model have been made by the Commission’s high-level expert group.

The European Union is uniquely positioned to deliver a data governance framework that ensures broader B2G data sharing in the public interest. In our latest policy brief, Public Data Commons. A public-interest framework for B2G data sharing in the Data Act, we propose such a model, which can serve as a basis for amendments to the proposed Data Act. Our proposal not only extends the scope of B2G data sharing provisions, but also includes the creation of the European Public Data Commons, a body that acts as a recipient and clearinghouse for the data made available.”

PUBLIC DATA COMMONS: A public-interest framework for B2G data sharing in the Data Act

“The Data Act represents a unique opportunity for the European legislator to deliver on the “data sharing for public good” narratives, which have been discussed for over a decade now. To make this happen the framework for B2G data sharing contained in Chapter V of the proposal needs to be strengthened so that it can serve as a robust baseline for sectoral regulations. As such, it will contribute to a European Public Data Commons that can serve as a public interest steward for data sharing and use in support of public interest objectives, such as securing public health and education, combatting the climate crisis and ensuring strong and just public institutions.”

Artificial Intelligence for Public Domain Drug Discovery: Recommendations for Policy Development

“The current drug discovery market is not responding sufficiently to health care needs where it is not adequately lucrative to do so. Unfortunately, there are a number of important yet non-lucrative fields of research in domains including pandemic prevention and antimicrobial resistance, with major current and future costs for society. In these domains, where high-risk public health needs are being met with low R&D investment, government intervention is critical. To maximize the efficiency of the government’s involvement, it is recommended that the government couple its work catalyzing R&D with the creation of a drug development ecosystem that is more conducive to the use of high-impact artificial intelligence (AI) technologies.

The scientific and political communities have been ringing alarm-bells over the threat of bacterial resistance to our current antibiotics arsenal and, more generally, the evolving resistance of microbes to existing drugs. Yet, a combination of technical capacity issues and economic barriers has led to an almost complete halt of R&D into treatments that would otherwise address this threat. When a gap arises between what the market is incentivized to produce and the healthcare needs of society, governments must step in. The COVID-19 pandemic illustrates the importance of bridging that gap to ensure we are protected from future threats that would result in similarly devastating consequences.

Artificial intelligence (AI) capabilities have contributed to watershed moments across a variety of industries already. The transformative power of AI is showing early signs of success in the drug discovery industry as well. Should AI for drug discovery reach its full potential, it offers the ability to discover new categories of effective drugs, enable intelligent, targeted design of novel therapies, vastly improve the speed and cost of running clinical trials, and further our understanding about the basic science underlying drug and disease mechanics.
However, the current drug discovery ecosystem is suboptimal for AI research, and this threatens to limit the positive impact of AI. The field requires a shift towards open data and open science in order to feed the most powerful, data-hungry AI algorithms. This shift will catalyze research in areas of high social impact, such as addressing neglected diseases and developing new antibiotic solutions to incoming drug-resistant threats. Yet, while open science and AI promise successes on producing new compounds, they cannot address the challenges associated with market-failure for certain drug categories. Government interventions to stimulate AI-driven pharmaceutical innovation for these drug categories must therefore target the entire drug development and deployment lifecycle to ensure that the benefits of AI technology, as applied to the pharmaceutical industry, result in strong value added to improve healthcare outcomes for the public….

This document puts forward a set of recommendations that, taken together, task governments with the responsibility to promote: 1. Research and development in fields of drug discovery that are valuable to society and necessary to public health, but for which investments are currently insufficient because of market considerations. 2. Uptake of AI throughout the entire drug discovery and development pipeline. 3. A shift in culture and capabilities towards more open-data among stakeholders in academia and industry when undertaking research on drug discovery and development….”

AI-assisted drug discovery held back by private sector secrecy on datasets | Science|Business

“The discovery of new drugs is being held back because pharmaceutical firms are not sharing their data, limiting the potentially revolutionary impact of artificial intelligence on the field, according to AI experts….

Last year, for example, a team at the Massachusetts Institute of Technology reported discovering a new antibiotic compound using a computer model that can screen more than 100 million compounds in a matter of days.  

But such breakthroughs are being hampered by a lack of data sharing by private companies, stymying efforts to use powerful AI models to improve healthcare, said Yoshua Bengio, an AI pioneer at the University of Montreal and one of the leaders of an OECD-backed investigation into the issue.

“The lack of open datasets is a failure of the principle of profit maximization by individual actors,” he said.

Releasing datasets “hurts their competitiveness, even though it would help the overall market to progress faster to technological solutions,” Bengio said.  …

“The field requires a shift towards open data and open science in order to feed the most powerful, data-hungry AI algorithms,” says Artificial Intelligence for Public Domain Drug Discovery, presented at the annual conference of the Global Partnership on Artificial Intelligence (GPAI), an initiative launched in 2020 under French and Canadian leadership.

In the academic community, data sharing has taken off and is now mandatory under most government-funded grants, said Bengio. Researchers are rewarded through downstream citations if they allow others to use their data.

But the incentives for the private sector are still to keep data closed. Companies need to be encouraged to share their data, “by force of contract and financial rewards for doing the right things”, Bengio said. The GPAI report also calls for government intervention to “strongly encourage” data-sharing….”

Senators unveil bipartisan bill requiring social media giants to open data to researchers | TheHill

“Meta and other social media companies would be required to share their data with outside researchers under a new bill announced by a bipartisan group of senators on Thursday. …

The bill, the Platform Accountability and Transparency Act, would allow independent researchers to submit proposals to the National Science Foundation. If the requests are approved, social media companies would be required to provide the necessary data subject to certain privacy protections. …”

FDA looks on while major U.S. institutions violate medical research rules

“The FDA has issued warnings to only a handful of the companies and institutions with the worst track records of violating a key clinical trial disclosure law, a new report finds.

Out of 51 large US-based companies and institutions that have failed to make five or more clinical trial results public, only three have been contacted by the U.S. drug regulator, and only one has received a final warning, FDA enforcement data show.

Failing to rapidly make clinical trial results public on the American trial registry harms patients because it slows down medical progress, leaves gaps in the medical evidence base, and wastes public funds. …”

Giving drug researchers control of their data

“Drug industry–led efforts, like the Allotrope Foundation, have advanced common terms for data management, Plasterer says. Most recently, the FAIR principles—guidelines for ensuring data in storage are findable, accessible, interoperable, and reusable—have been adopted by drug companies including AstraZeneca and Pfizer….”
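
As a rough illustration of what the FAIR principles ask of a shared dataset, the sketch below annotates a hypothetical metadata record with fields for each principle, plus a minimal completeness check. The field names, placeholder identifier and URL, and the check itself are illustrative assumptions, not a formal standard.

```python
# Hypothetical dataset metadata record annotated along the four FAIR
# principles (findable, accessible, interoperable, reusable).
# All values are placeholders for illustration only.
fair_record = {
    # Findable: persistent identifier plus rich, searchable metadata
    "identifier": "doi:10.0000/example-dataset",   # placeholder DOI
    "title": "Assay results, compound screen",
    "keywords": ["drug discovery", "assay", "screening"],
    # Accessible: retrievable via a standard, open protocol
    "access_url": "https://repository.example.org/datasets/123",
    "protocol": "HTTPS",
    # Interoperable: open formats and shared vocabularies
    "format": "text/csv",
    "vocabulary": "Allotrope Foundation terminology (example)",
    # Reusable: clear licence and provenance
    "license": "CC-BY-4.0",
    "provenance": "Generated by lab X; processed with pipeline Y",
}

def check_fair_minimum(record):
    """Minimal completeness check: one required field per FAIR principle."""
    required = ["identifier", "access_url", "format", "license"]
    return all(record.get(k) for k in required)
```

In practice, repositories encode far richer requirements than this sketch, but the division of labour is the same: each principle maps to concrete metadata that a machine can validate before data leaves storage.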