Mehr Transparenz in der klinischen Forschung: Wie werden die neuen Transparenzvorschriften aus Sicht der pharmazeutischen Industrie bewertet? | SpringerLink

[English-language abstract, article in German.]

Abstract:  The year 2014 was a turning point for transparency in clinical research. Two regulatory innovations comprehensively changed the rules in the EU: Regulation (EU) No. 536/2014 on clinical trials of medicinal products for human use (Clinical Trials Regulation, CTR) came into force, and Policy 0070 of the European Medicines Agency (EMA) on the publication of and access to clinical data was published. While the policy has shaped pharmaceutical industry practice since 2015, the requirements of the CTR took effect at the end of January 2022.

The main innovation of the CTR is public access to the majority of documents and records that are created during the application process as well as during the course and after completion of a clinical trial. The special feature of Policy 0070 is the possibility for EU citizens to inspect the essential parts of a marketing authorisation application, such as the Clinical Study Report.

This contribution to the discussion describes the entirely new transparency challenges that the pharmaceutical industry faces as a result of the new requirements. In principle, transparency is to be welcomed as a means of achieving the EU’s goals for the development and availability of medicines and vaccines. However, the new requirements jeopardise the protection of the pharmaceutical industry’s trade and business secrets. In the worst case, this could lead to a decline in research and development investment within the scope of this regulation and to a relocation of clinical trials to countries outside the EU, including developing or emerging countries. Germany could gradually lose its leading role in conducting clinical trials in the EU.

Comparison of Clinical Study Results Reported in medRxiv Preprints vs Peer-reviewed Journal Articles | Medical Journals and Publishing | JAMA Network Open | JAMA Network

Abstract:  Importance  Preprints have been widely adopted to enhance the timely dissemination of research across many scientific fields. Concerns remain that early, public access to preliminary medical research has the potential to propagate misleading or faulty research that has been conducted or interpreted in error.

Objective  To evaluate the concordance among study characteristics, results, and interpretations described in preprints of clinical studies posted to medRxiv that are subsequently published in peer-reviewed journals (preprint-journal article pairs).

Design, Setting, and Participants  This cross-sectional study assessed all preprints describing clinical studies that were initially posted to medRxiv in September 2020 and subsequently published in a peer-reviewed journal as of September 15, 2022.

Main Outcomes and Measures  For preprint-journal article pairs describing clinical trials, observational studies, and meta-analyses that measured health-related outcomes, the sample size, primary end points, corresponding results, and overarching conclusions were abstracted and compared. Sample size and results from primary end points were considered concordant if they had exact numerical equivalence.

Results  Among 1399 preprints first posted on medRxiv in September 2020, a total of 1077 (77.0%) had been published as of September 15, 2022, a median of 6 months (IQR, 3-8 months) after preprint posting. Of the 547 preprint-journal article pairs describing clinical trials, observational studies, or meta-analyses, 293 (53.6%) were related to COVID-19. Of the 535 pairs reporting sample sizes in both sources, 462 (86.4%) were concordant; 43 (58.9%) of the 73 pairs with discordant sample sizes had larger samples in the journal publication. There were 534 pairs (97.6%) with concordant and 13 pairs (2.4%) with discordant primary end points. Of the 535 pairs with numerical results for the primary end points, 434 (81.1%) had concordant primary end point results; 66 of the 101 discordant pairs (65.3%) had effect estimates that were in the same direction and were statistically consistent. Overall, 526 pairs (96.2%) had concordant study interpretations, including 82 of the 101 pairs (81.2%) with discordant primary end point results.

Conclusions and Relevance  Most clinical studies posted as preprints on medRxiv and subsequently published in peer-reviewed journals had concordant study characteristics, results, and final interpretations. With more than three-fourths of preprints published in journals within 24 months, these results may suggest that many preprints report findings that are consistent with the final peer-reviewed publications.

Clinical Trial Data-sharing Policies Among Journals, Funding Agencies, Foundations, and Other Professional Organizations: A Scoping Review – Journal of Clinical Epidemiology

Abstract:  Objectives

To identify the similarities and differences in data-sharing policies for clinical trial data endorsed by biomedical journals, funding agencies, and other professional organizations, and to determine the beliefs and opinions regarding clinical trial data-sharing policies discussed in articles published in biomedical journals.

Study Design

Two searches were conducted. The first was a bibliographic search for published articles presenting beliefs, opinions, similarities, and differences regarding policies governing the sharing of clinical trial data. The second analyzed the gray literature (non-peer-reviewed publications) to identify important data-sharing policies of selected biomedical journals, foundations, funding agencies, and other professional organizations.

Results

A total of 471 articles were included after database search and screening: 45 from the bibliographic search and 426 from the gray literature search. A total of 424 data-sharing policies were included. Fourteen of the 45 published articles from the bibliographic search (31.1%) discussed only advantages specific to data-sharing policies, 27 (60%) discussed both advantages and disadvantages, and 4 (8.9%) discussed only disadvantages specific to data-sharing policies. A total of 216 of 270 journals (80%) specified a data-sharing policy provided by the journal itself. One hundred industry data-sharing policies were included, of which 32 (32%) referenced a data-sharing policy on their website. One hundred thirty-six of 327 organizations (42%) specified a data-sharing policy.

Conclusion

We found many similarities in the advantages attributed to data-sharing, and fewer disadvantages were discussed in the literature. Additionally, we identified a wide range of commonalities and differences in the data-sharing policies endorsed by biomedical journals, funding agencies, and other professional organizations, such as the lack of standardization between policies and inadequately addressed details regarding the accessibility of research data. Our study may not include information on all data-sharing policies, and our data are limited to the entities’ descriptions of each policy.

Left in the dark: the importance of publicly available clinical trial protocols – Braat – 2022 – Medical Journal of Australia – Wiley Online Library

“Prospective registration of a randomised controlled trial (RCT) based on a protocol with formal ethics approval is a benchmark for transparent medical research. The reporting of the primary results of the study should correspond to the design, analysis, and reporting specified in the protocol and trial registration. However, modifications to various aspects of the trial are often made after registration, ranging from administrative updates to substantial protocol amendments. To track the history of revisions, the protocol and registry entry should be updated, and the documentation trail should support an independent appraisal of whether any biases have been introduced that could affect interpretation of trial results.

In this issue of the MJA, Coskinas and colleagues report their investigation of changes to 181 phase 3 RCTs registered with the Australian New Zealand Clinical Trials Registry (ANZCTR) during 1 September 2007 – 31 December 2013.1 The authors compared protocol documents (including ANZCTR registration information) with subsequent journal publications for any changes to the primary outcome, treatment comparisons, analysis set definition, eligibility criteria, sample size, or primary analysis method. They found that protocols were available for only 124 trials (69%); it could be determined that no major changes had been made to eleven of these trials (9%), while 78 had definitely been modified (63%). By comparing publications with trial registration information, it was found that no changes were made to five of the 57 trials without available protocols (9%), and it could not be determined whether changes had been made to a further ten (18%)….”

Frontiers | Applications for open access normalized synthesis in metastatic prostate cancer trials

Abstract:  Recent metastatic castration-resistant prostate cancer (mCRPC) clinical trials have integrated homologous recombination and DNA repair deficiency (HRD/DRD) biomarkers into eligibility criteria and secondary objectives. These trials led to the approval of some PARP inhibitors for mCRPC with HRD/DRD indications. Unfortunately, biomarker-trial outcome data is only discovered by reviewing publications, a process that is error-prone, time-consuming, and laborious. While prostate cancer researchers have written systematic evidence reviews (SERs) on this topic, given the time involved from the last search to publication, an SER is often outdated even before publication. The difficulty in reusing previous review data has resulted in multiple reviews of the same trials. Thus, it will be useful to create a normalized evidence base from recently published/presented biomarker-trial outcome data that one can quickly update. We present a new approach to semi-automating normalized, open-access data tables from published clinical trials of metastatic prostate cancer using a data curation and SER platform. Clinicaltrials.gov and Pubmed.gov were used to collect mCRPC clinical trial publications with HRD/DRD biomarkers. We extracted data from 13 publications covering ten trials that started before 22nd Apr 2021. We extracted 585 hazard ratios, response rates, duration metrics, and 543 adverse events. Across 334 patients, we also extracted 8,180 patient-level survival and biomarker values. Data tables were populated with survival metrics, raw patient data, eligibility criteria, adverse events, and timelines. A repeated strong association between HRD and improved PARP inhibitor response was observed. Several use cases for the extracted data are demonstrated via analyses of trial methods, comparison of treatment hazard ratios, and association of treatments with adverse events. 
Machine learning models are also built on combined and normalized patient data to demonstrate automated discovery of therapy/biomarker relationships. Overall, we demonstrate the value of systematically extracted and normalized data. We have also made our code open-source with simple instructions on updating the analyses as new data becomes available, which anyone can use even with limited programming knowledge. Finally, while we present a novel method of SER for mCRPC trials, one can also implement such semi-automated methods in other clinical trial domains to advance precision medicine.


Data-sharing and re-analysis for main studies assessed by the European Medicines Agency—a cross-sectional study on European Public Assessment Reports | BMC Medicine | Full Text

Abstract:  Background

Transparency and reproducibility are expected to be normative practices in clinical trials used for decision-making on marketing authorisations for new medicines. This registered report introduces a cross-sectional study aiming to assess inferential reproducibility for main trials assessed by the European Medicines Agency.

Methods

Two researchers independently identified all studies on new medicines, biosimilars and orphan medicines given approval by the European Commission between January 2017 and December 2019, categorised as ‘main studies’ in the European Public Assessment Reports (EPARs). Sixty-two of these studies were randomly sampled. One researcher retrieved the individual patient data (IPD) for these studies and prepared a dossier for each study, containing the IPD, the protocol and information on the conduct of the study. A second researcher who had no access to study reports used the dossier to run an independent re-analysis of each trial. All results of these re-analyses were reported in terms of each study’s conclusions, p-values, effect sizes and changes from the initial protocol. A team of two researchers not involved in the re-analysis compared results of the re-analyses with published results of the trial.

Results

Two hundred ninety-two main studies in 173 EPARs were identified. Among the 62 studies randomly sampled, we received IPD for 10 trials. The median number of days between data request and data receipt was 253 [interquartile range 182–469]. For these ten trials, we identified 23 distinct primary outcomes for which the conclusions were reproduced in all re-analyses. Therefore, 10/62 trials (16% [95% confidence interval 8% to 28%]) were reproduced, as the 52 studies without available data were considered non-reproducible. There was no change from the original study protocol regarding the primary outcome in any of these ten studies. Spin was observed in the report of one study.

Conclusions

Despite their results supporting decisions that affect millions of people’s health across the European Union, most main studies used in EPARs lack transparency and their results are not reproducible for external researchers. Re-analyses of the few trials with available data showed very good inferential reproducibility.

Reporting and transparent research practices in sports medicine and orthopaedic clinical trials: a meta-research study | BMJ Open

Abstract:  Objectives Transparent reporting of clinical trials is essential to assess the risk of bias and translate research findings into clinical practice. While existing studies have shown that deficiencies are common, detailed empirical and field-specific data are scarce. Therefore, this study aimed to examine current clinical trial reporting and transparent research practices in sports medicine and orthopaedics.

Setting Exploratory meta-research study on reporting quality and transparent research practices in orthopaedics and sports medicine clinical trials.

Participants The sample included clinical trials published in the top 25% of sports medicine and orthopaedics journals over 9 months.

Primary and secondary outcome measures Two independent reviewers assessed pre-registration, open data, and criteria related to scientific rigour, such as randomisation, blinding, and sample size calculations, as well as the study sample and data analysis.

Results The sample included 163 clinical trials from 27 journals. While the majority of trials mentioned rigour criteria, essential details were often missing. Sixty per cent (95% confidence interval (CI) 53% to 68%) of trials reported sample size calculations, but only 32% (95% CI 25% to 39%) justified the expected effect size. Few trials indicated the blinding status of all main stakeholders (4%; 95% CI 1% to 7%). Only 18% (95% CI 12% to 24%) included information on randomisation type, method and concealed allocation. Most trials reported participants’ sex/gender (95%; 95% CI 92% to 98%) and information on inclusion and exclusion criteria (78%; 95% CI 72% to 84%). Only 20% (95% CI 14% to 26%) of trials were pre-registered. No trials deposited data in open repositories.

Conclusions These results will aid the sports medicine and orthopaedics community in developing tailored interventions to improve reporting. While authors typically mention blinding, randomisation and other factors, essential details are often missing. Greater acceptance of open science practices, like pre-registration and open data, is needed. As these practices have been widely encouraged, we discuss systemic interventions that may improve clinical trial reporting.

Clinical trial results for $3.2 billion Covid drug are missing in action

“The results of most clinical trials of the Covid drug molnupiravir (Lagevrio) have not been made public and remain completely unknown, a new study has found.

The drug is currently being administered to Covid patients in the United States, the UK, and India. The World Health Organisation has issued a “conditional recommendation” for its use in some patient groups. Global sales so far stand at $3.2 billion….”

Evaluation of publication bias for 12 clinical trials of molnupiravir to treat SARS-CoV-2 infection in 13,694 patients | Research Square

Abstract:  Introduction:

During the COVID-19 pandemic, Merck Sharp and Dohme (MSD) acquired the global licensing rights for molnupiravir. MSD allowed Indian manufacturers to produce the drug under voluntary license. Indian companies conducted local clinical trials to evaluate the efficacy and safety of molnupiravir.

Methods

Searches of the Clinical Trials Registry-India (CTRI) were conducted to find registered trials of molnupiravir in India. Subsequent investigations were performed to assess which clinical trials had been presented or published.

Results

According to the CTRI, 12 randomised trials of molnupiravir were conducted in India, in 13,694 patients, starting in late May 2021. By July 2022, none of the 12 trials had been published; one had been presented at a medical conference, and two had been announced in press releases suggesting failure of treatment. Results from three trials were shared with the World Health Organisation. One of these three trials had many unexplained results, with effects of treatment significantly different from those in the MSD MOVE-OUT trial in a similar population.

Discussion

The lack of results runs counter to established practices and leaves a situation where approximately 90% of the global data on molnupiravir has not been published in any form. Access to patient-level databases is required to investigate risks of bias or medical fraud.

EU Clinical Trials Register – Update

“Following the issuing of the Joint Letter by the European Commission, EMA and HMA, National Competent Authorities and European Medicines Agency have sent reminders to sponsors who were not compliant with the European Commission guideline on results posting. Thanks to these reminders, the percentage of posted results substantially increased. However, for some trials the reminders were not successful: detailed lists of these trials can be found here. …”

Clinical Trial Registry Errors Undermine Transparency | The Scientist Magazine®

“Confusion about terminology on the world’s largest clinical trials registry may be delaying the release of drug trial results and undermining rules designed to promote transparency, an investigation by The Scientist has found. 

Key study dates and other information are entered into the ClinicalTrials.gov database by trial researchers or sponsors, and are used by US science and regulatory agencies to determine legal deadlines by which results must be reported. The rules are supposed to ensure timely public access to findings about a potential therapy’s harms and benefits, as well as provide the scientific community with an up-to-date picture of the status of clinical research.

But neither the agencies nor staff overseeing the database routinely monitor individual trial records for veracity, instead relying on the person in charge of a given record to correctly declare information such as when a study ends and how many people were enrolled. …”

Adoption of World Health Organization Best Practices in Clinical Trial Transparency Among European Medical Research Funder Policies | Global Health | JAMA Network Open | JAMA Network

Abstract:  Importance  Research funders can reduce research waste and publication bias by requiring their grantees to register and report clinical trials.

Objective  To determine the extent to which 21 major European research funders’ efforts to reduce research waste and publication bias in clinical trials meet World Health Organization (WHO) best practice benchmarks and to investigate areas for improvement.

Design, Setting, and Participants  This cross-sectional study was based on 2 to 3 independent assessments of each funder’s publicly available documentation and validation of results with funders during 2021. Included funders were the 21 largest nonmultilateral public and philanthropic medical research funders in Europe, with a combined budget of more than US $22 billion.

Exposures  Scoring of funders using an 11-item assessment tool based on WHO best practice benchmarks, grouped into 4 broad categories: trial registries, academic publication, monitoring, and sanctions. Funder references to reporting standards were captured.

Main Outcomes and Measures  The primary outcome was funder adoption or nonadoption of 11 policy and monitoring measures to reduce research waste and publication bias as set out by WHO best practices. The secondary outcomes were whether and how funder policies referred to reporting standards. Outcomes were preregistered after a pilot phase that used the same outcome measures.

Results  Among 21 of the largest nonmultilateral public and philanthropic funders in Europe, some best practices were more widely adopted than others, with 14 funders (66.7%) mandating prospective trial registration and 6 funders (28.6%) requiring that trial results be made public on trial registries within 12 months of trial completion. Less than half of funders actively monitored whether trials were registered (9 funders [42.9%]) or whether results were made public (8 funders [38.1%]). Funders implemented a mean of 4 of 11 best practices in clinical trial transparency (36.4%) set out by WHO. The extent to which funders adopted WHO best practice items varied widely, ranging from 0 practices for the French Centre National de la Recherche Scientifique and the ministries of health of Germany and Italy to 10 practices (90.9%) for the UK National Institute of Health Research. Overall, 9 funders referred to reporting standards in their policies.

Conclusions and Relevance  This study found that many European medical research funder policy and monitoring measures fell short of WHO best practices. These findings suggest that funders worldwide may need to identify and address gaps in policies and processes.

Medical research funders across Europe tighten rules on clinical trial reporting

“Eight of the 21 largest public and philanthropic medical research funders in Europe are stepping up their efforts to improve clinical reporting, following an assessment that found widespread gaps in existing research waste safeguards.

At present, many academic clinical trials in Europe fail to make their results public, wasting taxpayers’ money and leaving large gaps in the medical evidence base.

The public institutions that hand out money to medical researchers can prevent such waste by putting into place eleven safeguards recommended by the World Health Organisation. …”

Finding My Way from clinical trial to open access dissemination: comparison of uptake, adherence, and psychosocial outcomes of an online program for cancer-related distress | SpringerLink

Abstract:  Few digital psycho-oncology programs have been adopted into routine practice; how these programs are used after trial completion remains unexplored. To address this, the present study transitioned our evidence-based 6-module CBT-based program, Finding My Way, into open access (OA) after completion of the RCT, and compared uptake, usage, and psychosocial outcomes to the earlier RCT.

Methods

Recruitment was passive, via promotion through (1) media and social media releases, (2) public lectures, (3) radio interviews and podcasts, and (4) clinician-initiated referral. Measures included number of enrolled users, number of modules completed, and pre- and optional post-measures of distress and quality of life (QOL).

Results

Uptake was lower in OA (n = 120; 63% of RCT). Usage was markedly lower: 1.5 modules were completed on average (vs 3.7 in RCT), and only 13% completed a ‘therapeutic dose’ of 4+ modules (vs. 50% in RCT). Research attrition was high; n = 13 completed post-measures. OA users were more sociodemographically and clinically diverse than RCT users, had higher baseline distress (OA Mpre = 36.7, SD = 26.5; RCT Mpre = 26.5, SD = 21.7), and reported larger pre-post reductions than their RCT counterparts (OA Mpost = 23.9, SD = 20.7; RCT Mpost = 21.2, SD = 21.2). Moderate improvements in mental QOL occurred during OA (Mpre = 37.3, SD = 12.6; Mpost = 44.5, SD = 12.1), broadly replicating RCT findings.

Conclusion

Findings that OA users were more medically and sociodemographically diverse and distressed at baseline than their RCT counterparts, and — despite having lower usage of the program — achieved larger changes from baseline to post-program, will help to shape future intervention design, tailoring, and dissemination.