Opening the Window of Discourse for Citizen Science

COVID has democratised data science, and the public increasingly expect open data, research, and interpretation in more aspects of their lives. Who will be the ones to provide this knowledge for citizens? A proposed community publication, The Citizen Science Guide for Research Libraries by the LIBER Citizen Science Working Group, looks to explore these questions – putting forward that research…


Open Science Podcasts: 7 + 3 Tips for Your Ears

by Claudia Sittner

Podcasts have been booming for a while, but during the pandemic the format has gained fresh appeal for many. And since this blog is all about Open Science and infrastructure service providers, we set out to explore the best podcasts on the topic.

So here are our podcast tips for anyone who is interested in Open Science or would like to take a closer look at the topic. At this point, a big thank you to our followers on Twitter, whose tips we have included in the following collection.

In it, we present 7 Open Science podcasts that are still being produced and 3 that have unfortunately been discontinued but are still interesting for Open Science beginners. Have fun listening!

  1. Open Science Radio
    This podcast deals with the topic of Open Science in its many facets – from Open Access to Citizen Science and from Open Data to public science and Open Education. The podcast aims to create a basic understanding, but above all to inform about current developments.

    Hosts: Matthias Fromm, Konrad Förstner
    Since: 2013
    Language: German, some in English

  2. ORION Open Science Podcast
    From Data Sharing to Citizen Science and from Peer Review to professional development, the episodes of the ORION Open Science Podcast explore the good, the bad, and the ugly of the current scientific system – and what Open Science practices can do to improve the way we do science.

    Given the momentum of the Open Science movement it makes sense that researchers of all levels want to understand the issues and the opportunities behind it. The principles of Open Science are about accessibility and collaboration in research, but this still leaves questions to be answered. Why has Open Science become part of the research landscape? How will it impact day-to-day scientific work? What new developments are available and how can they be used effectively?

    The podcast’s motto: The best way to learn about something new is to simply talk to people who have knowledge and informed opinions on the topic.

    Hosts: Luiza Bengtsson, Emma Harris, Zoe Ingram
    Organisation: Max Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC)
    Since: 2019
    Language: English

  3. The Future is Open Science
    In this podcast, people from the scientific community talk about how they promote Open Science in their daily work. The topic is examined from very different perspectives: through the big-picture lens of science policy, when classifying different initiatives and developments; through a subject-specific lens, when examining the cultural change towards more research transparency in economics; or from the operational view of practitioners, showing how Open Science can be implemented concretely. The podcast delves into the depths of science communication in the digital age and gives tips and tricks on Open Science in practice.

    Host: Doreen Siegfried
    Organisation: ZBW – Leibniz Information Centre for Economics
    Language: German
    Since: 2020

  4. The Road to Open Science
    The Road to Open Science Podcast functions as a guide to everything open at Utrecht University and beyond. In the monthly podcast, the hosts discuss the latest developments in the fields of Open Access, Open Data/Software, public engagement, and recognition and rewards. They follow the path to adopting Open Science practices through the perspective of researchers from different disciplines. In each episode they talk to people within the academic community about their research, initiatives, or experiences in relation to Open Science.

    Hosts: Sanli Faez, Lieven Heeremans
    Organisation: Utrecht University, The Netherlands
    Language: English
    Since: 2017

  5. Open Science Talk
    This podcast is pretty much «open anything»: Open Science, Open Access, Open Education, Open Data, Open Software… The hosts invite guests to explain different topics or to share some of their research practices and reflections related to Open Science.

    They try to cover a wide range of different topics within the Open Science spectrum, such as Open Access, Open Data, Open Research, Open Education, Citizen Science, Open Health… the list is long. They seek to cover the various branches of Open Science from different angles, and they also try to talk about recent events in the Open Science world.

    Their guests: librarians, professors, students and PhD candidates from all kinds of fields, as well as publishers and administrative staff working in science.

    Host: Per Pippin Aspaas (previously: Erik Lieungh)
    Organisation: University Library at UiT The Arctic University of Norway
    Language: English
    Since: 2018

  6. ReproducibiliTea
    Serving mugs of ReproducibiliTea: The blends of this podcast include transparency, openness and robustness with a spoonful of science. The hosts reflect on their experiences trying to push for open and reproducible research. Their conversations with other early career researchers highlight challenges and opportunities to achieve changes in the scientific system.

    Hosts: Sophia Crüwell, Amy Orben, Sam Parsons
    Organisation: ReproducibiliTea Journal Club, 121 Clubs worldwide
    Language: English
    Since: 2019

  7. Open Science Stories
    This podcast is the newcomer in our collection. Open Science concepts are compactly packaged as stories and explained in 10 minutes or less.

    Host: Heidi Seibold
    Language: English
    Since: 2021

Open Science podcasts: production discontinued

In addition to the podcasts that are currently in production, we would also like to recommend some whose production has unfortunately already been discontinued. Nevertheless, the existing episodes are still available and worth listening to, especially for Open Science beginners:

  1. Open Science in Action
    This interview podcast is about Open Science activities – mostly from Austria. The hosts visit people and institutions that are involved in Open Science and/or practise it themselves. In 30-60 minutes, innovative and exciting activities around the opening-up of science are presented, introducing listeners to its different aspects: from Open Access in university libraries to Open Source at research institutes to open hackspaces and Citizen Science.

    Hosts: Stefan Kasberger, Marc Pietkiewicz
    Organisation: ÖH Universität Graz, Austria (Students’ Union of the University of Graz, Austria)
    Language: German
    Since: 2014, 10 episodes

  2. Open Science
    In this series of podcasts, the impact of opening up science is considered: allowing both the research community and the public free access to the results of scientific work. Individuals can be fully informed about medical or environmental research, students worldwide can access the latest work, and software agents can roam the vast scientific knowledge base seeking patterns and correlations that no human has observed. Ultimately, it may profoundly change the way science is done.

    Organisation: University of Oxford
    Language: English
    Since: 2012-2013, 23 episodes

  3. Colper Science
    This interview podcast is also about Open Science and its methods. The podcast makers believe that it is possible for researchers to fully migrate into the universe of Open Science using tools and methods that are already available. Unfortunately, however, most of these tools, methods and opportunities remain unknown to much of the research community. The aim of Colper Science is to make them known by sharing success stories around Open Science.

    Hosts: Kambiz Chizari, Ilyass Tabiai
    Language: English
    Since: 2017-2018, 26 episodes

Is your Open Science podcast not included?

These were our discoveries of Open Science podcasts. We are sure there are more podcasts worth listening to in this field, especially internationally.

If you can think of any, we would be happy to receive link tips on Twitter and Facebook or by email to team (at) zbw-mediatalk.eu! We will be glad to add your podcast to our collection!


References
Portrait: Photo Claudia Sittner©

This text has been translated from German.

The post Open Science Podcasts: 7 + 3 Tips for Your Ears first appeared on ZBW MediaTalk.

Science Barometer 2020: Starting Points for Open Science?

by Claudia Sittner

The Science Barometer – not to be confused with the Barometer for the Academic World – is a representative opinion poll that has been examining the attitude of German citizens to science and research annually since 2014. There were additional surveys in April and May 2020 owing to the corona crisis (“Corona Special”). Last month, the results of the most recent survey from November 2020 were presented.

Brochure Science Barometer 2020 (PDF). Graphics: Wissenschaft im Dialog/Kantar Emnid, licence CC BY-ND 4.0; format adaptations for editorial publications are permitted.

The Science Barometer was commissioned by the organisation Wissenschaft im Dialog (Science in Dialogue, WiD) – an initiative of Germany’s scientific community. This non-profit organisation aims to promote dialogue about science and research in Germany and to encourage as many people as possible to take part. WiD also drives forward the further development of science communication and, with it, of Open Science. The survey is sponsored by the Robert Bosch Stiftung and the Fraunhofer-Gesellschaft.

Around 1,000 citizens aged 14 and over in private households were surveyed in telephone interviews; the survey population comprised German-speaking residents. We have looked at the results of the Science Barometer 2020 from the perspective of their importance for Open Science in science and research, and present the most interesting findings.

Interest stable; traditional media most important source of information

Interest in science and research is stable at 60% of the population, exceeded only by interest in local news (68%). This corresponds with the 59% of respondents who partly or completely agree with the statement that they personally benefit from science and research.

Graphic: Science Barometer 2020, Wissenschaft im Dialog/Kantar Emnid, CC BY-ND 4.0.

Respondents get their information about science primarily via traditional media (80% occasionally to very frequently), less frequently via the websites of scientific institutions (43%); only 29% get their information about research topics via social media. In the light of the corona pandemic, the online services of traditional news media became more relevant.

For science and research, this means that it is worth investing more in press and PR work so that relevant scientific findings are picked up by the traditional media and reach the population. Particularly for institutions committed to Open Science, this seems a good place to start to ensure that their content and dedication are noticed more widely. Fittingly, a third of those surveyed are of the opinion that scientists should inform the public more about their work.

Trust higher than in previous years, but declining over the COVID-19 year 2020

Trust in science and research was also very high in November 2020, at 60% (either tend to trust, or trust completely); in previous years, this value was around 50%. Interestingly, trust in science and research initially rose sharply – to 77% – at the beginning of the corona pandemic (April 2020 survey): compared to 2019, four times as many respondents trusted it “fully and completely”. By the November 2020 survey, however, this value had almost halved again.

Graphic: Science Barometer 2020, Wissenschaft im Dialog/Kantar Emnid, CC BY-ND 4.0.

This shows a high degree of confidence in science and research at the beginning of the pandemic and could point to a disappointment experienced by many people during the second corona lockdown.

Graphic: Science Barometer 2020, Wissenschaft im Dialog/Kantar Emnid, CC BY-ND 4.0.

The reasons cited for credibility were expertise, integrity, and acting in the interest of the general public – all trending upwards compared to the previous year. By contrast, the reasons given for mistrust are:

  • Dependence on funders (49% tend to agree, or agree fully and completely),
  • Scientists adjust their findings to their expectations (25%, same scale),
  • Scientists often make mistakes (16%, same scale).

Compared to the previous year, however, agreement with these reasons has dropped, in some cases sharply. Fitting this picture, the question of whether people should trust their feelings and their faith instead of science recorded its lowest value since the survey series began (23% tend to agree or agree fully and completely).

For supporters of Open Science, the correlation between trust and educational level can play an additional role here: the higher the formal educational level, the greater the trust. If one assumes that science communication mostly reaches people with a higher level of education, this could be read as a positive sign, since among these people trust in science and research is high. On the other hand, it also means that science communication needs to make more of an effort to reach people without a higher formal education in order to gain the trust of this group as well.

Corona Special: Science fundamentally important; controversy welcome

When it comes to the coronavirus, the public trusts the statements of doctors and medical personnel the most (80% tend to trust or trust fully and completely), closely followed by the statements of scientists (73%). However, some also suspect (39% tend to agree or agree fully and completely) that scientists are not telling us everything they know about the coronavirus. The same share of respondents believes that it is important to get information about the virus from sources outside science.

“The fact that so many people trust in science shows how well the dialogue between science and society is functioning during the pandemic. However, the relatively high number of people who are undecided or sceptical is cause for concern: science needs to open up even more and also seek dialogue with those who are sceptical. To ensure that this happens, we need to support all researchers in communicating their knowledge, their results and their working methods.”

— WiD CEO Markus Weißkopf.

Overall, the public wants political decisions in the context of the corona pandemic to be based on scientific findings; direct interference by scientists in politics, on the other hand, is not desired. On the whole, this is good news for Open Science enthusiasts: science is granted credibility on corona issues, so it is worth conducting one’s research as openly as possible and communicating one’s own work. It also shows that Open Science can score points with the public precisely because of its transparency: results can be openly scrutinised, without obligatory intermediaries such as journalists who filter and evaluate the information.

Graphic: Science Barometer 2020, Wissenschaft im Dialog/Kantar Emnid, CC BY-ND 4.0.

By contrast, less and less credibility is ascribed to the statements of politicians and journalists. One can conclude that researchers would be well advised to communicate coronavirus issues to the public themselves, or to aim for very close collaboration with the traditional media. The format of (scientific) podcasts (German) has proven a good option for this during the corona crisis – listener numbers and popularity have risen strongly compared to the previous year.

Trust that researchers clearly communicate whether their statements on the COVID-19 pandemic are verified findings or open questions is comparatively high (46% tend to agree or agree fully and completely; 40% undecided). Controversies among scientists are evaluated as positive and informative by more than two thirds of those asked. For the Open Science community, this is a confirmation that it should campaign for discourse to be opened up and create spaces so that this can take place transparently, publicly and comprehensibly.

This is all the more important because there are also people who “in the corona pandemic prefer to rely on ‘common sense’ rather than on scientific studies. It is even more important to communicate facts and recommendations for action via diverse formats, in order to reach those who are uncertain and have doubts”, confirms Tina Stengele, provisional head of the science division at the Robert Bosch Stiftung, which supports the Science Barometer.

Science Barometer and Open Science: Strengthen science communication

Applying and verifying scientific findings quickly has become more important than ever owing to the corona crisis. This has given science a more prominent role amongst the public, whose trust in researchers and their integrity was also strong according to the latest Science Barometer survey.

“A decisive pillar in strengthening and extending trust is the accessibility and comprehensibility of research results. (…) This strengthens us in our conviction that it is a successful model to explain findings and developments straightforwardly, to classify them and to present their benefits – for experts and laypersons alike.”

— Janis Eitner, director of communication at the Fraunhofer-Gesellschaft

From an Open Science perspective, now is certainly still a good time to urge the scientific system to become even more open in all its subprocesses, and to use professional science communication more widely – whether on one’s own channels such as podcasts or through stable cooperation with the traditional media. The negotiating position for more Open Science is favourable right now, and the experiences of the pandemic have made it unmistakably clear to everyone how important a more open ecosystem of free knowledge is and will be in times of global crises.

This might also interest you:

References
Portrait: Photo Claudia Sittner©
The use of the result graphics is possible if the source “Wissenschaft im Dialog/Kantar Emnid” is mentioned. The graphics are licensed under CC BY-ND 4.0; format adaptations for editorial publications are permitted.

This text has been translated from German.

The post Science Barometer 2020: Starting Points for Open Science? first appeared on ZBW MediaTalk.

20 Years of Wikipedia

by Matti Stöhr and Michael Hohlfeld

Twenty years ago, Jimmy Wales, together with Larry Sanger, launched Wikipedia. The free online encyclopedia and largest digital collection of knowledge in the world celebrated this milestone birthday extensively on and around 15 January 2021 – and (rightly) let itself be celebrated.

The 20th birthday was also covered very widely by the media here in Germany. Radio and television in particular examined the background of Wikipedia’s creation and the topic of free knowledge in numerous reports, showing how the – very successful, but by no means uncontroversial – platform works. At this point we would like to recommend the WDR documentary “Das Wikipedia Versprechen” in the ARD media library, which ARTE also has in its programme in a somewhat longer version. Critical voices are not neglected here either. Further reports are linked, for example, on the official birthday page of Wikimedia Deutschland. That page contains not only media reports, but also, for instance, a visualised journey through time, personal stories and plenty of information on how to get involved in Wikipedia.

The anniversary was and is also being discussed extensively on social networks under the hashtag #Wikipedia20. On Twitter and Instagram, we were very happy to join the well-wishers last Friday, since, as a library and research institution, we have many ties to Wikipedia. Lambert Heller, head of our Open Science Lab, paid tribute to Wikipedia from the TIB’s perspective in a short congratulatory video. He names examples of activities and projects at the TIB that use or collaborate with Wikipedia, Wikimedia Deutschland and various sister projects. Among other things, he mentions the mentoring involvement in the Fellow-Programm Freies Wissen (Fellowship Programme Free Knowledge) with Ina Blümel.


A very recent connection is the culture hackathon Coding da Vinci Niedersachsen 2020, which ends very soon, on 29 January 2021, with an online award ceremony. From 4 p.m., the individual projects will be presented to the public and honoured in various categories. (Register for the award ceremony free of charge here.)

Many other libraries also make professional use of Wikipedia through various activities and/or are closely connected to platforms interlinked with it, such as Wikimedia Commons or Wikidata. The WikiLibrary Manifesto, initiated by the German National Library (DNB) and Wikimedia Deutschland, underlines this bond and the common goal of an international knowledge network. Naturally, we at the TIB have co-signed the manifesto as well.

Furthermore: on the occasion of Wikipedia’s birthday, we have compiled a small list of thematically fitting videos in the TIB AV-Portal. You are warmly invited to have a look at our Wikiversum watchlist.

Screenshot of the public watchlist “Wikiversum” in the TIB AV-Portal

#1Lib1Ref

Fittingly for the birthday, the editing campaign #1Lib1Ref was launched again on 15 January – a concrete, active way of joining the celebration and helping to shape Wikipedia. #1Lib1Ref aims to make the encyclopedia steadily better by adding at least one citation or reference from reliable sources to Wikipedia articles, and it builds on librarians’ expertise. The current campaign runs until 5 February (and again from 15 May to 5 June), and the effort required of each individual is modest. Further details and help can be found – of course – in the Wikipedia article on #1Lib1Ref. By the way: (scientific) videos can of course also be added to Wikipedia articles as sources/references. For embedding videos from the TIB AV-Portal there are even handy templates:

  1. for embedding explicit video citations – template 1: TIBAV, and
  2. for embedding video searches with specific parameters – template 2: TIBAV-Suche.

To conclude, once again: happy birthday, Wikipedia!

We look forward to further collaboration and celebrate free knowledge and openness in general – fully in line with the TIB’s strategic goals and activities, recognised among other things by the Open Library Badge 2020 we received.

The post “20 Jahre Wikipedia” first appeared on the TIB-Blog.


Open Science & Libraries 2021: 20 Tips for Conferences, Barcamps & Co.

by Claudia Sittner

After the corona year 2020 threw the event industry off track worldwide, event organisers have adapted to the “new normal” in 2021 and developed new digital formats. The advantage: the event world has become smaller. Events that used to take place out of reach in Sydney or Bangkok can now often be attended conveniently from the home office.

Many organisers have also used the year to rethink their event prices, reduce fees or eliminate them altogether – which is entirely in the spirit of the Open Science idea. That is why it was not difficult for us to put together a list of conferences, workshops, barcamps and other events that you should not miss in 2021.

JANUARY 2021

Open Science Barcamp
14.01.21, Online-Event
“A session in the series leading up to the Netherlands National Open Science Festival on February 11th 2021.”
Organised by: National Platform Open Science Netherlands


Webinar Series: German-Dutch dialogue on the future of libraries: Sustainability and libraries – agenda 2030
18.01.21, Online-Event
“Libraries are not only sustainable institutions per se, but they also make an intensive contribution to raising awareness of the need for a sustainable society. To this end they provide information, organize projects and support sustainable engagement. Why libraries in the Netherlands and in Germany play an important social role here, how they can contribute to this and what examples are available will be presented and discussed. How the international library associations like IFLA and EBLIDA support this global challenge will also be a topic in this online-seminar.”
Organised by: Erasmus University Library Rotterdam


ALA Midwinter Meeting & Exhibits
22.01.21 – 26.01.21, Online-Event
“Symposium on the Future of Libraries, offering sessions on future trends to inspire innovation in libraries, “News You Can Use” with updates that highlight new research, innovations, and advances in libraries.”
Organised by: American Library Association


PIDapalooza 2021: The Open Festival of Persistent Identifiers
27.01.21, Online-Event
“Festival of persistent identifiers. Sessions around the broad theme of PIDs and Open Research Infrastructure.”
Organised by: CDL, Crossref, DataCite, NISO and ORCID


FEBRUARY 2021

Education for Data Science
07.02.21 – 09.02.21, Jerusalem (Israel)
“How Data Science should be taught in academic institutions and what kind of training and retraining can help support the need for new professionals in the data science ecosystem.”
Organised by: Israel Academy of Sciences and Humanities, CODATA


Fake News: Impact on Society 4/4
08.02.21, Online-Event
“This event offers research into the concept of fake news and its impact in modern society:
Strengthening information literacy in the time of COVID-19: the role and contributions of the National Library of Singapore. News analytics in LIS Education and Practice.”
Organised by: News Media, Digital Humanities, FAIFE, and CLM


Open Science Festival
11.02.21, Online-Event
“Open Science stands for the transition to a new, more open and participatory way of conducting, publishing and evaluating scholarly research. Central to this concept is the goal of increasing cooperation and transparency in all research stages. The National Open Science Festival provides researchers the opportunity to learn about the benefits of various Open Science practices. It is a place to meet peers that are already working openly or that are interested to start doing so. Key to this day is sharing knowledge and best practices.”
Organised by: NPOS project Accelerate Open Science


Barcamp Open Science 2021
16.02.21, Online-Event
“Discussing and learning more about, and sharing experiences on practices in Open Science.”
Organised by: Leibniz Research Alliance Open Science


Open Science Conference 2021
17.02.21 – 19.02.21, Online-Event
“This conference will especially focus on the effects and impact of (global) crises and associated societal challenges, such as the Corona pandemic or the climate change, to open research practices and science communication in the context of the digitisation of science. And vice versa, how open practices help to cope with crises. Overall, the conference addresses topics around Open Science such as: Effects and impact of current crises on open research practices and science communication – Learnings from crises to sustainably ensure the opening of science in the future – Innovations to support Open Science practices and their application and acceptance in scientific communities – Scientific benefit of Open Science practices and their impact in society such as coping with crises – Open Science education and science communication to different target groups in the broad public.”
Organised by: Leibniz Research Alliance Open Science


MARCH 2021

3. Workshop Retrodigitalisierung: „OCR – Prozesse und Entwicklungen“
01.03.21, Online-Event
“Digitisation offers new opportunities for cataloguing and indexing, above all thanks to good text recognition software. Optical Character Recognition (OCR) is a tool whose quality decisively influences the searchability of texts. The workshop therefore deals with processes and developments in OCR – a key component of all digitisation projects.”
Organised by: ZB MED, TIB, ZBW and Staatsbibliothek zu Berlin – Preußischer Kulturbesitz


Open Data Day 2021
06.03.21, Online-Event
“Open Data Day is an annual celebration of open data all over the world. Groups from around the world create local events on the day where they will use open data in their communities. It is an opportunity to show the benefits of open data and encourage the adoption of open data policies in government, business and civil society.”
Organised by: Open Knowledge Foundation


2. Bibliothekspolitischer Bundeskongress: Bibliotheken im digitalen Wandel: Orte der Partizipation und des gesellschaftlichen Zusammenhalts
26.03.21, Online-Event
“Libraries in digital transformation: places of participation and social cohesion – coming together to talk about questions of library policy.”
Organised by: German Library Association (dbv)


APRIL 2021

Webinar Series: German-Dutch dialogue on the future of libraries: Central services for public libraries
12.04.21, Online-Event
“The national library (KB) in the Netherlands offers central digital services to public libraries and to patrons as well. How these services were initiated in the past and how the situation is currently will be presented and compared with the situation in Germany. Because of the political system and the cultural sovereignty of the federal states, the support of smaller public libraries in Germany is not centralized, but so called “Fachstellen” in various federal states offer services to their libraries. This system, its tasks and services are presented – decentralised or centralised support for public libraries – what are the advantages and disadvantages? And what effects will the pandemic have to these services in the future? How will the idea of the third place be connected with the need to offer mobile services for the library users during and after corona?”
Organised by: Erasmus University Library Rotterdam


MAY 2021

IASSIST 2021: Data by Design – Building a Sustainable Data Culture
May, Online-Event
“The conference theme, “Data by Design: Building a Sustainable Data Culture”, emphasizes two core values embedded in the culture of Gothenburg and Sweden: design and sustainability. We invite you to explore these topics further, and discuss what they could mean to data communities. As a member of IASSIST, you are already part of at least one data community. Your other data communities may be across departments, within organizations, or among groups in different countries. How are these groups helping design a culture of practices around data that will persist across organizations and over time?”
Organised by: Swedish National Data Service (SND)


Library Publishing Virtual Forum
10.05.21 – 14.05.21 Online-Event
“This is an annual conference bringing together representatives from libraries engaged in (or considering) publishing initiatives to define and address major questions and challenges; to identify and document collaborative opportunities; and to strengthen and promote this community of practice.”
Organised by: Library Publishing Coalition (LPC)


JUNE 2021

Deutscher Bibliothekartag: forward to far
15.06.21 – 18.06.21, Bremen (Germany)
“Alternative room concepts, Inventory management, Library management, Library education, Blended Library Concepts, Community building, Digital editions, Digitisation of teaching, Discovery and eBooks, Electronic Resource Management – and much more.”
Organised by: The Association of German Librarians (VDB – Verein Deutscher Bibliothekarinnen und Bibliothekare) and Berufsverband Information Bibliothek e.V. (BIB)


IASSIST 2021/CESSDA: Data by Design – Building a Sustainable Data Culture
30.06.21 – 02.07.21, Gothenburg (Sweden)
“The conference theme, “Data by Design: Building a Sustainable Data Culture”, emphasizes two core values embedded in the culture of Gothenburg and Sweden: design and sustainability. We invite you to explore these topics further, and discuss what they could mean to data communities. As a member of IASSIST, you are already part of at least one data community. Your other data communities may be across departments, within organizations, or among groups in different countries. How are these groups helping design a culture of practices around data that will persist across organizations and over time?”
Organised by: Swedish National Data Service (SND)


JULY 2021

ICOSRP 2021: International Conference on Open Science Research Philosophy
19.07.21 – 20.07.21, Helsinki (Finland)
“All aspects of Open Science Research Philosophy.”
Organised by: International Research Conference


SEPTEMBER 2021

OA-Tage 2021
27.09.21 – 29.09.21 Bern (Switzerland)
“Open Access and Open Science.”
Organised by: open-access.network


OCTOBER 2021

FORCE2021
18.10.21 – 20.10.21 San Sebastián (Spain)
“At a FORCE11 annual conference stakeholders come together for an open discussion, on an even playing field, to talk about changing the ways scholarly and scientific information is communicated, shared and used. Researchers, publishers, librarians, computer scientists, informaticians, funders, educators, citizens, and others attend the FORCE11 meeting with a view to supporting the realization of promising new ideas and identifying new potential collaborators.”
Organised by: Force11


Events 2021: How to stay up to date

These are our event tips for the Open Science and library world for 2021. Of course, there will be more exciting conferences, workshops, barcamps and other formats in the course of the year. We will collect them for you in our event calendar on ZBW MediaTalk! To keep up to date with interesting events, you can either check there from time to time or subscribe to our newsletter, in which we will regularly inform you about new highlights on the Open Science and library event horizon: sign up for the ZBW MediaTalk newsletter.

Is an event missing?

Do you have an event tip that is not yet listed in our event calendar? Then we would be happy if you would let us know.

Further reading tips for event organisers:

Do you organise events yourself and are looking for tips on how to make them even better? We have been dealing with this more frequently lately:

Decision-making aids for event attendance: highlights 2020

Despite Corona, there were many conferences, workshops, barcamps & co. worth visiting in 2020. We wrote about some of them in ZBW MediaTalk. So if you are thinking about attending one of the events we recommend, our review will certainly help you make your decision:

References
Portrait: Photo Claudia Sittner©

This text has been translated from German.

The post Open Science & Libraries 2021: 20 Tips for Conferences, Barcamps & Co. first appeared on ZBW MediaTalk.

Barcamp@GeNeMe’2020: Open Science in Times of Crisis

by Sabine Barthold, Loek Brinkmann, Ambreen Hamadani, Shweata Hegde, Franziska Günther, Peter Murray-Rust, Guido Scherp and Simon Worthington

On 7 October 2020, the TU Dresden Media Centre and the Leibniz Research Alliance Open Science invited Open Science scholars and activists to the first “Barcamp Open Science@GeNeMe 2020” (Barcamp@GeNeMe’2020). It served as a pre-event of the conference “Communities in New Media – GeNeMe 2020” and was also the first satellite event of the established Barcamp Open Science.

Like so many other events this year, we took up the challenge of organising the Barcamp in a purely online format. At the same time, this challenge was an opportunity to open up the Barcamp further to international participation and to invite Open Science enthusiasts from all over the world to join us, exchange ideas, discuss new developments and share their experiences in the proliferation of open, collaborative and digital science. In the end, the roughly 40 participants included contributors from the United Kingdom, Russia, India, Iran, Germany, Chile and the Netherlands.

From the crisis of science to science for times of crisis?

The barcamp topic ‘From the crisis of science to science for times of crisis?’ was inspired by the global fight against the spread of the novel coronavirus, which has dominated the social, economic and cultural life of most countries in the world since spring 2020. The current crisis has also brought other societal threats, such as climate change and global environmental destruction, back into public consciousness. The enormous importance of scientific knowledge for handling the ongoing crisis has highlighted the value of the core ideals of Open Science – transparency, collaboration, rapid and open publication of research and data, and effective science communication to translate research into social and political action. The question we wanted to discuss was: what role can Open Science play in addressing this crisis in particular, and other global crises, such as climate change, in general? We asked some of the session moderators to summarise their highlights and personal impressions of the event.

 
 

Start your own Open Science Community

by Loek Brinkmann

Grassroots Open Science Communities (OSCs) and initiatives play a crucial role in the transition to Open Science. OSCs are breeding grounds for Open Science initiatives and showcase cutting-edge Open Science practices amongst colleagues, to instigate a culture change amongst researchers. Most Dutch universities have an OSC in place and its format is now also catching on abroad. Collectively, the communities have published an Open Science Community Starter Kit, which we presented in our session.

INOSC Starter Kit: the four stages of developing an Open Science Community. This work is licensed under CC BY-NC-SA 4.0.

We invite researchers around the globe (that’s you!) to start their own OSC and connect it to the International Network of Open Science Communities (INOSC).

These communities are places where newcomers can learn from their colleagues and ease into Open Science. Moreover, OSCs provide tools and training to interact with societal stakeholders, so that researchers can increase the societal impact of their work. For example, by including stakeholders from government, industry or civil societies early on in the research cycle, to optimise research questions and output formats for relevant and meaningful implementations in society.

During the barcamp, we had a fruitful discussion on how to articulate the benefits of Open Science for societal impact and how Open Science Communities can inspire researchers to engage more with societal stakeholders. Very nice experience! Thank you for all your input!
 
 

openVirus, Citizen Science and curiosity

by Ambreen Hamadani and Shweata Hegde

The COVID-19 crisis was thought-provoking. It taught us that our common enemy can only be defeated if all of us come together and share our intellectual resources. openVirus epitomises this idea and has embarked on a mission to create a system for mining open literature to draw useful inferences so that viral epidemics can be prevented and controlled. It aims to build a better world through citizen collaboration. openVirus encourages the exchange of ideas and welcomes volunteers even from the remotest and most cut off regions of the world. This is crucial for building an incorrigibly curious community determined to fuel science with new and revolutionary ideas.
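
To make the idea of “mining open literature” concrete, here is a minimal, purely illustrative sketch in Python – not openVirus’s actual toolchain, which builds on its own dictionaries and tooling around open repositories. The corpus folder abstracts/ and the toy term list are assumptions for the example: the script simply counts how often a few virus-related terms occur across locally stored open-access abstracts.

    # Illustrative sketch only (assumed inputs, not the openVirus pipeline):
    # count occurrences of virus-related terms in a folder of plain-text abstracts.
    import re
    from collections import Counter
    from pathlib import Path

    TERMS = ["coronavirus", "epidemic", "transmission", "vaccine"]  # toy dictionary

    def term_frequencies(corpus_dir: str) -> Counter:
        """Count case-insensitive whole-word hits for each term in all .txt files."""
        counts = Counter()
        patterns = {t: re.compile(rf"\b{re.escape(t)}\b", re.IGNORECASE) for t in TERMS}
        for path in Path(corpus_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8", errors="ignore")
            for term, pattern in patterns.items():
                counts[term] += len(pattern.findall(text))
        return counts

    if __name__ == "__main__":
        # Print terms from most to least frequent, e.g. 'vaccine: 42'
        for term, n in term_frequencies("abstracts").most_common():
            print(f"{term}: {n}")

Even such a simple frequency table hints at the appeal of the approach: once the literature is openly available as text, drawing inferences from it becomes a task that citizens, not just institutions, can take on.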

The Barcamp@GeNeMe’2020 provided the openVirus team with a perfect platform to achieve these goals. The event was indeed an intellectual treat and we are immensely grateful for the opportunity to host a session on openVirus and Citizen Science. It gave us a chance to demonstrate the immense potential of open toolkits, Open Knowledge and citizens’ contributions to science. It was also a wonderful opportunity to learn, to share ideas, and to have more volunteers join our diverse group. We not only got to meet new people with similar interests but also got a chance to know more about great Open Science initiatives and projects. Physical presence is often impossible for such global events and the Barcamp@GeNeMe’2020 was a great solution to that!
 
 

Experiences with training materials on Open Science

by Franziska Günther

The contributors discussed Open Science training with a focus on different topics: Massive Open Online Courses (MOOCs) and their platforms, and training and practices on Open Science in different subject areas. On the latter point, one contributor noted that Open Science practices and training depend heavily on the subject area. Other ways of learning about Open Science, such as informal learning or involvement in projects, were also of interest in the session. As the discussion moved on, contributors focused on how sustainable and continuous Open Science training can be provided. They agreed that Open Science should be part of the curricula for university graduates. The final issue of the session was whether Open Science relates only to the academic world, and where non-academics can receive Open Science training to become part of this practice.

I enjoyed being part of the Barcamp@GeNeMe’2020. The discussions were interesting and fruitful. For me, this was mostly due to the diverse backgrounds of the participants: people from all over the world could take part – and in the end they did. I gained new perspectives on Open Science topics, and for that I am grateful.
 
 

Global issues of Open Science: equality, resources, goals

by Peter Murray-Rust

I was very grateful to be able to take part in Barcamp@GeNeMe’2020 as it’s a completely new way of people getting together. Both technically and socially it worked very well. It was great to have contributors from all over the world, especially India.

We face massive global challenges such as infectious diseases (viral pandemics, antibiotic resistance), food security, and climate change. To tackle this we need a global response, with a large multicultural contribution from Open Science, based on community action and inclusion, equity and diversity. All citizens (not just rich universities and companies) are needed to contribute to solutions, and a major way is through scientific research and practice. Science is based on equity (anyone can be a scientist) but this is often warped by a dominant Anglophone capitalist North. In the digital age Open Knowledge is an essential tool and we must work to create a shared resource – creation, dissemination, re-use – that everyone can take part in.

I hope that the Barcamp@GeNeMe’20 leads to a different way of scientific conferencing. We didn’t have to spend two days travelling and lots of money. There are downsides – the informal meetings over coffee, the chance encounters – but for me (retired) and for openVirus (students, India) there is no way we could have done this last year!
 
 

Open Science and Climate Change

by Simon Worthington

The session revealed how the work of individual researchers, working groups, and communities asking Open Science questions can make a difference. It makes you realise that we can all redirect some of our time to climate issues.

An inspiring example is the researcher Joachim Allgaier, who speaks in the GenR interview “YouTube – Fix Your AI for Climate Change! An Invitation to an Open Dialogue”. When you search for climate change on YouTube, around 50% of the returned content is anti-climate change, which can be attributed to YouTube financially rewarding – and so recruiting – these content producers. What social media networks like YouTube need is a good dose of Open Science transparency and regulation of their content algorithms.

The project Open Climate Knowledge, a FORCE11 Working Group, advocates 100% open research for climate change. In the research literature, fewer than 30% of papers are Open Access. Greta Thunberg says she “wants you to listen to the scientists” – but how can the public do this when the science is paywalled?

Enhancing Climate Change Research With Open Science, Travis C. Tai and James P. W. Robinson

As a research community, the Open Energy Modelling Initiative (openmod), based mainly in Germany, works on new energy systems for a low-carbon future and has enthusiastically embraced Open Science practice. As yet, no future low-carbon economic plans have been reliably tested with energy models using Open Science practices – essentially, we are currently trapped: “planlos” (without a plan).
 
 

Online barcamps: Can they work?

The most important thing: a barcamp works virtually. We had already seen this at other barcamps, but it is different to have this experience as an organiser. And the contributors have to adapt to this new environment, too. To lighten up the atmosphere, simple elements like a social break with relaxation exercises or a pub quiz can help. And as with face-to-face events, digital retreat areas (coffee kitchens) are also needed. The great advantage of a virtual event is obvious: people from all over the world take part who could not have been reached with a face-to-face event. This was very nice to see at the Barcamp@GeNeMe’20, although time zone differences naturally restrict participation to certain time slots. In addition, compared to previous face-to-face barcamps, we observed more fluctuation in attendance: it is easy to disconnect and reconnect, and people are more selective, as participation in online events is generally a bit tiring.

Virtual barcamps may not (yet) come close to the spirit of a local barcamp, because certain possibilities of social interaction and exchange are simply missing. But we will certainly see more online formats (possibly as a supplement to offline formats) in the future. A hybrid barcamp, however, seems hard to imagine at the moment.

Barcamp recommendations for 2021

The upcoming Barcamp Open Science (16 February 2021), a pre-event of the Open Science Conference, will be completely virtual. We would also like to point out the barcamp held in the context of the Open Science Festival (14 January 2021), organised by colleagues in the Netherlands.

This might also interest you:

Authors:

Sabine Barthold (Media Centre, TU Dresden), Loek Brinkmann (Assistant professor in Open Science, Utrecht University, and community coordinator/co-founder @ Open Science Community Utrecht), Franziska Günther (Media Centre, TU Dresden), Ambreen Hamadani (SKUAST-Kashmir), Shweata Hegde (Regional Institute Of Education, Manasagangothri, Mysuru), Peter Murray-Rust (University of Cambridge and @TheContentMine), Guido Scherp (Open Science Transfer, ZBW, and Coordinator Leibniz Research Alliance Open Science), Simon Worthington (Open Science Lab, TIB, and GenR Editor-in-chief).

References portrait photos:
Loek Brinkmann: Ivar Pel© | Ambreen Hamadani: Ambreen Hamadani© | Shweata Hegde: Shweata Hegde© | Franziska Günther: Kirsten Lassig© | Peter Murray-Rust: Slowking – Own work, GFDL 1.2 | Simon Worthington: TIB / Christian Bierwagen©.

The post Barcamp@GeNeMe’2020: Open Science in Times of Crisis first appeared on ZBW MediaTalk.

Why I care about replication studies

In 2009 I attended a European Social Cognition Network meeting in Poland. I only remember one talk from that meeting: A short presentation in a nearly empty room. The presenter was a young PhD student – Stephane Doyen. He discussed two studies where he tried to replicate a well-known finding in social cognition research related to elderly priming, which had shown that people walked more slowly after being subliminally primed with elderly related words, compared to a control condition.

His presentation blew my mind. But it wasn’t because the studies failed to replicate – it was widely known in 2009 that these studies couldn’t be replicated. Indeed, around 2007, I had overheard two professors in a corridor discussing the problem that there were studies in the literature everyone knew would not replicate. And they used this exact study on elderly priming as one example. The best solution the two professors came up with to correct the scientific record was to establish an independent committee of experts that would have the explicit task of replicating studies and sharing their conclusions with the rest of the world. To me, this sounded like a great idea.

And yet, in this small conference room in Poland, there was this young PhD student, acting as if we didn’t need specially convened institutions of experts to inform the scientific community that a study could not be replicated. He just got up, told us about how he wasn’t able to replicate this study, and sat down.

It was heroic.

If you’re struggling to understand why on earth I thought this was heroic, then this post is for you. You might have entered science in a different time. The results of replication studies are no longer communicated only face to face when running into a colleague in the corridor, or at a conference. But I was impressed in 2009. I had never seen anyone give a talk in which the only message was that an original effect didn’t stand up to scrutiny. People sometimes presented successful replications. They presented null effects in lines of research where the absence of an effect was predicted in some (but not all) tests. But I’d never seen a talk where the main conclusion was just: “This doesn’t seem to be a thing”.

On 12 September 2011 I sent Stephane Doyen an email. “Did you ever manage to publish some of that work? I wondered what has happened to it.” Honestly, I didn’t really expect that he would manage to publish these studies. After all, I couldn’t remember ever having seen a paper in the literature that was just a replication. So I asked, even though I did not expect he would have been able to publish his findings.

Surprisingly enough, he responded that the study would soon appear in press. I wasn’t fully aware of new developments in the publication landscape, where Open Access journals such as PlosOne published articles as long as the work was methodologically solid, and the conclusions followed from the data. I shared this news with colleagues, and many people couldn’t wait to read the paper: An article, in print, reporting the failed replication of a study many people knew to be not replicable. The excitement was not about learning something new. The excitement was about seeing replication studies with a null effect appear in print.

Regrettably, not everyone was equally excited. The publication also led to extremely harsh online comments from the original researcher about the expertise of the authors (e.g., suggesting that findings can fail to replicate due to “Incompetent or ill-informed researchers”), and the quality of PlosOne (“which quite obviously does not receive the usual high scientific journal standards of peer-review scrutiny”). This type of response happened again, and again, and again. Another failed replication led to a letter by the original authors that circulated over email among eminent researchers in the area, was addressed to the original authors, and ended with “do yourself, your junior co-authors, and the rest of the scientific community a favor. Retract your paper.”

Some of the historical record on discussions between researchers around 2012-2015 survives online, in Twitter and Facebook discussions, and blogs. But recently, I started to realize that most early career researchers don’t read about the replication crisis through these original materials, but through summaries, which don’t give the same impression as having lived through these times. It was weird to see established researchers argue that people performing replications lacked expertise. That null results were never informative. That thanks to dozens of conceptual replications, the original theoretical point would still hold up even if direct replications failed. As time went by, it became even weirder to see that none of the researchers whose work was not corroborated in replication studies ever published a preregistered replication study to silence the critics. And why were there even two sides to this debate? Although most people agreed there was room for improvement and that replications should play some role in improving psychological science, there was no agreement on how this should work. I remember being surprised that a field was only now thinking about how to perform and interpret replication studies when we had been doing psychological research for more than a century.
 

I wanted to share this autobiographical memory, not just because I am getting old and nostalgic, but also because young researchers are most likely to learn about the replication crisis through summaries and high-level overviews. Summaries of history aren’t very good at communicating how confusing this time was when we lived through it. There was a lot of uncertainty, diversity in opinions, and lack of knowledge. And there were a lot of feelings involved. Most of those things don’t make it into written histories. This can make historical developments look cleaner and simpler than they actually were.

It might be difficult to understand why people got so upset about replication studies. After all, we live in a time where it is possible to publish a null result (e.g., in journals that only evaluate methodological rigor, but not novelty, journals that explicitly invite replication studies, and in Registered Reports). Don’t get me wrong: We still have a long way to go when it comes to funding, performing, and publishing replication studies, given their important role in establishing regularities, especially in fields that desire a reliable knowledge base. But perceptions about replication studies have changed in the last decade. Today, it is difficult to feel how unimaginable it used to be that researchers in psychology would share their results at a conference or in a scientific journal when they were not able to replicate the work by another researcher. I am sure it sometimes happened. But there was clearly a reason those professors I overheard in 2007 were suggesting to establish an independent committee to perform and publish studies of effects that were widely known to be not replicable.

As people started to talk about their experiences trying to replicate the work of others, the floodgates opened, and the scales fell from people’s eyes. Let me tell you that, from my personal experience, we didn’t call it a replication crisis for nothing. All of a sudden, many researchers who thought it was their own fault when they couldn’t replicate a finding started to realize this problem was systemic. It didn’t help that in those days it was difficult to communicate with people you didn’t already know. Twitter (which is most likely the medium through which you learned about this blog post) launched in 2006, but up to 2010 hardly any academics used this platform. Back then, it wasn’t easy to get information outside of the published literature. It’s difficult to express how it feels when you realize ‘it’s not me – it’s all of us’. Our environment influences which phenotypic traits express themselves. These experiences made me care about replication studies.

If you started in science when replications were at least somewhat more rewarded, it might be difficult to understand what people were making a fuss about in the past. It’s difficult to go back in time, but you can listen to the stories by people who lived through those times. Some highly relevant stories were shared after the recent multi-lab failed replication of ego-depletion (see tweets by Tom Carpenter and Dan Quintana). You can ask any older researcher at your department for similar stories, but do remember that it will be a lot more difficult to hear the stories of the people who left academia because most of their PhD consisted of failures to build on existing work.

If you want to try to feel what living through those times must have been like, consider this thought experiment. You attend a conference organized by a scientific society where all society members get to vote on who will be a board member next year. Before the votes are cast, the president of the society informs you that one of the candidates has been disqualified. The reason is that it has come to the society’s attention that this candidate selectively reported results from their research lines: The candidate submitted only those studies for publication that confirmed their predictions, and did not share studies with null results, even though these came from well-designed studies that tested sensible predictions. Most people in the audience, including yourself, were already aware of the fact that this person selectively reported their results. You knew publication bias was problematic from the moment you started to work in science, and the field had known it was problematic for centuries. Yet here you are, in a room at a conference, where this status quo is not accepted. All of a sudden, it feels like it is possible to actually do something about a problem that has made you feel uneasy ever since you started to work in academia.

You might live through a time where publication bias is no longer silently accepted as an unavoidable aspect of how scientists work, and if this happens, the field will likely have a very similar discussion as it did when it started to publish failed replication studies. And ten years later, a new generation will have been raised under different scientific norms and practices, where extreme publication bias is a thing of the past. It will be difficult to explain to them why this topic was a big deal a decade ago. But since you’re getting old and nostalgic yourself, you think it’s useful to remind them, and you just might try to explain it to them in a 2-minute TikTok video.

History merely repeats itself. It has all been done before. Nothing under the sun is truly new.
Ecclesiastes 1:9

Thanks to Farid Anvari, Ruben Arslan, Noah van Dongen, Patrick Forscher, Peder Isager, Andrea Kis, Max Maier, Anne Scheel, Leonid Tiokhin, and Duygu Uygun for discussing this blog post with me (and in general for providing such a stimulating social and academic environment in times of a pandemic).

ESCAIDE 2019 – A Smörgåsbord of Infectious Disease Epidemiology

ESCAIDE 2019 [Credit: ECDC]

The title is an homage to the host country of the conference – Sweden – where the smörgåsbord is a buffet-style meal served on a large table. In English, the term has also adopted a…

The Value of Preregistration for Psychological Science: A Conceptual Analysis

This blog is an excerpt of an invited journal article for a special issue of Japanese Psychological Review that I am currently one week overdue with (but that I hope to complete soon). I hope this paper will raise the bar in the ongoing discussion about the value of preregistration in psychological science. If you have any feedback on what I wrote here, I would be very grateful to hear it, as it would allow me to improve the paper I am working on. If we want to fruitfully discuss preregistration, researchers need to provide a clear conceptual definition of preregistration, anchored in their philosophy of science.
For as long as data has been used to support scientific claims, people have tried to selectively present data in line with what they wish to be true. In his treatise ‘On the Decline of Science in England: And on Some of its Causes’, Babbage (1830) discusses what he calls cooking: “One of its numerous processes is to make multitudes of observations, and out of these to select those only which agree or very nearly agree. If a hundred observations are made, the cook must be very unlucky if he cannot pick out fifteen or twenty that will do up for serving.” In the past, researchers have proposed solutions to prevent bias in the literature. With the rise of the internet it has become feasible to create online registries that ask researchers to specify their research design and the planned analyses. Scientific communities have started to make use of this opportunity (for a historical overview, see Wiseman, Watt, & Kornbrot, 2019).
Preregistration in psychology has been a good example of ‘learning by doing’. Best practices are continuously updated as we learn from practical challenges and from early meta-scientific investigations into how preregistrations are performed. At the same time, discussions have emerged about what the goal of preregistration is, whether preregistration is desirable, and what preregistration should look like across different research areas. Every practice comes with costs and benefits, and it is useful to evaluate whether and when preregistration is worth it. Finally, it is important to evaluate how preregistration relates to different philosophies of science, and when it facilitates or distracts from goals scientists might have. The discussion about the benefits and costs of preregistration has not been productive up to now because of a general lack of conceptual analysis of what preregistration entails and aims to accomplish, which leads to disagreements that would easily be resolved if a conceptual definition were available. Any conceptual definition of a tool that scientists use, such as preregistration, must examine the goals it achieves, and thus requires a clearly specified view on philosophy of science, which provides an analysis of the different goals scientists might have. Discussing preregistration without discussing philosophy of science is a waste of time.

What is Preregistration For?

The goal of preregistration is to transparently prevent bias due to selectively reporting analyses. Since bias in estimates only occurs in relation to a true population parameter, preregistration as discussed here is limited to scientific questions that involve estimates of population values from samples. Researchers can have many different goals when collecting data, perhaps most notably theory development, as opposed to tests of statistical predictions derived from theories. When testing predictions, researchers might want a specific analysis to yield a null effect, for example to show that including a possible confound in an analysis does not change their main results. More often perhaps, they want an analysis to yield a statistically significant result, for example so that they can argue the results support their prediction, based on a p-value below 0.05. Both examples are sources of bias in the estimate of a population effect size. In this paper I will assume researchers use frequentist statistics, but all arguments can be generalized to Bayesian statistics (Gelman & Shalizi, 2013). When effect size estimates are biased, for example due to the desire to obtain a statistically significant result, hypothesis tests performed on these estimates have inflated Type 1 error rates, and when bias emerges due to the desire to obtain a non-significant test result, hypothesis tests have reduced statistical power. In line with the general tendency to weigh Type 1 error rates (the probability of obtaining a statistically significant result when there is no true effect) as more serious than Type 2 error rates (the probability of obtaining a non-significant result when there is a true effect), publications that discuss preregistration have been more concerned with inflated Type 1 error rates than with low power. However, one can easily think of situations where the latter is the bigger concern.
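To make this bias concrete, here is a minimal simulation sketch (my own illustration; the effect size, sample size, and number of studies are arbitrary choices for the example) of how reporting only the significant studies inflates the estimate of a population effect size:
```python
# Minimal sketch: selective reporting biases effect size estimates.
# A true standardized effect of 0.3 is simulated; when only significant
# studies are reported, the average estimate is substantially inflated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect, n, n_studies, alpha = 0.3, 20, 10_000, 0.05

estimates = np.empty(n_studies)
significant = np.empty(n_studies, dtype=bool)
for i in range(n_studies):
    sample = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_1samp(sample, 0.0)
    estimates[i] = sample.mean()
    significant[i] = (p < alpha) and (t > 0)

print(f"Mean estimate across all studies:      {estimates.mean():.2f}")
print(f"Mean estimate, significant ones only:  {estimates[significant].mean():.2f}")
```
With these assumed parameters the underpowered studies that happen to reach significance overestimate the true effect considerably, which is exactly the bias that hypothesis tests on such estimates inherit.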
If the only goal of a researcher is to prevent bias, it suffices to make a mental note of the planned analyses, or to verbally agree upon the planned analysis with collaborators, assuming we will perfectly remember our plans when analyzing the data. The reason to write down an analysis plan is not to prevent bias, but to transparently prevent bias. By including transparency in the definition of preregistration it becomes clear that the main goal of preregistration is to convince others that the reported analysis tested a clearly specified prediction. Not all approaches to knowledge generation value prediction, and it is important to evaluate whether your philosophy of science values prediction to be able to decide if preregistration is a useful tool in your research. Mayo (2018) presents an overview of different arguments for the role prediction plays in science and arrives at a severity requirement: We can build on claims that passed tests that were highly capable of demonstrating the claim was false, but that nevertheless supported the prediction. This requires that researchers who read about claims are able to evaluate the severity of a test. Preregistration facilitates this.
Although falsifying theories is a complex issue, falsifying statistical predictions is straightforward. Researchers can specify when they will interpret data as support for their claim based on the result of a statistical test, and when not. An example is a directional (or one-sided) t-test testing whether an observed mean is larger than zero. Observing a value statistically smaller than or equal to zero would falsify this statistical prediction (as long as the statistical assumptions of the test hold, and with some error rate in frequentist approaches to statistics). In practice, only range predictions can be statistically falsified. Because resources and measurement accuracy are not infinitely large, there is always a value close enough to zero that is statistically impossible to distinguish from zero. Therefore, researchers will need to specify at least some possible outcomes that would not be considered support for their prediction, and that statistical tests can pick up on. How such bounds are determined is a massively understudied problem in psychology, but having them is essential for falsifiable predictions.
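As a hypothetical illustration (the data and the bound of 0.5 are invented for this example), a falsifiable range prediction can be specified as a directional test against a preregistered smallest value that would still count as support, rather than against zero:
```python
# Hypothetical sketch of a falsifiable range prediction: the claim counts as
# supported only if the mean is statistically larger than a pre-specified
# bound, not merely larger than zero.
# (The 'alternative' argument requires scipy >= 1.6.)
import numpy as np
from scipy import stats

data = np.array([0.8, 1.2, 0.3, 0.9, 1.5, 0.7, 1.1, 0.4, 1.0, 0.6])
lower_bound = 0.5  # preregistered: effects this small or smaller do not support the claim

t, p = stats.ttest_1samp(data, popmean=lower_bound, alternative='greater')
print(f"t = {t:.2f}, p = {p:.3f}")
print("Prediction supported" if p < 0.05 else "Prediction not supported")
```
Because the bound is specified in advance, there are outcomes the test can actually produce that would count against the prediction, which is what makes the prediction falsifiable in practice.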
While the bounds of a range prediction enable statistical falsification, specifying these bounds is not enough to evaluate how capable a test was of demonstrating a claim was wrong. Meehl (1990) argues that we are increasingly impressed by a prediction the more ways the prediction could have been wrong. He writes (1990, p. 128): “The working scientist is often more impressed when a theory predicts something within, or close to, a narrow interval than when it predicts something correctly within a wide one.” Imagine I make a prediction about where a dart will land when I throw it at a dartboard. You will be more impressed with my darts skills if I predict I will hit the bullseye, and I hit the bullseye, than when I predict I will hit the dartboard, and I hit the dartboard. Making very narrow range predictions is a way to make it statistically likely that your prediction will be falsified, if it is wrong. It is also possible to make theoretically risky predictions, for example by predicting you will only observe a statistically significant difference from zero in a hypothesis test if a very specific set of experimental conditions is met that all follow from a single theory. Regardless of how researchers increase the capability of a test to be wrong, the approach to scientific progress described here places more faith in claims based on predictions that had a higher capability of being falsified, but where the data nevertheless supported the prediction. Anyone is free to choose a different philosophy of science, and create a coherent analysis of the goals of preregistration in that framework, but as far as I am aware, Mayo’s severity argument currently provides one of the few philosophies of science that allows for a coherent conceptual analysis of the value of preregistration.
Researchers admit to research practices that make their predictions, or the empirical support for their predictions, look more impressive than they are. One example of such a practice is optional stopping, where researchers collect a number of datapoints, perform statistical analyses, and continue the data collection if the result is not statistically significant. In theory, a researcher who is willing to continue collecting data indefinitely will always find a statistically significant result. By repeatedly looking at the data, the Type 1 error rate can inflate to 100%. Even though in practice the inflation will be smaller, optional stopping strongly increases the probability that a researcher can interpret their result as support for their prediction. In the extreme case, where a researcher is 100% certain that they will observe a statistically significant result when they perform their statistical test, their prediction can never be falsified. Providing support for a claim by relying on optional stopping should not increase our faith in the claim by much, or even at all. As Mayo (2018, p. 222) writes: “The good scientist deliberately arranges inquiries so as to capitalize on pushback, on effects that will not go away, on strategies to get errors to ramify quickly and force us to pay attention to them. The ability to register how hunting, optional stopping, and cherry picking alter their error-probing capacities is a crucial part of a method’s objectivity.” If researchers were to transparently register their data collection strategy, readers could evaluate the capability of the test to falsify the prediction, conclude this capability is very small, and be relatively unimpressed by the study. If the stopping rule keeps the probability of finding a non-significant result when the prediction is incorrect high, and the data nevertheless support the prediction, we can choose to act as if the claim is correct because it has been severely tested. Preregistration thus functions as a tool that allows other researchers to transparently evaluate the severity with which a claim has been tested.
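A small simulation sketch (the batch size and maximum sample size are arbitrary assumptions for the example) shows how much repeated looks at the data inflate the Type 1 error rate when the null hypothesis is true:
```python
# Sketch: Type 1 error inflation under optional stopping. The null is true,
# but the data are tested after every batch of 10 observations and collection
# stops as soon as p < .05, so the false positive rate rises well above 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, batch, max_n, alpha = 5_000, 10, 100, 0.05

false_positives = 0
for _ in range(n_sims):
    data = rng.normal(0.0, 1.0, max_n)  # no true effect
    for n in range(batch, max_n + 1, batch):
        if stats.ttest_1samp(data[:n], 0.0).pvalue < alpha:
            false_positives += 1
            break

print(f"Type 1 error rate with optional stopping: {false_positives / n_sims:.3f}")
# A single fixed-n test would have a rate of about 0.05.
```
With ten looks at the data under these assumptions, the false positive rate roughly triples or quadruples, which is why a transparently registered stopping rule matters for evaluating the severity of a test.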
The severity of a test can also be compromised by selecting a hypothesis based on the observed results. In this practice, known as Hypothesizing After the Results are Known (HARKing; Kerr, 1998), researchers look at their data, and then select a prediction. This reversal of the typical hypothesis testing procedure makes the test incapable of demonstrating the claim was false. Mayo (2018) refers to this as ‘bad evidence, no test’. If we choose a prediction from among the options that yield a significant result, the claims we make based on these ‘predictions’ will never be wrong. In philosophies of science that value predictions, such claims do not increase our confidence that the claim is true, because it has not yet been tested. By preregistering our predictions, we transparently communicate to readers that our predictions predated looking at the data, and therefore that the data we present as support for our prediction could have falsified our hypothesis. We have not made our test look more severe by narrowing the range of our predictions after looking at the data (like the Texas sharpshooter who draws the circles of the bullseye after shooting at the wall of the barn). A reader can transparently evaluate how severely our claim was tested.
As a final example of the value of preregistration in transparently allowing readers to evaluate the capability of our prediction to be falsified, think about the scenario described by Babbage at the beginning of this article, where a researcher makes multitudes of observations, and selects out of all these tests only those that support their prediction. The larger the number of observations to choose from, the higher the probability that one of the possible tests can be presented as support for the hypothesis. Therefore, from a perspective on scientific knowledge generation where severe tests are valued, choosing to selectively report tests from among the many tests that were performed strongly reduces the capability of a test to demonstrate the claim was false. This can be prevented by correcting for multiple testing, for example by lowering the alpha level depending on the number of tests.
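The arithmetic behind Babbage’s cook is simple. A short sketch (the number of tests is an arbitrary assumption) shows the probability of at least one false positive among independent tests, and how a Bonferroni-style correction of the alpha level restores the familywise error rate:
```python
# Sketch: familywise error rate for m independent tests at alpha = .05,
# before and after a Bonferroni correction (alpha / m).
m_tests, alpha = 20, 0.05

familywise = 1 - (1 - alpha) ** m_tests
print(f"P(at least one false positive in {m_tests} tests): {familywise:.2f}")  # ~0.64

alpha_corrected = alpha / m_tests
familywise_corrected = 1 - (1 - alpha_corrected) ** m_tests
print(f"With alpha / m = {alpha_corrected:.4f}: {familywise_corrected:.3f}")   # ~0.049
```
With twenty tests to choose from, the cook has nearly a two-in-three chance of finding at least one ‘significant’ result to serve up, which is why the number of tests performed must be transparent for a reader to evaluate severity.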
The fact that preregistration is about specifying ways in which your claim could be false is not generally appreciated. Preregistrations should carefully specify not just the analysis researchers plan to perform, but also when they would infer from the analyses that their prediction was wrong. As the preceding section explains, successful predictions impress us more when the data that was collected was capable of falsifying the prediction. Therefore, a preregistration document should give us all the required information that allows us to evaluate the severity of the test. Specifying exactly which test will be performed on the data is important, but not enough. Researchers should also specify when they will conclude the prediction was not supported. Beyond specifying the analysis plan in detail, the severity of a test can be increased by narrowing the range of values that are predicted (without increasing the Type 1 and Type 2 error rate), or making the theoretical prediction more specific by specifying detailed circumstances under which the effect will be observed, and when it will not be observed.

When is Preregistration Valuable?

If one agrees with the conceptual analysis above, it follows that preregistration adds value for people who choose to increase their faith in claims that are supported by severe tests and predictive successes. Whether this seems reasonable depends on your philosophy of science. Preregistration itself does not make a study better or worse compared to a non-preregistered study. Sometimes, being able to transparently evaluate a study (and its capability to demonstrate claims were false) will reveal that the study was completely uninformative. Other times we might be able to evaluate the capability of a study to demonstrate a claim was false even if the study was not transparently preregistered. Examples are studies where there is no room for bias, because the analyses are perfectly constrained by theory, or because it is not possible to analyze the data in any other way than was reported. Although the severity of a test is in principle unrelated to whether it is preregistered or not, in practice there will be a positive correlation, driven by the studies for which transparent preregistration improves our ability to evaluate how capable they were of demonstrating a claim was false: studies with multiple dependent variables to choose from, studies that do not use a standardized measurement scale so that the dependent variable can be calculated in different ways, or studies where additional data are easily collected, to name a few.
We can apply our conceptual analysis of preregistration to hypothetical real-life situations to gain better insight into when preregistration is a valuable tool, and when not. For example, imagine a researcher who preregisters an experiment where the main analysis tests a linear relationship between two variables. This test yields a non-significant result, thereby failing to support the prediction. In an exploratory analysis the authors find that fitting a polynomial model yields a significant test result with a low p-value. A reviewer of their manuscript has studied the same relationship, albeit in a slightly different context and with another measure, and has unpublished data from multiple studies that also yielded polynomial relationships. The reviewer also has a tentative idea about the underlying mechanism that causes not a linear, but a polynomial, relationship. The original authors will be of the opinion that the claim of a polynomial relationship has passed a less severe test than their original prediction of a linear relationship would have passed (had it been supported). However, the reviewer would never have preregistered a linear relationship to begin with, and therefore does not evaluate the switch to a polynomial test in the exploratory results section as something that reduces the severity of the test. Given that the experiment was well designed, the test for a polynomial relationship will be judged as having greater severity by the reviewer than by the authors. In this hypothetical example the reviewer has additional data that would have changed the hypothesis they would have preregistered in the original study. It is also possible that the difference in evaluation of the exploratory test for a polynomial relationship is based purely on a subjective prior belief, or on knowledge about an existing well-supported theory that would predict a polynomial, but not a linear, relationship.
Now imagine that our reviewer asks for the raw data to test whether their assumed underlying mechanism is supported. They receive the dataset, and looking through the data and the preregistration, the reviewer realizes that the original authors didn’t adhere to their preregistered analysis plan. They violated their stopping rule, analyzing the data in batches of four and stopping earlier than planned. They did not carefully specify how to compute their dependent variable in the preregistration, and although the reviewer has no experience with the measure that was used, the dataset contains eight ways in which the dependent variable was calculated. Only one of the eight ways of computing the dependent variable yields a significant effect for the polynomial relationship. Faced with this additional information, the reviewer believes it is much more likely that the analysis testing the claim was the result of selective reporting, and now is of the opinion that the polynomial relationship was not severely tested.
Both of these evaluations of how severely a hypothesis was tested were perfectly reasonable, given the information the reviewer had available. This reveals how sometimes switching from a preregistered analysis to an exploratory analysis does not impact a reviewer’s evaluation of the severity of the test, while in other cases a selectively reported result does reduce the perceived severity with which a claim has been tested. Preregistration makes more information available to readers that can be used to evaluate the severity of a test, but readers might not always evaluate the information in a preregistration in the same way. Whether a design or analytic choice increases or decreases the capability of a claim to be falsified depends on statistical theory, as well as on prior beliefs about the theory that is tested. Some practices are known to reduce the severity of tests, such as optional stopping and selectively reporting analyses that yield desired results, and therefore it is easier to evaluate how statistical practices impact the severity with which a claim is tested. If a preregistration is followed through exactly as planned, then the tests that are performed have the desired error rates in the long run, as long as the test assumptions are met. Note that because long-run error rates are based on assumptions about the data generating process, which are never known, true error rates are unknown, and thus preregistration only makes it relatively more likely that tests have the desired long-run error rates. The severity of a test also depends on assumptions about the underlying theory, and on how the theoretical hypothesis is translated into a statistical hypothesis. There will rarely be unanimous agreement on whether a specific operationalization is a better or worse test of a hypothesis, and thus researchers will differ in their evaluation of how severely specific design choices test a claim. This once more highlights how preregistration does not automatically increase the severity of a test. When it prevents practices that are known to reduce the severity of tests, such as optional stopping, preregistration leads to a relative increase in the severity of a test compared to a non-preregistered study. But when there is no objective evaluation of the severity of a test, as is often the case when we try to judge how severe a test was on theoretical grounds, preregistration merely enables a transparent evaluation of the capability of a claim to be falsified.

Could #Blockchain provide the technical fix to solve science’s reproducibility crisis?

Soenke Bartling and Benedikt Fecher on the use of blockchain technology in research.

Blockchain is currently being hyped. Many claim that the blockchain revolution will affect not only our online lives, but will profoundly change many more aspects of our society. Many foresee these changes as potentially more far-reaching than those brought by the internet in the last two decades. If this holds true, it is certain that research and knowledge creation will also be affected. So, what is blockchain all about? More importantly, could knowledge creation benefit from it? One area where it could prove useful is in addressing the credibility and reproducibility crisis in science.

Article on #openaccess scholarly innovation and research infrastructure

In this article Benedikt Fecher and Gert Wagner argue that the current endeavors to achieve open access in scientific literature require a discussion about innovation in scholarly publishing and research infrastructure. Drawing on path dependence theory and addressing different open access (OA) models and recent political endeavors, the authors argue that academia is once again running the risk of outsourcing the organization of its content.

Council of the European Union calls for full open access to scientific research by 2020

[Image: Science! by Alexandro Lacadena, CC BY-NC-ND 2.0] A few weeks ago we wrote about how the European Union is pushing ahead with its support for open access to EU-funded scientific research and data. Today at the meeting of the Council of the European Union, the Council reinforced the commitment to making all scientific articles and data […]


Complying With HEFCE’s Open Access Policy: What You Need To Know

Most researchers working in the UK will know that the Higher Education Funding Council for England (HEFCE) open access policy took effect from April 1st of this year, but what does that mean for you, and how can you make sure you are fully compliant? What is the HEFCE open access policy? Around…