Barcamp Open Science 2023: So much has happened and so much still needs to happen!

by Evgeny Bobrov, Christian Busse, Julien Colomb, Tamara Diederichs, Tamara Heck, Renu Kumari, Peter Murray-Rust, Daniel Nüst, Merle-Marie Pittelkow, Lozana Rossenova, Guido Scherp

When we (the Barcamp Orga Team) were planning the ninth Barcamp Open Science, we were faced with the question: back to a face-to-face event or online? And if we chose a face-to-face event, could a hybrid format work at a barcamp? Hybrid works, especially thanks to technical progress and the appropriate premises at Wikimedia. A barcamp is a format that has its best effect when meeting in person. But for us, hybrid also means openness towards those who, for various reasons, cannot come to Berlin. This year, we had 40 participants on site and 60 more taking part online, including people from India. Online participants could follow the barcamp in the main room (opening, ignition talk, session planning), but also participate in sessions in hybrid rooms. In one case, a session was even moderated remotely.

This year, we were particularly pleased that half of the participants took part in the Barcamp Open Science for the first time, and that many of them proposed and moderated a session straight away. The barcamp thus also contributes to enlarging the ‘Open Science bubble’ bit by bit.

There has never been a lack of topics at the Barcamp, though some of the topics are recurring, of course. Every year, new aspects enter the discussion. This year, for example: Open Science for climate justice, and bringing Open Science into ‘schools’, that is, asking what role (educational) organisations can play in the context of knowledge justice. The traditional ‘Ignition Talk’ also brings new and relevant topics to the table. It was held this year by Peter Murray-Rust on the topic ‘Why do I do Open Science?’. He emphasised how important open knowledge is for society as a whole, especially in tackling the climate crisis. He is therefore actively involved in the #semanticClimate project, in which tools are being developed to make knowledge from the reports of the UN's Intergovernmental Panel on Climate Change (IPCC) semantically available.

Some of the session moderators have summarised their sessions and their results below.

Indicators for Researcher Contributions to Wikidata
by Evgeny Bobrov

Wikidata, and more generally knowledge graphs (KGs), hold a lot of promise to make knowledge, prominently including scientific knowledge, available in a structured way for automated applications. With the advent of Large Language Models (LLMs), there is much discussion about the future role of KGs, but in terms of quality assurance, traceability, and speed, KGs are far superior to LLMs. Thus, as Denny Vrandecic describes in his talk ‘The Future of Knowledge Graphs in a World of LLMs’, they will only continue to grow in importance. Positions of ‘Wikimedian in Residence’ are becoming increasingly common, as at the University of Edinburgh or the University of Virginia. Given these developments, we anticipate that entering knowledge generated at research institutions into KGs will become increasingly important for institutions and a standard practice for researchers.

However, if this is to become a common practice, it needs to be rewarded, and the question arises of how to monitor and reward the sharing of knowledge in Wikidata and other KGs. This was the main topic of the session, although we ventured into other Wikidata-related topics as well. In particular, one participant was disillusioned with Wikidata, as many necessary relations were as yet undefined, and it was generally still too limited. Thus, entering data into Wikidata came at the cost of simplifying or sometimes even distorting knowledge. The opinion was also voiced, however, that this is legitimate as a start and should not prevent researchers from extending Wikidata. It was also mentioned that the NFDI is considering using Wikidata as an infrastructure, and that the Volkswagen Foundation might include entering knowledge into Wikidata in its funding for data management tasks. A sideline of the discussion, which would warrant more attention, is to what extent Wikidata should contain all knowledge, or how it could otherwise be organised, for instance in a federated way.

Specifically regarding monitoring and incentives, the following aspects were discussed:

Own contributions:

  • Number of contributions to Wikidata
  • Number of links one's own contributions receive in other entries
  • Number of references to entries in scientific works
  • Fraction of entries which have been validated by others

Community work:

  • Number of entries reviewed or validated

There is already work in this direction, for example to allow tracking of citations to Wikipedia contributions, and this metric is mentioned in the Metrics Toolkit. However, for Wikidata, there would need to be a method in place to reference sources in a much more granular way than is currently common.

Indicators as listed above, which can be conceived of as a type of nanopublication, could then be aggregated and, for instance, used in CVs. There was agreement, however, that these metrics are not fundamentally different from article and citation metrics, and can thus lead to an overemphasis on quantity as well as be gamed. In this session, however, we could not come up with alternative metrics that would be less quantitative and/or less easily gamed.
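
One concrete starting point for the ‘own contributions’ indicators already exists: the public MediaWiki API exposes a per-user edit count. The following Python sketch builds such a query and parses a response of the documented shape; the user name is a placeholder and no network request is made here.

```python
import json
from urllib.parse import urlencode

# Wikidata runs the standard MediaWiki API; the "list=users" query with
# "usprop=editcount" returns a user's total number of edits.
API = "https://www.wikidata.org/w/api.php"

def editcount_url(username: str) -> str:
    """Build the API URL that returns a user's total edit count."""
    params = {
        "action": "query",
        "list": "users",
        "ususers": username,
        "usprop": "editcount",
        "format": "json",
    }
    return API + "?" + urlencode(params)

def parse_editcount(response_text: str) -> int:
    """Extract the edit count from the API's JSON response."""
    data = json.loads(response_text)
    return data["query"]["users"][0]["editcount"]

# A response of the shape the API returns (user name is a placeholder):
sample = '{"query": {"users": [{"name": "ExampleResearcher", "editcount": 1234}]}}'
print(parse_editcount(sample))  # → 1234
```

Raw edit counts are, of course, exactly the kind of quantity-focused metric discussed above; they would need to be complemented by the validation-oriented indicators from the list.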

Legislative Measures to Increase Data Availability
by Christian Busse

The central question for this session was whether legislative measures that aim to increase the availability of data held by private or public entities are considered useful and appropriate from the Open Science perspective. The backdrop to this is a number of ongoing legislative initiatives at the European level (the Data Act – at this point finalised, but not yet passed – and the European Health Data Space (EHDS) Regulation – still under discussion in the EU Council) and at the German federal level (the proposed ‘Forschungsdatengesetz’ (German) / Research Data Act – still at a draft stage), for which provisions have been discussed that would or will oblige private and public entities to share their data with researchers.

After a quick overview of the provisions at hand, the participants started a discussion that covered a broad range of aspects, but had three key take-aways. First, coercing private entities into data sharing was not perceived as a constructive measure by the participants, as providing data (that is, complying with the legal requirement) does not guarantee good data quality. Second, the participants saw greater promise in trying to utilise data from public entities, as financial compensation would be less of an issue. However, the participants also agreed that this would require a more service-oriented mindset in the public administration and that empowering and up-skilling public employees (for instance to become data stewards, product owners, and so on) would be helpful. Third, a marketplace solution in which (public and private) entities can offer their data products for research was also considered an interesting option by the participants, although the ultimate outcome would depend on numerous parameters of the marketplace and hence is hard to predict.

New Forms of Communication
by Christian Busse

The social medium X (formerly known as Twitter) has been in turmoil for a while. In this session, the participants discussed whether and how this affects their communication strategies when promoting and discussing Open Science online. There was a consensus among the participants that the goal should be to serve a broad audience and that this will require more channels, now that some people and communities are moving away from X. Mastodon is considered an interesting alternative due to its federated character, but the jury is still out on whether it can serve as a long-term replacement. Moving beyond the individual platforms, the participants then discussed ways to organise (scientific) quality control in media that do not primarily serve science. How can we validate that an account belongs to a given person, that a person is really a member of a given institution and that a person is really an expert in a given field of research? While there are some technical solutions to some aspects of this (for instance Mastodon’s link verification), this requires trusted entry points that are controlled by the community.

An Exit Strategy for GitHub
by Julien Colomb

In this session, we tried to collect strategies to make Open Science projects independent of GitHub. While GitHub is a very nice platform, it may become less nice at any time (and there may already be some issues with Microsoft tracking all your activities on the platform). We saw with Twitter (now X) that platforms can become unusable, so relying on a single platform for a specific piece of scholarly work can be dangerous.

To become less dependent on GitHub and be able to move a community to another platform if needed (or desired), we collected different strategies:

  • Push the content to GitLab or other alternatives: this makes the code and documents easy to access without GitHub, but a lot of community work and ongoing activity would still be lost (issues, PRs, forks, discussions, …). This technical solution does not deal with moving the community to a different platform.
  • Get your community on different platforms, for example by using a forum or chat application (such as Discourse) on top of GitHub. If you then need to move from one platform to another, the community still has other communication channels that keep working.
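
The first strategy, mirroring the content to an alternative host, comes down to a handful of git commands. As a minimal sketch (the remote name and URL are placeholders), the helper below just assembles the commands for adding a second remote and pushing all branches and tags to it:

```python
def mirror_commands(alt_name: str, alt_url: str) -> list[str]:
    """Return the git commands that register a second remote and push
    all branches and tags to it, keeping the alternative host in sync."""
    return [
        f"git remote add {alt_name} {alt_url}",
        f"git push {alt_name} --all",   # push all branches
        f"git push {alt_name} --tags",  # push all tags
    ]

# Placeholder remote; GitLab also offers built-in repository mirroring.
for cmd in mirror_commands("gitlab", "https://gitlab.com/example/project.git"):
    print(cmd)
```

As noted above, this only covers the code and its history; issues, pull requests and discussions stay behind on the original platform.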

We then talked about developments in decentralised systems: Forgejo is planning to make different instances interoperable. Indeed, having separate GitLab instances at different universities in Germany makes it difficult to work with collaborators at other universities (a new account is needed for each instance). Alternatively, we may see the development of European, institutionalised GitLab instances such as the EUDAT GitLab Repository.

Bringing Open Science Into ‘Education’
by Tamara Diederichs

A group of five people from different backgrounds participated in this session. The basic question was about Open Science and its connection with education. The session, which was also recorded and whose result is available as a transcript in the pad, concluded that we need a cultural change, and that a cultural change towards Open Science can happen through education and the organisations within it.

The following questions were discussed:

  • What can organisations do to bring Open Science to the world or society?
  • What structures can be built within or together with organisations to bring Open Science into society?
  • Do organisations have Open Science strategies, and which ones?
  • Are there already movements that are bringing Open Science into education?

Some Conclusions:

  • There are different organisations that can promote Open Science, for instance universities or schools.
  • It is important to be transparent yourself and to inform others why transparency is important.
  • Collaboration should be an important approach in knowledge generation.
  • It is difficult to get people outside the Open Science bubble excited about Open Science.
  • Organisations need Open Science strategies.
  • The traditional education and science system can be described as a barrier to Open Science.

Open Scholarship Indicators
by Tamara Heck

Quite a few institutions have passed Open Science policies or guidelines, in which they commit to the principles of Open Science and recommend good practices for their research employees. According to policy templates, each policy may include aspects of ‘monitoring policy compliance’, that is, actions to assess the impact of the policy on daily research practices. However, the challenge is to measure Open Science practices properly, which means making the evaluation fair and transparent and not encouraging any undesirable practices.

Currently, the implementation of an open research culture is not fully measured. The most popular example of assessing the development of Open Science is counting Open Access publications (in relation to closed publications). Current dashboards aim to show more quantitative data on research output by institutions, such as Helmholtz and the Charité.

Looking at the quantified Open Science-related output, it is important to say that not all research practices are easily quantifiable. Moreover, such indicators should be defined according to domain specific aspects and their specific use case. Comparing numbers between different domains or entities can be misleading. Another challenge with semi-automated data collection for such indicators is missing or false metadata in our digital infrastructures. If such difficulties are reduced adequately, these indicators can be measured over longer time periods to see how parameters evolve and to better put the relative numbers into perspective.

Conclusion: Indicators can be used to measure how we evolve in Open Science practices over time. However, the development of such measures needs careful consideration, both on technical aspects like metadata and data collection, and on social aspects like the understanding and appreciation of researchers.

Open Science for Climate Justice
by Peter Murray-Rust and Renu Kumari

This 45-minute session was about the role of the semanticClimate tools, which are used to simplify chapters from IPCC reports and make them understandable to anyone in the world, irrespective of age and education level. The demo covered the tools pyamihtml, pygetpapers and docanalysis. The Colab notebook shared in the demo session contains the information needed to use all the tools, and was applied to view one of the outputs, a word cloud of useful keywords from the literature retrieved for the search query ‘climate justice and Africa’. It nicely highlights the terms that are most significant and prevalent in climate studies.
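
The word-cloud step can be illustrated without the actual tools: in essence, it is a keyword-frequency count over the retrieved texts. The following Python sketch uses invented placeholder abstracts in place of the papers pygetpapers would fetch for the query ‘climate justice and Africa’:

```python
import re
from collections import Counter

# Invented placeholder abstracts; in the demo, pygetpapers retrieved
# real papers matching the search query.
abstracts = [
    "Climate justice debates in Africa focus on adaptation and equity.",
    "Adaptation finance and climate justice remain contested in Africa.",
]

# Common words to exclude before counting (a tiny illustrative stop list).
STOPWORDS = {"in", "on", "and", "the", "remain", "focus", "debates"}

words = Counter(
    w for text in abstracts
    for w in re.findall(r"[a-z]+", text.lower())
    if w not in STOPWORDS
)

# The most frequent terms are what a word cloud would render largest.
print(words.most_common(4))
```

Tools like docanalysis add the domain-specific part on top of this idea: curated vocabularies instead of an ad-hoc stop list, and entity recognition instead of plain word counts.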

Open Science – Sticks and Carrots for Change
by Daniel Nüst

A transition to Open Science that is sustainable requires a cultural change in all aspects of academic research, even if this requires questioning long-held beliefs and established practices. I proposed that to achieve such a lasting shift in funding, sharing, evaluation, and career building practices that transcends nations and cultures, one needs to think about both incentives and encouragement – carrots – and policy and requirements – sticks!

In the session, the participants started by collecting the stakeholders of a cultural change, and identified rather classical roles in academia across different career stages, for example funders, professors, students, librarians, etc. An excellent resource that was shared in this context is the article ‘Promoting Open Science: A Holistic Approach to Changing Behaviour’, which includes suggestions for these different stakeholders within the academic system. It was noted that bottom-up initiation of behavioural change works to some degree, but that top-down measures are also needed. This perspective adds another dimension when thinking about cultural change. It was specifically discussed that leadership in organisations and communities needs to be involved, since waiting for a generational change (the Open Science enthusiasts among today's students become the professors of tomorrow) may take too long. The LIBER Citizen Science working group was pointed out as an initiative that successfully ran a course targeting leadership specifically in the context of Citizen Science, amongst other stakeholder-specific documentation (Citizen Science for Research Libraries – A Guide) – another good idea! With the question of generational change in mind, the discussion then shifted to whether ‘better education’ can bring about a cultural change. The experiences here were mixed. One participant reported that in one field of research, thinking about openness ends with Open Access, whereas another pointed out that Open Science communities and initiatives quickly tend to focus on education, but these activities don't seem to have a lasting impact: people participate in workshops but don't change their practices, and early career researchers don't have the power to introduce change at the needed scale.
The psychologists in the group pointed out a helpful idea: later career stages struggle to embrace change due to a very human trait and bias – professors think their approach was successful, so it must be the right one. Possibly reflecting the career stages of the group members, but also representing experiences as Open Science proponents, the majority was in favour of top-down changes, for instance clear incentives and different evaluation criteria. For such a policy-based approach, major activities such as COARA and DORA were introduced into the conversation, and were new to some of the participants. The latter was presented as a clever approach to shifting institutional policies in a sustainable way, because the partly abstract goals the declaration pursues can then enable individual members of the signing organisations who want to advance researcher assessment, for example in hiring, to justify changes.

The psychologists were also put on the spot in the meeting, as the group shifted the conversation to Psychology as a discipline, which some saw as leading in Open Science practices due to the impactful replication crisis. When the questionable replicability of important foundational scientific works became apparent, the discipline did shift its practices. Maybe ‘having a real crisis’ as a discipline, and being lucky that people want to learn from it, is the only way for change? Hopefully not.

Finally, the idea that one can draw from the experiences of Citizen Science was put forward, but argued against, too. On the one hand, similar to approaches to decolonisation, one should think about bringing knowledge back to the public and not just keeping it within the world of research. On the other hand, academic work in Citizen Science was not seen as more advanced in Open Science practices than other disciplines, falling into the same traps of publish or perish, slow change, and more.

All in all, the session at some moments resembled a group therapy session. Almost every participant was actively working towards a cultural change, but as individuals or small initiatives, many also often feel quite powerless. The ‘venting’ and sharing that happened during the session was just as important as the useful resources that were exchanged. The fun and open exchange helped participants find new energy to push towards a cultural change in academia, which was seen as needed by all who joined and contributed to the discussion.

To wrap up the session, everybody was invited to come up with “the one thing” that one would change to achieve a sustainable shift in academic culture. The following items were mentioned, and shall be listed here in full to value all inputs and give you more food for thought:

  • Incentive structures that give people the possibility to do good (mentioned five times), like more permanent positions, incentives that acknowledge that progress is slow, incentives that work towards a vision for openness, no publication-based dissertations
  • community building / ‘peer pressure’
  • force whole communities to move together
  • establish understanding that ‘open is better’
  • top-down pressure from (public) funders
  • document good practices really well, leading to amplification
  • find an agreement on ‘the right way’ to do research (with the help of technology), making academia a better community
  • more team science

Creating A Shared Definition of Open Science
by Merle-Marie Pittelkow

During the ignition talk, Peter Murray-Rust asked the audience to raise their hands if they had a clear idea of what Open Science was. In a room full of Open Science enthusiasts and advocates, I was expecting people to confidently throw their hands in the air, but only a few did. This lack of response inspired this session, which aimed to create a shared understanding of what Open Science means to the participants of the Barcamp Open Science. My hope was that this would foster and support the following discussions and avoid miscommunication between participants.

While you can find many definitions of Open Science online (e.g., as provided by FOSTER or UNESCO), there are individual differences in how these are interpreted and applied in practice. As a group we concluded that a monolithic, central definition of Open Science is not useful as what constitutes Open Science is context dependent and varies for example per scientific field, institutional context, and policies. Still, we were able to create a working definition of Open Science within the context of this event. The group agreed on the following aspects of Open Science:

  1. Sharing data
  2. Sharing results
  3. Sharing processes and methods (for instance ResearchEquals)
  4. Cocreation of a scientific process – outside of academic journals, more communal
  5. Digital long term storage (like NFDI structure, repositories) with sufficient documentation for the possibility to reuse the data in the long term
  6. Reflexive notes
  7. Preregistration; Registered Reports
  8. Pre-prints
  9. Replication
  10. Ensuring interoperability and then also doing the linking of data (part of FAIR data)

Semantic Annotations
by Lozana Rossenova

In this session, we introduced the ongoing work on semantic annotations for cultural heritage at the Open Science Lab (OSL), TIB, part of the NFDI4Culture consortium. The main tools we're developing focus on the annotation of 3D models and other multimedia cultural heritage representations. Annotations are structured as Linked Open Data (LOD), enriched with standard authority file data and made accessible via a SPARQL endpoint. The integrated toolchain is called Semantic Kompakkt and consists of Wikibase (for metadata storage as LOD) and Kompakkt (for publishing and annotating 2D, 3D and AV media). OpenRefine is used as the main data cleaning, reconciliation and upload tool. The session focused on the open, iterative approach towards the development of the toolchain around specific use cases with partner institutions of NFDI4Culture. We also discussed the role of Wikidata in facilitating federated queries and the benefits of working with semantic data in general, including the introduction of structured vocabularies and authority file data in the data enrichment process. The final point of discussion touched upon the Antelope service (also from OSL / NFDI4Culture) for terminology search and integration into the annotation workflow. A core focus of the whole session was the use of Open Source software and how further development of Free and Open Source Software (FOSS) in research contexts can both draw upon and support the maintainer communities. The source code from the NFDI4Culture project is available on GitLab.
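
For readers unfamiliar with SPARQL endpoints such as the one mentioned above: SELECT results come back in a standard W3C JSON format that is easy to process programmatically. The Python sketch below flattens a hand-made example response (the entity URI and label are invented, not real Kompakkt data):

```python
import json

# A SPARQL SELECT response in the standard W3C JSON results format,
# as returned by Wikibase/Wikidata-style query services.
# The entity and label below are invented placeholders.
sample_response = """{
  "head": {"vars": ["model", "label"]},
  "results": {"bindings": [
    {"model": {"type": "uri", "value": "https://example.org/entity/Q1"},
     "label": {"type": "literal", "value": "Bronze statue, 3D scan"}}
  ]}
}"""

def rows(response_text: str) -> list[dict]:
    """Flatten SPARQL JSON bindings into plain {variable: value} dicts."""
    data = json.loads(response_text)
    return [
        {var: binding[var]["value"] for var in binding}
        for binding in data["results"]["bindings"]
    ]

print(rows(sample_response))
```

This uniform result format is what makes federated queries attractive: a client can merge rows from several endpoints without caring which Wikibase instance produced them.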

Data links in the Semantic Kompakkt toolchain. Credit: Lozana Rossenova, CC-BY.

We Will Continue

We would like to thank all participants for their session proposals, which contributed to exciting discussions and the remarkable atmosphere especially on site, but also online. We are happy about a very active and constructive community. Thus, we will continue the hybrid barcamp and are already looking forward to next year’s tenth anniversary. The Barcamp Open Science is our personal Open Science success story! In spite of all the successes the movement has achieved, so much still needs to happen.

Barcamp Open Science 2023

This year’s Barcamp was again accompanied by the Open Science Radio team, who interviewed numerous session moderators. These episodes are currently being published bit by bit and can be found here.

About the authors:

Evgeny Bobrov is Project Leader for Open Data & Research Data Management at the QUEST Center for Responsible Research, which is a part of the Berlin Institute of Health at Charité. He addresses these topics from diverse perspectives, including researcher consulting, teaching, policy, and monitoring. A major focus currently is defining and evaluating the openness of datasets in a thorough way.

Christian Busse is a team leader at German Cancer Research Center (DKFZ). He has a medical background and holds a PhD in experimental immunology. His current work focuses on comprehensive solutions for the management of immunological data. Christian is co-chair of the Standards working group of the AIRR Community and a member of the NFDI4Immuno consortium. He can be found on Mastodon and ORCID.

Julien Colomb is a former neuro-geneticist (10 years of research on fruit fly memory and behaviour), and has been exploiting his interest in Open Research, working on reproducible data analysis and research data management, as well as more recently on Open Source research hardware. He is presently working on ways (technical and social) to implement the principles of FAIR and Open Data in the lab workflow and ways to foster collaboration between researchers via the SmartFigure Gallery and the GIN-Tonic projects. On the other hand, he is fostering the development of open hardware in academia and beyond, inside the OpenMake project. He can be found on Mastodon.

Tamara Diederichs is Co-CEO at and responsible for content and publishing. She has a scientific background with a focus on formal and non-formal adult learning, knowledge transfer, organisational learning and Open Science. She is also a researcher and honorary lecturer at the Department of Educational Sciences at the University of Koblenz. Among others, she can be found on the following channels: LinkedIn and ORCID.

Tamara Heck works at the Information Centre for Education at DIPF | Leibniz Institute for Research and Information in Education. She investigates how digital infrastructures can facilitate and influence information seeking, and how they can support Open Science practices. Tamara Heck can be found on X and LinkedIn.

Renu Kumari works as a programme manager at #semanticClimate, NIPGR, New Delhi, India.

Peter Murray-Rust is a chemist currently working at the University of Cambridge. As well as his work in chemistry, Murray-Rust is also known for his support of Open Access and Open Data.

Daniel Nüst is a research software engineer and postdoc at the Chair of Geoinformatics, TU Dresden, Germany. He develops tools for open and reproducible geoscientific research and is a proponent for open scholarship and reproducibility in the projects NFDI4Earth, o2r, KOMET, and CODECHECK. He can be found on Mastodon, LinkedIn, GitHub, ORCID, and many more platforms via Nordholmen.

Merle-Marie Pittelkow is a postdoctoral researcher at the QUEST Center for Responsible Research, Berlin Institute of Health at Charité Berlin. In her work, she focuses on research ethics and increasing transparency in informed decision making. As a former OSCG board member and co-chair of ReproducibilitTEA at the University of Groningen, she has been an advocate of Open Science since her PhD.

Lozana Rossenova is a postdoctoral researcher at the Open Science Lab at TIB – Leibniz Information Centre for Science and Technology, and works on the NFDI4Culture project, in the task areas for data enrichment and knowledge graph development for cultural heritage research data. She serves as Wikibase community manager within NFDI4Culture and is a co-founder of the Wikibase Stakeholder Group. She can be found on Mastodon.

Guido Scherp is Head of the “Open Science Transfer” department at the ZBW – Leibniz Information Centre for Economics. He can be found on Mastodon and LinkedIn.

All photos: Bettina Ausserhofer©

The post Barcamp Open Science 2023: So much has happened and so much still needs to happen! first appeared on ZBW MediaTalk.

Open Access Barcamp 2023: Live and in Colour

by Danny Flemming

No journey was too far: participants came from Hamburg, Berlin, Austria and Switzerland, among other places, to Konstanz on Lake Constance on 28 March 2023 – despite nationwide train strikes. The enthusiasm to finally discuss Open Access developments and challenges in person, after the online formats of previous years, was too great. The Open Access Barcamp 2023 was organized as part of the project, which not only operates the portal of the same name, but also plays a key role in networking the Open Access community. The event was held at the Communication, Information and Media Centre (KIM) of the University of Konstanz. Its premises offered ideal conditions for productive face-to-face work and exchange in small and large groups, which formed spontaneously depending on the programme item.

Host Dr Anja Oberländer, deputy director of the KIM, warmly welcomed all participants and, after introductory words, handed over to project coordinator Andreas Kirchner, who broke the ice with an interactive round of getting to know each other and thus laid the foundation for a motivating, relaxed and yet concentrated working atmosphere.

Tailor-made programme with session planning

The actual programme was designed by the participants themselves according to their wishes and preferences using session planning. To this end, project team member Dr Martina Benz visualized all the suggestions for 90-minute sessions that had been submitted online in advance, which could then be spontaneously supplemented with additional ideas.

Figure 1: A programme tailored to the wishes of the participants. Photo: Andreas Kirchner©.

In total, nine different sessions with lectures, discussions and workshops on Open Access as well as a library tour came together, from which the participants could choose their respective favorites and the KIM organization team worked out the schedule for the rest of the day.

Keeping track – and setting up a publishing house yourself?

The first session, “Open Access Monitoring (Automate),” was offered by Dr Andreas Walker (Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Bremerhaven) and Christian Berger (Freiburg University of Education). Together with them, participants discussed how best to determine an institution’s Open Access rate. One possible solution mentioned was to take the topic of Open Access into account as early as the establishment of a research information system, and to set up a reliable data source through an interface to the institutional repository. This would help the library to generate publication lists and quarterly reports – which in turn would offer added value for the institution’s researchers.
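
Once such a reliable data source is in place, the Open Access rate itself is a simple calculation. The following Python sketch assumes hypothetical publication records with an `oa_status` field, as one might export from a repository or research information system:

```python
# Invented placeholder records; in practice these would come from the
# institutional repository or research information system.
publications = [
    {"doi": "10.1000/a", "oa_status": "gold"},
    {"doi": "10.1000/b", "oa_status": "green"},
    {"doi": "10.1000/c", "oa_status": "closed"},
    {"doi": "10.1000/d", "oa_status": "hybrid"},
]

# Statuses counted as Open Access (an assumption; definitions vary
# between institutions and monitoring projects).
OPEN = {"gold", "green", "hybrid", "diamond"}

def oa_rate(pubs: list[dict]) -> float:
    """Fraction of publications with an Open Access status."""
    open_count = sum(1 for p in pubs if p["oa_status"] in OPEN)
    return open_count / len(pubs)

print(f"OA rate: {oa_rate(publications):.0%}")  # → OA rate: 75%
```

The hard part, as the session made clear, is not this division but the data quality behind it: which statuses count as open, and whether the repository metadata is complete and correct.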

Figure 2: From Session 1: How do we actually know what our Open Access rate is? Photo: Danny Flemming©

In parallel, the session “Founding an Open Access University Press” proposed by Gerhard Bissels (PH Schwyz) took place, in which possibilities for founding university-owned publishing houses were discussed. “Berlin Universities Publishing” was mentioned as an example: a joint publishing house founded by four Berlin institutions, FU, HU, TU and Charité. Based on this, the participants discussed various organizational and financing models and emphasized the importance of cross-institutional, ideally state-wide initiatives.

In the following session, Anke Rautenberg (KIM) outlined the concept of an information budget encompassing all revenues and expenses of a library for scientific information (for the concept of the information budget, see Pampel, Heinz, Auf dem Weg zum Informationsbudget: zur Notwendigkeit von Monitoringverfahren für wissenschaftliche Publikationen und deren Kosten, working paper, Potsdam 2019 (German), and Mittermaier, Bernhard, Das Informationsbudget: Konzept und Werkstattbericht, in o-bib 4 (2022), pp. 1-17 (German)), which is already under development at the University of Konstanz. As success factors in the practical implementation, she emphasized the trustful cooperation between the departments involved and the increasing use of automated solutions.

Meanwhile, in the parallel session, Dr Martina Benz presented results from the project “Open4DE: Status and Perspectives of the Open Access Transformation in Germany” (documents, data and results of the project (German)). According to Benz, the further development of the Open Access transformation requires a transformation of Open Access funding, the development of information infrastructures, and a reform of research evaluation.

Speed dating and future perspectives

After the lunch break, a speed dating session ensured movement and individual networking before the participants had to choose between a workshop on the question “How can we better reach researchers?”, a session on financing models for Open Access, and a library tour by KIM subject specialist Livia Gertis.

In the workshop, Dr Danny Flemming (KIM) addressed the challenge of better communicating information and networking services on Open Access to the target group of scientists and scholars. Many experiences and best practices were collected, and it was emphasized that scientists must be offered a service with added value if they are to be persuaded to take a closer look at Open Access.

At the same time, Dr Daniela Hahn (University of Zurich) and Dr Martina Benz presented PLATO and KOALA, two model projects in the field of Diamond Open Access (see also the Action Plan for Diamond Open Access presented in 2022 by Science Europe, cOAlition S, OPERAS and the Agence Nationale de la Recherche (ANR), which is now also available in German).

This area was taken up again in the following session, which raised the question “Open Access funding quo vadis?”. During the discussion on developments and future perspectives of Open Access funding, it became clear that Diamond Open Access is a particularly promising and desirable model.

A long and productive day

At the suggestion of Marc Lange (HU Berlin), another session explored how to counter so-called “predatory journals” – a dubious business model in which anything is published unchecked in exchange for often excessive fees. In particular, a Swiss Open Access publisher was hotly debated.

In a third parallel session, Nicolas Bach (Stuttgart Media University) presented a practical example of data-sovereign Open Access publishing. He showed how a decentralized distributed file system and a blockchain supported by the research community can be used to guarantee the integrity and authenticity of a publication.
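The session named the general technique only; as an illustration of the underlying principle (not the specific system presented), content-addressed storage ties a publication’s identifier to a cryptographic hash of its bytes, so any later modification becomes detectable once the identifier has been recorded immutably, e.g. on a blockchain. A minimal Python sketch, with hypothetical function names chosen for illustration:

```python
import hashlib

def content_id(data: bytes) -> str:
    """Derive a content identifier from the document bytes (SHA-256)."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, cid: str) -> bool:
    """A document is intact only if its bytes still hash to the recorded ID."""
    return content_id(data) == cid

# Hypothetical example document
pdf = b"%PDF-1.7 example publication"
cid = content_id(pdf)  # this ID would be recorded immutably at publication time

assert verify(pdf, cid)              # unchanged document verifies
assert not verify(pdf + b"x", cid)   # any tampering breaks verification
```

Because the identifier is derived from the content itself, integrity checking needs no trusted intermediary: anyone holding the recorded ID can re-hash the file and compare.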

Figure 3: Concentrated work in a professional atmosphere. Photo: Danny Flemming©

In a closing session, the most important results of all sessions were compiled and feedback on the event was collected. The participants emphasized the pleasant and constructive cooperation, which had noticeably benefited from the in-person exchange. Further barcamps are planned for 2024 and 2025. Locations, times, and registration options will be posted on the calendar as soon as they are known. The calendar provides a continuously updated overview of future events.


About the Author:
Dr. Danny Flemming (contact via mail), Graduate Psychologist, received his doctorate from Eberhard Karls University in Tübingen, where he conducted research at the Leibniz Institute for Knowledge Media and is now part of the Open Science team at the Communication, Information, Media Centre (KIM) at the University of Konstanz. He works in the project, which promotes competence development and networking in the Open Access field.
Portrait, photographer: Christian Hartz©

Photos: Danny Flemming© and Andreas Kirchner©

The post Open Access Barcamp 2023: Live and in Colour first appeared on ZBW MediaTalk.

INCONECSS 2022 Symposium: Artificial Intelligence, Open Access and Data Dominate the Discussions

by Anastasia Kazakova

The third INCONECSS – International Conference on Economics and Business Information – took place online from 17 to 19 May 2022. The panels and presentations focused on artificial intelligence, Open Access and (research) data. INCONECSS also addressed collaboration in designing services for economics research and education and how these may have been influenced by the corona crisis.

Unleash the future and decentralise research!

Prof. Dr Isabell Welpe, Chair of Business Administration – Strategy and Organisation at the Technical University of Munich, gave the keynote address “The next chapter for research information: decentralised, digital and disrupted”. With this, she wanted to inspire the participants to “unleash the future” and decentralise research. The first topic of her presentation was about German universities. Isabell Welpe took us on a journey through three stations:

  1. What happens at universities?
  2. What does the work of students, researchers and teachers and the organisation at universities look like?
  3. How can universities and libraries be made future-proof?

In her lecture, she pointed out that hierarchically organised teaching is currently often unable to cope with the rapid social changes and new developments in the world of work. Isabell Welpe therefore suggested opening up teaching and organising it “bottom up”. This means relying on the decentralised self-organisation of students, offering (digital) spaces for exchange and tailoring teaching to their needs. Through these changes, students can learn while actively participating in research, which simultaneously promotes their creativity and agility. This is a cornerstone for disruptive innovation; that is, innovation that breaks and radically changes existing structures.

Prof. Dr Isabell Welpe, Chair of Business Administration – Strategy and Organisation at the Technical University of Munich, drawing: Karin Schliehe

Libraries could support and even drive the upcoming changes. In any case, they should prepare themselves for enormous changes due to the advancing digitisation of science. Isabell Welpe observed the trend towards “digital first” in teaching – triggered by the coronavirus situation. In the long term, this trend will influence the role of libraries as places of learning, but will also determine interactions with libraries as sources of information. Isabell Welpe therefore encouraged libraries to become a market-place in order to promote exchange, creativity and adaptability. The transformation towards this is both a task and an opportunity to make academic libraries future-proof.

In her keynote speech, Isabell Welpe also focused on the topic of decentralisation. One of the potentials of decentralisation is that scientists exchange data directly and share research data and results with each other, without, for example, publishers in between. Keywords were: Web 3.0, Crypto Sci-Hub and Decentralisation of Science.

In the Q&A session, Isabell Welpe addressed the image of libraries: Libraries could be places where people would go and do things, where they would exchange and would be creative; they could be places where innovation took place. She sees libraries as a Web 3.0 ecosystem with different services and encouraged them to be more responsive to what users need. Her credo: “Let the users own a part of the library!”

How can libraries support researchers?

Following on from the keynote, many presentations at INCONECSS dealt with how libraries can succeed even better in supporting researchers. On the first day, Markus Herklotz and Lars Oberländer from the University of Mannheim presented their ideas on this topic with a poster (PDF, partly in German). The focus was on the interactive virtual assistant (iVA), which enables data collaboration by imparting legal knowledge. Developed by the BERD@BW and BERD@NFDI initiatives, the iVA helps researchers to understand the applicable data protection regulations and thereby to evaluate their legal options for data use. The self-directed assistant is an open-source learning module and can be extended.

Paola Corti from SPARC Europe introduced the ENOEL toolkit with her poster (PDF). It is a collection of templates for slides, brochures and Twitter posts to help communicate the benefits of Open Education to different user groups. The aim is to raise awareness of the importance of Open Education. It is openly designed, available in 16 language versions and can be adapted to the needs of the organisation.

On the last day of INCONECSS, Franziska Klatt from the Economics and Management Library of the TU Berlin reported in her presentation (PDF) on another toolkit, one that supports researchers in applying the Systematic Literature Review Method (SLRM). Originating from the medical field, the method was adapted to the economic context. SLRM helps researchers to reduce bias and redundancy in their work by following a formalised and transparent process that is reproducible. The toolkit provides a collection of information on the stages of this process, as well as SLR sources, tutorial videos and sample articles. Through the use of the toolkit and the information on the associated website, the media competence of young researchers could be improved. An online course is also planned.

Field reports: How has the pandemic changed the library world?

The coronavirus is not yet letting go of the world, which also applies to the world of the INCONECSS community: In the poster session, Scott Richard St. Louis from the Federal Reserve Bank of St. Louis presented his experiences of onboarding in a hybrid work environment. He addressed individual aspects of remote onboarding, such as getting to know new colleagues or the lack of a physical space for meetings.

The poster (PDF) is worth a look, as it contains a number of suggestions for new employees and management, e.g.:

  • “Be direct, and even vulnerable”,
  • “Be approachable” or
  • “What was once implicit or informal needs to become explicit or conscious”.

Arjun Sanyal from the Central University of Himachal Pradesh (CUHP) reported in his presentation (PDF) on a project of his library team. They observed that the long absence from campus triggered a kind of indifference towards everyday academic life and an “informational anxiety” among students. The latter manifests itself in a reluctance to use information resources for studying, out of anxiety about searching for them. To counteract this, the librarians used three types of measures: mind-map sessions, an experimental makerspace and supportive motivational events. In the mind-map sessions, for example, the team collected ideas for improving library services together with the students. The effort paid off: after a while, they noticed that the campus, and the libraries in particular, were once again popular. In addition, the makerspace and motivational events helped students to rediscover the joy of learning, reports Arjun Sanyal.

Artificial Intelligence in Libraries

One of the central topics of the conference was without doubt the use of artificial intelligence (AI) in the library context. On the second day of INCONECSS, the panel participants from the fields of research, AI, libraries and thesaurus/ontology looked at aspects of the benefits of AI for libraries from different perspectives. They discussed the support of researchers through AI and the benefits for library services, but also the added value and the risks that arise through AI.

Discussion, drawing: Karin Schliehe

The panellists agreed that new doors would open up through the use of AI in libraries, such as new levels of knowledge organisation or new services and products. In this context, it was interesting to hear Osma Suominen from the National Library of Finland say that AI is not a game changer at the moment: it has the potential, but is still too immature. In the closing statements, the speakers took up this idea again: They were optimistic about the future of AI, yet a sceptical approach to this technology is appropriate. It is still a tool. According to the panellists, AI will not replace librarians or libraries, nor will it replace research processes. The latter require too much creativity for that. And in the case of libraries, a change in business concepts is conceivable, but not the replacement of the institution of the library itself.

It was interesting to observe that the topics that shaped the panel discussion kept popping up in the other presentations at the conference: Data, for example, in the form of training or evaluation data, was omnipresent. The discussants emphasised that the quality of the data is very important for AI, as it determines the quality of the results. Finding good and usable data is still complex and often related to licences, copyrights and other legal restrictions. The chatbot team from the ZBW also reported on the challenges surrounding the quality of training data in the poster session (PDF).

The question of trust in algorithms was also a major concern for the participants. On the one hand, it was about bias, which is difficult and requires great care to remove from AI systems. Again, data was the main issue: if the data was biased, it was almost impossible to remove the bias from the system. Sometimes this even led to systems not going live at all. On the other hand, it was about trust in the results that an AI system delivers. Because AI systems are often non-transparent, it is difficult for users and information specialists to trust the search results provided by an AI system for a literature search. These are two of the key findings from the presentation (PDF) by Solveig Sandal Johnsen from AU Library, The Royal Library and Julie Kiersgaard Lyngsfeldt from Copenhagen University Library, The Royal Library. The team from Denmark investigated two AI systems designed to assist with literature searches. The aim was to investigate the extent to which different AI-based search programmes supported researchers and students in academic literature search. During the project, information specialists tested the functionality of the systems using the same search tasks. Among other results, they concluded that the systems could be useful in the exploratory phase of the search, but they functioned differently from traditional systems (such as classic library catalogues or search portals like EconBiz) and, according to the presenters, challenged the skills of information specialists.

This year, the conference took place exclusively online. As the participants came from different time zones, it was possible to attend the lectures asynchronously and after the conference. A selection of recorded lectures and presentations (videos) is available on the TIB AV portal.

Links to INCONECSS 2022:

  • Programme INCONECSS
  • Interactive Virtual Assistant (iVA) – Enabling Data Collaboration by Conveying Legal Knowledge: Abstract and poster (PDF)
  • ENOEL toolkit: Open Education Benefits: Abstract and poster (PDF)
  • Systematic Literature Review – Enhancing methodology competencies of young researchers: Abstract and slides (PDF)
  • Onboarding in a Hybrid Work Environment: Questions from a Library Administrator, Answers from a New Hire: Abstract and Poster (PDF)
  • Rethinking university librarianship in the post-pandemic scenario: Abstract and slides (PDF)
  • “Potential of AI for Libraries: A new level for knowledge organization?”: Abstract Panel Discussion
  • The EconDesk Chatbot: Work in Progress Report on the Development of a Digital Assistant for Information Provision: Abstract and slides (PDF)
  • AI-powered software for literature searching: What is the potential in the context of the University Library?: Abstract and slides (PDF)


About the Author:

Anastasia Kazakova is a research associate in the department Information Provision & Access and part of the EconBiz team at the ZBW – Leibniz Information Centre for Economics. Her focus is on user research, usability and user experience design, and research-based innovation. She can also be found on LinkedIn, ResearchGate and XING.
Portrait: Photographer: Carola Grübner, ZBW©

The post INCONECSS 2022 Symposium: Artificial Intelligence, Open Access and Data Dominate the Discussions first appeared on ZBW MediaTalk.

Open Access Barcamp 2022: Where the Community Met

by Hannah Schneider and Andreas Kirchner

This year’s Open Access Barcamp took place online once again, on 28 and 29 April 2022. From 9:00 a.m. to 2:30 p.m. on both days, the roughly 50 participants were able to put together their own varied programme, and engage in lively discussions about current Open Access topics.

Open Access Barcamp 2022 Agenda

What worked well last year was repeated this year: The innovative conference tool Gather was again used to facilitate online discussions, and the organisers prioritised opportunities to have discussions and to network when designing the programme. They integrated a speed-dating format into the programme and offered an open round at topic tables. In the context of the project, the Communication, Information and Media Centre (KIM) of the University of Konstanz once again hosted the Barcamp. While interactively planning the sessions on the first day, it became clear that the Open Access community is currently dealing with a very wide range of topics.

Illustration 1: tweet about the topic tables

The study recently published by the TIB – Leibniz Information Centre for Science and Technology University Library entitled “Effects of Open Access” (German) was presented in the first session. This review of literature examined 61 empirical papers from the period 2010-2021, analysing various impact dimensions of Open Access, including aspects such as attention garnered in academia, the quality of publications, inequality in the science system or the economic impact on the publication system.

The result on the citation advantage of Open Access publications was discussed particularly intensively. Here, the data turned out to be less clear than expected. However, it was also noted that methodological difficulties could occur during measurement in this field. The result of the discussion was that a citation advantage of Open Access can continue to be assumed and can also be cited in advisory discussions. “All studies that show no advantage do not automatically prove a citation disadvantage,” as one participant commented.

Tools and projects to support Open Access

Various tools to support Open Access publishing were particularly popular this year. With “B!SON”, a recommendation service was presented that helps many scientists and scholars find a suitable Open Access journal for articles that have already been written. The title, abstract and references are entered into the tool, which then displays suitable Open Access journals on this basis and awards each a score that can be used to determine a “match”. B!SON is being developed by the TIB and the Saxon State and University Library Dresden (SLUB).

Another useful service with a similar goal was introduced in the form of the “oa.finder”, developed by the Bielefeld University Library in the context of the project. Authors can use this research tool to find suitable publication venues by entering their own role in the submission process, as well as the scientific institution where they work. The results can be tailored to individual needs using different search and filter options. Both tools are currently in beta – the developers are still particularly keen to receive feedback.

A further session was dedicated to the question of what needs to be considered when converting from PDF to PDF/a in the context of long-term archiving, and which tools can be drawn upon to validate PDF/a files. This provided an opportunity to discuss the advantages and disadvantages of tools such as JHOVE, veraPDF and AvePDF.

The KOALA project (building consortial Open Access solutions) showed us which standards (German) apply for journals and publication series that participate in financing through KOALA consortia. Based upon these standards, the project aims to create an instrument that contributes to safeguarding processes and quality in journals and publishing houses. The project is developing sustainable, collaborative funding through scientific libraries, in order to establish an alternative to the dominant APC model.

Illustration 2: Results on the User Experience of the website

In addition, the project gave the Barcamp participants the opportunity to give feedback on its services. On the one hand, they evaluated the range of information and discussed the newly designed website. On the other, they focussed on the project events, discussing achievements and making suggestions for improvement. Here, the breadth of the different formats received particular praise, as did the fact that offers such as the “Open Access Talk” series have become very well established in the German-speaking countries.

Open Access communication: Reaching the target audience

Many members of the community are still working on how best to bring OA issues to different audiences. One of the sessions emphasised that, although the aspect of communication in Open Access work was regarded as very important, the required skills are often lacking – not least, because it has hardly played a role in library training to date. One of the central challenges in reaching the individual target groups is that different communication routes need to be served, which in turn requires strategic know-how. In order to stabilise and intensify the exchange, the idea of founding a focus group within the framework of the project was proposed; this will be pursued further during a preparatory meeting at the end of June 2022.

Illustration 3: Screenshot of MIRO whiteboard for documenting the Barcamp

Another session also considered communicative ways to disseminate Open Access. Here, low-threshold exchange formats were discussed. The Networking and Competence Centre Open Access Brandenburg relocated its own “Open Access Smalltalk” series (German) to the Barcamp – very much in the spirit of openness – and initiated a discussion on how to get interested people around the table. In particular, it was argued that virtual formats offer a lower barrier to participation in such exchanges and that warm-ups can genuinely help to mobilise participants.

Challenges faced by libraries

The issues and challenges of practical day-to-day Open Access at libraries were also discussed a great deal this year. The topic of how to monitor publication costs found great resonance, for example, and was discussed both in a session and in one of the subsequent discussions at the topic tables. Against the backdrop of increasing Open Access quotas and costs, libraries face the urgent challenge of getting an overview of central and decentralised publication costs. Here they are applying various techniques, such as decentralised use of their own inventory accounts, but also their own research and the help of the Open Access monitor.

A further session explored the topic of secondary publication service, specifically looking at which metadata can be gathered on research funders in repositories, and how. The discussion covered specific practical tips for implementation, including recommendations for the metadata schemata Crossref and RADAR/DataCite, for example.

One of the final sessions at the Barcamp explored the issue of how libraries can ensure that they provide “appropriate” publication opportunities. In doing so, reference was made to the “Recommendations for Moving Scientific Publishing Towards Open Access” (German), published by the German Council of Science and Humanities in 2022. To find out which publication routes researchers want and need, it is necessary to be in close contact with the various scientific communities. The session considered how contacts could be improved within the participants’ own institutions. Various communication channels were mentioned, such as via subject specialists, faculty councils/representatives or seminars for doctoral candidates.

Illustration 4: Screenshot of feedback from the community


We can look back on a multifaceted and lively Open Access Barcamp 2022. The open concept was well received, and there was considerable willingness from the participants to actively join in and help shape the sessions. The jointly compiled programme offered a wide range of topics and opportunities to discuss everyday Open Access issues. In this virtual setting, people also joined in and contributed to the collegial atmosphere. After the two days, the community returned to everyday life armed with new input and fresh ideas; we would like to thank all those who took part, and look forward to the next discussion.


Workshop Retrodigitisation 2022: Do It Yourself or Have It Done?

by Ulrich Blortz, Andreas Purkert, Thorsten Siegmann, Dawn Wehrhahn and Monika Zarnitz

Workshop Retrodigitisation: topics

Under the workshop title “Do It Yourself or Have It Done? Collaboration With External Partners and Service Providers in Retrodigitisation”, around 230 practitioners specialised in the retrodigitisation of library and archive materials met in March 2022. This year, the Berlin State Library – Prussian Cultural Heritage hosted the retrodigitisation workshop (German), which was held online due to the pandemic. The workshop was first initiated in 2019 by the three German central specialist libraries – ZB MED, TIB Hannover and ZBW. All four institutions jointly organised a programme that dealt, on the one hand, with “Do it yourself or have it done?” and, on the other, with the question “Is good = good enough?” concerning quality assurance in retrodigitisation. After each of the eight presentations, many interesting questions were asked and lively discussions developed.

Keynote: colourful and of high quality

The keynote on “Inhouse or Outsource? Two Contrasting Case Studies for the Digitisation of 20th Century Photographic Collections” (PDF) was given by two English colleagues, Abby Matthews (Archive and Family History Centre) and Julia Parks (Signal Film & Media/Cooke’s Studios). They reported on their projects digitising photographic records and old photographs from municipal archives, which they carried out in cooperation with volunteers.

This was a particular challenge because of the coronavirus pandemic. Both were able to report that involving those who later took an interest in this offer created a special relationship with this local cultural heritage. The volunteers’ experience also contributed a great deal – especially to the documentation of the images, the speakers said.

Cooperation: many models

The first focus of the workshop was on collaboration in retrodigitisation. There were five presentations on this, covering a wide range of topics:

Nele Leiner and Maren Messerschmidt (SUB Hamburg) reported in their presentation on “Class Despite Mass: Implementing Digitisation Projects with Service Providers” (PDF, German) on two retrodigitisation projects in which they worked together with service providers. It was about the projects “Hamburg’s Cultural Property on the Net” (German) and a project that was funded by the German Research Foundation (DFG) in which approx. 1.3 million pages from Hamburg newspapers are being digitised.

Andreas Purkert and Monika Zarnitz (ZBW) gave a presentation on “Cooperation With Service Providers – Tips for the Preparation of Specifications” (PDF, German). They shared tips and tricks for preparing procurement procedures for digitisation services.

Julia Boensch-Bär and Therese Burmeister (DAI) presented the “‘Retrodigitisation’ Project of the German Archaeological Institute”, which is about having the institute’s own (co-)edited publications digitised. They described the work processes that ensured the smooth implementation of the project with service providers.

Natalie Przeperski (IJB Munich), Sigrun Putjenter (SBB-PK Berlin), Edith Rimmert (UB Bielefeld) and Matthias Kissler (UB Braunschweig) are jointly running the Colibri project (German). In their presentation “Colibri – the Combination of All Essential Variants of the Digitisation Workflow in a Project of Four Partner Libraries” (PDF, German), they reported on how the work processes for the joint digitisation of children’s book collections are organised. The challenge was to coordinate both the cooperation of the participating libraries and that with a digitisation service provider.

Stefan Hauff-Hartig (Parliamentary Archives of the German Bundestag) reported on the “Retro-digitisation Project in the Parliamentary Archives of the German Bundestag: The Law Documentation” (PDF, German). 12,000 individual volumes covering the period from 1949 to 2009 are to be processed. Hauff-Hartig reported on how the coordination of the work was organised with a service provider.

Conclusion: In the presentations on cooperation with other institutions and service providers, it became clear that the success of the project depends heavily on intensive communication between all participants and careful preparation of joint work processes. The organisational effort for this is not insignificant, but the speakers were nevertheless able to show that the synergy effects of cooperation outweigh the costs and that projects only become possible when others are involved.

Quality assurance: Is “good” = good enough?

This question was posed somewhat self-critically by the speakers in this thematic block. Procedures and possibilities for quality assurance of the digitised material were presented:

Stefanie Pöschl and Anke Spille (Digital German Women’s Archive) contrasted the quality, effort and cost considerations of “doing it yourself” with those of purchasing services. In their presentation on “Quality? What for? The Digital German Women’s Archive Reports From Its Almost 6-year Experience With Retrodigitisation” (PDF, German) they looked at the use of standards to ensure the highest possible level of quality.

Yvonne Pritzkoleit and Silke Jagodzinski (Secret State Archives – Prussian Cultural Heritage) presented their institution’s quality assurance concept under the title “Is Good Good Enough? Quality Assurance in Digitisation”. The concept is based on the ISO/TS 19264-1:2017 standard for image quality and can provide many suggestions for other institutions.

Andreas Romeyke (SLUB Dresden) explained in his presentation “Less is More – the Misunderstanding of Resolution” (PDF, German) why less is often more when it comes to the resolution of images. He described what is meant by resolution, how to determine a suitable resolution and what effects wrongly chosen resolutions can have.

Conclusion: Increasingly, digitised material is not only used as a document for reception in academic work, but itself becomes research data, used e.g. in the context of the digital humanities. This results in special quality requirements that are not always easy to implement. The three presentations on this topic showed different approaches and also made clear that keeping effort and benefit in a reasonable relationship is an important concern for quality management. It became clear that standards such as ISO 19264-1 are increasingly being applied, even if not always by the textbook, but rather within the range of technical and personnel possibilities.

Workshop Retrodigitisation 2022: lively discussions – good feedback

In the first part of the workshop, all presentations contained concrete recommendations and useful tips for the design of digitisation projects with service providers. Many aspects described in the presentations and discussed afterwards were strongly practice-oriented, so that participants could draw on them for their own projects with service providers, and they offered a good basis for future planning. It was particularly interesting to hear what volumes of pages can realistically be handled in projects with service providers and how projects involving several institutions could be successfully implemented despite the pandemic.

The presentations on the topic of quality in the second block of the workshop also met with great interest. Again, all contributions included many practical tips that can be applied to the audience’s own organisations.

In summary, it can be said that the workshop with its many interesting contributions showed the many different ways of working with service providers and the increasing importance of quality management.

The feedback survey showed that the workshop was again very well received this year. All participants were able to take away plenty of new ideas and inspiration. The organising institutions will offer another workshop next year. In 2023, it will be hosted by the ZBW.

This text has been translated from German.

Further reading:

About the authors:

Ulrich Ch. Blortz is a qualified librarian for the higher service in academic libraries and a library official. He has worked at the former Central Library of Agricultural Sciences in Bonn since 1981 and has also been responsible for retrodigitisation at the ZB MED – Information Centre for Life Sciences since 2003.

Andreas Purkert is a trained freight forwarding and logistics merchant. In the private sector, he worked as a certified quality representative and quality manager and holds the REFA basic certificate in work organisation. Since May 2020, he has been head of the Digitisation Centre of the ZBW – Leibniz Information Centre for Economics.

Thorsten Siegmann is Head of Unit at the Berlin State Library and responsible for managing retrodigitisation. He holds a degree in cultural studies and has worked in various functions at the Foundation Prussian Cultural Heritage for 15 years.

Dawn Wehrhahn has been a qualified librarian since 1992. Since then she has worked, with a short interruption, at TIB – Leibniz Information Centre for Technology and Natural Sciences and University Library. Her areas of work were: Head of the Wunstorf Municipal Library, Head of the Physics Department Library at TIB, from 2001 Team MyBib Operations within TIB’s full text supply. Since October 2021, she has headed the retrodigitisation team.

Dr Monika Zarnitz is an economist and Head of the Programme Area User Services and Preservation at the ZBW – Leibniz Information Centre for Economics.

The post Workshop Retrodigitisation 2022: Do It Yourself or Have It Done? first appeared on ZBW MediaTalk.

Barcamp Open Science 2022: Connecting and Strengthening the Communities!

by Yvana Glasenapp, Esther Plomp, Mindy Thuna, Antonia Schrader, Victor Venema, Mika Pflüger, Guido Scherp and Claudia Sittner

As a pre-event of the Open Science Conference, the Leibniz Research Alliance Open Science and Wikimedia Germany once again invited participants to the annual Barcamp Open Science (#oscibar) on 7 March. The Barcamp was held completely online again. By now well-versed in online events, a good 100 participants turned up to openly discuss a diverse range of topics from the Open Science universe with like-minded people.

As at the Barcamp Open Science 2021, the spontaneous compilation of the programme showed that the majority of the sessions had already been planned and prepared in advance. The spectrum of topics ranged from very broad questions such as “How to start an Open Science community?” to absolutely niche discussions, such as the one about the German Data Use Act (Datennutzungsgesetz). But no matter how specific the topic, there were always enough interested people in the session rooms for a fruitful discussion.

Ignition Talk by Rima-Maria Rahal

In this year’s “Ignition Talk”, Rima-Maria Rahal skilfully summed up the precarious working conditions in the science system. These include, on the one hand, temporary positions and the competitive pressure in the science system (in Germany, this is currently characterised by the #IchBinHanna debate, German), and on the other hand, the misguided incentive system with its focus on the impact factor. Not surprisingly, her five thoughts on more sustainable employment in science also met with great approval on Twitter.

Rima-Maria Rahal: Five Thoughts for More Sustainable Employment

Those interested in her talk “On the Importance of Permanent Employment Contracts for Research Quality and Robustness” can watch it on YouTube (recording of the same talk at the Open Science Conference).

In the following, some of the session initiators have summarised the highlights and most interesting insights from their discussions:

How to start an Open Science community?
by Yvana Glasenapp, Leibniz University Hannover

Open Science activities take place at many institutions at the level of individuals or working groups, without there being any exchange between them.

In this session we discussed the question of what means can be used to build a community of those interested in Open Science: What basic requirements are needed? What best practice examples are there? Ideas can be found, for example, in this “Open Science Community Starter Kit”.

The Four Stages of Developing an Open Science Community, from the “Open Science Community Starter Kit” (CC BY-NC-SA 4.0)

There is a perception among many that there is a gap between the information offered by central institutions such as libraries and research services and the actual community of implementers. These central bodies can take on a coordinating role, promoting existing activities and networking the groups involved. It is important to respect specialisation within the Open Science community: grassroots initiatives often form within a discipline in response to the specific needs of its professional community.

Key persons such as data stewards, who are in direct contact with researchers, can establish contacts for stronger networking among Open Science actors. The communication of Open Science principles should not be too abstract. Incentives and the demonstration of concrete advantages can increase the motivation to use Open Science practices.

Conclusion: If a central institution from the research ecosystem wants to establish an Open Science community, it would do well to focus, for example, on promoting existing grassroots initiatives and to offer concrete, directly applicable Open Science tools.

Moving Open Science at the institutional/departmental level
by Esther Plomp, Delft University of Technology

In this session all 22 participants introduced themselves and presented a successful (or not so successful!) case study from their institution.

Opportunities for Open Science

A wide variety of examples of raising awareness of, or rewarding, Open Research practices were shared: several universities have policies in place on research data or Open Access. Researchers can be referred to these, and they are especially helpful when combined with personal success stories. Some universities offer (small) grants to support Open Science practices (Nanyang Technological University Singapore, University of Mannheim, German). Several universities offer training to improve Open Science practices, or employ support staff who can help.

Recommendations or tools that help researchers open up their workflows are welcome. Bottom-up communities and grassroots initiatives are important drivers of change.

Conferences, such as the Scholarship Values Summit, or blogs could be a way to increase awareness about Open Science (ZBW Blog on Open Science). You can also share your institute’s progress on Open Science practices via a dashboard, an example is the Charité Dashboard on Responsible Research.

Challenges for Open Science

On the other hand, some challenges were also mentioned: For example, Open Science is not prioritised as the current research evaluation system is still very focused on traditional research impact metrics. It can also be difficult to enthuse researchers to attend events. It works better to meet them where they are.

Not everyone is aware of all the different aspects of Open Science (sometimes it is equated with Open Access) and it can also be quite overwhelming. It may be helpful to use different terms such as research integrity or sustainable science to engage people more successfully with Open Science practices. More training is also needed.

There is no one-size-fits-all solution! If new tools are offered to researchers, they should ideally be robust and simplify existing workflows without causing additional problems.

Conclusion: Our main conclusions from the session were that we have a lot of experts and successful case studies to learn from. It is also important to have enthusiastic people who can push for progress in the departments and institutes!

How can libraries support researchers for Open Science?
by Mindy Thuna, University of Toronto Libraries

There were ten participants in this session from institutions in South Africa, Germany, Spain, Luxembourg and Canada.

Four key points that arose:

1. One of the first things that came up was that Open Science is a very large umbrella containing a LOT of separate pieces. Because there are so many moving parts in this giant ecosystem, it is hard to get started in offering support, and some areas get far less attention than others. Open Access and Open Data are consistently flagged first as the areas that generate a lot of attention and support, while Open Software and even Citizen Science receive much less attention from libraries.

2. Come to us versus go to them: Another point of conversation was whether or not researchers are coming to us (as a library) for support with their own Open Science endeavours. It was consistently noted that they do not generally think of the library when they are thinking, for example, about research data or Open Access publishing. The library is not on their radar as a natural place to find this type of support until they have experienced it for themselves and realise the library might offer support in these areas.

From this starting point, the conversation shifted to the educational side of what libraries offer, i.e. making information available. But it was flagged that this information often sits in a bubble that is rarely browsed. The community is therefore a key player in getting the conversation started, particularly as part of everyday research life. That way, the library can be better integrated into the regular flow of research activities whenever information or help is needed.

3. The value of face-to-face engagement: People discussed the need to identify and work with “cheerleaders” to get an active word-of-mouth network going that educates more university staff and students about Open Science (rather than relying on LibGuides and webpages to do so passively). Libraries could be more proactive and work more closely with the scientific community to co-create Open Science related products. Providing information is something we do well, but we often spend less time on personal interactions and more on providing things digitally. Some attendees felt this might be detrimental to really understanding the needs of our faculty. More time and energy should be spent on understanding the specific needs of scientists and on shaping the scholarly communication system, rather than reacting to whatever comes our way.

4. The role of libraries as a connecting element: The library is uniquely placed to see across subject disciplines and serve in the role of connector. In this way, it can help facilitate collaborations/build partnerships across other units of the organisation and assist in enabling the exchange of knowledge between people. It was suggested that libraries should be more outgoing in what they (can) do and get more involved in the dialogue with researchers. One point that was debated is the need for the library to acknowledge that it is not and cannot really be a neutral space – certainly not if Open Science is to be encouraged rather than just supported.

Persistent identifiers and how they can foster Open Science
by Antonia Schrader, Helmholtz Open Science Office

Whether journal article, book chapter, data set or sample – in an increasingly digital scientific landscape, these results of science and research must be made openly accessible and, at the same time, unambiguously and permanently findable. This should support the shift within science from “closed” to “open” and promote the transfer of findings to society.

Persistent identifiers (PIDs) play a central role here. They ensure that scientific resources can be cited and referenced. Once assigned, the PID always remains the same, even if the name or URL of an information object changes.
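This stability rests on a simple indirection: a PID is registered with a resolver service, which redirects to the object’s current location; when the URL changes, only the registered redirect is updated. As a minimal illustration (the helper functions are our own, not from the session; the resolver URL patterns for DOIs and ORCID iDs are publicly documented conventions):

```python
# Normalising PID strings to resolver URLs. The resolver layer is
# what keeps a DOI or ORCID iD stable even when the target URL changes.

def doi_url(doi: str) -> str:
    """Turn a raw or prefixed DOI into its doi.org resolver URL."""
    doi = doi.strip()
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.lower().startswith(prefix):
            doi = doi[len(prefix):]
    return "https://doi.org/" + doi

def orcid_url(orcid_id: str) -> str:
    """Turn a bare ORCID iD into its orcid.org resolver URL."""
    return "https://orcid.org/" + orcid_id.strip()

print(doi_url("doi:10.1000/182"))        # https://doi.org/10.1000/182
print(orcid_url("0000-0002-1825-0097"))  # https://orcid.org/0000-0002-1825-0097
```

Because every citation points at the resolver rather than at a server a research group happens to run, links in the literature keep working across institutional moves and website redesigns.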

The participants in this spontaneously convened barcamp session all agreed on the central importance of PIDs for the digital science landscape. All were familiar with the principle of PIDs and encounter them in their daily work, especially DOIs and ORCID iDs (Open Researcher and Contributor iD). Alongside the enormous potential of PIDs, however, the participants also saw challenges in their use and establishment. It became clear that technical as well as ethical and data protection issues still need to be considered.

There was consensus that these questions must be accompanied by broad education about PIDs, their purpose and how they work – among the scientific staff of research institutions as well as among researchers themselves. Websites tailored to the topic, such as those from ORCID DE (German), offer a good introduction.

Translating scholarly works opens science
by Victor Venema, Translate Science

Translating scholarly works opens science to more contributors (people who do important work but are not proficient in writing English), avoids duplicated effort, and opens the fruits of science to larger communities. Translated scientific articles open science to science enthusiasts, activists, advisors, trainers, consultants, architects, doctors, journalists, planners, administrators, technicians and scientists. Such a lower barrier to participating in science is especially important for topics such as climate change, the environment, agriculture and health.

In this session we discussed why translations are important and which tools could help in making and finding translations and foreign-language works. An interesting thought was that blogs are currently important for finding foreign-language scientific articles, which illustrates how much harder such works are to find and suggests allies to work with. The difficulty of finding foreign-language works underlines the importance of at least translating titles and abstracts. Search engines that include automatically translated keywords can also help discovery.

The slides of the session “Translating scholarly articles opens science” can be found here.

Open Data before publication
by Mika Pflüger, Potsdam Institute for Climate Impact Research

In this session we discussed approaches and tools to collaborate on scientific data openly. The starting point of the discussion was the assessment that publishing scientific data openly is already quite well supported and works smoothly thanks to platforms like Zenodo. In contrast, open pre-publication collaboration is difficult because the available platforms impose restrictions, either on the size of the datasets or on the research area supported. Self-hosting a data collaboration platform like gin – Modern Research Data Management for Neuroscience is one solution, but usually not feasible for individual researchers or working groups.

We also talked briefly about experiences with open pre-publication collaboration. Experience is limited so far, but fruitful collaboration can become established when the datasets in question are useful to a broader group of scientists and contributing is easy and quick. Furthermore, adapting data workflows so that intermediate results and workflows are openly accessible also benefits reproducibility and data organisation in general.

Conclusion of the Barcamp Open Science 2022

The Barcamp once again proved to be a good opportunity to meet both Open Science veterans and newcomers and to engage in low-threshold conversation. Particularly popular this time were the extensive rounds of introductions in the individual sessions, which not only lowered the inhibition threshold for speaking but also helped everyone present to place their video-conference counterparts professionally and, if desired, note down contacts for later. Topics were dealt with in breadth by many or in depth by a few – sometimes two people are enough for the latter. In the end, it became clear that the most important thing is to network representatives from the different communities and to promote their exchange.

Thank you and see you next year!

Behind the scenes this year, the organising team had taken up feedback from the community gathered in a survey on the future of the Barcamp Open Science. For example, there was an onboarding session especially for newcomers to explain the format and procedure and to “break the ice” beforehand. Even though we would like to hold the Barcamp in person again, there is also a clear vote for an online format: it is more inclusive and important for international participation. Ultimately, our goal is to further develop and consolidate the format together with the community. And we are open to new partners.

This text has been translated from German.

Web links to the Barcamp Open Science

More tips for events

You may also find this interesting

About the authors (alphabetical)

Dr Yvana Glasenapp is a research officer specialising in research data management and Open Science at Leibniz University Hannover (LUH). Her professional background is in biology. She can be found on XING, LinkedIn and ORCID.
Portrait: Yvana Glasenapp©

Dr Mika Pflüger works in the research software engineering group at Potsdam Institute for Climate Impact Research. He currently works on a better integration of simple climate models into the PIAM suite of integrated assessment models. Mika Pflüger can be found on Twitter.
Portrait: PIK/Klemens Karkow©

Dr Esther Plomp is a Data Steward at the Faculty of Applied Sciences, Delft University of Technology, in the Netherlands. She works towards contributing to a more equitable way of knowledge generation and facilitating others in working more transparently through her involvements in various open research communities including The Turing Way, Open Research Calendar, IsoArcH and Open Life Science. Esther Plomp can be found on Twitter, LinkedIn and GitHub.
Portrait: Esther Plomp©

Dr Guido Scherp is Head of the “Open-Science-Transfer” department at the ZBW – Leibniz Information Centre for Economics and Coordinator of the Leibniz Research Alliance Open Science. He can also be found on LinkedIn and Twitter.
Portrait: ZBW©, photographer: Sven Wied

Antonia Schrader has been working in the Helmholtz Open Science Office since 2020. There she supports the Helmholtz Association in shaping the cultural change towards Open Science. She promotes the dialogue on Open Science within and outside Helmholtz and regularly organises forums and online seminars (German) together with her colleagues. Antonia Schrader is active in ORCID DE, a project funded by the German Research Foundation to promote and disseminate ORCID iD (German), a persistent identifier (PID) for the permanent and unique identification of individuals. Antonia Schrader can be found on Twitter, LinkedIn and XING.
Portrait: Antonia Schrader, CC BY-ND

Claudia Sittner studied journalism and languages in Hamburg and London. She was a long-time lecturer at the ZBW publication Wirtschaftsdienst – a journal for economic policy – and is now the managing editor of the blog ZBW MediaTalk. She is also a freelance travel blogger (German), speaker and author. She can also be found on LinkedIn, Twitter and Xing.
Portrait: Claudia Sittner©

Mindy Thuna has been a librarian since 2005. Before that, she worked as an educator in a variety of eclectic locations, including the National Museum of Kenya in Nairobi. Wearing her librarian hat, Mindy has held numerous fabulous titles, including AstraZeneca Science Liaison Librarian, Research Enterprise Librarian and Head of the Engineering & Computer Science Library; she is currently the Associate Chief Librarian for Science Research & Information at the University of Toronto Libraries in Canada. Her research is also rather eclectic but focuses on people’s interactions with and perception of concepts relating to information, her current focus being faculty and Open Science practices. Mindy Thuna can also be found on ORCID and Twitter.
Portrait: Mindy Thuna©

Victor Venema works on historical climate data with colleagues all around the world where descriptions of the measurement methods are normally in local languages. He organised the barcamp session as member of Translate Science, an initiative that was recently founded to promote the translation of scientific articles. Translate Science has a Wiki, a blog, an email distribution list and can be found on the Fediverse.

The post Barcamp Open Science 2022: Connecting and Strengthening the Communities! first appeared on ZBW MediaTalk.

Open Access goes Barcamp, Part 1: A new networking opportunity for the Open Access community

by Hannah Schneider (KIM), Maximilian Heber (KIM) and Andreas Kirchner (KIM)

The first Open Access Barcamp took place on 22 and 23 April 2021 – virtually, owing to the pandemic. The approximately 80 participants were nevertheless very enthusiastic about the unusual format. Between 9:00 and 14:30 each day, they worked through a varied programme that they had put together themselves, animatedly discussing Open Access topics.

Alongside the annual Open Access Days (German) – a central German-speaking conference on the topic of Open Access – this year’s Open Access Barcamp, which was organised by the Communication, Information, Media Centre (KIM) at the University of Konstanz, also offered the community the chance to exchange ideas, network and learn from each other. The Barcamp format is designed to be more open than a classic conference and deliberately does not use a pre-determined programme. Instead, the participants can suggest topics and hold sessions on issues of their choice. This means that everyone can discuss the topics that they find the most interesting.

Screenshot #1: Session-Voting (CC BY 4.0)

Great interest in legal topics

During the session planning it became clear that the Open Access community is currently concerned with many diverse topics.

The great majority of participants were interested in legal topics. One session comprised a workshop on legal issues in Open Access consulting, in which three groups worked in parallel on two typical consulting cases. The first case was the critical evaluation of a publishing contract; the group discussed how such contracts can entail problems like substantial cost risks or a restrictive transfer of rights, how to recognise these problematic elements in service offers, and how best to proceed in a consultative capacity. The second case was a discussion of image copyright: who holds the rights to an image, how the right of quotation applies, and how images are regulated in a publishing contract.

During one session on Creative Commons licences, an intensive discussion developed on the extent to which these are suitable for Open Access books. Using the example of the publisher Saint Philip Street Press, participants critically discussed how publishing houses can republish an Open Access book, since open licences allow reuse and editing of the work. Everyone agreed that this problem exists not only for books but for all works with open licences. The group concluded that, despite this circumstance, honesty and transparency are important in Open Access consulting: “We’re not sales people, we want to help scientists”.

Exchange about the design of secondary publication services

The topic of secondary publication took up a lot of space owing to the considerable interest of the Barcamp participants. Practitioners met for a major discussion session on the concrete implementation of secondary publication services. They discussed not only the services institutions offer for green Open Access but also how these can be implemented at the technical, organisational and legal levels. Together, they considered challenges in the daily handling of secondary publications, such as automated imports, publisher requests and legal checks. Google Scholar alerts, data imports from the Web of Science and the integration of Sherpa Romeo into the institutional repository were mentioned as possible solutions. The scope of secondary publication services for scientists at the individual institutions was also discussed. It became clear that the institutions differ greatly in their activities, but also in their capacities.
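One of the approaches mentioned, consulting Sherpa Romeo, lends itself to automation: the service exposes a JSON API for retrieving a journal’s open access policies by ISSN. The sketch below only assembles a request URL; the endpoint and parameter names follow our reading of the public v2 API and should be treated as assumptions to verify against the current documentation (YOUR_API_KEY is a placeholder, and a real key is required for actual requests):

```python
# Building a Sherpa Romeo v2 API request URL for a journal's open
# access policies, looked up by ISSN. Endpoint and parameter names
# are assumptions to check against the current API documentation.
import json
from urllib.parse import urlencode

BASE = "https://v2.sherpa.ac.uk/cgi/retrieve"

def romeo_query_url(issn: str, api_key: str) -> str:
    """Assemble (but do not send) a policy lookup request by ISSN."""
    params = {
        "item-type": "publication",
        "format": "Json",
        "api-key": api_key,  # placeholder - register for a real key
        "filter": json.dumps([["issn", "equals", issn]]),
    }
    return BASE + "?" + urlencode(params)

print(romeo_query_url("0031-9007", "YOUR_API_KEY"))
```

A repository workflow could call such a lookup during deposit to pre-fill embargo and version information, leaving only the edge cases for manual legal checks.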

Publication data management and establishment of a digital focus group

Another topic discussed was how publication data management is implemented in the different institutional repositories. The role of the Open Access Monitor (German) of Forschungszentrum Jülich in measuring publication output was mentioned, but also the problem that metadata are used very inconsistently and sometimes have to be entered later by hand. The discussion on secondary publication was continued in depth after the session and ultimately led to the establishment of a new digital focus group.

Screenshot #2: Session room in (CC BY 4.0)

A discussion on ways to support Diamond Open Access, as well as an exchange of ideas about Open Access advocacy and promoting Open Access services at one’s own institution, also enticed many people into the session rooms. Sessions on the publication of research data and on developing existing Open Access policies into fully fledged Open Science policies demonstrated that the Open Access community is interested in these topics as well. The technical perspectives of publication software were examined, as were the requirements placed on it.

Swarm intelligence on the further development of the information platform

The collective know-how of the participants was used to gather recommendations from the community for the further development of the information platform into a skills and networking portal. For this purpose, participants looked at how the current site is used and at what demands and expectations are placed on such a portal. Among other things, the provision of materials to support Open Access consultation cases and a clearer, more intuitive site structure were mentioned. These and other ideas will flow into the further development of the website.

Discussion about special publication topics

Concrete publication topics were also discussed: for example, there was a session on the Gender Publication Gap in Open Access. The general issue of the impact of gender in science was taken up, and participants discussed whether Open Access would increase or reduce this effect. The discussion concluded that this is a very multi-faceted topic and that the data basis is still very thin.

Communication of Open Access transformation

The topic of scholar-led publishing in the field of Open Access books was examined and the COPIM project was presented. Participants also discussed the topic of transformation, including how the DEAL project and the Open Access transformation can be communicated at one’s own institution. Challenges mentioned here included the reallocation of budgets and the difficulty of convincing authors to always choose the Open Access option for DEAL publications. The group agreed that active communication within subject departments and committees, as well as information material on the website, are currently the most promising methods.

Screenshot #3: Collaboration and transcripts of the sessions on MIRO (CC BY 4.0)

Diverse opportunities to chat about the everyday challenges of Open Access

The programme’s flexible design offered the participants in the Open Access Barcamp a variety of possibilities to chat about current and everyday Open Access topics. Everyday challenges and issues were discussed in direct dialogue with other practitioners, both in the big sessions and in smaller groups.

Even though, as mentioned at the beginning, the Open Access Barcamp took place in a virtual setting, the readiness of the participants to get actively involved and help shape the event was considerable. Our organisational team found that it was important to create an appealing virtual environment to enable an exchange of ideas and networking to take place online too. In the next blog post we describe the chances and challenges that planning such a dynamic event as an online format brings with it. Stay tuned!

More blog posts about the Open Access Barcamp

You may also find this interesting

This text has been translated from German.

The post Open Access goes Barcamp, Part 1: A new networking opportunity for the Open Access community first appeared on ZBW MediaTalk.