INCONECSS 2022 Symposium: Artificial Intelligence, Open Access and Data Dominate the Discussions

by Anastasia Kazakova

The third INCONECSS – International Conference on Economics and Business Information – took place online from 17 to 19 May 2022. The panels and presentations focused on artificial intelligence, Open Access and (research) data. INCONECSS also addressed collaboration in designing services for economics research and education and how these may have been influenced by the coronavirus crisis.

Unleash the future and decentralise research!

Prof. Dr Isabell Welpe, Chair of Business Administration – Strategy and Organisation at the Technical University of Munich, gave the keynote address “The next chapter for research information: decentralised, digital and disrupted”. With it, she wanted to inspire the participants to “unleash the future” and decentralise research. The first part of her presentation dealt with German universities. Isabell Welpe took us on a journey through three stops:

  1. What happens at universities?
  2. What does the work of students, researchers and teachers and the organisation at universities look like?
  3. How can universities and libraries be made future-proof?

In her lecture, she pointed out that hierarchically organised teaching is currently often unable to cope with the rapid social changes and new developments in the world of work. Isabell Welpe therefore suggested opening up teaching and organising it “bottom up”. This means relying on the decentralised self-organisation of students, offering (digital) spaces for exchange and tailoring teaching to their needs. Through these changes, students can learn while actively participating in research, which simultaneously promotes their creativity and agility. This is a cornerstone for disruptive innovation; that is, innovation that breaks and radically changes existing structures.

Prof. Dr Isabell Welpe, Chair of Business Administration – Strategy and Organisation at the Technical University of Munich, drawing: Karin Schliehe

Libraries could support and even drive the upcoming changes. In any case, they should prepare themselves for enormous changes due to the advancing digitisation of science. Isabell Welpe observed the trend towards “digital first” in teaching – triggered by the coronavirus situation. In the long term, this trend will influence the role of libraries as places of learning, but will also determine interactions with libraries as sources of information. Isabell Welpe therefore encouraged libraries to become a market-place in order to promote exchange, creativity and adaptability. The transformation towards this is both a task and an opportunity to make academic libraries future-proof.

In her keynote speech, Isabell Welpe also focused on the topic of decentralisation. One of the potentials of decentralisation is that scientists exchange data directly and share research data and results with each other, without, for example, publishers in between. Keywords were: Web 3.0, Crypto Sci-Hub and Decentralisation of Science.

In the Q&A session, Isabell Welpe addressed the image of libraries: Libraries could be places where people would go and do things, where they would exchange and would be creative; they could be places where innovation took place. She sees libraries as a Web 3.0 ecosystem with different services and encouraged them to be more responsive to what users need. Her credo: “Let the users own a part of the library!”

How can libraries support researchers?

Following on from the keynote, many presentations at INCONECSS dealt with how libraries can succeed even better in supporting researchers. On the first day, Markus Herklotz and Lars Oberländer from the University of Mannheim presented their ideas on this topic with a poster (PDF, partly in German). The focus was on the interactive virtual assistant (iVA), which enables data collaboration by imparting legal knowledge. Developed by the BERD@BW and BERD@NFDI initiatives, the iVA helps researchers to understand the data protection regulations that apply in each case and thereby to evaluate their legal options for data use. The self-directed assistant is an open-source learning module and can be extended.

Paola Corti from SPARC Europe introduced the ENOEL toolkit with her poster (PDF). It is a collection of templates for slides, brochures and Twitter posts to help communicate the benefits of Open Education to different user groups. The aim is to raise awareness of the importance of Open Education. It is openly designed, available in 16 language versions and can be adapted to the needs of the organisation.

On the last day of INCONECSS, Franziska Klatt from the Economics and Management Library of the TU Berlin reported in her presentation (PDF) on another toolkit, one that supports researchers in applying the Systematic Literature Review Method (SLRM). Originating in the medical field, the method was adapted to the economic context. SLRM helps researchers to reduce bias and redundancy in their work by following a formalised, transparent and reproducible process. The toolkit provides a collection of information on the stages of this process, as well as SLR sources, tutorial videos and sample articles. Through the toolkit and the information on the associated website, the media literacy of young researchers could be improved. An online course is also planned.

Field reports: How has the pandemic changed the library world?

The coronavirus has not yet released its grip on the world – and that includes the world of the INCONECSS community: In the poster session, Scott Richard St. Louis from the Federal Reserve Bank of St. Louis presented his experiences of onboarding in a hybrid work environment. He addressed individual aspects of remote onboarding, such as getting to know new colleagues or the lack of a physical space for meetings.

The poster (PDF) is worth a look, as it contains a number of suggestions for new employees and management, e.g.:

  • “Be direct, and even vulnerable”,
  • “Be approachable” or
  • “What was once implicit or informal needs to become explicit or conscious”.

Arjun Sanyal from the Central University of Himachal Pradesh (CUHP) reported in his presentation (PDF) on a project of his library team. They observed that the long absence from campus triggered a kind of indifference towards everyday academic life and an “informational anxiety” among students. The latter manifests itself in a reluctance to use information resources for studying, out of a fear of searching for them. To counteract this, the librarians used three types of measures: mind-map sessions, an experimental makerspace and supportive motivational events. In the mind-map sessions, for example, the team collected ideas for improving library services together with the students. The effort paid off, they said: after a while they noticed that the campus, and the libraries in particular, were once again popular. In addition, the makerspace and the motivational events helped students to rediscover the joy of learning, reports Arjun Sanyal.

Artificial Intelligence in Libraries

One of the central topics of the conference was without doubt the use of artificial intelligence (AI) in the library context. On the second day of INCONECSS, panellists from the fields of research, AI, libraries and thesauri/ontologies looked at the benefits of AI for libraries from different perspectives. They discussed how AI can support researchers and improve library services, but also the added value and the risks that come with it.

Discussion, drawing: Karin Schliehe

The panellists agreed that the use of AI in libraries would open new doors, such as new levels of knowledge organisation or new services and products. In this context, it was interesting to hear Osma Suominen from the National Library of Finland say that AI is not a game changer at the moment: it has the potential, but is still too immature. In their closing statements, the speakers took up this idea again: they were optimistic about the future of AI, yet cautioned that a sceptical approach to the technology remains appropriate. It is still a tool. According to the panellists, AI will not replace librarians or libraries, nor will it replace research processes – the latter require too much creativity for that. In the case of libraries, a change in business concepts is conceivable, but not the replacement of the institution itself.

It was interesting to observe that the topics that shaped the panel discussion kept popping up in the other presentations at the conference: Data, for example, in the form of training or evaluation data, was omnipresent. The discussants emphasised that the quality of the data is very important for AI, as it determines the quality of the results. Finding good and usable data is still complex and often related to licences, copyrights and other legal restrictions. The chatbot team from the ZBW also reported on the challenges surrounding the quality of training data in the poster session (PDF).

The question of trust in algorithms was also a major concern for the participants. On the one hand, it was about bias, which can only be removed from AI systems with difficulty and great care. Again, data was the main issue: if the data is biased, it is almost impossible to remove the bias from the system. Sometimes bias even prevents systems from going live at all. On the other hand, it was about trust in the results that an AI system delivers. Because AI systems are often non-transparent, it is difficult for users and information specialists to trust the search results an AI system provides for a literature search. These are two of the key findings from the presentation (PDF) by Solveig Sandal Johnsen from AU Library, The Royal Library, and Julie Kiersgaard Lyngsfeldt from Copenhagen University Library, The Royal Library. The team from Denmark examined two AI systems designed to assist with literature searches, with the aim of finding out to what extent different AI-based search programmes support researchers and students in academic literature searching. During the project, information specialists tested the functionality of the systems using the same search tasks. Among other results, they concluded that the systems can be useful in the exploratory phase of a search, but that they function differently from traditional systems (such as classic library catalogues or search portals like EconBiz) and, according to the presenters, challenge the skills of information specialists.

This year, the conference took place exclusively online. As the participants came from different time zones, it was possible to attend the lectures asynchronously and after the conference. A selection of recorded lectures and presentations (videos) is available on the TIB AV portal.

Links to INCONECSS 2022:

  • Programme INCONECSS
  • Interactive Virtual Assistant (iVA) – Enabling Data Collaboration by Conveying Legal Knowledge: Abstract and poster (PDF)
  • ENOEL toolkit: Open Education Benefits: Abstract and poster (PDF)
  • Systematic Literature Review – Enhancing methodology competencies of young researchers: Abstract and slides (PDF)
  • Onboarding in a Hybrid Work Environment: Questions from a Library Administrator, Answers from a New Hire: Abstract and Poster (PDF)
  • Rethinking university librarianship in the post-pandemic scenario: Abstract and slides (PDF)
  • “Potential of AI for Libraries: A new level for knowledge organization?”: Abstract Panel Discussion
  • The EconDesk Chatbot: Work in Progress Report on the Development of a Digital Assistant for Information Provision: Abstract and slides (PDF)
  • AI-powered software for literature searching: What is the potential in the context of the University Library?: Abstract and slides (PDF)


About the Author:

Anastasia Kazakova is a research associate in the department Information Provision & Access and part of the EconBiz team at the ZBW – Leibniz Information Centre for Economics. Her focus is on user research, usability and user experience design, and research-based innovation. She can also be found on LinkedIn, ResearchGate and XING.
Portrait: Photographer: Carola Grübner, ZBW©


Open Access Barcamp 2022: Where the Community Met

by Hannah Schneider and Andreas Kirchner

This year’s Open Access Barcamp took place online once again, on 28 and 29 April 2022. From 9:00 a.m. to 2:30 p.m. on both days, the roughly 50 participants were able to put together their own varied programme, and engage in lively discussions about current Open Access topics.

Open Access Barcamp 2022 Agenda

What worked well last year was repeated this year: The innovative conference tool Gather was again used to facilitate online discussions, and the organisers prioritised opportunities to have discussions and to network when designing the programme. They integrated a speed-dating format into the programme and offered an open round at topic tables. In the context of the open-access.network project, the Communication, Information and Media Centre (KIM) of the University of Konstanz once again hosted the Barcamp. During the interactive session planning on the first day, it became clear that the Open Access community is currently dealing with a very wide range of topics.

Illustration 1: open-access.network tweet about the topic tables

The study recently published by the TIB – Leibniz Information Centre for Science and Technology University Library entitled “Effects of Open Access” (German) was presented in the first session. This literature review examined 61 empirical papers from the period 2010–2021, analysing various impact dimensions of Open Access, including attention garnered in academia, the quality of publications, inequality in the science system and the economic impact on the publication system.

The result on the citation advantage of Open Access publications was discussed particularly intensively. Here, the data turned out to be less clear than expected. However, it was also noted that methodological difficulties could occur during measurement in this field. The result of the discussion was that a citation advantage of Open Access can continue to be assumed and can also be cited in advisory discussions. “All studies that show no advantage do not automatically prove a citation disadvantage,” as one participant commented.

Tools and projects to support Open Access

Various tools to support Open Access publishing were particularly popular this year. “B!SON”, a recommendation service that helps scientists and scholars find a suitable Open Access journal for articles that have already been written, was presented. The title, abstract and references are entered into the tool, which then suggests suitable Open Access journals and assigns each a score that can be used to judge how good the “match” is. B!SON is being developed by the TIB and the Saxon State and University Library Dresden (SLUB).
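For readers who want to picture how such a tool fits into a script-based workflow, here is a minimal sketch of a recommendation query. The endpoint URL, parameter names and response fields are invented for this sketch and are not the documented B!SON interface; please consult the service itself for the real API.

```python
# Minimal sketch of a title/abstract/reference-based journal recommendation
# query. Endpoint, parameters and response fields are invented for
# illustration and are NOT the documented B!SON API.
import requests

payload = {
    "title": "Effects of Open Access on citation rates",
    "abstract": "We review empirical studies on the Open Access citation advantage ...",
    "references": ["Piwowar et al. (2018), The state of OA ..."],
}

# Hypothetical endpoint; consult the B!SON documentation for the real interface.
response = requests.post("https://example.org/journal-recommender/api",
                         json=payload, timeout=30)
response.raise_for_status()

# Assumed response shape: a list of journal suggestions, each with a score.
for journal in response.json().get("results", []):
    print(f'{journal.get("title")}: score {journal.get("score")}')
```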

Another useful service with a similar goal was introduced in the form of the “oa.finder”, developed by Bielefeld University Library in the context of the open-access.network project. Authors can use this search tool to find suitable publication venues by entering their role in the submission process and the scientific institution where they work. Different search and filter options make it possible to tailor the results to individual needs. Both tools are currently in beta – the developers are particularly keen to receive feedback.

A further session was dedicated to the question of what needs to be considered when converting PDF files to PDF/A in the context of long-term archiving, and which tools can be used to validate PDF/A files. This provided an opportunity to discuss the advantages and disadvantages of tools such as JHOVE, veraPDF and AvePDF.
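As a rough illustration of such a workflow, the sketch below converts a PDF to PDF/A with Ghostscript and then validates the result with the veraPDF command-line client. The flags shown are typical but may vary between versions and should be checked against the documentation of the tools actually in use.

```python
# Sketch: convert a PDF to PDF/A-2 with Ghostscript and validate the result
# with the veraPDF command-line client. Both tools must be installed
# separately; strict PDF/A compliance may additionally need an ICC profile /
# PDFA_def.ps, which is omitted here for brevity.
import subprocess

src, dst = "report.pdf", "report_pdfa.pdf"

# Ghostscript conversion to PDF/A-2.
subprocess.run(
    [
        "gs", "-dPDFA=2", "-dBATCH", "-dNOPAUSE",
        "-sColorConversionStrategy=UseDeviceIndependentColor",
        "-sDEVICE=pdfwrite", "-dPDFACompatibilityPolicy=1",
        f"-sOutputFile={dst}", src,
    ],
    check=True,
)

# Validation with veraPDF; prints a pass/fail style report to stdout.
result = subprocess.run(["verapdf", "--format", "text", dst],
                        capture_output=True, text=True)
print(result.stdout)
```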

The KOALA project (building consortial Open Access solutions) showed which standards (German) apply to journals and publication series that participate in financing through KOALA consortia. Based on these standards, the project aims to create an instrument that helps safeguard processes and quality at journals and publishing houses. The project is developing sustainable, collaborative funding through academic libraries in order to establish an alternative to the dominant APC model.

Illustration 2: Results on the User Experience of the open-access.network website

In addition, the open-access.network project gave the Barcamp participants the opportunity to give feedback on its services. On the one hand, they evaluated the range of information and discussed the newly designed website. On the other, they focussed on the project events, discussing achievements and making suggestions for improvement. Here, the breadth of the different formats received particular praise, as did the fact that offers such as the “Open Access Talk” series have become very well established in the German-speaking countries.

Open Access communication: Reaching the target audience

Many members of the community are still working on how best to bring Open Access issues to different audiences. One of the sessions emphasised that, although communication is regarded as very important in Open Access work, the required skills are often lacking – not least because communication has hardly played a role in library training to date. One of the central challenges in reaching the individual target groups is that different communication channels need to be served, which in turn requires strategic know-how. In order to stabilise and intensify the exchange, the idea of founding a focus group within the framework of the open-access.network project was proposed; this will be pursued further at a preparatory meeting at the end of June 2022.

Illustration 3: Screenshot of MIRO whiteboard for documenting the Barcamp

Another session also considered communicative ways to disseminate Open Access; here, low-threshold exchange formats were discussed. The Networking and Competence Centre Open Access Brandenburg relocated its own “Open Access Smalltalk” series (German) to the Barcamp – very much in the spirit of openness – and initiated a discussion on how to get interested people around the table. In particular, it was argued that virtual formats lower the barrier to participating in such exchanges and that warm-ups can genuinely help mobilise participants.

Challenges faced by libraries

The issues and challenges of practical day-to-day Open Access work at libraries were also discussed a great deal this year. The topic of how to monitor publication costs resonated strongly, for example, and was discussed both in a session and in one of the subsequent discussions at the topic tables. Against the backdrop of increasing Open Access quotas and costs, libraries face the urgent challenge of gaining an overview of central and decentralised publication costs. They are applying various techniques to do this, such as decentralised use of their own inventory accounts, but also their own research and the Open Access Monitor.

A further session explored the topic of secondary publication services, specifically looking at which metadata on research funders can be gathered in repositories, and how. The discussion covered specific practical tips for implementation, including recommendations for the Crossref and RADAR/DataCite metadata schemata, for example.
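To make this a little more tangible, the following fragment shows what funder information along the lines of the DataCite fundingReference element could look like. The values are invented examples, and the exact field names, nesting and serialisation depend on the repository software in use.

```python
# Illustrative fragment of funder metadata along the lines of the DataCite
# "fundingReference" element (metadata schema 4.x). All values are invented
# examples; exact field names, nesting and serialisation depend on the
# repository software in use.
funding_reference = {
    "funderName": "Deutsche Forschungsgemeinschaft",
    "funderIdentifier": "https://doi.org/10.13039/501100001659",  # example Crossref Funder ID
    "funderIdentifierType": "Crossref Funder ID",
    "awardNumber": "123456789",            # hypothetical grant number
    "awardTitle": "Example project title",
}

record_metadata = {
    "titles": [{"title": "Example dataset"}],
    "fundingReferences": [funding_reference],
}

print(record_metadata)
```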

One of the final sessions at the Barcamp explored the issue of how libraries can ensure that they provide “appropriate” publication opportunities. In doing so, reference was made to the “Recommendations for Moving Scientific Publishing Towards Open Access” (German), published by the German Council of Science and Humanities in 2022. To find out which publication routes researchers want and need, it is necessary to be in close contact with the various scientific communities. The session considered how contacts could be improved within the participants’ own institutions. Various communication channels were mentioned, such as via subject specialists, faculty councils/representatives or seminars for doctoral candidates.

Illustration 4: Screenshot of feedback from the community

Conclusion

We can look back on a multifaceted and lively Open Access Barcamp 2022. The open concept was well received, and there was considerable willingness from the participants to actively join in and help shape the sessions. The jointly compiled programme offered a wide range of topics and opportunities to discuss everyday Open Access issues. In this virtual setting, people also joined in and contributed to the collegial atmosphere. After the two days, the community returned to everyday life armed with new input and fresh ideas; we would like to thank all those who took part, and look forward to the next discussion.


Workshop Retrodigitisation 2022: Do It Yourself or Have It Done?

by Ulrich Blortz, Andreas Purkert, Thorsten Siegmann, Dawn Wehrhahn and Monika Zarnitz

Workshop Retrodigitisation: topics

Under the workshop title “Do It Yourself or Have It Done? Collaboration With External Partners and Service Providers in Retrodigitisation”, around 230 practitioners specialised in the retrodigitisation of library and archive materials met in March 2022. This year, the Berlin State Library – Prussian Cultural Heritage hosted the retrodigitisation workshop (German), which was held online due to the pandemic. The workshop was first initiated in 2019 by the three German central subject libraries – ZB MED, TIB Hannover and ZBW. All four institutions jointly organised a programme that dealt, on the one hand, with “Do it yourself or have it done?” and, on the other, with the question “Is good = good enough?” concerning quality assurance in retrodigitisation. Each of the eight presentations was followed by many interesting questions and lively discussions.

Keynote: colourful and of high quality

The keynote on “Inhouse or Outsource? Two Contrasting Case Studies for the Digitisation of 20th Century Photographic Collections” (PDF) was given by two English colleagues, Abby Matthews (Archive and Family History Centre) and Julia Parks (Signal Film & Media/Cooke’s Studios). They reported on their projects on the digitisation of photographic records and old photographs from municipal archives, which they carried out in cooperation with volunteers.

This was a big challenge, not least because of the coronavirus pandemic. Both were able to report that involving those who would later use this offer created a special relationship with this local cultural heritage. The volunteers’ experience also contributed a great deal – especially to the documentation of the images, the speakers said.

Cooperation: many models

The first focus of the workshop was on collaboration in retrodigitisation. There were five presentations on this, covering a wide range of topics:

Nele Leiner and Maren Messerschmidt (SUB Hamburg) reported in their presentation on “Class Despite Mass: Implementing Digitisation Projects with Service Providers” (PDF, German) on two retrodigitisation projects in which they worked together with service providers. It was about the projects “Hamburg’s Cultural Property on the Net” (German) and a project that was funded by the German Research Foundation (DFG) in which approx. 1.3 million pages from Hamburg newspapers are being digitised.

Andreas Purkert and Monika Zarnitz (ZBW) gave a presentation on “Cooperation With Service Providers – Tips for the Preparation of Specifications” (PDF, German). They shared tips and tricks for preparing procurement procedures for digitisation services.

Julia Boensch-Bär and Therese Burmeister (DAI) presented the “‘Retrodigitisation’ Project of the German Archaeological Institute”, which deals with having the institute’s own (co-)edited publications digitised. They described the work processes that ensured the smooth implementation of the project with service providers.

Natalie Przeperski (IJB Munich), Sigrun Putjenter (SBB-PK Berlin), Edith Rimmert (UB Bielefeld), Matthias Kissler (UB Braunschweig) are jointly running the Colibri project (German). In their presentation “Colibri – the Combination of All Essential Variants of the Digitisation Workflow in a Project of Four Partner Libraries” (PDF, German), they reported on how the work processes for the joint digitisation of children’s book collections are organised. The challenge was to coordinate both the cooperation of the participating libraries and that with a digitisation service provider.

Stefan Hauff-Hartig (Parliamentary Archives of the German Bundestag) reported on the “Retro-digitisation Project in the Parliamentary Archives of the German Bundestag: The Law Documentation” (PDF, German). 12,000 individual volumes covering the period from 1949 to 2009 are to be processed. Hauff-Hartig reported on how the coordination of the work was organised with a service provider.

Conclusion: In the presentations on cooperation with other institutions and service providers, it became clear that the success of a project depends heavily on intensive communication between all participants and careful preparation of joint work processes. The organisational effort for this is not insignificant, but the speakers were nevertheless able to show that the synergy effects of cooperation outweigh the costs and that some projects only become possible at all when others are involved.

Quality assurance: Is “good” = good enough?

This question was posed somewhat self-critically by the speakers in this thematic block. Procedures and possibilities for quality assurance of the digitised material were presented:

Stefanie Pöschl and Anke Spille (Digital German Women’s Archive) contrasted the quality, effort and cost considerations of “doing it yourself” with those of purchasing services. In their presentation on “Quality? What for? The Digital German Women’s Archive Reports From Its Almost 6-year Experience With Retrodigitisation” (PDF, German) they looked at the use of standards to ensure the highest possible level of quality.

Yvonne Pritzkoleit and Silke Jagodzinski (Secret State Archives – Prussian Cultural Heritage) presented their institution’s quality assurance concept under the title “Is Good Good Enough? Quality Assurance in Digitisation”. The concept is based on the ISO/TS 19264-1:2017 standard for image quality and can provide many suggestions for other institutions.

Andreas Romeyke (SLUB Dresden) explained in his presentation “Less is More – the Misunderstanding of Resolution” (PDF, German) why less is often more when it comes to the resolution of images. He described what is meant by resolution, how to determine a suitable resolution and what effects wrongly chosen resolutions can have.

Conclusion: Increasingly, digitised material is not only consulted as a document for academic work; it also becomes research data in its own right, used for example in the digital humanities. This results in special quality requirements that are not always easy to meet. The three presentations on this topic showed different approaches and made clear that an important concern of quality management is to keep effort and benefit in reasonable proportion. It also became clear that standards such as ISO 19264-1 are increasingly being applied, even if not always strictly by the book, but rather within the limits of technical and staffing possibilities.

Workshop Retrodigitisation 2022: lively discussions – good feedback

In the first part of the workshop, all presentations contained concrete recommendations and useful tips for designing digitisation projects with service providers. Many aspects described in the presentations and discussed afterwards were strongly practice-oriented, so participants could apply them to their own projects with service providers and use them as a good basis for future planning. It was particularly interesting to hear which page volumes can be handled in projects with service providers and how projects involving several institutions could be implemented successfully despite the pandemic.

The presentations on the topic of quality in the second block of the workshop also met with great interest. Again, all contributions included many practical tips that can be applied to the audience’s own organisations.

In summary, the workshop, with its many interesting contributions, showed the wide variety of ways of working with service providers and the increasing importance of quality management.

The feedback survey showed that the workshop was again very well received this year. All participants were able to take away many new impulses and ideas. The organising institutions will offer another workshop next year. In 2023, it will be hosted by the ZBW.

This text has been translated from German.


About the authors:

Ulrich Ch. Blortz is a qualified librarian for the higher service in academic libraries and a library official. He has worked at the former Central Library of Agricultural Sciences in Bonn since 1981 and has also been responsible for retrodigitisation at the ZB MED – Information Centre for Life Sciences since 2003.

Andreas Purkert is a freight forwarding and logistics merchant. In the private sector, he worked as a certified quality representative and quality manager and holds the REFA basic certificate in work organisation. Since May 2020, he has been head of the Digitisation Centre of the ZBW – Leibniz Information Centre for Economics.

Thorsten Siegmann is Head of Unit at the Berlin State Library and responsible for managing retrodigitisation. He holds a degree in cultural studies and has worked in various functions at the Foundation Prussian Cultural Heritage for 15 years.

Dawn Wehrhahn has been a qualified librarian since 1992. Since then she has worked, with a short interruption, at TIB – Leibniz Information Centre for Science and Technology and University Library. Her roles have included: Head of the Wunstorf Municipal Library, Head of the Physics Department Library at TIB and, from 2001, the MyBib Operations team within TIB’s full-text supply. Since October 2021, she has headed the retrodigitisation team.

Dr Monika Zarnitz is an economist and Head of the Programme Area User Services and Preservation at the ZBW – Leibniz Information Centre for Economics.


Barcamp Open Science 2022: Connecting and Strengthening the Communities!

by Yvana Glasenapp, Esther Plomp, Mindy Thuna, Antonia Schrader, Victor Venema, Mika Pflüger, Guido Scherp and Claudia Sittner

As a pre-event of the Open Science Conference, the Leibniz Research Alliance Open Science and Wikimedia Germany once again invited participants to the annual Barcamp Open Science (#oscibar) on 7 March. The Barcamp was once again held completely online. By now well-versed in online events, a good 100 participants turned up. They came to openly discuss a diverse range of topics from the Open Science universe with like-minded people.

As at the Barcamp Open Science 2021, the spontaneous compilation of the programme showed that the majority of the sessions had already been planned and prepared in advance. After all, the spectrum of topics ranged from very broad topics such as “How to start an Open Science community?” to absolutely niche discussions, such as the one about the German Data Use Act (Datennutzungsgesetz). But no matter how specific the topic, there were always enough interested people in the session rooms for a fruitful discussion.

Ignition Talk by Rima-Maria Rahal

In this year’s “Ignition Talk”, Rima-Maria Rahal skilfully summed up the precarious working conditions in the science system. These include, on the one hand, temporary positions and the competitive pressure in the science system (in Germany, this is currently characterised by the #IchBinHanna debate, German), and on the other hand, the misguided incentive system with its focus on the impact factor. Not surprisingly, her five thoughts on more sustainable employment in science also met with great approval on Twitter.

Rima-Maria Rahal: Five Thoughts for More Sustainable Employment

Those interested in her talk “On the Importance of Permanent Employment Contracts for Research Quality and Robustness” can watch it on YouTube (recording of the same talk at the Open Science Conference).

In the following, some of the session initiators have summarised the highlights and most interesting insights from their discussions:

How to start an Open Science community?
by Yvana Glasenapp, Leibniz University Hannover

Open Science activities take place at many institutions at the level of individuals or working groups, without there being any exchange between them.

In this session we discussed the question of what means can be used to build a community of those interested in Open Science: What basic requirements are needed? What best practice examples are there? Ideas can be found, for example, in this “Open Science Community Starter Kit”.

The Four Stages of Developing an Open Science Community from the “Open Science Community Starter Kit” (CC BY NC SA 4.0)

There is a perception among many that there is a gap between the existing information offered by central institutions such as libraries and research services and the actual implementer community. These central bodies can take on a coordinating role to promote existing activities and network participating groups. It is important to respect the specialisation within the Open Science community. Grassroots initiatives often form in their field due to specific needs in the professional community.

Key persons such as data stewards, who are in direct contact with researchers, can establish contacts for stronger networking among Open Science actors. The communication of Open Science principles should not be too abstract. Incentives and the demonstration of concrete advantages can increase the motivation to use Open Science practices.

Conclusion: If a central institution from the research ecosystem wants to establish an Open Science community, it would do well to focus, for example, on promoting existing grassroots initiatives and to offer concrete, directly applicable Open Science tools.

Moving Open Science at the institutional/departmental level
by Esther Plomp, Delft University of Technology

In this session all 22 participants introduced themselves and presented a successful (or not so successful!) case study from their institution.

Opportunities for Open Science

A wide variety of examples of raising awareness of or rewarding Open Research practices were shared: Several universities have policies in place on research data or Open Access. Researchers can be referred to these, and they are especially helpful when combined with personal success stories. Some universities offer (small) grants to support Open Science practices (Nanyang Technological University Singapore, University of Mannheim, German). Several universities offer training to improve Open Science practices, or provide support staff who can help.

Recommendations or tools that make it easier for researchers to open up their workflows are welcome. Bottom-up communities or grassroots initiatives are important drivers of change.

Conferences, such as the Scholarship Values Summit, or blogs could be a way to increase awareness about Open Science (ZBW Blog on Open Science). You can also share your institute’s progress on Open Science practices via a dashboard; an example is the Charité Dashboard on Responsible Research.

Challenges for Open Science

On the other hand, some challenges were also mentioned: For example, Open Science is not prioritised as the current research evaluation system is still very focused on traditional research impact metrics. It can also be difficult to enthuse researchers to attend events. It works better to meet them where they are.

Not everyone is aware of all the different aspects of Open Science (sometimes it is equated with Open Access) and it can also be quite overwhelming. It may be helpful to use different terms such as research integrity or sustainable science to engage people more successfully with Open Science practices. More training is also needed.

There is no one-size-fits-all solution! If new tools are offered to researchers, they should ideally be robust and simplify existing workflows without causing additional problems.

Conclusion: Our main conclusions from the session were that we have a lot of experts and successful case studies to learn from. It is also important to have enthusiastic people who can push for progress in the departments and institutes!

How can libraries support researchers for Open Science?
by Mindy Thuna, University of Toronto Libraries

There were ten participants in this session from institutions in South Africa, Germany, Spain, Luxembourg and Canada.

Four key points arose:

1. One of the first things that came up in the dialogue was that Open Science is a very large umbrella that contains a LOT of pieces/separate things within it. Because there are so many moving parts in this giant ecosystem, it is hard to get started in offering support, and some areas get a lot less attention than others. Open Access and Open Data seem to be consistently flagged first as the areas that attract a lot of attention/support, while Open Software and even Citizen Science receive a lot less attention from libraries.

2. Come to us versus go to them: Another point of conversation was whether or not researchers are coming to us (as a library) to get support for their own Open Science endeavours. It was consistently noted that they do not generally think about the library when they are thinking, e.g., about research data or Open Access publishing. The library is not on their radar as a natural place to find this type of support/help until they have experienced it for themselves and realise that the library might offer support in these areas.

From this starting point, the conversation shifted to the educational aspect of what libraries offer – i.e. making information available. But it was flagged that this information often sits in a bubble that is rarely browsed. So the community is a key player in getting the conversation started, particularly as part of everyday research life. This way, the library can be better integrated into the regular flow of research activities when information/help is needed.

3. The value of face-to-face engagement: People discussed the need to identify and work with the “cheerleaders” to get an active word-of-mouth network going to educate more university staff and students about Open Science (rather than relying on LibGuides and webpages to do so more passively). Libraries could be more proactive and work more closely with the scientific community to co-create Open Science related products. Providing information is something we do well, but we often spend less time on personal interactions and more on providing things digitally. Some of the attendees felt this might be detrimental to really understanding the needs of our faculty. More time and energy should be spent on understanding the specific needs of scientists and on shaping the scholarly communication system, rather than reacting to whatever comes our way.

4. The role of libraries as a connecting element: The library is uniquely placed to see across subject disciplines and serve in the role of connector. In this way, it can help facilitate collaborations/build partnerships across other units of the organisation and assist in enabling the exchange of knowledge between people. It was suggested that libraries should be more outgoing in what they (can) do and get more involved in the dialogue with researchers. One point that was debated is the need for the library to acknowledge that it is not and cannot really be a neutral space – certainly not if Open Science is to be encouraged rather than just supported.

Persistent identifiers and how they can foster Open Science
by Antonia Schrader, Helmholtz Open Science Office

Whether journal article, book chapter, data set or sample – in an increasingly digital scientific landscape, these results of science and research must be made openly accessible and, at the same time, unambiguously and permanently findable. This should support the shift of scholarly information exchange from “closed” to “open” science and promote the transfer of findings to society.

Persistent identifiers (PIDs) play a central role here. They ensure that scientific resources can be cited and referenced. Once assigned, the PID always remains the same, even if the name or URL of an information object changes.
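A small example of what this means in practice: a DOI always resolves to the current location of a resource, and DOI content negotiation can return machine-readable citation metadata. The sketch below uses this well-documented pattern; the DOI shown is only a placeholder.

```python
# A DOI stays the same even if the landing page of a resource moves: the
# resolver at doi.org redirects to the current location, and content
# negotiation can return machine-readable citation metadata (CSL JSON).
# The DOI below is only a placeholder, not a real record.
import requests

doi = "10.5281/zenodo.1234567"  # placeholder DOI

resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=30,
)
if resp.ok:
    meta = resp.json()
    print(meta.get("title"), "-", meta.get("publisher"))
else:
    print("Could not resolve DOI:", resp.status_code)
```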

The participants in this spontaneous barcamp session all agreed on the central importance of PIDs for the digital science landscape. All of them were familiar with the principle of PIDs and encounter them in their daily work, especially DOIs and ORCID iDs (Open Researcher and Contributor iD). In addition to the enormous potential of PIDs, however, the participants also saw challenges in their use and establishment. It became clear that there are still technical as well as ethical and data protection issues to consider.

There was consensus that these questions must be accompanied by broad education on PIDs, their purpose and how they work – among the scientific staff of research institutions as well as among researchers. Websites on the topic from ORCID DE (German) or Forschungsdaten.org (German) offer a good introduction.

Translating scholarly works opens science
by Victor Venema, Translate Science

Translating scholarly works opens science to more contributors (who do important work but are not proficient in writing English), avoids duplicated work and opens the fruits of science to larger communities. Translated scientific articles open science to science enthusiasts, activists, advisors, trainers, consultants, architects, doctors, journalists, planners, administrators, technicians and scientists. Such a lower barrier to participating in science is especially important on topics such as climate change, environment, agriculture and health.

In this session we discussed why translations are important and which tools could help in making and finding translations and foreign-language works. An interesting thought was that blogs are currently important for finding foreign scientific articles, which illustrates how much harder it is to find such works and suggests allies to work with. The difficulty of finding foreign works emphasises the importance of at least translating titles and abstracts. Search engines that include automatically translated keywords can also help discovery.

The slides of the session “Translating scholarly articles opens science” can be found here.

Open Data before publication
by Mika Pflüger, Potsdam Institute for Climate Impact Research

In this session we discussed approaches and tools to collaborate on scientific data openly. The starting point of the discussion was the assessment that publishing scientific data openly is already quite well supported and works smoothly thanks to platforms like Zenodo. In contrast, open pre-publication collaboration is difficult because the available platforms impose restrictions, either on the size of the datasets or on the research area supported. Self-hosting a data collaboration platform like gin – Modern Research Data Management for Neuroscience is one solution, but usually not feasible for individual researchers or working groups.

We also talked briefly about experiences with open pre-publication collaboration. Experience is limited so far, but fruitful collaboration can develop when the datasets in question are useful to a broader group of scientists and contributing is easy and quick. Furthermore, adapting data workflows so that intermediate results and workflows are openly accessible also has benefits for reproducibility and data organisation in general.

Conclusion of the Barcamp Open Science 2022

The Barcamp once again proved to be a good opportunity to meet both Open Science veterans and newcomers and to engage in low-threshold conversation. Particularly popular this time were the extensive rounds of introductions in the individual sessions, which not only lowered the threshold for speaking up, but also helped all those present to place their video conference counterparts professionally and, if desired, to note down contacts for later. Topics were dealt with in breadth by many or in depth by a few – sometimes two people are enough for the latter. In the end, it became clear that the most important thing is to network representatives from the different communities and to promote their exchange.

Thank you and see you next year!

Behind the scenes this year, the organising team had taken up feedback from the community gathered in a survey on the future of the Barcamp Open Science. For example, there was an onboarding session especially for Barcamp newcomers to explain the format and procedure and to “break the ice” beforehand. Even though we would like to hold the Barcamp in person again, and this is also desired, there is a clear vote for an online format, which is more inclusive and important for international participation. Ultimately, our goal is to further develop and consolidate the format together with the community. And we are open to new partners.

This text has been translated from German.




About the authors (alphabetical)

Dr Yvana Glasenapp is a research officer specialising in research data management and Open Science at Leibniz University Hannover (LUH). Her professional background is in biology. She can be found on XING, LinkedIn and ORCID.
Portrait: Yvana Glasenapp©

Dr Mika Pflüger works in the research software engineering group at Potsdam Institute for Climate Impact Research. He currently works on a better integration of simple climate models into the PIAM suite of integrated assessment models. Mika Pflüger can be found on Twitter.
Portrait: PIK/Klemens Karkow©

Dr Esther Plomp is a Data Steward at the Faculty of Applied Sciences, Delft University of Technology, in the Netherlands. She works towards contributing to a more equitable way of knowledge generation and facilitating others in working more transparently through her involvements in various open research communities including The Turing Way, Open Research Calendar, IsoArcH and Open Life Science. Esther Plomp can be found on Twitter, LinkedIn and GitHub.
Portrait: Esther Plomp©

Dr Guido Scherp is Head of the “Open-Science-Transfer” department at the ZBW – Leibniz Information Centre for Economics and Coordinator of the Leibniz Research Alliance Open Science. He can also be found on LinkedIn and Twitter.
Portrait: ZBW©, photographer: Sven Wied

Antonia Schrader has been working in the Helmholtz Open Science Office since 2020. There she supports the Helmholtz Association in shaping the cultural change towards Open Science. She promotes the dialogue on Open Science within and outside Helmholtz and regularly organises forums and online seminars (German) together with her colleagues. Antonia Schrader is active in ORCID DE, a project funded by the German Research Foundation to promote and disseminate ORCID iD (German), a persistent identifier (PID) for the permanent and unique identification of individuals. Antonia Schrader can be found on Twitter, LinkedIn and XING.
Portrait: Antonia Schrader, CC BY-ND

Claudia Sittner studied journalism and languages in Hamburg and London. She was for a long time a lecturer at the ZBW publication Wirtschaftsdienst – a journal for economic policy – and is now the managing editor of the blog ZBW MediaTalk. She is also a freelance travel blogger (German), speaker and author. She can also be found on LinkedIn, Twitter and Xing.
Portrait: Claudia Sittner©

Mindy Thuna has been a librarian since 2005. Before that, she worked as an educator in a variety of eclectic locations, including The National Museum of Kenya in Nairobi. Wearing her librarian hat, Mindy has held numerous fabulous librarian titles, including AstraZeneca Science Liaison Librarian, Research Enterprise Librarian, Head of the Engineering & Computer Science Library and her current role as Associate Chief Librarian for Science Research & Information at the University of Toronto Libraries in Canada. Her research is also rather eclectic but focuses on people’s interactions with and perception of concepts relating to information, with her current focus being on faculty and Open Science practices. Mindy Thuna can also be found on ORCID and Twitter.
Portrait: Mindy Thuna©

Victor Venema works on historical climate data with colleagues all around the world where descriptions of the measurement methods are normally in local languages. He organised the barcamp session as member of Translate Science, an initiative that was recently founded to promote the translation of scientific articles. Translate Science has a Wiki, a blog, an email distribution list and can be found on the Fediverse.


Open Access goes Barcamp, Part 1: A new networking opportunity for the Open Access community

by Hannah Schneider (KIM), Maximilian Heber (KIM) and Andreas Kirchner (KIM)

The first Open Access Barcamp took place on 22 and 23 April 2021 – virtually, owing to the pandemic. Nevertheless, the approximately 80 participants were very enthusiastic about the unusual format. Between 9:00 and 14:30 each day, they worked through a varied programme which they had put together themselves, engaging in animated discussions about Open Access topics.

Alongside the annual Open Access Days (German) – a central German-speaking conference on the topic of Open Access – this year’s Open Access Barcamp, organised by the Communication, Information, Media Centre (KIM) at the University of Konstanz, also offered the community the chance to exchange ideas, network and learn from each other. The Barcamp format is designed to be more open than a classic conference and deliberately does not use a pre-determined programme. Instead, the participants can suggest topics and hold sessions on issues of their choice. This means that everyone can discuss the topics that they find the most interesting.

Screenshot #1: Session-Voting (CC BY 4.0)

Great interest in legal topics

During the session planning it became clear that the Open Access community is currently concerned with many diverse topics.

The great majority of participants were interested in legal topics. One of the sessions included a workshop on legal issues in Open Access consulting, in which three groups worked in parallel on two typical consulting cases. The first case was the critical evaluation of a publishing contract. It was pointed out that contracts like these can entail problems such as substantial cost risks or a restrictive transfer of rights. With regard to service offers, participants discussed how to recognise such problematic elements and the best way of proceeding in a consultative capacity. The second case was a discussion of image copyright: who has the rights to an image, how the right of quotation applies here and how images are regulated in a publishing contract.

During one session on Creative Commons licences, an intensive discussion developed on the extent to which these are suitable for Open Access books. Using the example of the publisher Saint Philip Street Press, participants critically discussed how publishing houses republish Open Access books, since open licences allow the work to be reused and edited. Everyone agreed that this problem exists not only for books but for all works with open licences. The group came to the conclusion that, despite this circumstance, honesty and transparency are important in Open Access consulting: “We’re not sales people, we want to help scientists”.

Exchange about the design of secondary publication services

The topic of secondary publication took up a lot of space owing to the considerable interest of the Barcamp participants. Practitioners met for a major discussion session that dealt with the concrete implementation of secondary publication services. In doing so, they not only discussed the services institutions offer for green Open Access but also how these can be implemented at technical, organisational and legal levels. Together, they discussed challenges in the daily dealings with secondary publications such as automatised imports, publishing house requests or legal checks. Google Scholar alerts, data imports from the Web of Science and the integration of Sherpa Romeo into the institutional repository were mentioned as solution approaches. The scope of secondary publication services for scientists in the individual institutions was also discussed. It became clear that the institutions differ very strongly in their activities but also in their capacities.
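As a rough sketch of what such an integration could look like, the snippet below queries a journal’s Open Access policy by ISSN in the spirit of a Sherpa Romeo lookup. The endpoint, parameters and response structure are assumptions based on the publicly described v2 API pattern and should be verified against the current documentation; the API key and ISSN are placeholders.

```python
# Rough sketch of an ISSN-based policy lookup in the spirit of a Sherpa Romeo
# integration. Endpoint, parameters and response structure are assumptions
# based on the publicly described v2 API pattern and should be verified
# against the current documentation; API key and ISSN are placeholders.
import json
import requests

API_KEY = "YOUR-API-KEY"   # placeholder
ISSN = "1234-5678"         # placeholder ISSN of the journal in question

params = {
    "item-type": "publication",
    "api-key": API_KEY,
    "format": "Json",
    "filter": json.dumps([["issn", "equals", ISSN]]),
}
resp = requests.get("https://v2.sherpa.ac.uk/cgi/retrieve", params=params, timeout=30)
resp.raise_for_status()

data = resp.json()
# Inspect the returned records before relying on any particular structure.
print("Records returned:", len(data.get("items", [])))
```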

Publication data management and establishment of a digital focus group

Another topic discussed was how publication data management is implemented in the different institutional repositories. The role of the Open Access Monitor (German) of Forschungszentrum Jülich in measuring publication output was mentioned, but also the problem that metadata are used very inconsistently and sometimes have to be entered later by hand. The discussion on secondary publication was continued in depth after the session and ultimately led to the establishment of a new digital focus group.

Screenshot #2: Session room in gather.town (CC BY 4.0)

A discussion on the possibilities of supporting Diamond Open Access and an exchange of ideas about Open Access advocacy and about promoting Open Access services at one’s own institution also enticed many people into the session rooms. Sessions ranging from the publication of research data to the further development of existing Open Access policies into fully fledged Open Science policies demonstrated that the Open Access community is interested in these topics too. The technical perspectives of publication software were also examined, along with the requirements placed on it.

Swarm intelligence on the further development of the information platform

The collective know-how of the participants was used to gather recommendations from the community for the further development of the information platform open-access.net into a skills and networking portal. For this purpose, not only was it determined how the current site is used, but also what demands and expectations are placed on a skills and network portal. Among other things, the provision of materials to support Open Access consultation cases and a clearer and more intuitive site structure were mentioned here. These and other ideas will flow into the further development of the website.

Discussion about special publication topics

Concrete publication topics were also discussed: For example, there was a session about the Gender Publication Gap in Open Access. The general issue of the impact of gender in science was taken up, and participants discussed whether this effect would be increased or reduced by Open Access. The discussion came to the conclusion that this is a very multi-faceted topic and that the available data are still very sparse.

Communication of Open Access transformation

The topic of scholar-led publishing in the field of Open Access books was examined and the COPIM project was presented. Participants also discussed the topic of transformation, including how the DEAL project and the Open Access transformation can be communicated at one’s own institution. Challenges mentioned here included the reallocation of budgets as well as the difficulty of convincing authors to always choose the Open Access option for DEAL publications. The group agreed that active communication within subject departments and committees, as well as information material on the website, are currently the most promising methods.

Screenshot #3: Collaboration and transcripts of the sessions on MIRO (CC BY 4.0)

Diverse opportunities to chat about the everyday challenges of Open Access

The programme’s flexible design offered the participants in the Open Access Barcamp a variety of possibilities to chat about current and everyday Open Access topics. Everyday challenges and issues were discussed in direct dialogue with other practitioners, both in the big sessions and in smaller groups.

Even though, as mentioned at the beginning, the Open Access Barcamp took place in a virtual setting, the readiness of the participants to get actively involved and help shape the event was considerable. Our organisational team found that it was important to create an appealing virtual environment so that the exchange of ideas and networking could also take place online. In the next blog post we describe the opportunities and challenges that planning such a dynamic event as an online format brings with it. Stay tuned!



This text has been translated from German.
