AI in Academic Libraries, Part 3: Prerequisites and Conditions for Successful Use

Interview with Frank Seeliger (TH Wildau) and Anna Kasprzik (ZBW)

We recently had a long talk with experts Anna Kasprzik (ZBW – Leibniz Information Centre for Economics) and Frank Seeliger (Technical University of Applied Sciences Wildau – TH Wildau) about the use of artificial intelligence in academic libraries. The occasion: Both of them were involved in two wide-ranging articles: “On the promising use of AI in libraries: Discussion stage of a white paper in progress – part 1” (German) and “part 2” (German).

Both have a strong professional connection to, and great interest in, the use of AI at infrastructure institutions and libraries. Dr Frank Seeliger is the director of the university library at the TH Wildau and is jointly responsible for the part-time programme Master of Science in Library Computer Sciences (M.Sc.) at the Wildau Institute of Technology. Anna Kasprzik is the coordinator of the automation of subject indexing (AutoSE) at the ZBW.

This slightly shortened, three-part series has emerged from our spoken interview. These two articles are also part of the series:

What are the basic prerequisites for the successful and sustainable use of AI at academic libraries and information institutions?

Anna Kasprzik: I have a very clear opinion here and have already written several articles about it. For years, I have been fighting for the necessary resources and I would say that we have manoeuvred ourselves into a really good starting position by now, even if we are not out of the woods yet. The main issue for me is commitment – right up to the level of decision makers. I’ve developed an allergy to the “project” format. Decision makers often say things like, “Oh yes, we should also do something with AI. Let’s do a project, then a working service will develop from it and that’s it.” But it’s not that easy. Things that are developed as projects tend to disappear without a trace in most cases.

We also had a forerunner project at the ZBW. We deliberately raised it to the status of a long-term commitment together with the management. We realised that automation with machine learning methods is a long-term endeavour. This commitment was essential. It was an important change of strategy. We have a team of three people here and I coordinate the whole thing. There’s a doctoral position for a scientific employee who is carrying out applied research, i.e. research that is very much focused on practice. When we received this long-term commitment status, we started a pilot phase. In this pilot phase, we recruited an additional software architect. We therefore have three positions for this, which correspond to three roles and I regard all three of them as very important.

The ZBW has also purchased a lot of hardware because machine learning experiments require serious computing power. We have then started to develop the corresponding software infrastructure. This system is already productive, but will be continually developed based on the results of our in-house applied research. What I’m trying to say is this: the commitment is important and the resources must reflect this commitment.

Frank Seeliger: This is naturally the answer of a Leibniz institution that is well endowed with research professors. For most libraries, apart from some national state libraries and larger libraries, this is difficult to achieve: they have neither a research mandate nor the personnel resources to finance such projects on a long-term basis. Nevertheless, there are technologies that smaller institutions also need to invest in, such as cloud-based services or infrastructure as a service. But they need to commit to this, including beyond the project phases. The Agenda 2025/30 anchors this as a long-term commitment within the context of the automation that is coming anyway. The coronavirus pandemic in particular gave this a boost, when people saw how well things can function even when they take place online. What matters is that people regard this as a task and seek out information about it accordingly. The mandate is to explore the technology deliberately. Only in this way can people at working or management level see not only the degree of investment required, but also what successes they can expect.

But libraries are not the only ones who have begun, in the last ten years or so, to explore the topic of AI. The situation is comparable with small and medium-sized businesses, or with other public institutions that deal with the Online Access Act and similar issues. They too are exploring these kinds of algorithms, so there are allies to be found; libraries are not alone here. This is very important, because many of the measures, particularly those at the level of the German federal states, were not necessarily designed with libraries in mind when it comes to the distribution of AI tasks or funding.

That’s why we also intended our publication (German) as a political paper. Political in the sense of informing politicians and decision-makers about funding possibilities, and that we need the framework to be able to apply for them: in order to then test things, decide whether we want to use indexing or language tools permanently in the library world, and network with other organisations.

The task for smaller libraries that cannot maintain research groups is definitely to explore the technology and to develop their position for the next five to ten years. This requires counterpoints to what is commonly covered by search engines and platforms such as Wikipedia. Especially as libraries have a completely different lifespan than companies, in terms of their way of thinking and sustainability: libraries are designed to last as long as the state or the university exists. Our lifecycles are therefore measured differently, and we need to position ourselves accordingly.

Not all libraries and infrastructure institutions have the capacity to develop a comprehensive AI department with corresponding personnel. So does it make sense to bundle competences and use synergy effects?

Anna Kasprzik: Yes and no. We are in touch with other institutions such as the German National Library. Our scientific employee and developer is working on the further development of the Finnish toolkit Annif with colleagues from the National Library of Finland, for example. This toolkit is also interesting for many other institutions to use themselves. I think it’s very good to exchange ideas, also regarding our experiences with toolkits such as this one.

However, I discover time and again that there are limits to this when I advise other institutions; just last week, for example, I advised some representatives of Swiss libraries. You can’t do everything for the other institutions. If they want to use these instruments, institutions have to train them on their own data. You can’t just train the models and then transplant them one-to-one into other institutions. For sure, we can exchange ideas, give support and try to develop central hubs where at least structures or computing power are provided. However, nothing will be developed in this kind of hub that is an off-the-shelf solution for everyone. This is not how machine learning works.
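
To make this division of labour concrete: each institution trains its own Annif projects on its own data, and its applications then query that instance for subject suggestions. Below is a minimal sketch in Python, assuming a locally running Annif instance and a hypothetical project ID `econ-stw`; the exact REST details may differ between Annif versions.

```python
# Minimal sketch: querying a locally running Annif instance for subject
# suggestions. The base URL and the project ID "econ-stw" are assumptions;
# substitute the project you have trained on your own data.
import requests

ANNIF_URL = "http://localhost:5000/v1"

def suggest_subjects(text: str, project_id: str = "econ-stw", limit: int = 5):
    """Ask Annif for ranked subject suggestions for a piece of text."""
    response = requests.post(
        f"{ANNIF_URL}/projects/{project_id}/suggest",
        data={"text": text, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    # Annif returns a ranked list of suggestions with uri, label and score.
    return response.json()["results"]

if __name__ == "__main__":
    abstract = "We study the effect of monetary policy on household savings."
    for hit in suggest_subjects(abstract):
        print(f'{hit["score"]:.3f}  {hit["label"]}  ({hit["uri"]})')
```

The point of the sketch is exactly Anna Kasprzik’s caveat: the toolkit travels well, but the trained project behind the endpoint does not; it has to be built from the institution’s own metadata.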

Frank Seeliger: The library landscape in Germany is like a settlement, not a skyscraper. In the past, there was a German Library Institute (DBI) that tried to bundle many matters for academic libraries in Germany across all sectors. This kind of central unit no longer exists; there are merely several library networks organised around institutions and library associations organised around personnel. So there is no central library structure that could take on the topic of AI. There was an RFID working group (German) (there is also a Special Interest Group RFID at the IFLA), and there should actually also be a working group for robots (German), but of course someone has to do it, usually alongside their actual job.

In any case, there is no central library infrastructure that could take up this kind of topic the way a lobby organisation such as Bitkom does and break it down to its individual members. The route that we are pursuing is broadly based. This is related to the fact that we operate very differently in the different German federal states, owing to the relationship between the national government and the federal states. The latter have sovereignty in many areas, meaning that we have to work together on a project basis. It will be important to locate cooperation partners and not try to work alone, because it is simply too much. There is definitely not going to be a central contact point. The German Research Center for Artificial Intelligence (DFKI) does not have libraries on its radar either. There’s no one to call. Everything is going to run on a case-by-case and interest-related basis.

How do you find the right cooperation partners?

Frank Seeliger: That’s why there are library congresses where people can discuss issues. Someone gives a presentation about something they have done and other people are interested: they get together, write applications for third-party funding or articles together, or try to organise a conference themselves. Such conferences already exist, and thus a certain structure of exchange has been established.

I am the conservative type. I read articles in library journals, listen to conference news or attend congresses. That’s where you have the informal exchange – you meet other people. Alongside social media, which is also important. But if you don’t reach people via the social media channels, then there is (hopefully soon to return) physical exchange on site via certain section days, for example. Next week we have another Section IV meeting of the German Library Association (DBV) in Dresden where 100 people will get together. The chances of finding colleagues who have similar issues or are dealing with a similar topic are high. Then you can exchange ideas – the traditional way.

Anna Kasprzik: But there are also smaller workshops for specialists. For example, the German National Library has been organising a specialist congress of the network for automated subject indexing (German) (FNMVE) for those who are interested in automated approaches to subject indexing.

I also enjoy networking via social media. You can also find most people who are active in the field on the internet, e.g. on Twitter or Mastodon. I started using Twitter in 2016 and deliberately developed my account by following people with an interest in semantic web technologies. These are individuals, but they represent an entire network. I can’t name individual institutions; what is relevant are individual community members.

And how did you get to know each other? I’m referring to the working group that compiled this non-white paper.

Anna Kasprzik: It’s all Frank’s fault.

Frank Seeliger: Anna came here once. I had invited Mr Puppe in the context of a digitalisation project in which AI methods supported optical character recognition (OCR) and image identification for historical works. It was exactly the traditional route that I’ve just described, i.e. via a symposium; this was how the first people were invited.

Then the need to position ourselves on this topic developed. Shortly before, I had spoken with a colleague from the Netherlands at a conference. He said that they had been too late with their AI white paper, meaning that policymakers had not taken them into account and libraries had not received any special funding for AI tools. That was the wake-up call for me, and I thought: here in Germany there is also nothing I am aware of that is specifically for information institutions. I then researched who had published on the topic. That’s how the network, which is still active, developed. We are working on the English translation at the moment.

What is your plea to the management of information institutions? At the beginning, Anna, you already spoke about commitment, also from “the very top”, being a crucial factor. But going beyond this: what course needs to be set now and which resources need to be built up, to ensure that libraries don’t lose out in the age of AI?

Anna Kasprzik: For institutions that can, it’s important to develop long-term expertise. But I completely understand Frank’s point of view: it is valid to say that not every institution can afford this. So two aspects are important for me. One is to cluster expertise and resources at certain central institutions. The other is to develop communication structures across institutions, or to share a cloud structure or something similar: to create a network that enables dissemination, i.e. the sharing of these experiences for reuse.

Frank Seeliger: Perhaps there is a third aspect: to reflect on the business processes that you are responsible for, so that you can identify whether they are suitable for AI-supported automation, for example. To reflect on this yourself, but also to encourage your colleagues to reflect on their own workflows, as to whether routine tasks can be taken over by machines and thereby relieve them of some of the workload. For example, in our library association, the Kooperativer Bibliotheksverbund Berlin-Brandenburg (KOBV), we would have liked to set up a lab. Not only to play, but also to see together how we can technically support tasks that are really very close to everyday practice. I don’t want to say that the project failed, but the problem was that first you needed the ideas: What can you actually tackle with AI? What requires a lot of time? Is it the indexing? Which other work processes are done over and over again, routinely, with a high degree of similarity? We wanted the lab to look at exactly these processes and check whether we could automate them, independently of what library management systems or the other tools we work with can do.

It’s important to initiate this process of self-reflection on automation and digitalisation in order to identify fields of work. Some have expertise in AI, others in their own fields, and they have to come together. The path leads through one’s own reflection to entering into conversation and sounding out whether solutions can be found.

And to what extent can the management support?

Frank Seeliger: Leadership is about bringing people together and giving impetus. The coronavirus pandemic and digitalisation have put a lot of pressure on many people. There is a saying by Angela Merkel: she once said that she only got around to thinking during the Christmas period. Interpret that however you like. Out of habit, and because you want to clear the pile of work on your desk during working hours, it’s often difficult to reflect on what you are doing and whether there isn’t already a tool that could help. It’s then the task of the management level to look at these processes and, where appropriate, to say: yes, maybe this person could be helped here. Let’s organise a project and take a closer look.

Anna Kasprzik: Yes, that’s one of the tasks, but for me the role of management is above all to take the load off employees and clear a path for them. This brings another buzzword into play: agile working. It’s not only about giving an impetus, but also about supporting people by giving them some leeway so that they can work independently. The agile manifesto, so to speak, which also means creating space for experimenting and allowing for failure sometimes. Otherwise, nothing will come to fruition.

Frank Seeliger: We will soon be doing a “Best of Failure” survey, because we want to ask what kind of error culture we really have; the topic is practically taboo. This will also be the theme of the Wildau Library Symposium (German) from 13 to 14 September 2022, where we will explore this error culture more intensively. And rightly so: even in IT projects, you simply have to allow things to go wrong. Of course, they don’t have to be taken on as a permanent task if they don’t go well. But sometimes it’s good to just try, because you can’t predict whether a service will be accepted or not. What do we learn from these mistakes? We talk about them relatively little; mostly we talk about successful projects that go well and attract crazy amounts of funding. But the other part also has to come into focus so that we can learn from it and utilise aspects of it for the next project.

Is there anything else that you would like to say at the end?

Frank Seeliger: AI is not just a task for large institutions.

Anna Kasprzik: Exactly, AI concerns everyone. Though AI should not be dealt with just for the sake of AI, but rather in order to develop innovative services that would otherwise not be possible.

Frank Seeliger: There are naturally other topics, no question about that. But you have to address it and sort out the various topics.

Anna Kasprzik: It’s important that we get the message across that automated approaches should not be regarded as a threat. The digital jungle exists anyway by now, so we need tools to find our way through it. AI therefore represents new potential and added value, not a threat that will be used to eliminate people’s jobs.

Frank Seeliger: We have also been asked: what is the added value of automation? Of course, you spend less time on routine processes that are very manual. This creates scope to explore new technologies, to do advanced training or to have more time for customers. And we need this scope to develop new services. You simply have to create that scope, also for agile project management, so that you don’t spend 100% of your time clearing some pile of work or other from your desk, but can instead use 20% for something new. AI can help give us this time.

Thank you for the interview, Anna and Frank.

Part 1 of the interview on “AI in Academic Libraries” is about areas of activity, the big players and the automation of indexing.
In part 2 of the interview on “AI in Academic Libraries” we explore interesting projects, the future of chatbots and the problem of discrimination through AI.


We were talking to:

Dr Anna Kasprzik, coordinator of the automation of subject indexing (AutoSE) at the ZBW – Leibniz Information Centre for Economics. Anna’s main focus lies on the transfer of current research results from the areas of machine learning, semantic technologies, semantic web and knowledge graphs into productive operations of subject indexing of the ZBW. You can also find Anna on Twitter and Mastodon.
Portrait: Photographer: Carola Gruebner, ZBW©

Dr Frank Seeliger (German) has been the director of the university library at the Technical University of Applied Sciences Wildau since 2006 and has been jointly responsible for the part-time programme Master of Science in Library Computer Sciences (M.Sc.) at the Wildau Institute of Technology since 2015. One module explores AI. You can find Frank on ORCID.
Portrait: TH Wildau

Featured Image: Alina Constantin / Better Images of AI / Handmade A.I / Licensed by CC-BY 4.0


AI in Academic Libraries, Part 2: Interesting Projects, the Future of Chatbots and Discrimination Through AI

Interview with Frank Seeliger (TH Wildau) and Anna Kasprzik (ZBW)

We recently had an intense discussion with Anna Kasprzik (ZBW) and Frank Seeliger (Technical University of Applied Sciences Wildau – TH Wildau) on the use of artificial intelligence in academic libraries. Both of them were recently involved in two wide-ranging articles: “On the promising use of AI in libraries: Discussion stage of a white paper in progress – part 1” (German) and “part 2” (German).

Dr Anna Kasprzik coordinates the automation of subject indexing (AutoSE) at the ZBW – Leibniz Information Centre for Economics. Dr Frank Seeliger (German) is the director of the university library at the Technical University of Applied Sciences Wildau and is jointly responsible for the part-time programme Master of Science in Library Computer Sciences (M.Sc.) at the Wildau Institute of Technology.

This slightly shortened, three-part series has been drawn up from our spoken interview. These two articles are also part of it:

What are currently the most interesting AI projects in libraries and infrastructure institutions?

Anna Kasprzik: Of course, there are many interesting AI projects. Off the top of my head, the following two come to mind. The first one is interesting if you are interested in optical character recognition (OCR). Before you can even start to think about automated subject indexing, you have to create metadata, i.e. “food” for the machine: segmenting digital texts into their structural fragments, extracting an abstract automatically, and so on. In order to do this, you run OCR on the scanned text. Qurator (German) is an interesting project in which machine learning methods are used for this as well. The Staatsbibliothek zu Berlin (Berlin State Library) and the German Research Center for Artificial Intelligence (DFKI) are involved, among others. This is interesting because at some point in the future it might give us the tools we need to obtain the data input required for automated subject indexing.
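
As a toy illustration of that first step (not Qurator’s own, far more elaborate ML pipeline): running an off-the-shelf OCR engine over one scanned page, here with the open-source Tesseract engine via pytesseract. The file name is a placeholder.

```python
# Illustrative sketch only: plain Tesseract OCR as the first step of a
# metadata pipeline. Qurator itself uses more elaborate ML-based methods;
# the file name "scan.png" is a placeholder. Requires the tesseract binary
# and its German language data to be installed.
from PIL import Image
import pytesseract

def ocr_page(path: str, lang: str = "deu") -> str:
    """Extract raw text from one scanned page using the German model."""
    return pytesseract.image_to_string(Image.open(path), lang=lang)

text = ocr_page("scan.png")
print(text[:500])  # raw text that later steps would segment and index
```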

The other project is the Open Research Knowledge Graph (ORKG) of the TIB Hannover. The ORKG is a way of representing scientific results no longer as a document, i.e. as a PDF, but in an entity-based way: author, research topic and method all become nodes in one graph. This is the semantic level, and one could use machine learning methods to populate it.
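
The entity-based idea can be sketched in a few RDF triples, here with Python’s rdflib. The namespace and property names below are invented for illustration; they do not reproduce the actual ORKG data model.

```python
# Sketch of an entity-based (graph) representation of one paper.
# The namespace and property names are invented for illustration and
# do not reproduce the real ORKG vocabulary.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

paper = EX.paper_123
g.add((paper, EX.title, Literal("Deep Learning for Subject Indexing")))
g.add((paper, EX.hasAuthor, EX.jane_doe))          # author as a node, not a string
g.add((paper, EX.addressesProblem, EX.subject_indexing))
g.add((paper, EX.usesMethod, EX.neural_network))

# Entity-based queries become possible, e.g. all papers using a given method.
for s, p, o in g.triples((None, EX.usesMethod, EX.neural_network)):
    print(s)
```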

Frank Seeliger: Just one project: it is running at the ZBW and the TH Wildau and explores the development of a chatbot with new technologies. The idea of chatbots is actually relatively old: a machine conducts a dialogue with a human being, and in the best case the human being does not recognise that a machine is running in the background – the Turing test. Things are not quite that advanced yet, but the issue we are all concerned with is that libraries are being consulted – in chat rooms, for example. Many libraries aim to offer a high level of service at the times when researchers and students work, i.e. round the clock. This can only happen if procedures are automated, via chatbots for example, so that difficult questions can also be answered outside opening hours, at weekends and on public holidays.

I am therefore hoping, firstly, that the input we receive on chatbot development will turn it into a high-quality standard service that offers fast orientation and gives information about a library or special services with excellent predictive quality. This would create the starting point for other machines, such as mobile robots. Many people are investing in robots, playing around with them and trying out various things. People expect to be able to go up to them and ask, “Where is book XY?” or “How do I find this and that?”, and that these robots can handle such questions usefully, orient the person and literally point a finger at the right place. That’s one thing.

The second thing that I find very exciting is to win people over to AI at an early stage. Not just to drop AI as a buzzword, but to look behind the scenes of this technology complex. We tried to offer a certificate course (German). However, demand was too low for us to run it. But we will try again. The German National Library provides a similar course that was well attended. I think it’s important to make a low-threshold offer across the board, i.e. for a one-person library or for small municipal libraries run on a communal basis, as well as for larger university libraries. That people get to grips with the subject matter and find their own way: where they can reuse something, where there are providers or cooperation partners. I find this kind of project very interesting and important for the world of libraries.

But this too can only be the starting point for many other offers of special workshops, on Annif for example, or on other topics that can be discussed at a level that non-informaticians can understand as well. It’s an offer to colleagues who are concerned with the technology, but not necessarily at an in-depth level. As with a car: they don’t manufacture the vehicle themselves, but want to be able to repair or fine-tune it sometimes. At this level, we definitely need more dialogue with the people who are going to have to work with it, for example as system administrators who set up or manage such systems. The offers must also be aimed at the management level – the people who are in charge of budgeting, i.e. those who sign third-party funding applications.

At both institutions, the TH Wildau and the ZBW, you are working on the use of chatbots. Why is this AI application area for academic libraries so promising? What are the biggest challenges?

Frank Seeliger: The interesting perspective for me is that we can pursue the development of a chatbot together with other libraries. It is nice when more than one library feeds the knowledge base in the background for the typical questions. This is not possible for locally specific information such as opening hours or spatial conditions. Nevertheless, many synergy effects are created. We can pool our data and generate as large a quantity as possible, so that the quality of the automatically generated answers is simply better than if we were to set everything up individually. The output quality has a lot to do with the data quality. It is not simply true that more data means better information; other factors also play a role. But generally, small solutions tend to fail because of the small quantity of data.

Especially in view of the fact that a relatively high number of libraries are keen to invest in robot solutions that “walk” through the library outside opening hours and offer services, like a robot librarian. If the service is used, it makes twice as much sense to offer something online and also to make it available through a machine that rolls through the premises and offers the service. This is important, because the personal approach from the library to its clients is a decisive differentiating feature compared with the large commercial platforms. Looking for dialogue and paying attention to the special requirements of the users: this is what makes the difference.

Anna Kasprzik: Even though I am not involved in the chatbot project at ZBW, I can think of three challenges. The first is that you need an incredible amount of training data. Getting hold of that much data is relatively difficult. Here at ZBW we have had a chat feature for a long time – without a bot. These chats have been recorded but first they had to be cleaned of all personal data. This was an immense amount of editorial work. That is the first challenge.

The second challenge: it’s a fact that relatively trivial questions, such as the opening hours, are easily answered. But as soon as things become more complex, i.e. when there are specialised questions, you need a knowledge graph behind the chatbot. And setting this up is relatively complex.

Which brings me to the third challenge: during the initial runs, the project team established that quite a few of the users had reservations and quickly thought, “It doesn’t understand me”. So there were reservations on both sides. We therefore have to be mindful of the quality aspect and also of the “trust” of the users.
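
Returning to the first challenge: a first, automated pass over recorded chats can mask pattern-like identifiers, although personal names and free-text details still require the manual editorial work Anna Kasprzik describes. A minimal sketch, with deliberately simplified, illustrative (not exhaustive) patterns:

```python
# Minimal sketch of a first, automated cleaning pass over chat transcripts.
# It only masks pattern-like identifiers (e-mail addresses, phone numbers);
# personal names and free-text details still require manual review.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d /().-]{6,}\d")

def mask_pii(line: str) -> str:
    """Replace recognisable identifiers with neutral placeholders."""
    line = EMAIL.sub("[EMAIL]", line)
    line = PHONE.sub("[PHONE]", line)
    return line

print(mask_pii("Please write to jane.doe@example.org or call +49 30 1234567."))
# -> "Please write to [EMAIL] or call [PHONE]."
```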

Frank Seeliger: But interaction is also moving in the direction of speech, particularly among the younger generations who are now arriving in libraries as students. This generation communicates via voice messages: the students speak with Siri or Alexa, and they are informal when speaking to these technologies. FIZ Karlsruhe attempted to handle search queries via Alexa. That went well in itself, but it failed because of the European General Data Protection Regulation (GDPR), the privacy of information and the fact that data was processed somewhere in the USA. Naturally, that is not acceptable.

That’s why it is good that libraries are doing their own thing – they retain data sovereignty and can therefore ensure that the GDPR is observed and that user data is treated carefully. But it would be a strategic mistake if libraries did not adapt to this kind of dialogue, very simply because a lot of these interactions no longer take place through writing and reading alone, but via speech. As far as apps and features are concerned, much is communicated via voice messages, and libraries need to adapt to this fact. It starts with chatbots, but the question is whether search engines will at some point be able to cope with voice messages and filter out the actual question. Making a chatbot functional and usable in everyday life is only the first step. With spoken language, listening and understanding come on top.

Is there a timeframe for the development of the chatbot?

Anna Kasprzik: I’m not sure when the ZBW is planning to put its chatbot online; it could take one or two years. The real question is: when will such chatbots become viable solutions in libraries globally? This may take at least ten years or longer – without wanting to crush hopes too much.

Frank Seeliger: Unanticipated revivals keep popping up, but they need a certain impetus. For example, I was in the IT section of the International Federation of Library Associations and Institutions (IFLA), working on statistics. We considered whether we could collect statistics consistently and globally and depict them as a portfolio. Initially it didn’t work – it was limited to one continent, Latin America. Then the section received a huge surprise donation from the Bill and Melinda Gates Foundation, and with it the project IFLA Library Map of the World could be implemented.

It was therefore a very special impetus that led to something we would normally not have achieved with ten years’ work. And when this kind of impetus exists, through tenders, funding or third-party donors that accelerate exactly such projects, perhaps also from a long-term perspective, the whole thing takes on a new dynamic. If the development of chatbots in libraries continues to stagnate like this, they will not be used on a market-wide scale. There was a comparable movement with contactless object recognition via radio waves (Radio-Frequency Identification, RFID). It started in 2001 in Siegburg, then Stuttgart and Munich. Now it is used in 2,000 to 3,000 libraries. I don’t see this impetus with chatbots at all. That’s why I don’t think that chatbots will be used in 10% to 20% of libraries in ten or 15 years. It’s an experimental field. Maybe some libraries will introduce them, but it will be a handful, perhaps a dozen. However, if a driving force emerges from external factors such as funding or a network initiative, the whole concept may gain new momentum.

The fact that AI-based systems make discriminatory decisions is often regarded as a general problem. Does this also apply to the library context? How can this be prevented?

Anna Kasprzik: That’s a very tricky question. Not many people are aware that potential difficulties almost always arise from the training data, because training data is human data. These data sources contain our prejudices. In other words, whether the results have a discriminatory effect or not depends on the data itself and on the knowledge organisation systems that underpin it.

One movement that is gathering pace is known as decolonisation. People are taking a close look at the vocabularies they use, thesauri and ontologies. The problem has come up for us as well: since we also provide historical texts, terms that have racist connotations today appeared in the thesaurus. Naturally, we primarily incorporate terms that are considered politically correct, but these assessments can shift over time. The question is: what do you do with historical texts where such a word occurs in the title? The task is then to find ways to keep them as hidden elements of the thesaurus without displaying them in the interface.
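
SKOS, the W3C standard in which many thesauri (the ZBW’s STW Thesaurus for Economics among them) are published, offers a mechanism for exactly this: skos:hiddenLabel keeps a term retrievable in search without it being displayed. A minimal sketch with Python’s rdflib, using placeholder URIs and labels:

```python
# Sketch: keeping a deprecated, offensive historical term retrievable but
# invisible. skos:prefLabel is what interfaces display; skos:hiddenLabel is
# indexed for search only. The concept URI and labels are placeholders.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import SKOS

g = Graph()
concept = URIRef("http://example.org/thesaurus/c42")

g.add((concept, SKOS.prefLabel, Literal("current, accepted term", lang="en")))
g.add((concept, SKOS.hiddenLabel, Literal("outdated historical term", lang="en")))

# A search index would ingest both labels; a display layer only prefLabel.
for label in g.objects(concept, SKOS.hiddenLabel):
    print("searchable but not shown:", label)
```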

There are knowledge organisation systems that are very old and developed in times very different from ours. We urgently need to restructure them. It’s always a balancing act if you want to present texts from earlier periods with the structures that were in use at that time: I must neither falsify the historical context nor offend anyone who wants to search these texts and feel represented, or at least not discriminated against. This is a very difficult question, particularly in libraries. People often think it’s not an issue for libraries, only relevant in politics or that sort of thing. On the contrary: libraries reflect the times in which they exist, and rightly so.

Frank Seeliger: Everything that can be used can also be misused. This applies to every object. For example, I was very impressed in Turkey. They are working with a large-scale Koha approach (library software), meaning that more than 1,000 public libraries are using the open-source solution Koha as their library management software. They therefore know, among other things, which book is borrowed most often in Turkey. We do not have this kind of information at all in Germany via the German Library Statistics (DBS, German). This knowledge doesn’t discredit the other books, as if they were automatically “leftovers”. You can do a lot with knowledge. The bias that exists with AI is certainly the best-known case. But the same goes for all information: should monuments be pulled down or left standing? We need to find a path through the various moral phases that we live through as a society.

In my own studies, I specialised in pre-Columbian America. To name one example, the Aztecs never referred to themselves as Aztecs. If you searched in library catalogues from before 1763, the term “Aztec” did not exist; they called themselves Mexi’ca. Or take the Kerensky Offensive – search engines do not have much to offer on that. It was a military offensive that was only given that name afterwards; it used to be called something else. It is the same challenge: to cover both terms, even if the terminology has changed or it is no longer “en vogue” to work with a certain term.

Anna Kasprzik: This is also called concept drift, and it is generally a big problem. It’s why you always have to retrain the machines: concepts are continually developing, new ones emerge or old terms change their meaning. Even if there is no discrimination, terminology is constantly evolving.
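
In operational terms, this is one reason a production service has to re-evaluate its models periodically on freshly (intellectually) indexed documents and retrain when quality degrades. A minimal sketch, assuming a scikit-learn-style model and leaving data loading and the retraining routine as placeholders:

```python
# Sketch of a concept-drift check: compare the live model's quality on a
# fresh, intellectually indexed sample against a threshold. The model,
# the data loading and the retraining routine are institution-specific
# placeholders; the threshold value is purely illustrative.
from sklearn.metrics import f1_score

F1_THRESHOLD = 0.45  # illustrative value, tuned per institution

def drift_check(model, fresh_texts, fresh_gold_labels) -> bool:
    """Evaluate on recent documents; signal when retraining is due."""
    predicted = model.predict(fresh_texts)
    score = f1_score(fresh_gold_labels, predicted, average="micro")
    if score < F1_THRESHOLD:
        print(f"F1 dropped to {score:.2f}: usage has shifted, retrain the model.")
        return True
    return False
```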

And who does this work?

Anna Kasprzik: The machine learning experts at the institution.

Frank Seeliger: The respective zeitgeist and its intended structure.

Thank you for the interview, Anna and Frank.

Part 1 of the interview on “AI in Academic Libraries” is about areas of activity, the big players and the automation of indexing.
Part 3 of the interview on “AI in Academic Libraries” focuses on prerequisites and conditions for successful use.

This text has been translated from German.


We were talking to:

Dr Anna Kasprzik, coordinator of the automation of subject indexing (AutoSE) at the ZBW – Leibniz Information Centre for Economics. Anna’s main focus lies on the transfer of current research results from the areas of machine learning, semantic technologies, semantic web and knowledge graphs into productive operations of subject indexing of the ZBW. You can also find Anna on Twitter and Mastodon.
Portrait: Photographer: Carola Gruebner, ZBW©

Dr Frank Seeliger (German) has been the director of the university library at the Technical University of Applied Sciences Wildau since 2006 and has been jointly responsible for the part-time programme Master of Science in Library Computer Sciences (M.Sc.) at the Wildau Institute of Technology since 2015. One module explores AI. You can find Frank on ORCID.
Portrait: TH Wildau

Featured Image: Alina Constantin / Better Images of AI / Handmade A.I / Licensed by CC-BY 4.0


INCONECSS 2022 Symposium: Artificial Intelligence, Open Access and Data Dominate the Discussions

by Anastasia Kazakova

The third INCONECSS – International Conference on Economics and Business Information – took place online from 17 to 19 May 2022. The panels and presentations focused on artificial intelligence, Open Access and (research) data. INCONECSS also addressed collaboration in designing services for economics research and education and how these may have been influenced by the corona crisis.

Unleash the future and decentralise research!

Prof. Dr Isabell Welpe, Chair of Business Administration – Strategy and Organisation at the Technical University of Munich, gave the keynote address “The next chapter for research information: decentralised, digital and disrupted”. With this, she wanted to inspire the participants to “unleash the future” and decentralise research. The first topic of her presentation was about German universities. Isabell Welpe took us on a journey through three stations:

  1. What happens at universities?
  2. What does the work of students, researchers and teachers and the organisation at universities look like?
  3. How can universities and libraries be made future-proof?

In her lecture, she pointed out that hierarchically organised teaching is currently often unable to cope with the rapid social changes and new developments in the world of work. Isabell Welpe therefore suggested opening up teaching and organising it “bottom up”. This means relying on the decentralised self-organisation of students, offering (digital) spaces for exchange and tailoring teaching to their needs. Through these changes, students can learn while actively participating in research, which simultaneously promotes their creativity and agility. This is a cornerstone for disruptive innovation; that is, innovation that breaks and radically changes existing structures.

Prof. Dr Isabell Welpe, Chair of Business Administration – Strategy and Organisation at the Technical University of Munich, drawing: Karin Schliehe

Libraries could support and even drive the upcoming changes. In any case, they should prepare themselves for enormous changes due to the advancing digitisation of science. Isabell Welpe observed the trend towards “digital first” in teaching – triggered by the coronavirus situation. In the long term, this trend will influence the role of libraries as places of learning, but will also determine interactions with libraries as sources of information. Isabell Welpe therefore encouraged libraries to become a market-place in order to promote exchange, creativity and adaptability. The transformation towards this is both a task and an opportunity to make academic libraries future-proof.

In her keynote speech, Isabell Welpe also focused on the topic of decentralisation. One of the potentials of decentralisation is that scientists exchange data directly and share research data and results with each other, without, for example, publishers in between. Keywords were: Web 3.0, Crypto Sci-Hub and Decentralisation of Science.

In the Q&A session, Isabell Welpe addressed the image of libraries: Libraries could be places where people would go and do things, where they would exchange and would be creative; they could be places where innovation took place. She sees libraries as a Web 3.0 ecosystem with different services and encouraged them to be more responsive to what users need. Her credo: “Let the users own a part of the library!”

How can libraries support researchers?

Following on from the keynote, many presentations at INCONECSS dealt with how libraries can succeed even better in supporting researchers. On the first day, Markus Herklotz and Lars Oberländer from the University of Mannheim presented their ideas on this topic with a poster (PDF, partly in German). The focus was on the interactive virtual assistant (iVA), which enables data collaboration by imparting legal knowledge. Developed by the BERD@BW and BERD@NFDI initiatives, the iVA helps researchers understand the applicable basic data protection regulations and thereby evaluate their legal options for data use. The self-directed assistant is an open-source learning module and can be extended.

Paola Corti from SPARC Europe introduced the ENOEL toolkit with her poster (PDF). It is a collection of templates for slides, brochures and Twitter posts to help communicate the benefits of Open Education to different user groups. The aim is to raise awareness of the importance of Open Education. It is openly designed, available in 16 language versions and can be adapted to the needs of the organisation.

On the last day of INCONECSS, Franziska Klatt from the Economics and Management Library of the TU Berlin reported in her presentation (PDF) on another toolkit, one that supports researchers in applying the Systematic Literature Review Method (SLRM). Originating in the medical field, the method was adapted to the economic context. SLRM helps researchers reduce bias and redundancy in their work by following a formalised, transparent and reproducible process. The toolkit provides a collection of information on the stages of this process, as well as SLR sources, tutorial videos and sample articles. Through the use of the toolkit and the information on the associated website, the media competence of young researchers could be improved. An online course is also planned.

Field reports: How has the pandemic changed the library world?

The coronavirus is not yet letting go of the world, which also applies to the world of the INCONECSS community: In the poster session, Scott Richard St. Louis from the Federal Reserve Bank of St. Louis presented his experiences of onboarding in a hybrid work environment. He addressed individual aspects of remote onboarding, such as getting to know new colleagues or the lack of a physical space for meetings.

The poster (PDF) is worth a look, as it contains a number of suggestions for new employees and management, e.g.:

  • “Be direct, and even vulnerable”,
  • “Be approachable” or
  • “What was once implicit or informal needs to become explicit or conscious”.

Arjun Sanyal from the Central University of Himachal Pradesh (CUHP) reported in his presentation (PDF) on a project of his library team. They observed that the long absence from campus triggered a kind of indifference towards everyday academic life and an “informational anxiety” among students. The latter manifests itself in a reluctance to use information resources for studying, out of a fear of searching for them. To counteract this, the librarians used three types of measures: mind-map sessions, an experimental makerspace and supportive motivational events. In the mind-map sessions, for example, the team collected ideas for improving library services together with the students. The effort paid off: after a while they noticed that the campus, and the libraries in particular, were once again popular. In addition, the makerspace and the motivational events helped students rediscover the joy of learning, reports Arjun Sanyal.

Artificial Intelligence in Libraries

One of the central topics of the conference was without doubt the use of artificial intelligence (AI) in the library context. On the second day of INCONECSS, the panel participants from the fields of research, AI, libraries and thesaurus/ontology looked at aspects of the benefits of AI for libraries from different perspectives. They discussed the support of researchers through AI and the benefits for library services, but also the added value and the risks that arise through AI.

Discussion, drawing: Karin Schliehe

The panellists agreed that new doors would open up through the use of AI in libraries, such as new levels of knowledge organisation or new services and products. In this context, it was interesting to hear Osma Suominen from the National Library of Finland say that AI is not a game changer at the moment: it has the potential, but is still too immature. In the closing statements, the speakers took up this idea again: They were optimistic about the future of AI, yet a sceptical approach to this technology is appropriate. It is still a tool. According to the panellists, AI will not replace librarians or libraries, nor will it replace research processes. The latter require too much creativity for that. And in the case of libraries, a change in business concepts is conceivable, but not the replacement of the institution of the library itself.

It was interesting to observe that the topics that shaped the panel discussion kept popping up in the other presentations at the conference: Data, for example, in the form of training or evaluation data, was omnipresent. The discussants emphasised that the quality of the data is very important for AI, as it determines the quality of the results. Finding good and usable data is still complex and often related to licences, copyrights and other legal restrictions. The chatbot team from the ZBW also reported on the challenges surrounding the quality of training data in the poster session (PDF).

The question of trust in algorithms was also a major concern for the participants. On the one hand, there is bias, which is difficult to remove from AI systems and requires great care. Again, data is the main issue: if the data is biased, it is almost impossible to remove the bias from the system; sometimes this even prevents systems from going live at all. On the other hand, there is the trust in the results that an AI system delivers. Because AI systems are often non-transparent, it is difficult for users and information specialists to trust the search results an AI system provides for a literature search. These are two of the key findings from the presentation (PDF) by Solveig Sandal Johnsen from AU Library, The Royal Library, and Julie Kiersgaard Lyngsfeldt from Copenhagen University Library, The Royal Library. The team from Denmark investigated two AI systems designed to assist with literature searches. The aim was to investigate the extent to which different AI-based search programmes support researchers and students in academic literature search. During the project, information specialists tested the functionality of the systems using the same search tasks. Among other results, they concluded that the systems can be useful in the exploratory phase of a search, but that they function differently from traditional systems (such as classic library catalogues or search portals like EconBiz) and, according to the presenters, challenge the skills of information specialists.

This year, the conference took place exclusively online. As the participants came from different time zones, it was possible to attend the lectures asynchronously and after the conference. A selection of recorded lectures and presentations (videos) is available on the TIB AV portal.

Links to INCONECSS 2022:

  • Programme INCONECSS
  • Interactive Virtual Assistant (iVA) – Enabling Data Collaboration by Conveying Legal Knowledge: Abstract and poster (PDF)
  • ENOEL toolkit: Open Education Benefits: Abstract and poster (PDF)
  • Systematic Literature Review – Enhancing methodology competencies of young researchers: Abstract and slides (PDF)
  • Onboarding in a Hybrid Work Environment: Questions from a Library Administrator, Answers from a New Hire: Abstract and Poster (PDF)
  • Rethinking university librarianship in the post-pandemic scenario: Abstract and slides (PDF)
  • “Potential of AI for Libraries: A new level for knowledge organization?”: Abstract Panel Discussion
  • The EconDesk Chatbot: Work in Progress Report on the Development of a Digital Assistant for Information Provision: Abstract and slides (PDF)
  • AI-powered software for literature searching: What is the potential in the context of the University Library?: Abstract and slides (PDF)


About the Author:

Anastasia Kazakova is a research associate in the department Information Provision & Access and part of the EconBiz team at the ZBW – Leibniz Information Centre for Economics. Her focus is on user research, usability and user experience design, and research-based innovation. She can also be found on LinkedIn, ResearchGate and XING.
Portrait: Photographer: Carola Grübner, ZBW©


Open Access: Is It Fostering Epistemic Injustice?

by Nicki Lisa Cole and Thomas Klebel

One of the key aims of Open Science is to foster equity through transparent, participatory and collaborative processes and by providing access to research materials and outputs. Yet the academic context in which Open Science operates is unequal. Core-periphery dynamics are present, with researchers from the Global North dominating authorship and collaborative research networks. Sexism is present, with women underrepresented within academia and especially within senior career positions (PDF); and racism manifests within academia, with white people over-represented among higher-education faculty. Inequality is the water in which we swim; we therefore cannot be naive about the promises of Open Science.

In light of this reality, the ON-MERRIT project set out to investigate whether Open Science policies actually worsen existing inequalities by creating cumulative advantage for already privileged actors. We investigated this question within the contexts of academia, industry and policy. We found that, indeed, some manifestations of Open Science are fostering cumulative advantage and disadvantage in a variety of ways, including epistemic injustice.

Miranda Fricker defines epistemic injustice in two ways. She explains that testimonial injustice “occurs when prejudice causes a hearer to give a deflated level of credibility to a speaker’s word,” while hermeneutical injustice “occurs at a prior stage, when a gap in collective interpretive resources puts someone at an unfair disadvantage when it comes to making sense of their social experiences”. Here, we take a look at ways in which Open Access (OA) publishing, as it currently operates, is fostering both kinds of epistemic injustice.

APCs and the stratification of OA publishing

Research shows that article processing charges (APCs) lead to unequal opportunities for researchers to participate in Open Access publishing. The likelihood of US researchers publishing OA, especially when APCs are involved, is higher for male researchers from prestigious institutions who have received federal grant funding. Similarly, APCs are associated with lower geographic diversity of authors within journals, suggesting that they act as a barrier for researchers from the Global South in particular. In our own research, specifically investigating the role of institutional resources, we found that authors from well-resourced institutions both publish and cite more Open Access literature and, in particular, publish in journals with higher APCs than authors from less-resourced institutions. Disparities in policies that promote and fund OA publication are likely a significant driver of these trends.

While these policies are obviously helpful to those who benefit from them, they reproduce existing structural inequalities within academia by fuelling cumulative advantages of already privileged actors and further side-lining the voices of those with fewer resources. This form of testimonial injustice is historically rooted and widespread within academia, with research from the Global South often deemed less relevant and less credible. With the rise of APC-based Open Access, actors with fewer resources face additional barriers to contributing to the most recognised outlets for scientific knowledge, since journal prestige and APC amounts have been found to be moderately correlated. Given that scientific research is expected to aid in tackling urgent societal challenges, it is alarming that current trends in scholarly communication are exacerbating the marginalisation of research and knowledge from the Global South and from less-resourced scholars more generally.

Access Isn’t Enough

One of the arguments in support of Open Access is that it fosters greater scientific use by societal actors. This is a commonly cited refrain in the literature, but we found that OA has virtually no impact in this way. Rather, we heard from policy-makers that they rely on existing personal relationships with researchers and other experts when they seek expert advice. Moreover, we heard from researchers that it is far more important that scientific outputs be cognitively accessible, or understandable, when disseminating research to lay audiences.

Communicating scientific results to lay audiences requires time, resources and a particular skill set; failing to account for this reality limits the pool of actors able to do it (to those already well-resourced and ‘at the table’) and inhibits the potential for science to impact policy-making and to be useful to affected communities. In this way, Open Access without understandability creates hermeneutical injustice for any population that would benefit from understanding research and how it impacts their lives, but especially for those who are marginalised, who may have participated in research or been the subjects of study, and to whom the outcomes of research could provide a direct benefit. People cannot advocate for their rights and for their communities if they are not given the tools to understand social, environmental and economic problems and possible solutions. The concept of Open Access must therefore go beyond removing a paywall and provide understandability, aligning with the “right to research” as articulated by Arjun Appadurai.

What We Can Do About It

In response to these and other equity issues within Open Science, the ON-MERRIT team worked with a diverse stakeholder community from across the EU and beyond to co-create actionable recommendations aimed at funders, leaders of research institutions, and researchers. We produced and published 30 consensus-based recommendations, and here we spotlight a few that can respond to epistemic injustice and that may be actionable by libraries.

  • Supporting alternative, more inclusive publishing models without author-facing charges and the use of sustainable, shared and Open Source publishing infrastructure could help to ameliorate the inequitable stratification of Open Access publishing.
  • Supporting researchers to create more open and understandable outputs, including in local languages when appropriate, could help to ameliorate the hermeneutical injustice that results from the inaccessibility of academic language. In conjunction, supporting partnerships with other societal actors in the translation and dissemination of understandable research findings could also help to achieve this.
  • We believe that librarians could be especially helpful by supporting (open and sustainable) infrastructure that enables the findability and understandability of research by lay audiences.

Visit our project website to learn more about ON-MERRIT and our results, and click here to read our full recommendations briefing.


About the Authors:

Nicki Lisa Cole, PhD is a Senior Researcher at Know-Center and a member of the Open and Reproducible Research Group. She is a sociologist with a research focus on issues of equity in the transition to Open and Responsible Research and Innovation. She was a contributor across multiple work packages within ON-MERRIT. You can find her on ORCID, ResearchGate and LinkedIn.
Portrait: Nicki Lisa Cole: Copyright private, Photographer: Thomas Klebel

Thomas Klebel, MA is a member of the Open and Reproducible Research Group and a Researcher at Know-Center. He is a sociologist with a research focus on scholarly communication and reproducible research. He was project manager of ON-MERRIT, as well as investigating Open Access publishing, and opinions and policies on promotion, review and tenure. You can find him on Twitter, ORCID and LinkedIn.
Portrait: Thomas Klebel: Copyright private, Photographer: Stefan Reichmann

The post Open Access: Is It Fostering Epistemic Injustice? first appeared on ZBW MediaTalk.

The Ideal Place for Students to Learn: Results of a ZBW Photo Study

by Alena Behrens and Nicole Clasen

In this article, Alena Behrens and Nicole Clasen from the User Services team at the ZBW – Leibniz Information Centre for Economics report on the background, method, questions and results of their photo study among students. The key feature: the participants were only allowed to answer the five questions with photos. Text answers or comments were not possible. 19 students took part and sent 108 photos showing how they work, how they take their breaks and what their after-work rituals are. Alena Behrens and Nicole Clasen present the most interesting findings, draw conclusions about how new learning spaces in libraries need to be designed, and reveal what role candles play in this:

Pandemic challenges

User experience research (UX research) is characterised by spending a lot of time with your users, engaging with them on an emotional level and questioning their behaviours to learn as much as possible about them. But how can you build this connection when libraries are closed for weeks and people are called upon to keep their physical distance from each other? The ZBW’s User Services team dared to attempt a UX survey during the pandemic.

Approach and setting

Due to the pandemic-related requirements at the time of implementation in autumn 2021, it quickly became clear that the project should be carried out online as far as possible. The opening hours of the libraries were very limited. Only a few users worked in the library on site, and most of the staff worked from their home offices.

For us, however, the questions were obvious: How do students learn at home during the pandemic? What stresses or disturbs them about this work situation? How do students deal with these changed learning conditions without a lecture hall or library? And what can we learn from this to adapt and improve the future design of the learning spaces?

A suitable UX method quickly emerged for these questions: Photo Studies (a term coined by Andy Priestner).

Photo Studies from home

In the Photo Studies method, the participants answer the questions posed with photos they have taken themselves. This was suitable for our question for two reasons: First, it gave us a very good insight into how the students set themselves up to study at home. Second, we were able to comply with all hygiene measures by establishing contact via email and having the photos sent to us digitally. In addition, the students were quite flexible in terms of when they answered the questions. They could take the photos at their leisure and decide what should be in them.

The following five questions were to be answered with photos:

  1. Where is the favourite place to study/work and what is the most important object?
  2. What did the workplace look like (during an online lecture)?
  3. How is the break organised?
  4. What was the most annoying/challenging thing in the last few months?
  5. What does the after-work ritual look like?

Photos and findings

A total of 19 students participated in the study, submitting 108 photos; so not everyone sent exactly five. The User Services team analysed the photos anonymously. By sending them in, the students consented to this and also to our using the photos in presentations, articles, etc. The number of photos gave us a good insight into the working and learning conditions of studying from home.

Workplaces and stress points

A stable internet connection and good work equipment, such as technical devices, a desk and a chair, are important for working. These are also the biggest stress points if they do not meet the requirements: an unreliable internet connection is a hindrance for online lectures, and uncomfortable chairs cause back pain.

Only half of the participants work at a proper desk; the other half sit at the kitchen table or at other repurposed tables. Space is generally cramped, and it is usually not possible to separate work from leisure.

Breaks and after-work rituals

The participants like to spend their breaks outside and on the move, e.g. taking a walk, sometimes with friends. After work, on the other hand, they spend most of their time at home. This is in line with the pandemic-related restrictions in place at the time of the study.

Many of the after-work ritual photos showed sport: from boxing and running to the yoga mat, many individual sports were represented. The cosy sofa for relaxing was not missing either.

Environment and decoration

As we already found out in our 2018 survey, the environment and atmosphere of the learning space play a major role. Implementing these needs in their own homes presented challenges for the students, but they were able to solve them. For a pleasant dose of daylight and fresh air, the learning spaces were often close to the window. They decorated the space with plants and candles. Drinks, especially coffee and tea, and snacks were also not to be missed.

  • Conclusion 1: Equip learning spaces well

    For us, it was rather surprising that after three semesters of purely digital study, many students still work with rather provisional solutions. Many work at the dining table or have placed a small table in the corner of the room. In most cases, there is only one laptop available, and there are no additional monitors. This is definitely a starting point for libraries to provide well-equipped learning spaces. This starts with large tables and comfortable, ergonomic chairs, and can be extended by technical equipment, e.g. by offering additional monitors to make working easier. Areas where you can work alone and still participate in online seminars were rare in libraries before the pandemic. We will consider this form of work in the future.

  • Conclusion 2: Create spaces for social interaction

    What has often been missing since the beginning of the Corona pandemic, but is all the more essential, is social contact. For libraries, this means on the one hand that places to work together in groups are important. There is often not enough space for this in small shared rooms. Areas for common breaks and social meeting places to exchange ideas and continue working creatively are also desired. Areas where small yoga and relaxation breaks can be taken can also offer added value. After sitting for a long time, many people feel the need to move, as the photos have confirmed.

  • Conclusion 3: Develop the library together with students

    It is very exciting to get an impression of students’ personal workplaces. The very positive feedback from the participants also showed us that they appreciate it when you want to respond to their personal needs. What was surprising for us was that we were given such open and personal insights. Thus, we can draw on an instructive and informative pool of knowledge and inspiration to design user services for the changing needs of learning and studying after the pandemic. With this knowledge, we can further develop the services in a targeted and needs-oriented manner.

Reflection on method and procedure

For the circumstances (Corona pandemic, home office/studying) and the questions arising from this context, the photo study method was very well suited. We gained an insight into students’ private learning environments that we could hardly have gained otherwise. In this online implementation, in contrast to previous face-to-face studies on site, we did not conduct any follow-up interviews. If we were to run the study again, we would combine the online photo study with a short interview. This would give the participants the opportunity to explain their images. Some photos left a lot of room for interpretation, and an explanation would have made the exact interpretation easier.

However, this kind of implementation does not replace personal contact. Being able to talk to the students on site and to personally guide the UX methods is a great benefit. It enables a fluent dialogue and exchange.

This text has been translated from German.

This might also interest you:

About the authors:

Nicole Clasen is Head of User Services at ZBW – Leibniz Information Centre for Economics. Her work focuses on information transfer, digital user services and the usability experience. LinkedIn and Twitter.
Portrait: ZBW©, photographer Sven Wied

Alena Behrens works as a librarian in the user services department at the ZBW – Leibniz Information Centre for Economics. In addition to working at the service desk, her work focuses on information mediation and user experience. She can also be found on Twitter.
Portrait: Alena Behrens©

The post The Ideal Place for Students to Learn: Results of a ZBW Photo Study first appeared on ZBW MediaTalk.

Open Access Barcamp 2022: Where the Community Met

by Hannah Schneider and Andreas Kirchner

This year’s Open Access Barcamp took place online once again, on 28 and 29 April 2022. From 9:00 a.m. to 2:30 p.m. on both days, the roughly 50 participants were able to put together their own varied programme, and engage in lively discussions about current Open Access topics.

Open Access Barcamp 2022 Agenda

What worked well last year was repeated this year: the innovative conference tool Gather was again used to facilitate online discussions, and the organisers prioritised opportunities to discuss and network when designing the programme. They integrated a speed-dating format into the programme and offered an open round at topic tables. The Communication, Information and Media Centre (KIM) of the University of Konstanz once again hosted the Barcamp in the context of the open-access.network project. During the interactive session planning on the first day, it became clear that the Open Access community is currently dealing with a very wide range of topics.

Illustration 1: open-access.network tweet about the topic tables

The study recently published by the TIB – Leibniz Information Centre for Science and Technology University Library entitled “Effects of Open Access” (German) was presented in the first session. This literature review examined 61 empirical papers from the period 2010-2021, analysing various impact dimensions of Open Access, including the attention garnered in academia, the quality of publications, inequality in the science system and the economic impact on the publication system.

The result on the citation advantage of Open Access publications was discussed particularly intensively. Here, the data turned out to be less clear than expected. However, it was also noted that methodological difficulties could occur during measurement in this field. The result of the discussion was that a citation advantage of Open Access can continue to be assumed and can also be cited in advisory discussions. “All studies that show no advantage do not automatically prove a citation disadvantage,” as one participant commented.

Tools and projects to support Open Access

Various tools to support Open Access publishing were particularly popular this year. “B!SON” is a recommendation service that helps scientists and scholars find a suitable Open Access journal for an article that has already been written. The title, abstract and references are entered into the tool, which then displays suitable Open Access journals on this basis and awards each a score that indicates how good the “match” is. B!SON is being developed by the TIB and the Saxon State and University Library Dresden (SLUB).
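To illustrate the workflow that such a service supports, here is a minimal sketch of how a recommendation tool like B!SON could be queried programmatically. The endpoint and response fields are invented placeholders, not B!SON’s actual API; please consult the B!SON documentation for the real interface.

```python
# Hypothetical sketch of querying a journal recommendation service such as
# B!SON. The URL and response fields are invented placeholders, not the
# real B!SON API.
import requests

manuscript = {
    "title": "An example article title",                      # example input
    "abstract": "A short abstract of the finished article.",  # example input
    "references": ["Doe, J. (2020): A cited work. Journal X."],
}

response = requests.post(
    "https://bison.example.org/api/recommend",  # placeholder endpoint
    json=manuscript,
    timeout=30,
)
response.raise_for_status()

# Rank the suggested journals by the match score the service returns.
for journal in sorted(response.json(), key=lambda j: j["score"], reverse=True):
    print(f'{journal["score"]:.2f}  {journal["title"]}')
```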

Another useful service with a similar goal was introduced in the form of the “oa.finder”, developed by Bielefeld University Library in the context of the open-access.network project. Authors can use this research tool to find suitable publication venues by entering their role in the submission process and the scientific institution where they work. Different search and filter options make it possible to tailor the results to individual needs. Both tools are currently in beta, and the developers are particularly keen to receive feedback.

A further session was dedicated to the question of what needs to be considered when converting from PDF to PDF/a in the context of long-term archiving, and which tools can be drawn upon to validate PDF/a files. This provided an opportunity to discuss the advantages and disadvantages of tools such as JHOVE, veraPDF and AvePDF.
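As a concrete illustration of such a validation step, the following is a minimal sketch that batch-checks PDF/A files by calling the veraPDF command-line tool from Python. It assumes veraPDF is installed and on the PATH; checking the report with a simple string match is a simplification, so consult the veraPDF documentation for the exact report format of your version.

```python
# Minimal sketch: batch PDF/A validation with the veraPDF CLI.
# Assumes veraPDF is installed and on the PATH; the string check against
# the XML report is a simplification of veraPDF's output format.
import subprocess
from pathlib import Path

def is_pdfa_compliant(pdf: Path, flavour: str = "1b") -> bool:
    """Run veraPDF against one file and report PDF/A compliance."""
    result = subprocess.run(
        ["verapdf", "--flavour", flavour, str(pdf)],
        capture_output=True,
        text=True,
    )
    return 'isCompliant="true"' in result.stdout

for pdf in sorted(Path("scans").glob("*.pdf")):
    print(f"{pdf.name}: {'OK' if is_pdfa_compliant(pdf) else 'NOT COMPLIANT'}")
```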

The KOALA project (building consortial Open Access solutions) showed us which standards (German) apply for journals and publication series that participate in financing through KOALA consortia. Based upon these standards, the project aims to create an instrument that contributes to safeguarding processes and quality in journals and publishing houses. The project is developing sustainable, collaborative funding through scientific libraries, in order to establish an alternative to the dominant APC model.

Illustration 2: Results on the User Experience of the open-access.network website

In addition, the open-access.network project gave the Barcamp participants the opportunity to give feedback on its services. On the one hand, they evaluated the range of information and discussed the newly designed website. On the other, they focussed on the project events, discussing achievements and making suggestions for improvement. Here, the breadth of the different formats received particular praise, as did the fact that offers such as the “Open Access Talk” series have become very well established in the German-speaking countries.

Open Access communication: Reaching the target audience

Many members of the community are still working on how best to bring Open Access issues to different audiences. One of the sessions emphasised that, although communication is regarded as a very important aspect of Open Access work, the required skills are often lacking, not least because communication has hardly played a role in library training to date. One of the central challenges in reaching the individual target groups is that different communication channels need to be served, which in turn requires strategic know-how. In order to stabilise and intensify the exchange, the idea of founding a focus group within the framework of the open-access.network project was proposed; this will be pursued further at a preparatory meeting at the end of June 2022.

Illustration 3: Screenshot of MIRO whiteboard for documenting the Barcamp

Another session also considered the question of communicative ways to disseminate Open Access; here, low-threshold exchange formats were discussed. The Networking and Competence Centre Open Access Brandenburg relocated its own “Open Access Smalltalk” series (German) to the Barcamp, very much in the spirit of openness, and initiated a discussion on how to get interested people around the table. In particular, it was argued that virtual formats lower the barrier to participation in such exchanges and that warm-ups can really help to mobilise participants.

Challenges faced by libraries

The issues and challenges of practical day-to-day Open Access work at libraries were also discussed a great deal this year. The topic of how to monitor publication costs resonated strongly, for example, and was discussed both in a session and in a follow-up conversation at one of the topic tables. Against the backdrop of increasing Open Access quotas and costs, libraries face the urgent challenge of getting an overview of centralised and decentralised publication costs. They are applying various techniques here, such as decentralised recording in their own cost accounts, but also their own research and the help of the Open Access Monitor.

A further session explored the topic of secondary publication services, specifically looking at which metadata on research funders can be gathered in repositories, and how. The discussion covered specific practical tips for implementation, including recommendations for the Crossref and RADAR/DataCite metadata schemata.
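To give an idea of what such funder metadata can look like, here is a minimal sketch of a funding reference following the “fundingReferences” property of the DataCite Metadata Schema 4; the field names follow the schema, while the award number and title are invented for illustration.

```python
# Minimal sketch of funder metadata as a DataCite 4.x "fundingReferences"
# entry, expressed as a Python dict ready to be serialised to JSON.
# Field names follow the DataCite schema; the award data is invented.
import json

record_fragment = {
    "fundingReferences": [
        {
            "funderName": "Deutsche Forschungsgemeinschaft",
            # Persistent funder ID (here: the DFG's Crossref Funder ID)
            "funderIdentifier": "https://doi.org/10.13039/501100001659",
            "funderIdentifierType": "Crossref Funder ID",
            "awardNumber": "123456789",          # hypothetical
            "awardTitle": "An example project",  # hypothetical
        }
    ]
}

print(json.dumps(record_fragment, indent=2))
```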

One of the final sessions at the Barcamp explored the issue of how libraries can ensure that they provide “appropriate” publication opportunities. In doing so, reference was made to the “Recommendations for Moving Scientific Publishing Towards Open Access” (German), published by the German Council of Science and Humanities in 2022. To find out which publication routes researchers want and need, it is necessary to be in close contact with the various scientific communities. The session considered how these contacts could be improved within the participants’ own institutions. Various communication channels were mentioned, such as subject specialists, faculty councils/representatives or seminars for doctoral candidates.

Illustration 4: Screenshot of feedback from the community

Conclusion

We can look back on a multifaceted and lively Open Access Barcamp 2022. The open concept was well received, and there was considerable willingness from the participants to actively join in and help shape the sessions. The jointly compiled programme offered a wide range of topics and opportunities to discuss everyday Open Access issues. In this virtual setting, people also joined in and contributed to the collegial atmosphere. After the two days, the community returned to everyday life armed with new input and fresh ideas; we would like to thank all those who took part, and look forward to the next discussion.

You may also be interested in:

Horizon Report 2022: Trends Such as Hybrid Learning, Micro-certificates and Artificial Intelligence are Gaining Traction

by Claudia Sittner and Birgit Fingerle

The 2022 EDUCAUSE Horizon Report Teaching and Learning Edition was published in mid-April 2022. It examines which trends, technologies and practices will have a significant effect on teaching and learning at universities in the future. As with previous editions, the report uses four different scenarios to envision what the future of university education could look like. We outline some of these trends, which could be of interest to academic libraries and information infrastructure facilities.


Hybrid learning: Here to stay

After around two years of the Corona pandemic, most of us are now aware that there will be no return to pre-Corona normality; online and hybrid learning are the new normal. One trend identified by the Horizon Report is a continuation of synchronous and asynchronous learning experiences, coupled with minimal compulsory attendance on campus. According to the Horizon Report (p. 7), this will require “more sustainable and evidence-based models of hybrid and online teaching and learning”. These have now gradually superseded the contingency plans hastily put in place at the start of the pandemic, and will be accompanied by recently developed, reliable hybrid and online education, as well as investment in additional staff and services. Higher education institutions now have to focus on making sure their students are ready for online learning experiences. This is certainly an area where academic libraries can also play their part. The new motto is: Education for everyone, from anywhere.
Example: ‘Attend anywhere’ model, Portland, USA

Micro-certificates are winning out over classic university degrees

Lifelong, tailored learning is gaining importance over typical, drawn-out degrees, according to the Horizon Report. Both microcredentialing and online/hybrid education are particularly useful in this regard. That is why libraries and digital infrastructure facilities should be increasingly focussed on more practical, personalised and competence-based courses and micro-certificates, which according to the Horizon Report could potentially provide more attractive options for career advancement than a traditional university education. For example, libraries could think about providing their own courses to make their offers more visible, and thus prevent the big tech companies from dominating the field entirely.

Furthermore, the fact that many people experienced significant financial losses as a result of the pandemic has led them to think more carefully about whether it pays to opt for a typical university degree. Micro-certificates, especially those awarded for free by institutions such as libraries, are thus becoming more attractive.

Read more:

Artificial intelligence: learning analysis and learning tools

Even if its application often remains at the teething stage, artificial intelligence (AI) plays a role in two respects in this year’s Horizon Report: in relation to both learning analysis and learning tools. In learning analysis, institutions primarily use AI to support students’ learning progress on the basis of existing data. When it comes to learning tools, it is the students themselves who use them, and who are thus able to improve their learning experience at university.

The digital re-orientation brought about by the pandemic has also heralded a flood of digital data. For academic libraries too this means engaging more directly with the potential of the data that has been generated, and ultimately providing their users and staff with an improved learning and working experience.

One challenge in this regard could emerge from the data silos of individual departments, divisions or institutes. These have to be more closely integrated in order to optimise the user experience and encourage operating efficiency. Despite the great potential of AI, there are also some risks to be aware of, such as the fact that AI systems often adopt existing biases and thus favour certain groups of users. This can increase inequalities. What’s more, it is important to clearly communicate what data is being gathered for what purpose, so that users do not lose trust in the institutions.

Read more:

Data trails demand critical engagement with media

In light of growing data volumes, users of infrastructure facilities are leaving more data trails behind them online, whether in the cloud or on social networks. This means it is even more important to equip them with sufficient information literacy and a media-critical mindset, so that they can recognise fake news, dubious conferences and predatory journals, for example. In this regard, academic libraries have a more important role to play than ever when it comes to offering relevant courses and further support.

Strengthening sustainable practices and reducing the ecological footprint

Environmental aspects are also becoming increasingly relevant in how all higher education institutions conduct themselves. It will be a question of them reducing their own ecological footprint on site, and leading by example. Here, libraries can take a look at the permanently altered behaviour brought about by the pandemic, as well as the new demands of users and staff. The ‘Planetary Health Education Framework’ and the 17 Sustainable Development Goals (SDGs) proposed by the United Nations could provide possible points of reference. Is this perhaps a good time for academic libraries to think about how they can become more sustainable and strengthen environmentally friendly practices?

Allegation of political interference

In times of increasing nationalism and populism in some parts of the world, along with global uncertainties, it would be advisable for educational institutions to safeguard their autonomy. However, due to the financing that they require, it is not always possible to withdraw from political matters completely. “In these instances, institutions must be prepared to offer compelling evidence of the benefits of the education and training they provide, as well as to accommodate the needs of increasingly strained and distracted students and families.” (p. 13). In light of increasingly scarce financial resources, more focus could also be afforded to academic libraries in this regard.

This text has been translated from German.

You may also be interested in:

About the Authors:

Birgit Fingerle holds a diploma in economics and business administration and works at the ZBW in fields including innovation management, open innovation and open science, currently focusing in particular on the “Open Economics Guide”. Birgit Fingerle can also be found on Twitter.
Portrait, photographer: Northerncards©

Claudia Sittner studied journalism and languages in Hamburg and London. She worked for a long time at the ZBW publication Wirtschaftsdienst – a journal for economic policy – and is now the managing editor of the blog ZBW MediaTalk. She is also a freelance travel blogger (German), speaker and author. She can also be found on LinkedIn, Twitter and Xing.
Portrait: Claudia Sittner©

The post Horizon Report 2022: Trends Such as Hybrid Learning, Micro-certificates and Artificial Intelligence are Gaining Traction first appeared on ZBW MediaTalk.

Open Access Survey in Greece: Status Quo, Surprising Findings and Starting Points

An interview with Maria Frantzi, Athanasia Salamoura and Giannis Tsakonas

The representative nationwide survey on Open Access in Greece took place in March and April 2021. 500 authors from seven disciplines were surveyed. The subject areas were: Natural Sciences, Humanities, Computer Science and Engineering, Health Sciences, Economics and Management, Social Sciences and Environmental Sciences.

The researchers surveyed varied in age and career stage, with around 80% of respondents having a great deal of academic work experience (16 or more years). The survey asked respondents questions about their opinions and experiences with different aspects of Open Access and its implementation methods.

Staff from the Scholarly Communication Unit (EESC) of the Hellenic Academic Libraries Link (HEAL-Link) designed and analysed the survey. HEAL-Link is the national consortium of academic libraries in Greece.

In the interview the team of EESC, namely Maria Frantzi, Athanasia Salamoura and Giannis Tsakonas, answer our questions about the background, results and consequences of the survey.

What is the state of Open Access in Greece?

Maria: In Greece there is no national mandate for Open Access, though there are two laws (4310/2014 and 4485/2017) which refer to the conditions of Open Access for publicly funded research and resources in education, research, technology and culture. No Greek Research Funding Organisation (RFO) is a signatory of Plan S, and, apart from HEAL-Link, there are no other centralised funds for Open Access publications, only a handful of institutional ones. The strong base of the Social Sciences and Humanities (SSH) and of the Greek language has led to the development of a few platforms for diamond Open Access journals.

Guided by a declaration on Open Access, which was supported by the Ministry of Education, Research and Religious Affairs in May 2018, HEAL-Link, the consortium of Greek academic libraries, has taken many initiatives to foster Open Access. These include agreements with an Open Access element with most of the collaborating publishers, as well as community-led initiatives such as the Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP³) and the Open Library of Humanities (OLH). In March 2019, the Council of Rectors, at the suggestion of HEAL-Link, strongly recommended to the universities’ senates a mandate for faculty and researchers to self-archive their scientific publications, but the majority of universities have not yet implemented it. Currently, except for two or three universities, all institutional repositories have a mandatory deposit policy for theses and dissertations and a recommendation for researchers’ and faculty publications. As a result, only one third of the Greek output seems to be published in Open Access, with HEAL-Link being the only coordinated action.

What were the most surprising findings of the survey for you?

Athanasia: While the evidence that Maria mentioned is not very positive, it is encouraging to know that most of the researchers are aware of at least one Open Access route and that only a very small percentage has a negative opinion of Open Access.

Also, it’s interesting that only a few of them publish in Open Access because it’s obligatory under their funding programme. If they have the resources, then they strongly consider opting for Open Access. A negatively surprising finding is that almost one third of the respondents said that they don’t know their institution’s repository, although institutional repositories (IRs) have been in place for more than ten years.

Finally, we were surprised to see in the interviews that followed the survey that some researchers consider other aspects of openness, such as Open Educational Resources, Open Source, etc., in tandem with Open Access publications.

How strong is the awareness of scientific authors for Open Access in Greece? On the other hand: To what extent has this type of publication already become established in practice?

Giannis: There is a high awareness of Open Access among Greek researchers in general. However, their replies show that this type of publication has only partly become established in practice. Almost two thirds of the researchers mentioned that they have published in an Open Access journal, but fewer than a quarter have published in a repository.

The pattern seems consistent across the disciplines. Together with the evidence that we have about the growth of Open Access in Greece, this points to a state of fragmentation that is further increased by a considerable percentage of researchers who prefer other, for-profit platforms to self-archive their publications.

Why do academic authors in Greece decide not to publish in Open Access? What are the most common reasons?

Maria: Mainly, it’s the cost of APCs. In our survey, 42.6% of respondents said that they consider the cost of APCs to be very high. There seem to be two more main reasons discouraging researchers from publishing in Open Access in Greece. There is a concerning percentage, a bit more than 12%, who claim to be hesitant due to the questionable quality of Open Access publishing, meaning they consider it of lower quality. And a similar percentage, close to 14%, said that they have not been properly informed about Open Access publishing.

More than 70% of the respondents have a (rather) positive opinion of Open Access. In fact, however, only just under 37% pursue the goal of publishing in Open Access in Greece. How do you explain this difference?

Maria: We think that it is a matter of cost, quality, and an adequate number of Open Access journals in their field. Most of them would pursue publishing in Open Access if the above-mentioned criteria were met. Also, it’s worth mentioning that a certain number of the respondents regard Open Access only as a way of accessing scholarly content and not as a publication venue.

Only 22% of researchers have ever published their work in a repository. Why are there so few of them?

Athanasia: Well, apart from the fact that almost one third of the respondents said that they don’t know their institution’s repository and a quarter of them prefer to publish and/or post their papers on other platforms and services, we found out, mainly thanks to the interviews, that Open Access is probably viewed as a way of accessing scholarly content rather than as a publishing act. To a certain extent, researchers also associate repositories primarily with the publication of dissertations and theses.

Open Access in Greece: Perceptions in Academic Institutions by Athanasia Salamoura, Maria Frantzi, Giannis Tsakonas / Scholarly Communication Unit, HEAL-Link

What would have to change for Open Access to become widespread in Greece?

Maria: The most pressing issue is to provide more information and training about Open Access and Open Science to every researcher in Greece, especially early career researchers, through an institutionalised course of action. Furthermore, a national policy implementing a mandate for Open Access publishing would substantially help.

In parallel, the universities and institutions should adopt the Open Access recommendations of HEAL-Link, which will enable both the green and gold Open Access routes, in addition to other Open Access models, in an effective and sustainable way. In countries like Greece, a multitude of options would work in a complementary fashion to cover the wide range of publications, which varies across languages, formats and disciplinary cultures. To this end, a new approach to research assessment should be implemented, and it is important to involve and engage all the stakeholders in everything concerning Open Access in Greece.

What actions and what prioritisation do you see in the survey?

Athanasia: A priority would be a long-term information campaign and intensive training about Open Access and Open Science to convince the research community of their benefits. It is important to make researchers aware not only of the various forms of Open Access, but also of all the developments that are productively transforming scholarly communication, such as Open Peer Review. Moreover, researchers should be informed about all the agreements and Open Access initiatives supported by HEAL-Link and become familiar with their institutional repository.

All in all, what do you think – which results can also be transferred to other countries and what is specifically the case in Greece? Why?

Giannis: That more coordinated effort is needed. Libraries cannot advance Open Access alone; they need to join forces with other stakeholders to raise awareness, inform and train the researchers. In countries like Greece, the main issue is the lack of a culture of openness; this can change only if all the stakeholders, including the universities’ administrations, are persuaded of the benefits of Open Access and are eager to detach themselves from the established forms.

We are happy that, after the survey, we have found some allies to carry our work forward, and we look forward to seeing how this will help Open Access in Greece. Finally, the financial aspect of Open Access makes it disproportionately hard for countries with developing economies to gain ground in a sustainable way. The transition to Open Access is still costly, and although one can argue that there are savings in comparison to paywalled publishing, the hardships remain for researchers who cannot afford to cover the expenses. As long as that is the case, good intentions will remain just that, and Open Access will not fulfil its potential as another paradigm for scholarly communication.

Further reading for Open Access enthusiasts

We were talking to:

Athanasia Salamoura is a graduate of the Department of Archives, Library Science and Museology of the Ionian University, Greece. She is currently a member of the Scholarly Communication Unit of HEAL-Link, monitoring the Open Access agreements of HEAL-Link with different publishers. She can also be found on ORCID and Twitter.
Portrait: Athanasia Salamoura©

Maria Frantzi is a graduate of the Department of Archives, Library Science and Museology of the Ionian University, Greece, and holds a Master’s degree in Byzantine Philology from the University of Patras. Currently, she is an e-resources librarian at the Library and Information Center of the University of Patras, a member of the Steering Committee for Electronic Resources of HEAL-Link and a member of the Scholarly Communication Unit of HEAL-Link. She can also be found on ORCID.
Portrait: Maria Frantzi©

Dr Giannis Tsakonas is the Acting Director of the Library & Information Center, University of Patras, Greece. He also serves on LIBER’s Executive Board as head of the Innovative Scholarly Communication Steering Committee, and on the Board of Directors of Hellenic Academic Libraries Link. He coordinates the work of the Scholarly Communication Unit of HEAL-Link as well. He can also be found on ORCID and Twitter.
Portrait: Dr Giannis Tsakonas©

Featured Image: The building of the Library & Information Center of University of Patras that hosts the Scholarly Communication Unit [CC-BY], photographer: iD Studio

The post Open Access Survey in Greece: Status Quo, Surprising Findings and Starting Points first appeared on ZBW MediaTalk.

Best Practice: The First Six Months of Open Science at the University of North Carolina Wilmington

An interview with Lynnee Argabright and Allison Michelle Kittinger, William Madison Randall Library at the University of North Carolina Wilmington (UNCW)

A new central department was created for you with the posts of research data librarian and scholarly communications librarian. How did you go about filling these new roles?

Allison Michelle Kittinger (Scholarly Communications Librarian): As soon as I assumed this position, I became the voice of my institution in scholarly communications spaces. I was our representative for scholarly communications committees within our library and in our university system. This gave me a lot of connections and a kind of support network off the bat that gave me a good picture of what had been happening so far around scholarly communications and Open Science here. Many of my roles, such as managing Open Access and Open Education funding and overseeing the institutional repository, were inherited from librarians who began this work on campus when it was not in their job description. Now, I am the point person to continue this work and grow it into a community.

Lynnee Argabright (Research Data Librarian): I began thinking about this new role by considering the research data lifecycle—data collection, cleaning, analysis, visualisation, sharing …—and looking at academic literature to see what other data librarians have done. A good one was “Academic Libraries and Research Data Services” (PDF) and the follow-up study “Research Data Services in Academic Libraries: Where are We Today?”. It helped me scope out what a data librarian could do, and then I scaled down to thinking about what I could do immediately versus in the future. I also thought about my support capacity as a single unit serving the campus, with potential collaborations with others outside data-specific roles. I talked with many people on campus about their data needs and about the current data infrastructure and support. Based on that, I am allocating my time on a rollout schedule (see the discussion of “maturity models” in “Maturing research data services and the transformation of academic libraries”) to learn about, plan and develop services for particular data lifecycle areas—such as reviewing Data Management Plans and teaching data analysis in R workshops—before I market those specific services to campus. Data discovery was a lifecycle area I could start on right away, joining the subject librarians in their course instruction sessions about finding research results and getting follow-up consultations for finding Open Data.

What are your goals in the new jobs, i.e. for the first year of Open Science at UNCW?

Allison: Awareness, always! Faculty are hungry for the services we offer, but not all of them know we are here and doing the work now. My main goal, now that much of my role has been established, is to raise awareness of the services we offer.

Lynnee: A big priority for me is to intentionally and transparently fit in Open Science to as many of my data services as possible. Am I teaching about data discovery? I could show Open Data sources. Am I consulting on data privacy? I could bring up how to de-identify data so the data could potentially be shared. Did I get a question about data analysis? I could recommend Open Source tools.

One particular initiative I want to get started in my first year is data sharing. Promoting data sharing on campus would be of value to a campus with newly increased research intensity expectations; not only because researchers new to getting grants now often face the expectation to share their data, but also because sharing data will help showcase UNCW-produced research to the world. However, repository deposit participation does not happen overnight—as another OSC poster explains—so a first year goal to get involved with data sharing has been to get a feel for administering the technical Dataverse infrastructure we have, begin mentioning the benefits of data sharing in other data conversations to fuel awareness, and start looking into how to ease the experience of preparing data to be shared.

An Increased Use of the Institutional Repository by Researchers from 7% to 45%: Lessons from the Open Access Campaign at the School of Economics and Business, University of Ljubljana

I began too excitedly by offering a workshop about data sharing and Dataverse, to generally go over the benefits of sharing data, as well as to demo how to use Dataverse … and only one person showed up, so Allison’s point about awareness is super important.
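For readers who wonder what the technical side of such a deposit looks like, here is a minimal sketch of adding a data file to an existing dataset via the Dataverse native API, following the add-file call in the Dataverse API guide; the server URL, API token and dataset DOI are placeholders.

```python
# Minimal sketch: adding a data file to an existing Dataverse dataset via
# the native API. Server URL, API token and dataset DOI are placeholders.
import requests

SERVER = "https://dataverse.example.edu"  # hypothetical installation
API_TOKEN = "xxxxxxxx-xxxx-xxxx"          # per-user token from the account page
DOI = "doi:10.5072/FK2/EXAMPLE"           # hypothetical dataset identifier

with open("survey_data.csv", "rb") as f:
    response = requests.post(
        f"{SERVER}/api/datasets/:persistentId/add",
        params={"persistentId": DOI},
        headers={"X-Dataverse-key": API_TOKEN},
        files={"file": ("survey_data.csv", f)},
        timeout=60,
    )

response.raise_for_status()
print(response.json()["status"])  # "OK" on success
```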

What were the biggest road blocks so far? How did you manage to overcome them?

Allison: Being new in part, but that is overcome by time and making connections. Sometimes not knowing who to reach out to or collaborate with because we’ve never made those connections before on campus. Everyone is learning together. I think a lack of awareness can be a roadblock, but in general once I’ve explained my role and what I do to people who weren’t aware of me they are very receptive. I credit that to the culture at our institution.

Lynnee: My new department has been asked to go through the subject librarians if they want to reach out to researchers, so a roadblock I’m facing in my new role is getting patrons to know I exist and even to think that the library could be involved in data in the first place. One of the strategies I tried within my first six months was to begin planning campus-wide programming that celebrated international events.

I helped Allison with planning Open Access Week in October 2021, and proposed to co-host a Love Data Week in February 2022 with another campus office partner. Hosting these programmes could simultaneously teach researchers data skills, build a campus community for data activity, and boost awareness that the library is involved with data.

Since then, I’ve gotten more researcher participation in workshops and consultations, and other research staff are reaching out to collaborate. I recognise running campus-wide programming takes a lot of work up front to plan and it may not take off at first, but it did help me get recognised, and it will slowly build up the library’s brand in the data sphere. Here are my reflections about making event planning sustainable.

The job profiles of modern librarians have diversified greatly in recent years. However, many people still have the image of the old lady with a bun putting dusty books on shelves in their minds when they think of libraries. Where do you think this perception gap comes from?

Allison: The public perception of librarians I would guess comes from media stereotypes about public libraries. I’d think academic libraries are not the first type of library people think about when they think of libraries. Especially in roles like ours, they can be removed from students and the public and focused more on faculty and research activity. Open Science shows a path for us to engage with all these populations and stay research-focused at the same time. Our institution is known for student and community engagement, so I always have an eye towards the research happening in those spaces too. Visibility is the key to closing existing perception gaps.

Lynnee: This is a classic case of “You do not know what you do not know”: if nothing intervenes in an individual’s interactions with the library, the image of the library as a quiet place for books will remain. How do we change this perception? Library spaces that remove the books in exchange for group work areas, that provide classroom, exhibit and maker spaces, and that allow food can begin to change what the physical library means. Librarians embedding themselves into classes to cover more than journal subscriptions and participating in campus committees can begin to change what library representation means.

Whenever I hear “the library can help with that?” (which I hear frequently in this new research role), I consider it a huge win. Yes, we are getting involved in active research engagement and collaborations. Yes, we are moving the needle on infrastructure that supports Open Science. Each small thing we do in our answers to everyday consultations or in flyers around the campus can be a perception shift away from “Bun Lady.”

Why is it so important for modern library staff to do marketing and public relations for their services?

Allison: I’ve seen direct marketing work firsthand. Our library dean sends out personal congratulatory emails to researchers when they publish an article, and includes a sentence about depositing their work in the institutional repository with me copied. Faculty love this recognition, and they are happy to use the repository when they are made aware of it. In addition, press releases have worked really well for Open Access and Open Publishing initiatives. We published a press release about a faculty member publishing our first open textbook with the library in partnership with UNC Press, and now we have more faculty interested in publishing their work in the same way.

Lynnee: Marketing highlights what services the library offers and is especially important when participating in new areas of research support. Since the library had not really provided data support previously, I started by developing partnerships with the other research support offices, such as the grants office, the Institutional Review Board office, the graduate school administrators, the faculty support office, and Campus IT.

These offices may have overlaps in data services, or may be contact points that researchers are coming to for help, and if these offices know about me, they can direct patrons with data needs to come to me. For example, I was preparing for a Data Management Plan workshop and told our grants office about it, since the deadline for their internal funding opportunities was approaching. They sent out the workshop news in their email listserv. Based on the timing of their email and of people’s registrations, this marketing was the cause of most of my attendees—none of whom had previously met me.

How can you build up a sustainable Open Science campus in times of temporary employment?

Allison: Not just positions; funding can be temporary, organisational structures can be temporary. My definition of sustainability is that the work can be picked up where someone leaves off, and that it has a continued commitment of support on a broader level. For example, our library’s APC fund was not funded for next year. Only the library was funding it, and in the reorganisation we’ve had recently, our funds are spread more thinly across more departments. Where I see us going is more diamond Open Access publishing and more institutional read-and-publish deals that cover these costs for faculty. And that shows that a lack of sustainability can also be an opportunity to move closer to our true values. Sustainability should also be a path to growth.

Lynnee: I think this is where promoting data management practices can be particularly helpful for Open Science. Documentation of processes during data collection and data processing can greatly help a lab as students cycle in and out. Compiling documentation files can then be easier to share in a repository when the research project is completed. I can encourage the use of Open Source collaborative software, such as Open Science Framework and e-lab notebooks, which can show transparency of a team’s process through version logs, editing logs, and data file permissions. Influencing researchers to pick up use of these tools or practices and become familiar with them in their workflows can make Open Science a practical, efficient, and collaborative way to do research.

What are the lessons you have learned in the first six months of Open Science?

Allison: That sustainability also can’t exist without collaboration. That’s true in Open Science initiatives and in roles supporting them. It takes a team like our department and buy-in from the library and other campus entities to grow these programmes. If you’re the “one person” in charge of all of these things, and you can only use your own resources and nobody else’s, it can feel like you’re alone in the work, and it would all crumble if you leave. But I haven’t felt that way, and for anyone looking to establish Open Science roles, it is crucial that nobody feels so.

Lynnee Marie Argabright and Allison Michelle Kittinger: The First 6 Months of Open Science

Lynnee: I discovered I do not have to be a perfect expert in all areas of my job—often, what I know is already far more than what my patrons know, and if I am unsure about a question, I can explore with the patron for answers. Another lesson I picked up by learning the culture of my university is to think about Open Science in terms of my university’s and patrons’ needs. Our institution recently went from an R3 to an R2 Carnegie classification, which means the campus has a larger emphasis on research than before; thus, more of my patrons may need help with research-related skills — for example, how to write data management plans (DMPs) for grant applications. While reviewing DMPs, I can work in Open Science by asking them how they plan to share their data afterwards, which gets into what data repositories are reliable and how to be responsible about sharing sensitive data.

This might also be interesting for you:

We were talking to:

Allison Michelle Kittinger is the scholarly communications librarian at UNC Wilmington. She manages all things concerning Open Publishing, including an Open Education fund, Open Access initiatives, Open Journal support, and the campus institutional repository. She can also be found on ORCID.
Portrait: UNCW©, photographer: Jeff Janowski

Lynnee Marie Argabright is the research data librarian at UNC Wilmington. She provides guidance about collecting, using, managing, and sharing data in research, through instructional workshops or individual consultations. Lynnee has previous work experience in areas such as Open Access outreach, bibliometric network analysis visualisation, finding economic data, and higher education textbook and monograph publishing. She can also be found on Twitter and ORCID.
Portrait: UNCW©, photographer: Jeff Janowski

The post Best Practice: The First Six Months of Open Science at the University of North Carolina Wilmington first appeared on ZBW MediaTalk.

Workshop Retrodigitisation 2022: Do It Yourself or Have It Done?

by Ulrich Blortz, Andreas Purkert, Thorsten Siegmann, Dawn Wehrhahn and Monika Zarnitz

Workshop Retrodigitisation: topics

Under the workshop title “Do It Yourself or Have It Done? Collaboration With External Partners and Service Providers in Retrodigitisation”, around 230 practitioners specialising in the retrodigitisation of library and archive materials met in March 2022. This year, the Berlin State Library – Prussian Cultural Heritage hosted the retrodigitisation workshop (German), which was held online due to the pandemic. The workshop was first held in 2019, initiated by the three central specialist German libraries: ZB MED, TIB Hannover and ZBW. All four institutions jointly organised a programme dedicated, on the one hand, to the question “Do it yourself or have it done?” and, on the other hand, to the question “Is good = good enough?” concerning quality assurance in retrodigitisation. After each of the eight presentations, many interesting questions were asked and lively discussions developed.

Keynote: colourful and of high quality

The keynote on “Inhouse or Outsource? Two Contrasting Case Studies for the Digitisation of 20th Century Photographic Collections” (PDF) was given by two English colleagues, Abby Matthews (Archive and Family History Centre) and Julia Parks (Signal Film & Media/Cooke’s Studios). They reported on their projects on the digitisation of photographic records and old photographs from municipal archives, which they carried out in cooperation with volunteers.

This was a big challenge, not least because of the Corona pandemic. Both were able to report that involving those who would later take an interest in the offer created a special relationship with this local cultural heritage. The volunteers’ experience also contributed a great deal, especially to the documentation of the images, the speakers said.

Cooperation: many models

The first focus of the workshop was on collaboration in retrodigitisation. There were five presentations on this, covering a wide range:

Nele Leiner and Maren Messerschmidt (SUB Hamburg) reported in their presentation on “Class Despite Mass: Implementing Digitisation Projects with Service Providers” (PDF, German) on two retrodigitisation projects in which they worked together with service providers. It was about the projects “Hamburg’s Cultural Property on the Net” (German) and a project that was funded by the German Research Foundation (DFG) in which approx. 1.3 million pages from Hamburg newspapers are being digitised.

Andreas Purkert and Monika Zarnitz (ZBW) gave a presentation on “Cooperation With Service Providers – Tips for the Preparation of Specifications” (PDF, German). They offered tips and tricks for preparing procurement procedures for digitisation services.

Julia Boensch-Bär and Therese Burmeister (DAI) presented the “‘Retrodigitisation‘ Project of the German Archaeological Institute“, which is about having one’s own (co-)edited publications digitised. They described the work processes that ensured the smooth implementation of the project with service providers.

Natalie Przeperski (IJB Munich), Sigrun Putjenter (SBB-PK Berlin), Edith Rimmert (UB Bielefeld) and Matthias Kissler (UB Braunschweig) are jointly running the Colibri project (German). In their presentation “Colibri – the Combination of All Essential Variants of the Digitisation Workflow in a Project of Four Partner Libraries” (PDF, German), they reported on how the work processes for the joint digitisation of children’s book collections are organised. The challenge was to coordinate both the cooperation among the participating libraries and the cooperation with a digitisation service provider.

Stefan Hauff-Hartig (Parliamentary Archives of the German Bundestag) reported on the “Retro-digitisation Project in the Parliamentary Archives of the German Bundestag: The Law Documentation” (PDF, German). 12,000 individual volumes covering the period from 1949 to 2009 are to be processed. Hauff-Hartig reported on how the coordination of the work was organised with a service provider.

Conclusion: In the presentations on cooperation with other institutions and service providers, it became clear that the success of such projects depends heavily on intensive communication between all participants and on careful preparation of joint work processes. The organisational effort for this is not insignificant, but the speakers were nevertheless able to show that the synergy effects of cooperation outweigh the costs and that some projects only become possible at all when others are involved.

Quality assurance: Is “good” = good enough?

This question was posed somewhat self-critically by the speakers in this thematic block. Procedures and possibilities for quality assurance of the digitised material were presented:

Stefanie Pöschl and Anke Spille (Digital German Women’s Archive) contrasted the quality, effort and cost considerations of “doing it yourself” with those of purchasing services. In their presentation on “Quality? What for? The Digital German Women’s Archive Reports From Its Almost 6-year Experience With Retrodigitisation” (PDF, German) they looked at the use of standards to ensure the highest possible level of quality.

Yvonne Pritzkoleit and Silke Jagodzinski (Secret State Archives – Prussian Cultural Heritage) presented their institution’s quality assurance concept under the title “Is Good Good Enough? Quality Assurance in Digitisation”. It is based on the ISO/TS 19264-1:2017 standard for image quality, and the concept can provide many suggestions for other institutions.

Andreas Romeyke (SLUB Dresden) explained in his presentation “Less is More – the Misunderstanding of Resolution” (PDF, German) why less is often more when it comes to the resolution of images. He described what is meant by resolution, how to determine a suitable resolution and what effects wrongly chosen resolutions can have.
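To make the resolution trade-off concrete, here is a small worked example (our illustration, not taken from the talk) of how pixel dimensions and uncompressed file size grow with the chosen scan resolution.

```python
# Worked example: pixel dimensions and uncompressed size of an A4 scan at
# different resolutions. Doubling the resolution quadruples the pixel count
# and the storage requirement, which is one reason why "less is more".
MM_PER_INCH = 25.4

def scan_pixels(width_mm: float, height_mm: float, ppi: int) -> tuple[int, int]:
    """Pixel dimensions of a scan at a given resolution (pixels per inch)."""
    return (round(width_mm / MM_PER_INCH * ppi),
            round(height_mm / MM_PER_INCH * ppi))

for ppi in (300, 400, 600):
    w, h = scan_pixels(210, 297, ppi)  # A4 page: 210 x 297 mm
    mib = w * h * 3 / 1024 ** 2        # uncompressed 24-bit RGB
    print(f"{ppi} ppi: {w} x {h} px, ~{mib:.0f} MiB uncompressed")
```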

Conclusion: Increasingly, digitised material is not only consulted as a source for academic work; it becomes research data in its own right, used for example in the digital humanities. This results in special quality requirements that are not always easy to meet. The three presentations on this topic showed different approaches and also made clear that keeping effort and benefit in reasonable proportion is an important concern of quality management. It became apparent that standards such as ISO 19264-1 are increasingly being applied, even if not always by the book, but within the limits of each institution’s technical and staffing possibilities.

Workshop Retrodigitisation 2022: lively discussions – good feedback

In the first part of the workshop, all presentations contained concrete recommendations and useful tips for designing digitisation projects with service providers. Many aspects described in the presentations and discussed afterwards were strongly practice-oriented, so that participants could apply them to their own projects with service providers; they also offered a good basis for planning future projects. It was particularly interesting to hear what volumes of pages can realistically be scanned in projects with service providers and how projects involving several institutions could be implemented successfully despite the pandemic.

The presentations on the topic of quality in the second block of the workshop also met with great interest. Again, all contributions included many practical tips that the audience can apply in their own organisations.

In summary, it can be said that the workshop with its many interesting contributions showed the many different ways of working with service providers and the increasing importance of quality management.

The feedback survey showed that the workshop was again very well received this year. All participants were able to take away many new impulses and ideas. The organising institutions will offer another workshop next year. In 2023, it will be hosted by the ZBW.

This text has been translated from German.

About the authors:

Ulrich Ch. Blortz is a qualified librarian for the higher service in academic libraries and a library official. He has worked at the former Central Library of Agricultural Sciences in Bonn since 1981 and has also been responsible for retrodigitisation at the ZB MED – Information Centre for Life Sciences since 2003.

Andreas Purkert is a trained freight forwarding and logistics merchant. In the private sector, he worked as a certified quality representative and quality manager and holds the REFA basic certificate in work organisation. Since May 2020, he has been head of the Digitisation Centre of the ZBW – Leibniz Information Centre for Economics.

Thorsten Siegmann is Head of Unit at the Berlin State Library and responsible for managing retrodigitisation. He holds a degree in cultural studies and has worked in various functions at the Foundation Prussian Cultural Heritage for 15 years.

Dawn Wehrhahn has been a qualified librarian since 1992. Since then she has worked, with a short interruption, at TIB – Leibniz Information Centre for Technology and Natural Sciences and University Library. Her roles have included: Head of the Wunstorf Municipal Library, Head of the Physics Department Library at TIB and, from 2001, Team MyBib Operations within TIB’s full-text supply. Since October 2021, she has headed the retrodigitisation team.

Dr Monika Zarnitz is an economist and Head of the Programme Area User Services and Preservation at the ZBW – Leibniz Information Centre for Economics.


User Experience in Libraries: Insights from the Central Economics Library at the University of Ljubljana

At the University of Ljubljana (UL), there is not one central university library. In fact, each faculty or academy has its own library: 38 in total. One of these 38 libraries is the Central Economics Library (CEL) at the School of Economics and Business (SEB LU), where Tomaž Ulčakar works.

He attended a conference in Glasgow in 2017 that opened his eyes to User Experience (UX). Since then, a lot has happened at the CEL: there was a pop-up library, a shift in focus onto the main users and the whole concept of user training has been reworked.

An interview with Tomaž Ulčakar, Central Economics Library and Publishing Office at the School of Economics and Business, University of Ljubljana.

What are your goals with UX? Did you achieve them?

The goal of the Central Economics Library is to transform itself into a modern Centre of Knowledge (CeK), in which the classical library, the digital library, the information centre, publishing, Open Access and the infrastructure centre with two laboratories (a behavioural lab and a financial lab) will work together as one large, modern knowledge incubator.

Which UX methods do you apply at the CEL?

We mostly use brainstorming, stakeholder and user interviews, and sometimes also kick-off meetings.

Can you give us a practical example that worked, where you applied UX to solve a problem?

A good example of the use of UX at the CEL was the design of online services for users at the beginning of the pandemic in March 2020, when we used all of the above methods in combination in our Zoom meetings to launch the new online offering “CEL outside the library”.

To apply UX methods, you need library users who are willing to participate. How do you manage to find and motivate them?

I find this the hardest part. It is true: you need a lot of energy to persuade users to participate. We have some very enthusiastic colleagues who are willing to enter the users’ comfort zone and motivate them by explaining that we are trying to improve our services. In the past, we also used our social media channels to encourage participation in UX with an award for the best idea (for example, when we were looking for a new name for our study places or for the e-tutor).

When and why did you start working with UX? What does that mean practically?

Based on Andy Priestner’s presentation at the European Business School Librarian’s Group (EBSLG) Annual General Meeting 2017 (German) in Glasgow and on his book “User Experience in Libraries: Applying Ethnography and Human-Centred Design”, we decided to change the whole user concept of the library.

In 2017, we segmented the users, observed their habits and took the first decisions to adapt our services to our main users: full-time students. Therefore, at the beginning of the academic year in October 2017, we went outside the library with a pop-up library and presented our services at a booth.

Pop-up-Library: Registration

In 2018, we worked a lot to change the focus of the librarians in the circulation department and also to do some research among users on how they behave in our spaces, what they are looking for, how they use our facilities, etc. We also have a young staff member who, with his fresh perspective on the library and its services, has motivated other colleagues to make even bigger changes in the UX dimension.

In 2019, after analysing the existing model and based on users’ wishes expressed at the counter, in personal conversations and in surveys, we decided to change the whole concept of user training. We offered narrowly specialised presentations and e-resource workshops for individual areas of study. We also approached professors with this concept, inviting library experts into individual courses to present relevant e-resources.

All training presentations and workshops for an academic year are presented on the LibCal platform. We also use the platform as an e-tutor for all library services, such as membership and loans, remote access, training sessions, etc.

For each training, promotion is prepared with leaflets and via our promotion channels

This move toward users proved critical during the Corona pandemic in 2020 and 2021, when the library kept in touch with users through brief online Zoom presentations of its services. We put almost all services online. Statistics show a sharp increase in the use of remote access to e-resources:

Statistics show a sharp increase in the use of remote access to e-resources

In 2020 and 2021, we also worked hard to provide a good user experience around Open Access: support for researchers and a good information service. The OA experience at our school is well represented in a colleague’s poster at the Open Science Conference 2022, whose presentation showed what was done to achieve such strong use of the institutional repository by researchers over the last year.

The results of the decision to use the methods of UX when introducing new services are reflected in the increased number of active users, increased use of resources, and, last but not least, greater awareness of the importance of the library among school administrators.

What are the most important lessons you have learned from applying UX?

UX is quite a convenient method for introducing new services, but it also takes a lot of energy at the beginning, when you start planning. You need a lot of strength to manage the process and to organise ideas. But it can also be very pleasant: you do some team building with colleagues and you get to know your users.

What are your tips for libraries that would like to start with UX? What is a good starting point?

To start with UX, I would recommend observing library users – what they do, where they go, how they use the library – and then systematically starting with the services you want to change or (re)design. Begin with a UX method that you think is easiest to use, or rather, one that you think will get you results.

We were talking to:

Tomaž Ulčakar is the head of the Central Economics Library (CEL), the European Documentation Centre and the Publishing Office at the School of Economics and Business at the University of Ljubljana (UL). From 2019 to 2021, he was the president of the Library Council, which coordinates the library activities of the 38 academic libraries at the faculties and academies of the UL. Tomaž Ulčakar can be found on SICRIS, the Slovenian Current Research Information System.
Portrait: Tomaž Ulčakar©

Featured Image: SEB LU© Yearly Review, academic year 2020-2021. All other graphics: SEB LU©


Barcamp Open Science 2022: Connecting and Strengthening the Communities!

by Yvana Glasenapp, Esther Plomp, Mindy Thuna, Antonia Schrader, Victor Venema, Mika Pflüger, Guido Scherp and Claudia Sittner

As a pre-event of the Open Science Conference, the Leibniz Research Alliance Open Science and Wikimedia Germany once again invited participants to the annual Barcamp Open Science (#oscibar) on 7 March. The Barcamp was once again held completely online. By now well-versed in online events, a good 100 participants turned up to openly discuss a diverse range of topics from the Open Science universe with like-minded people.

As at the Barcamp Open Science 2021, the spontaneous compilation of the programme showed that the majority of the sessions had already been planned and prepared in advance. The spectrum of topics ranged from very broad ones, such as “How to start an Open Science community?”, to absolutely niche discussions, such as the one about the German Data Use Act (Datennutzungsgesetz). But no matter how specific the topic, there were always enough interested people in the session rooms for a fruitful discussion.

Ignition Talk by Rima-Maria Rahal

In this year’s “Ignition Talk”, Rima-Maria Rahal skilfully summed up the precarious working conditions in the science system: on the one hand, temporary positions and competitive pressure (in Germany currently epitomised by the #IchBinHanna debate, German); on the other, the misguided incentive system with its focus on the impact factor. Not surprisingly, her five thoughts on more sustainable employment in science also met with great approval on Twitter.

Rima-Maria Rahal: Five Thoughts for More Sustainable Employment

Those interested in her talk “On the Importance of Permanent Employment Contracts for Research Quality and Robustness” can watch it on YouTube (recording of the same talk at the Open Science Conference).

In the following, some of the session initiators have summarised the highlights and most interesting insights from their discussions:

How to start an Open Science community?
by Yvana Glasenapp, Leibniz University Hannover

Open Science activities take place at many institutions at the level of individuals or working groups, without there being any exchange between them.

In this session we discussed the question of what means can be used to build a community of those interested in Open Science: What basic requirements are needed? What best practice examples are there? Ideas can be found, for example, in this “Open Science Community Starter Kit”.

The Four Stages of Developing an Open Science Community from the “Open Science Community Starter Kit” (CC BY-NC-SA 4.0)

Many perceive a gap between the information already offered by central institutions such as libraries and research services and the community that actually implements Open Science. These central bodies can take on a coordinating role in promoting existing activities and networking the groups involved. It is important to respect the specialisation within the Open Science community: grassroots initiatives often form in their field due to specific needs in the professional community.

Key persons such as data stewards, who are in direct contact with researchers, can establish contacts for stronger networking among Open Science actors. The communication of Open Science principles should not be too abstract. Incentives and the demonstration of concrete advantages can increase the motivation to use Open Science practices.

Conclusion: If a central institution from the research ecosystem wants to establish an Open Science community, it would do well to focus, for example, on promoting existing grassroots initiatives and to offer concrete, directly applicable Open Science tools.

Moving Open Science at the institutional/departmental level
by Esther Plomp, Delft University of Technology

In this session all 22 participants introduced themselves and presented a successful (or not so successful!) case study from their institution.

Opportunities for Open Science

A wide variety of examples of raising awareness of and rewarding Open Research practices were shared: Several universities have policies in place on research data or Open Access. Researchers can be referred to these, which is especially helpful when combined with personal success stories. Some universities offer (small) grants to support Open Science practices (Nanyang Technological University Singapore, University of Mannheim, German). Several universities offer training to improve Open Science practices, or support staff who can help.

Recommendations or tools that make it easier for researchers to open up their workflows are welcome. Bottom-up communities and grassroots initiatives are important drivers for change.

Conferences, such as the Scholarship Values Summit, or blogs could be a way to increase awareness about Open Science (ZBW Blog on Open Science). You can also share your institute’s progress on Open Science practices via a dashboard; one example is the Charité Dashboard on Responsible Research.

Challenges for Open Science

On the other hand, some challenges were also mentioned: Open Science is not prioritised, as the current research evaluation system is still very focused on traditional research impact metrics. It can also be difficult to get researchers enthusiastic about attending events; it works better to meet them where they are.

Not everyone is aware of all the different aspects of Open Science (sometimes it is equated with Open Access) and it can also be quite overwhelming. It may be helpful to use different terms such as research integrity or sustainable science to engage people more successfully with Open Science practices. More training is also needed.

There is no one-size-fits-all solution! If new tools are offered to researchers, they should ideally be robust and simplify existing workflows without causing additional problems.

Conclusion: Our main conclusions from the session were that we have a lot of experts and successful case studies to learn from. It is also important to have enthusiastic people who can push for progress in the departments and institutes!

How can libraries support researchers for Open Science?
by Mindy Thuna, University of Toronto Libraries

There were ten participants in this session from institutions in South Africa, Germany, Spain, Luxembourg and Canada.

Four key points that arose:

1. One of the first things that came up in dialogue was that Open Science is a very large umbrella that contains a LOT of separate pieces. Because there are so many moving parts in this giant ecosystem, it is hard to get started in offering support, and some areas get a lot less attention than others. Open Access and Open Data seem to be consistently flagged first as the areas that generate a lot of attention and support, while Open Software and even Citizen Science receive a lot less attention from libraries.

2. Come to us versus go to them: Another point of conversation was whether or not researchers are coming to us (as a library) to get support for their own Open Science endeavours. It was consistently noted that they are not generally thinking about the library when they are thinking, e.g., about research data or Open Access publishing. The library is not on their radar as a natural place to find this type of support until they have experienced it for themselves and realise that the library might offer support in these areas.

From this starting point, the conversation morphed to focus on the educational aspect of what libraries offer – i.e. making information available. But it was flagged that this information often sits in a bubble that is rarely browsed. So the community is a key player in getting the conversation started, particularly as part of everyday research life. This way, the library can be better integrated into the regular flow of research activities when information or help is needed.

3. The value of face-to-face engagement: People discussed the need to identify and work with the “cheerleaders” to get an active word-of-mouth network going to educate more university staff and students about Open Science (rather than relying on LibGuides and webpages to do so more passively). Libraries could be more proactive and work more closely with the scientific community to co-create Open Science related products. Provision of information is something we do well, but we often spend less time on personal interactions and more on providing things digitally. Some of the attendees felt this might be detrimental to really understanding the needs of our faculty. More time and energy should be spent on understanding the specific needs of scientists and shaping the scholarly communication system rather than reacting to whatever comes our way.

4. The role of libraries as a connecting element: The library is uniquely placed to see across subject disciplines and serve in the role of connector. In this way, it can help facilitate collaborations/build partnerships across other units of the organisation and assist in enabling the exchange of knowledge between people. It was suggested that libraries should be more outgoing in what they (can) do and get more involved in the dialogue with researchers. One point that was debated is the need for the library to acknowledge that it is not and cannot really be a neutral space – certainly not if Open Science is to be encouraged rather than just supported.

Persistent identifiers and how they can foster Open Science
by Antonia Schrader, Helmholtz Open Science Office

Whether journal article, book chapter, data set or sample – in an increasingly digital scientific landscape, these results of science and research must be made openly accessible and, at the same time, unambiguously and permanently findable. This should support the shift of scholarly information exchange from “closed” to “open” science and promote the transfer of findings to society.

Persistent identifiers (PIDs) play a central role here. They ensure that scientific resources can be cited and referenced. Once assigned, the PID always remains the same, even if the name or URL of an information object changes.
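
This stability is what makes PIDs machine-actionable. As a minimal sketch (assuming Python with the requests library; the DOI shown is a placeholder, not a real identifier), a DOI can be resolved via the doi.org content negotiation service to retrieve citation metadata, regardless of where the object itself currently lives:

```python
# Minimal sketch: resolve a DOI to machine-readable citation metadata
# (CSL JSON) via DOI content negotiation on doi.org.
import requests

def fetch_citation_metadata(doi: str) -> dict:
    """Ask doi.org to return CSL JSON metadata for the given DOI."""
    response = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Placeholder DOI - substitute a real one before running.
metadata = fetch_citation_metadata("10.1234/placeholder-doi")
print(metadata.get("title"), metadata.get("author"))
```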

The participants in the spontaneous barcamp session all agreed on this central importance of PIDs for the digital science landscape. All of them were familiar with the principle of PIDs and encounter them in their daily work, especially DOIs and ORCID iDs (Open Researcher and Contributor iD). In addition to the enormous potential of PIDs, however, the participants also saw challenges in their use and establishment. It became clear that there are still technical as well as ethical and data protection issues to consider.

There was consensus that these questions must be accompanied by a broad education on PIDs, their purpose and how they work; among the scientific staff of research institutions as well as among researchers. Websites tailored to the topic from ORCID DE (German) or Forschungsdaten.org (German) offer a good introduction.

Translating scholarly works opens science
by Victor Venema, Translate Science

Translating scholarly works opens science to more contributors (people who do important work but are not proficient in writing English), avoids duplicated effort and opens the fruits of science to larger communities. Translated scientific articles open science to science enthusiasts, activists, advisors, trainers, consultants, architects, doctors, journalists, planners, administrators, technicians and scientists. Such a lower barrier to participating in science is especially important for topics such as climate change, environment, agriculture and health.

In this session we discussed why translations are important and which tools could help with making and finding translations and foreign-language works. An interesting thought was that blogs are currently important for finding foreign-language scientific articles, which illustrates how much harder such works are to find and suggests allies to work with. The difficulty of finding foreign-language works emphasises the importance of at least translating titles and abstracts. Search engines that include automatically translated keywords can also help discovery.

The slides of the session “Translating scholarly articles opens science” can be found here.

Open Data before publication
by Mika Pflüger, Potsdam Institute for Climate Impact Research

In this session we discussed approaches and tools to collaborate on scientific data openly. The starting point of the discussion was the assessment that publishing scientific data openly is already quite well supported and works smoothly thanks to platforms like Zenodo. In contrast, open pre-publication collaboration is difficult because the available platforms impose restrictions, either on the size of the datasets or on the research area supported. Self-hosting a data collaboration platform like gin – Modern Research Data Management for Neuroscience is one solution, but usually not feasible for individual researchers or working groups.

We also talked briefly about experiences with open pre-publication collaboration. Experience is limited so far, but fruitful collaboration can develop when the datasets in question are useful to a broader group of scientists and contributing is easy and quick. Furthermore, adapting data workflows so that intermediate results and workflows are openly accessible also has benefits for reproducibility and data organisation in general.

Conclusion of the Barcamp Open Science 2022

The Barcamp once again proved to be a suitable opportunity to meet both Open Science veterans and newcomers and to engage in low-threshold conversation. Particularly popular this time were the extensive rounds of introductions in the individual sessions, which not only lowered the threshold for speaking up, but also helped everyone present to place their video conference counterparts professionally and, if desired, to note down contacts for later. Topics were dealt with in breadth by many or in depth by a few; sometimes two people are enough for the latter. In the end, it became clear that the most important thing is to network representatives from the different communities and to promote their exchange.

Thank you and see you next year!

Behind the scenes this year, the organising team had taken up feedback that the community had given in a survey on the future of the Barcamp Open Science. For example, there was an onboarding session especially for newcomers to the Barcamp, to explain the format and procedure and to “break the ice” beforehand. Even though we would like to hold the Barcamp in person again – and many participants want this too – there is also a clear vote for an online format, which is more inclusive and important for international participation. Ultimately, our goal is to further develop and consolidate the format together with the community. And we are open to new partners.

This text has been translated from German.

About the authors (alphabetical)

Dr Yvana Glasenapp is a research officer specialising in research data management and Open Science at Leibniz University Hannover (LUH). Her professional background is in biology. She can be found on XING, LinkedIn and ORCID.
Portrait: Yvana Glasenapp©

Dr Mika Pflüger works in the research software engineering group at Potsdam Institute for Climate Impact Research. He currently works on a better integration of simple climate models into the PIAM suite of integrated assessment models. Mika Pflüger can be found on Twitter.
Portrait: PIK/Klemens Karkow©

Dr Esther Plomp is a Data Steward at the Faculty of Applied Sciences, Delft University of Technology, in the Netherlands. She works towards contributing to a more equitable way of knowledge generation and facilitating others in working more transparently through her involvements in various open research communities including The Turing Way, Open Research Calendar, IsoArcH and Open Life Science. Esther Plomp can be found on Twitter, LinkedIn and GitHub.
Portrait: Esther Plomp©

Dr Guido Scherp is Head of the “Open-Science-Transfer” department at the ZBW – Leibniz Information Centre for Economics and Coordinator of the Leibniz Research Alliance Open Science. He can also be found on LinkedIn and Twitter.
Portrait: ZBW©, photographer: Sven Wied

Antonia Schrader has been working in the Helmholtz Open Science Office since 2020. There she supports the Helmholtz Association in shaping the cultural change towards Open Science. She promotes the dialogue on Open Science within and outside Helmholtz and regularly organises forums and online seminars (German) together with her colleagues. Antonia Schrader is active in ORCID DE, a project funded by the German Research Foundation to promote and disseminate ORCID iD (German), a persistent identifier (PID) for the permanent and unique identification of individuals. Antonia Schrader can be found on Twitter, LinkedIn and XING.
Portrait: Antonia Schrader, CC BY-ND

Claudia Sittner studied journalism and languages in Hamburg and London. She worked for a long time as an editor at the ZBW publication Wirtschaftsdienst, a journal for economic policy, and is now the managing editor of the blog ZBW MediaTalk. She is also a freelance travel blogger (German), speaker and author. She can also be found on LinkedIn, Twitter and Xing.
Portrait: Claudia Sittner©

Mindy Thuna has been a librarian since 2005. Before that, she worked as an educator in a variety of eclectic locations, including the National Museum of Kenya in Nairobi. Wearing her librarian hat, Mindy has had numerous fabulous librarian titles, including AstraZeneca Science Liaison Librarian, Research Enterprise Librarian and Head of the Engineering & Computer Science Library; she is currently the Associate Chief Librarian for Science Research & Information at the University of Toronto Libraries in Canada. Her research is also rather eclectic but focuses on people’s interactions with and perception of concepts relating to information, her current focus being faculty and Open Science practices. Mindy Thuna can also be found on ORCID and Twitter.
Portrait: Mindy Thuna©

Victor Venema works on historical climate data with colleagues all around the world, where descriptions of the measurement methods are normally in local languages. He organised the barcamp session as a member of Translate Science, an initiative recently founded to promote the translation of scientific articles. Translate Science has a wiki, a blog, an email distribution list and can be found on the Fediverse.


Open Science Conference 2022: New Challenges at the Global Level

by Guido Scherp, Doreen Siegfried and Claudia Sittner

The Open Science Conference 2022 was more international than ever before. Almost 300 participants from 49 countries followed the 10 presentations and the panel discussion on the latest developments in the increasingly global Open Science ecosystem. While the talks often focused on the macro level of the science system, a further 13 poster presentations took visitors to many best practice examples in different corners of Europe. Those who could not be there live could follow #OSC2022 on Twitter or watch the video recordings of the talks and presentations afterwards.

Tweet Leibniz Research Alliance Open Science: Thank you for being a part of this insightful three-day-event!

This year there was a cooperation with the German Commission for UNESCO (DUK). In the context of the UNESCO Recommendation on Open Science, which was adopted at the end of 2021, the DUK organised a panel discussion and a workshop. The global perspective on Open Science associated with the recommendation has certainly contributed to greater internationalisation, especially outside Europe.

Professor Klaus Tochtermann, chair of the conference, emphasised in his opening address that much has happened in the Open Science movement since the last OSC in 2021. For example, the EU now requires a clear commitment to support open practices in research proposals in the Horizon Europe framework programme. The EU had already put the topic of Open Science on the research agenda in 2015. At that time, the focus was on Open Innovation, Open Science and Open to the World. In addition, the EU Commission recently launched an initiative to reform the existing system of research evaluation.

Tweet OpenAire: #StandWithUkraine

In view of the Ukraine war, Tochtermann also emphasised the importance of value-driven science diplomacy and freedom of science, in which global cooperation plays a central role.

Current challenges of the Open Science transformation

Once again, many “classics” were represented at this year’s conference. These included contributions on the latest developments in the fields of research data, societal participation and science communication. However, some conference contributions this year addressed points of contact between Open Science and other areas and showed how strongly Open Science is ultimately interwoven with a fundamental transformation of the science system. Openness alone does not solve all the problems in the global and interlinked academic sector, but it does show which barriers in the science system are currently hindering the implementation of Open Science. It is also important to keep an eye on the unintended negative effects of this transformation.

Tweet Ulrike Küstes: Kudos and standing ovations to @rimamrahal and your very precise addresses of the demands for change in #research in terms of precarious work environments, tenure clock and ideas for a better science legislation at #osc2022

In her presentation “On the Importance of Permanent Employment Contracts for Research Quality and Robustness”, Rima-Maria Rahal discussed how much research quality suffers under current working conditions. These include, on the one hand, temporary positions and the competitive pressure in the science system – in Germany currently characterised by the #IchBinHanna debate (German) on Twitter – and, on the other hand, the misguided incentive system with its focus on the impact factor, which complicates the situation for many researchers. Ultimately, these framework conditions also hinder the implementation of Open Science on a broad scale. Improving research practice offers the opportunity to initiate structural changes in favour of research quality and to link them to open principles such as reproducibility, transparency and collaboration.

In his presentation on “Data Tracking in Research: Academic Freedom at Risk?”, Joschka Selinger addressed the general development that scientific publishers are increasingly offering services for the entire research cycle. Against the backdrop of the Open Access development, they are transforming their business model from a pure content provider to a data analytics business (see DFG position paper).

Joschka Selinger, graphic: Karin Schliehe at Open Science Conference

This privatisation of science combined with the (non-transparent) collection and exploitation of “research behaviour” is problematic for academic freedom and the right to informational self-determination, as Felix Reda also recently pointed out in a contribution to MediaTalk. Therefore, awareness of this problem must be raised at scientific institutions in order to initiate appropriate measures to protect sensitive data.

Tweet Peter Kraker: Great Presentation by @tonyR_H on ensureing equity in open science at #os2022 – a crucial topic that deserves much more attention

In his presentation “Mitigating risks of cumulative advantage in the transition to Open Science: The ON-MERRIT project”, Tony Ross-Hellauer addressed the question of whether Open Science reinforces existing privileges in the science system or creates new ones. Ultimately, this involves factors such as APC fees that make participation in Open Science more difficult and turn it into a privilege or “cumulative advantage” for financially strong countries. These factors were examined in the Horizon 2020 project ON-MERRIT and corresponding recommendations were published in a final report. In addition to APCs, this also addresses the resource intensity of open research as well as reward and recognition practices.

The global perspective of Open Science

It became clear that a central element of the further development of Open Science is, in any case, the “UNESCO Recommendation on Open Science”. This recommendation has particularly shaped the global perspective on Open Science and expanded it to include aspects such as inclusivity, diversity, consideration of different science systems and cultures, and equity. This became especially clear in the panel of the German UNESCO Commission on “Promoting Open Science globally: the UNESCO Recommendation on Open Science”.

Tweet Leibniz Research Alliance Open Science: Vera Lecoeuilhe reports on the negotiations an its challenges around the UNESCO recommendation on #OpenScience

In keynote speeches, Vera Lacoeuilhe, Peggy Oti-Boateng and Ghaith Fariz gave insights into the background of the recommendation and the process behind it. Negotiating such a recommendation is extremely difficult. This is despite the fact that it does not even result in legislation, but at most requires monitoring/reporting. In the end, however, there was a great consensus. The Corona pandemic has also shown how important open approaches and transnational collaborations are to overcome such challenges – even though it was a great challenge to create an atmosphere of trust in online meetings. Finally, the process leading up to a recommendation was itself inclusive, transparent and consultative in the spirit of Open Science: The text was also available for public comment in the meantime.

Tweet Leibniz Research Alliance Open Science: All panelists agree: Science is a global endeavor and thus shared responsibility is inevitable to make #OpenScinence a sucess

In the discussion that followed, it became very clear what great expectations and demands there are with regard to the topics of inclusion and equity. The panellists agreed that there must be a change: away from “science for a chosen few” to “science for all”. Access to science and the benefits of scientific progress must be guaranteed for all.

Panel discussion, graphic: Karin Schliehe at Open Science Conference

The issue of equity was strongly addressed using the example of the African continent (for example in the context of APCs). However, the discussion also focused on the outreach of the recommendation, the global dynamics it triggered, and a collective vision for Open Science. And finally, science was seen as central to achieving the UN Sustainable Development Goals (SDGs). Open Science plays a crucial role in this.

Tweet OpenAire: Agree with Internet access as minimum right

The implementation of the recommendation will now continue in working groups, the panellists reported. The topics include funding, infrastructure, capacity building and the above-mentioned monitoring. There are already some activities for the implementation of Open Science in African countries: Eleven of these best practice examples were presented at the end of the conference at the UNESCO workshop “Fostering Open Science in Africa – Practices, Opportunities, Solutions” (PDF). Anyone who would like to contact the DUK in the context of implementing the recommendation or in relation to Open Science activities is welcome to reach out to Fatma Rebeggiani (email: Rebeggiani@unesco.de).

Latest Open Science developments and best practices

Although the global view played a major role at this year’s Open Science Conference, there were again many insights into local projects, several Open Science communities and best practice examples. Especially in the poster session with its 13 contributions, it was easy to get in touch with local project leaders about their challenges in implementing Open Science.

Refreshing as always was the presentation of new projects and approaches, for example the grassroots initiative by students for students, which we reported on here on MediaTalk. Representing the student-volunteer-led initiative, Iris Smal, Hilbrand Wouters and Christeen Saparamadu explained why it is so important to introduce students to the principles of Open Science as early as possible.

Another best practice example showed how an initiative of the Helmholtz Association is working to “liberate data”. Through services, consultations and tools, researchers are supported in the management and provision of research data. Efficient handling of metadata and knowing where to find data from different disciplines are also relevant here, Christine Lemster, Constanze Curdt and Sören Lorenz explained in their poster.

The insights into the first six months of Open Science at UNC-Wilmington (North Carolina, USA) by Open Science pioneers Lynnee Marie Argabright and Allison Michelle Kittinger were also exciting. Two completely new roles – data librarian positions – were created for them. The goal is to build a sustainable Open Science campus across disciplines. An important concern of the two Open Science newcomers is also to raise awareness of the research data life cycle.

Insights into how the Open Science movement is progressing in different countries have also become an integral part of the repertoire of the Open Science Conference. This time, projects from these countries were presented at the poster session:

This showed how much consideration must be given to the national or local framework conditions and country-specific sensitivities in such projects in order for them to work in the end.

Conclusion Open Science Conference 2022

This year’s Open Science Conference once again showed how the understanding of the term Open Science expands when viewed from a global perspective, and how a completely different standard emerges. Whereas principles such as transparency, openness and reusability have been the main focus up to now, UNESCO is directing the global view more towards inclusion, diversity and equity. It is becoming clear that there is not one definition and approach to Open Science, but rather many, depending on the perspective. However, the discussion about the UNESCO recommendation on Open Science has shown how important it is to agree on a few basic prerequisites in order to also meet the needs of countries from the so-called “global south”.

In any case, the global discussion is in many ways different from, for example, the European one. Nevertheless, Open Science cannot be viewed in isolation from the national or continental science system. This is certainly not a new insight, but one that was impressively demonstrated at the #OSC2022 UNESCO workshop by the many Open Science projects in African countries.

Tweet Leibniz Research Alliance Open Science: Three incredible days

Nevertheless, it is also essential to look at the world as a whole. After all, common challenges need to be overcome: the climate crisis, the fight against the global Corona pandemic and the supply of food and energy are just a few examples of why the opportunity for global cooperation should not be missed. And the knowledge and science gap between the so-called Western countries and the Global South is already too big. But if the Open Science ecosystem is to function globally, it is crucial to involve researchers from all over the world. Only in this way can the crises of our time be solved effectively and inclusively.

About the Authors:

Dr Guido Scherp is Head of the “Open-Science-Transfer” department at the ZBW – Leibniz Information Centre for Economics and Coordinator of the Leibniz Research Alliance Open Science. He can also be found on LinkedIn and Twitter.
Portrait: ZBW©, photographer: Sven Wied

Dr Doreen Siegfried is Head of Marketing and Public Relations at the ZBW – Leibniz Information Centre for Economics. She can also be found on LinkedIn and Twitter.
Portrait: ZBW©

Claudia Sittner studied journalism and languages in Hamburg and London. She worked for a long time as an editor at the ZBW publication Wirtschaftsdienst, a journal for economic policy, and is now the managing editor of the blog ZBW MediaTalk. She is also a freelance travel blogger (German), speaker and author. She can also be found on LinkedIn, Twitter and Xing.
Portrait: Claudia Sittner©


Discrimination Through AI: To What Extent Libraries Are Affected and How Staff Can Find the Right Mindset

An interview with Gunay Kazimzade (Weizenbaum Institute for the Networked Society – The German Internet Institute)

Gunay, in your research you deal with discrimination through AI systems. What are typical examples of this?

Biases typically reflect all the forms of discrimination in our society – political, cultural, financial or sexual. These are manifested in the data sets collected and in the structures and infrastructures around data, technology and society, so that particular data points encode social standards and decision-making behaviour. AI systems trained on those data points show prejudices in various domains and applications.

For instance, facial recognition systems built upon biased data tend to discriminate against people of colour in several computer vision applications. According to research from the MIT Media Lab, accuracy for white men and black women differs dramatically in vision models. In 2018, Amazon “killed” its hiring system after it had started to eliminate female candidates for engineering and high-level positions – an outcome of the company’s tradition of preferring male candidates for those particular positions. These examples make clear that AI systems are not objective; they map the human biases we have in society onto the technological level.

How can library or digital infrastructure staff develop an awareness of this kind of discrimination? To what extent can they become active themselves?

Bias is an unavoidable consequence of situated decision-making. Deciding who classifies data and how, and which data points are included in a system, is not new to libraries’ work. Libraries and archives are not just providers of data storage, processing and access. They are critical infrastructures committed to making information available and discoverable, yet with the desirable vision of eliminating discriminatory outcomes from those data points.

Imagine a situation where researchers approach the library asking for images to train a face recognition model. The quality and diversity of this data directly impact the results of the research and of any system developed upon it. Diversity in images (YouTube) was recently investigated in the “Gender Shades” study by Joy Buolamwini from the MIT Media Lab. The question here is: Could library staff have identified demographic bias in the data sets before the Gender Shades study was published? Probably not.

The right mindset comes from awareness. Awareness means social responsibility and self-determination, framed by critical library skills and subject specialisation. Relying on metadata alone will not be sufficient to eliminate bias in data collections. Diversity in staffing and critical, domain-specific skills and tools are crucial assets in analysing digitised library collections. Continuous training and evaluation of library staff should be the primary strategy of libraries on the way to detecting, understanding and mitigating biases in library information systems.

If you want to develop AI systems, algorithms, and designs that are non-discriminatory, the right mindset plays a significant role. What factors are essential for the right attitude? And how do you get it?

Whether it is a developer, user, provider or another stakeholder, the right mindset starts with the following:

  • Clear understanding of the technology use, capabilities as well as limitations;
  • Diversity and inclusion in the team, asking the right questions at the right time;
  • Considering team composition for the diversity of thought, background and experiences;
  • Understanding the task, stakeholders and potential for errors and harm;
  • Checking data sets: consider data provenance – what is the data intended to represent?;
  • Verifying the quality of the system through qualitative, experimental, survey and other methods;
  • Continual monitoring, including customer feedback;
  • Having a plan to identify and respond to failures and harms as they occur.

Therefore, a long-term strategy for library information systems management should include:

  • Transparency
    • Transparent processes
    • Explainability/interpretability for each worker/stakeholder
  • Education
    • Special Education/Training
    • University Education
  • Regulations
    • Standards/Guidelines
    • Quality Metrics

Everybody knows it: You choose a book on an online platform and get suggestions à la “People who bought this book also bought XYZ”. Are such suggestion and recommendation systems, which can also exist in academic libraries, discriminatory? In what way? And how can we make them fairer?

Several research findings suggest ways of making recommendations fairer and of breaking out of the “filter bubbles” created by technology deployers. In recommender systems, transparency and explainability are among the main techniques for approaching this problem. Developers should consider the explainability of the suggestions made by the algorithms and make the recommendations justifiable for the user of the system. It should be transparent for the user on which criteria a particular book recommendation was based, and whether gender, race or other sensitive attributes played a role. Library and digital infrastructure staff are the main actors in this technology deployment pipeline. They should be aware of this and push decision-makers to deploy technology that includes specific features for explainability and transparency in library systems.
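
What such justifiable suggestions could look like is sketched below. This is a hypothetical toy example, not a description of any existing library system: recommendations are computed purely from co-loan counts, and each suggestion carries a human-readable explanation of the criterion used, so no sensitive attribute can silently enter the ranking.

```python
# Hypothetical sketch of an explainable recommendation: only non-sensitive
# usage data (co-loan counts) feed the ranking, and every suggestion states
# the criterion it was based on. All data and names are invented.
from collections import Counter

# Toy loan histories: user -> set of borrowed titles (no sensitive attributes)
loans = {
    "u1": {"Econometrics", "Game Theory"},
    "u2": {"Econometrics", "Game Theory", "Microeconomics"},
    "u3": {"Econometrics", "Microeconomics"},
}

def recommend(title: str, top_n: int = 2) -> list:
    """Recommend titles co-borrowed with `title`, each with an explanation."""
    co_loans = Counter()
    for titles in loans.values():
        if title in titles:
            co_loans.update(titles - {title})
    return [
        {"title": t,
         "explanation": f"Borrowed together with '{title}' by {n} user(s)."}
        for t, n in co_loans.most_common(top_n)
    ]

for rec in recommend("Econometrics"):
    print(rec["title"], "-", rec["explanation"])
```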

What can an institute, library or repository do if it wants to find out whether its website, library catalogue or other infrastructure is discriminatory? How can it tell who is being discriminated against? Where can it get support or a discrimination check-up done?

First, a “check-up” should start by verifying the quality of the data through quantitative, qualitative and mixed experimental methods. In addition, there are several open-access methodologies and tools for fairness checks and bias detection/mitigation in several domains. For instance, AI Fairness 360 is an open-source toolkit that helps to examine, report and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.
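
A minimal sketch of such a check-up with AI Fairness 360 (installable as the Python package aif360; the toy decision log below is invented purely for illustration) computes two standard group fairness metrics over a protected attribute:

```python
# Minimal bias check-up with AI Fairness 360 (pip install aif360 pandas).
# The toy data and column names are invented for illustration only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy decision log: outcome 1 = favourable decision (e.g. request approved)
df = pd.DataFrame({
    "sex":     [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = privileged group
    "outcome": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["outcome"],
    protected_attribute_names=["sex"],
    favorable_label=1.0,
    unfavorable_label=0.0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact is the ratio of favourable-outcome rates; a common rule
# of thumb flags values below 0.8 as a signal to examine the data further.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```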

Another useful tool is “Datasheets for Datasets”, intended to document the data sets used for training and evaluating machine learning models; this tool is very relevant for developing metadata for library and archive systems, which can then be used for model training.

Overall, everything starts with the right mindset and an awareness of how to approach the bias challenge in specific domains.

We were talking to:

Gunay Kazimzade is a Doctoral Researcher in Artificial Intelligence at the Weizenbaum Institute for the Networked Society in Berlin, where she is currently working with the research group “Criticality of AI-based Systems”. She is also a PhD candidate in Computer Science at the Technical University of Berlin. Her main research directions are gender and racial bias in AI, inclusivity in AI, and AI-enhanced education. She is a TEDx speaker, winner of the Presidential Award of Youth in Azerbaijan and winner of the AI Newcomer Award in Germany. Gunay Kazimzade can also be found on Google Scholar, ResearchGate and LinkedIn.
Portrait: Weizenbaum Institute©


Best Practice at the ZHB Lucerne: Agile Working in the Context of Small and Large Libraries

An interview with the Lucerne Central and University Library (Lucerne ZHB)

When and how did you discover agile working for yourself and in the library context?

That was back in 2017 during my OPL (One Person Library) job in the special legal library of a large corporate law firm. The management tried to introduce agile methods “top down”, which meant that I came into contact with them during workshops. I was so impressed that I integrated agile values and methods into my own modest library work.

For example, I visualised all ongoing tasks and projects in my small library using post-its on an improvised Kanban board in the centre of the hallway at the law firm. This led to colleagues viewing my job in a more positive light (“Oh, you take care of this too?”), and often to colleagues drawing the attention of the partner responsible for me to the fact that I needed something from him or her, because I had pinned a note with “Waiting for …” next to his or her name on the board. In the end, I was even allowed to support entire legal teams in implementing agile working and became a kind of “agile coach”, without realising at the time that this job actually exists.

Later, as head of library IT at the ZHB Lucerne, I was able to use agile working across an entire team to implement IT projects – from more complex software updates and system migration projects through to pilot projects testing entirely new (library) technologies. For a year now, I have been library director – a completely new challenge: living agile methods, roles and values from this position in the entire organisation of a library, across teams and departments.

Why is agile working useful for modern libraries?

Libraries are not so fundamentally different from other institutions or companies in an agile context. A library is just as interested in “external agility” (PDF, German) as Google or Tesla – it wants to be as successful as possible for as long as possible. You can see this in measurable outputs such as a high number of loans, a large number of search requests and e-media hits, numerous visits to the building, excellent occupancy of the study stations, many new registrations, a high number of well-attended, high-quality events, a positive media presence, high satisfaction among users and sponsoring organisations, growing budgets, etc.

This means that libraries need to be innovative, to adapt swiftly to changing contexts and challenges and to always offer those products and services that are in particular demand from their users and partner organisations. This is precisely where agile values and methods can help and promote so-called “inner agility” (PDF, German) by improving internal communication and cooperation through more transparency, a positive error culture as well as flat hierarchies, flexible roles and self-organisation, and by continuously integrating user feedback into the work process.

On the other hand: in many places, the structures which have developed in libraries over a very long period of time often seem to be somewhat rigid and cumbersome. How can agile working function here in spite of this?

By trying it out on a small scale initially. Projects for introducing new services, offers or products are particularly useful in this regard – ideally those where the library colleagues themselves do not yet know exactly how to proceed because, for example, the starting situation, the approach and/or the expectations and needs of the users are still unclear.

Furthermore, this working method can be launched particularly successfully if all colleagues involved are integrated and allowed to contribute to the decision-making process when it comes to implementing agile methods, rituals and principles. Last but not least, you need to obtain the agreement of the manager who has been responsible for the line organisation of the project up to now: agile working can only succeed if this person is prepared to share their knowledge, expertise and responsibilities with the entire project team.

After the initial, hopefully successful and inspiring agile projects (although pilot projects that never go into production can also be regarded as successful), this way of working can be fundamentally and permanently established in individual teams – for example, by meeting all of the team’s annual targets in an agile manner; by having regular, agile discussion formats that characterise communication in the team; and by holding retrospectives that allow all colleagues to evaluate and adapt the collaboration within the team.

Several agile teams can be combined into larger agile organisational units at a later stage. Here in Lucerne at the ZHB, for example, we have merged all departments from the fields of e-media, IT and Open Access / research and publication support into “Digital Services”. At the TIB Hannover, there is an interdisciplinary team that takes care of the agile further development of the AV portal. We are also enviously eyeing Zurich Central Library (ZB Zürich), where the organisational unit “IDE” (information expertise, digital services and development) agilely develops service offerings in the user area (German).

To what extent does the Lucerne Central and University Library actually work in an agile way? Do you have a few examples?

We are still quite a long way off claiming that the entire ZHB Lucerne works in an agile way. But in recent years we have been able to gather a great deal of experience with agile projects, for example when we tested, adapted and later introduced our Seat Navigator across all locations (German). It measures precisely, down to each individual seat, how many of our study stations are occupied.

The example of the Seat Navigator

The starting position was that our location at the uni/PH building was in high demand, particularly during examination periods. Students were practically fighting over the available seats in the library and had developed creative reservation techniques (German).

We weren’t able to offer more places just like that, but we had the idea of using IoT sensors at each seat to measure exactly whether it was occupied or free. This allows us to make the total number of available study stations at all locations visible online and, if occupancy is too high at one location, to spread it more evenly. In this way, users should be able to quickly find a free place in the building and also decide at home to go to the library location that currently has the most free places.

Reading Room at the Uni PH Building, one of four Locations of the Lucerne Central and University Library

If we hadn’t approached this pilot project using the agile method (= quickly testing a small-scale prototype to get targeted feedback from users and taking adjustments into account whilst still in the project phase), we wouldn’t have found out as quickly that the system can only function with a break mode which, as well as showing free (green) and occupied (red) seats, also shows those that have been left temporarily for a break (yellow). It was only following this feedback and the respective adjustment that we were able to lay the foundation required to operate the system at all four ZHB locations with over 700 sensors.
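How might such a break mode work? The interview doesn’t describe the Seat Navigator’s internals, so the following is only a minimal sketch under invented assumptions: the presence-sensor interface, the break timeout and its value are all illustrative, not taken from the actual system.

```python
from dataclasses import dataclass
from enum import Enum
import time

class SeatState(Enum):
    FREE = "green"
    OCCUPIED = "red"
    BREAK = "yellow"

# Invented threshold: how long an empty seat still counts as "on a break"
# before it is shown as free again.
BREAK_TIMEOUT_S = 30 * 60

@dataclass
class Seat:
    sensor_active: bool = False   # latest reading of the presence sensor
    last_active_ts: float = 0.0   # when the sensor last reported presence

    def update(self, reading: bool, now: float) -> None:
        """Feed in a new sensor reading."""
        self.sensor_active = reading
        if reading:
            self.last_active_ts = now

    def state(self, now: float | None = None) -> SeatState:
        now = time.time() if now is None else now
        if self.sensor_active:
            return SeatState.OCCUPIED
        # Empty, but recently in use: show yellow instead of green.
        if now - self.last_active_ts < BREAK_TIMEOUT_S:
            return SeatState.BREAK
        return SeatState.FREE

def free_seats(seats: list[Seat]) -> int:
    """Number of green seats, e.g. for the online occupancy overview."""
    return sum(1 for s in seats if s.state() is SeatState.FREE)
```

The crucial detail – that an empty but recently used seat must be shown as yellow rather than green – is precisely the requirement that only surfaced through early user feedback.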

The example of the Lucebro AI Software

We also chose a similar agile approach when testing our “Lucebro” AI software (German), with which we wanted to (partly) automate recurring questions & answers in the daily communication with our users. Alongside pilot tests, continuous feedback gathering and corresponding adjustments to the software, in this case it was predominantly the complete transparency of all project steps, as well as the involvement of all employees in the implementation, that made it a good example of agility. Despite the tricky issue of automated advice, in the end 75% of all employees actively helped to train the AI software on frequently asked questions & answers from the information service. Even though the project was ultimately not put into production, owing to a poor cost-benefit ratio, it was a complete success in terms of in-house collaboration and the experience gained in comparison to classic project management.
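The interview doesn’t reveal how Lucebro matched questions internally; as a generic illustration of the task – comparing an incoming question against staff-curated Q&A pairs and deferring to a human when no match is confident enough – here is a minimal sketch using TF-IDF similarity. The example pairs and the threshold are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented examples standing in for the Q&A pairs curated by library staff.
faq = [
    ("How do I get access to the wifi?",
     "Connect to the library network and log in with your card number."),
    ("How do I register for the first time?",
     "You can register online or at the desk with a photo ID."),
    ("Can I reserve a study seat?",
     "Seats cannot be reserved; the Seat Navigator shows which ones are free."),
]

questions = [q for q, _ in faq]
vectorizer = TfidfVectorizer().fit(questions)
faq_matrix = vectorizer.transform(questions)

def answer(user_question: str, threshold: float = 0.3) -> str:
    """Return the best-matching canned answer, or defer to a human."""
    scores = cosine_similarity(vectorizer.transform([user_question]), faq_matrix)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        return "I'm not sure - let me hand you over to a colleague."
    return faq[best][1]

print(answer("wifi access"))  # matches the first Q&A pair
```

The fallback branch is the important design choice for “the tricky issue of automated advice”: a question the software cannot match confidently is handed over to a colleague rather than answered badly.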

The example of the “Luzi” Pepper Robot

The deployment of our Pepper robot (German) also demonstrates how even failed projects can be seen as successful in the context of a positive (= agile) error culture. Instead of investing a great deal of time and money in AI software and, at worst, developing it past the needs of our users, we learned that these kinds of solutions need to be as low-threshold as possible in order to be accepted by library users.

This means that the training data from the Lucebro project are now being used to teach our Pepper robot “Luzi” the most frequently asked questions & answers about the library. Users can simply speak to Luzi directly and personally on site, and she patiently explains all day long how to access the Wi-Fi or how to register for the first time. Naturally, we also continually ask our users how Luzi could help them further, and we keep developing her accordingly.

You were responsible for the introduction of the swisscovery library platform throughout Switzerland. Is it true that you used agile methods to develop and improve the platform? Which ones exactly? Were you successful?

Well, it was certainly not due to me alone: swisscovery was launched as a joint network of 475 libraries and a national research platform in December 2020 (German), and this required many years of dedication and enthusiasm from over 2000 library colleagues from all over the country. I was only actively involved in the introductory project for the ZHB Luzern as group coordinator for the integration of our network of Central Switzerland (higher education institution) libraries into swisscovery.

However, things only really became agile after the launch, when our national research platform came under criticism (German). The senior management of the Swiss Library Service Platform (SLSP), which operates swisscovery for the libraries, reacted to this by switching the further development of swisscovery to agile working methods, which had been planned anyway, in order to respond more quickly to the most pressing criticisms of our library users.

Ever since, I have been able to incorporate the perspective of the 15 shareholder libraries behind SLSP AG into the agile project team made up of SLSP and library colleagues (PDF, German) and work on specific improvements to the search interface.

We rely on Scrum as a framework and jointly maintain a backlog with all adaptation requests, which we generate from interviews with users, from support tickets at SLSP and from direct feedback on Twitter. We pool adaptation requests by topic and priority into month-long sprints, during which we develop solutions together before rolling them out directly in the national view of swisscovery. After every sprint, we review the adaptations achieved and plan the next one. Compared with the previous pace of development, the agile procedure is a complete success: in six months we were able to fix the most urgent problems, decisively improve the user-friendliness of swisscovery and even enjoy a little praise now and then.
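To make the pooling step concrete, here is a minimal sketch of how requests from the three feedback channels could be grouped by topic and filled into a sprint. Only the sources and the idea of pooling by topic and priority come from the process described above; the data model, the priority scale and the capacity logic are invented.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Request:
    topic: str      # e.g. "search interface", "login"
    priority: int   # 1 = most urgent (invented scale)
    source: str     # "interview" | "support ticket" | "twitter"

def plan_sprint(backlog: list[Request], capacity: int) -> list[Request]:
    """Pool requests by topic, then fill the sprint with the most
    urgent (and most demanded) topics first."""
    by_topic: dict[str, list[Request]] = defaultdict(list)
    for r in backlog:
        by_topic[r.topic].append(r)
    # Rank topics by their most urgent request, then by demand (count).
    ranked = sorted(by_topic.values(),
                    key=lambda reqs: (min(r.priority for r in reqs), -len(reqs)))
    sprint: list[Request] = []
    for reqs in ranked:
        if len(sprint) + len(reqs) <= capacity:
            sprint.extend(reqs)   # a topic goes into the sprint as a whole
    return sprint

backlog = [Request("search interface", 1, "twitter"),
           Request("search interface", 2, "support ticket"),
           Request("login", 1, "interview"),
           Request("display", 3, "support ticket")]
print([r.topic for r in plan_sprint(backlog, capacity=3)])
```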

Your favourite tools or methods for agile working?

I am a big fan of Kanban because it allows you to try things out rapidly and uses fewer strict rules, rituals and time limits compared with Scrum. On a Kanban board, the pending to-dos for a team or a project can easily be made transparent for everyone interested. With this important foundation in place, it is possible to try out further agile principles such as daily/weekly stand-ups and a step-by-step transition towards self-organisation of the team. If virtual Kanban boards such as Trello, MeisterTask or Stackfield are used, everything can be done regardless of time and location, which became an important element over the past two years of the pandemic.
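At its core, a Kanban board is nothing more than ordered columns of task cards plus the “pull” operation that moves a card onwards – which is exactly what makes the pending to-dos transparent. A minimal sketch (column names and the example card are invented):

```python
# Minimal Kanban board: ordered columns of task cards.
board: dict[str, list[str]] = {
    "To do": ["evaluate seat sensors"],
    "Doing": [],
    "Done": [],
}

def move(card: str, src: str, dst: str) -> None:
    """Pull a card into the next column - the core Kanban operation."""
    board[src].remove(card)
    board[dst].append(card)

move("evaluate seat sensors", "To do", "Doing")
for column, cards in board.items():
    print(f"{column}: {cards}")   # the whole board stays visible to everyone
```

Real boards typically add work-in-progress limits per column; the transparency described above comes simply from everyone seeing the same board.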

Scrum, on the other hand, has the advantage that it is a complete framework rather than just a single method and, in addition to the rituals and practices, raises awareness of the fact that agile working must be underpinned by fundamental values and a cultural change; without these, no tool, however exciting, will bring any positive effect to the collaboration.

In our fast-moving digital world, new, optimised tools are continually being developed. How do you motivate your colleagues to openly try out new working methods and tools?

This only works by setting an example and actively using these methods and tools in your own work. What’s even more important is that I, particularly as a manager, support the agile values: by making my own objectives and projects transparent for everyone involved, opening them up to active participation by anyone who is interested and not getting tied up in hierarchies; by having the courage to try things out quickly and immediately subjecting myself to the (sometimes merciless) feedback of the respective target group; and by dealing positively with failed attempts, because I can always learn something from them. In my personal experience, this is the best way to motivate my colleagues to engage in these new forms of communication and cooperation.

You are a member of the Community of Practice: “Agilität in Bibliotheken” (agility in libraries) on Twitter. What does this actually mean?

Approximately 70 library colleagues from Germany, Austria and Switzerland who already actively use agile working methods in library contexts have joined together in the Community of Practice. We meet on a monthly basis to discuss things – and of course in agile discussion formats such as Lean Coffee. These are not so much about sharing showpiece and glossy projects, but much more about open and honest discussion concerning issues and problems which we encounter when implementing agile working practices in our team or throughout the library. It’s often the collective intelligence of the community that finds solutions with its pooled wealth of knowledge. But sometimes we’re simply also a self-help group. Colleagues who already use agile working in libraries or would like to start using it immediately and who are looking for advice are welcome to contact us: Community of Practice “Agilität in Bibliotheken” (agility in libraries) on Twitter (German).

What would you recommend to colleagues who would like to get into agile working? What are good starting points?

My experience has been that transparency is a good start for agile working. From the moment that I – as project, team or department manager – make all steps and tasks in a project or an annual objective completely transparent for all colleagues, further important steps towards agility often arise by themselves. As soon as my colleagues gain insight into all the interrelationships of a project, they usually give feedback, point out possible problems and contribute suggestions for improvement. This can then be consolidated in rituals such as daily stand-ups and, from there, it’s only a short step towards not only delegating tasks but actively inviting colleagues to take on and complete tasks from the backlog and report back on them during the stand-up. If we then also manage to continually integrate feedback from the actual target group for whom the project, new service or new product is intended into this workflow, then we have already achieved a great deal. Entirely in keeping with the agile mindset, I can therefore simply recommend: just give it a try; things will go wrong anyway.

This text has been translated from German.

This might also interest you:

We were talking with Benjamin Flämig

After his Master’s degree in History/German in Berlin and ten years in the private sector (information & knowledge management in international business law firms), completing his part-time MALIS degree at TH Köln in 2018 led Benjamin Flämig to the Lucerne Central and University Library as Head of IT, where strategy and organisational development, the launch of swisscovery and one or two construction and pilot projects kept him well occupied. Since February 2021, he has taken on responsibility there as Director. Benjamin Flämig can be found on Twitter, LinkedIn and ORCID.
Portrait: Benjamin Flämig [CC BY 4.0]
