ChatGPT & Co.: When the Search Slot turns into an AI Chatbox

by André Vatter

That AI language models are here to stay should be beyond dispute by now. Although only introduced to the public in November 2022, ChatGPT has made rapid progress in this short time. How rapid? Let’s compare: after their founding, it took Twitter two years and Facebook ten months to build up a base of one million users. ChatGPT reached this milestone in only five days. Two months after its launch, almost forty percent (German) of all Germans said they had heard of the chatbot or had already tried it.

A brutal race

But as impressive as the private adoption rate is, even more striking are the upheavals ChatGPT has caused since hitting the corporate world. Microsoft’s announcement of its plan to integrate the generative language model into its own search engine Bing caused sheer panic at Google, the hitherto undisputed global industry leader. Google has been tinkering with an AI-supported web search for some time, but has not yet been able to demonstrate that it is really ready for the market. There is a name, “Bard”, but CEO Sundar Pichai remains silent about concrete integrations. Microsoft, on the other hand, was able to announce just a few days ago that its own search engine, which users had hardly noticed for decades, had to cope with a sudden rush of visitors:

“We have crossed 100M Daily Active Users of Bing. This is a surprisingly notable figure, and yet we are fully aware we remain a small, low, single digit share player. That said, it feels good to be at the dance!”

Redmond, Washington, is in an AI frenzy. In the future, there will hardly be a business area at Microsoft – whether B2B or B2C – in which ChatGPT does not play a role.

The disruption is also leaving its mark on the smaller competitors. Brave Search, the web search engine created by the US browser manufacturer Brave Software Inc., recently got a new AI feature. The “Summarizer” not only summarises facts directly at the top of the search results page, but also provides relevant content information for each result found. There are also changes at the privacy-focussed search engine DuckDuckGo, which has just launched “DuckAssist”. Depending on the question, the new AI feature taps Wikipedia for relevant information and offers concrete answers directly on the search results page. But this is just the beginning: “This is the first in a series of generative AI-assisted features we hope to roll out in the coming months.”

None of these integrations of AI language models into search engines are mere extensions of the respective existing business models. They amount to a complete upheaval in the way we search the web today, and in how we interpret and understand the results.

How finding replaces searching

Whereas the previous promise of search was an effortful “I’ll show you where you might find the answer”, in combination with AI it suddenly advances to: “Here’s the answer.” Since their invention, search engines have only ever shown us possible places where answers to our questions might be found. In fact, providing users with a quick answer has never been the goal of their advertising-based business models. After all, the aim is to keep users in one’s own ecosystem for as long as possible in order to maximise the likelihood of ad clicks. This is also the reason why Google at some point began to present generally available information – such as times, weather, stock market prices, sports results or flight information – directly on the search results pages (SERPs), for example in the so-called OneBox. The ultimate ambition is that no one leaves the Googleverse!

Intelligent chatbots like ChatGPT get around this detour. On the one hand, they transform the nature of search by replacing keywords with questions. Many users are likely to say goodbye to “search terms” or even Boolean operators soon. Instead, they’ll learn to tweak their prompts more and more to make their communication with the machine more precise. On the other hand, intelligent chatbots reduce the importance of the original sources; often there is no longer any reason to leave the conversation. Those who search with the help of AI want an answer and get one. They do not want a card catalogue with shelf numbers.
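What this shift from keywords to questions looks like in practice can be sketched in a few lines of code. The following is a minimal sketch using OpenAI’s Python library as it worked in early 2023 (versions before 1.0; later releases changed the interface) – the model name and the API key handling are placeholder assumptions for illustration, not a recipe:

    # A minimal sketch: asking a chat model a full question instead of
    # feeding a search engine keywords. Uses the OpenAI Python library
    # as of early 2023 (openai < 1.0); model name and API key are
    # placeholder assumptions.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # Keyword search: 'inflation germany 2022 causes' -> a list of links.
    # Chat search: a full question -> a direct answer.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "What were the main drivers of inflation in Germany in 2022?",
        }],
    )

    print(response.choices[0].message.content)

The reply arrives as prose: no result list, no ranking and, crucially, no visible sources – which is precisely the convenience, and the problem, discussed in the next section.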

Despite all answers, questions remain open

We can already see that this comfort does not come without critical implications – for example, with regard to the transparency of sources that we may no longer get to see. Where do they come from? How were they selected? Are they trustworthy? Can I access them directly? Especially in the scientific sector, reliable answers to these questions are indispensable. Other problems revolve around copyright. After all, AI does not create new information, but relies on the work of journalists who publish on the internet. How will they be remunerated if no one reads their texts any more and everyone relies solely on machine summaries?

Data protection concerns will not be long in coming either. In communicating with the machine, a close relationship develops over time: the more it knows about us and can understand our perspective, the more accurately it can respond. In addition, the models need to be trained. Personalisation, however, inevitably means placing a critical wealth of data in the hands of third parties – in the hands of companies that will have to build entirely new business models around this question-and-answer game, quite a few of which, if not all, will be ad-supported.

AI provides answers – though not really to all questions at the moment. Search will change radically within a short time. Academic libraries with online services will also have to orient themselves accordingly and adapt. Perhaps the “catalogue” as a static directory or list will take a step back. Let’s imagine for a moment the scenario of an AI that has access to a gigantic corpus of Open Access texts. Researchers access several sources simultaneously, have them sorted, summarised in terms of content and classified: Have these papers been corroborated or refuted? The picture that emerges is of a new mechanism for making scientific knowledge accessible and comprehensible. Provided, of course, that the underlying content is openly accessible. From this perspective, too, this is once again a clear plea for Open Science.

So, how do academic libraries implement these technologies in the future? How do they create source transparency, how do they build trust and which disciplines of media literacy move to the foreground when new, machine-friendly communication is part of the research toolbox? Many questions, many uncertainties – but at the same time a great potential for the future supply of scientific information. A potential that libraries should use to actively shape the unstoppable change.


10 Years of OERcamp: Community Get-together on Digital and Open Educational Resources

An interview with Kristin Hirschmann

Around ten years ago, the first Barcamp to focus especially on the topic of Open Educational Resources (OER) was held in Germany. For the anniversary event in October 2022 there were 446 registrations, 19 workshops, 56 Barcamp sessions, 235 minutes of video, keynotes and live podcasts ranging from Austria to New Zealand.

In this interview Kristin Hirschmann, project manager of the OERcamps organised by the J&K – Jöran und Konsorten training agency, reports on the development of the Barcamps and further OERcamp formats.

The anniversary OERcamp took place in autumn 2022 in Hamburg. Which topic did the community explore in particular?

At the OERcamp in October 2022, the main focus was on the OER strategy of the federal government, introduced in the summer by Jens Brandenburg, parliamentary state secretary at the Federal Ministry of Education and Research (BMBF). The third day of the OERcamp was therefore completely dedicated to the strategy: stakeholders of the OER community discussed and reviewed its different areas of activity. We also made it possible for members of the OER community who could not be present at the OERcamp to comment on the results developed at the event. This commentary phase took place during the two weeks following the event via a collaborative document; around 100 contributions were added.

By agency J&K – Jöran und Konsorten for the OERcamp 2022 under CC BY 4.0

What’s more, after almost three years without face-to-face events, people felt it was important to bring the community together in one place. Many participants gave me this feedback: the personal networking and exchange was even more important.

 
 

What were the milestones during ten years of OERcamp?

The first OERcamp took place at Bremen University in 2012. Since then, the OERcamp format has developed further according to current requirements. An OERcamp took place in Berlin every year until 2016. In 2017 there were two innovations: the OERcamp was extended to four events for the first time – in the north, east, west and south – and the OER Award was established to select the best open educational materials in German-speaking countries. In 2018, the principle of one OERcamp in each of the four points of the compass in Germany became established. In 2019–2020, the OERcamp formats were refined so that the five aims of the OERcamp (qualification for OER, OER mainstreaming, networking and exchange, creation of specific materials and enabling a culture of sharing) could be implemented even more successfully. This means that alongside the participatory format of the Barcamp, there are now OERcamp workshops that focus on the creation and publication of OER, as well as small, compact OERcamps that are affiliated to other events.

By agency J&K – Jöran und Konsorten for the OERcamp 2022 under CC BY 4.0

In 2020, the OERcamps were held in a virtual format owing to the coronavirus pandemic. With online formats such as the OERcamp webtalks or the OERcamp SummOERschool, we were able to respond to the urgent need of teachers to find, use and create digital teaching and learning materials quickly.

Further milestones that we were pleased about were the “Open Innovation Award 2020” (German) of the OE Awards Committee of “Open Education Global” at the 2020 OERcamp, the recognition of the work of the OERcamps in the 2020 EDUCAUSE Horizon Report, and the first international OERcamp in December 2021, OERcamp.global. This was, moreover, the first 48-hour festival on OER, and received 1,063 registrations from 87 countries.

What makes the OERcamp so unique in your opinion?

At the OERcamps, I am struck time and again by the participants, who are exceptionally enthusiastic. On the one hand, this comes from the participatory format of the Barcamp. On the other hand, the formats of the OERcamp, such as the OERcamp workshop, are very needs-based and offer far more programme items than any participant can use: participants therefore need the courage and openness to decide, from this abundance, what they need for their personal OER journey.

By agency J&K – Jöran und Konsorten for the OERcamp 2022 under CC BY 4.0

I also enjoy the great mix of people: those who are attending the OERcamp for the first time, and OER connoisseurs and pros. Experience has shown that around half the participants at the events are OERcamp newbies. The exchange that takes place there is very fruitful and also serves the aim of making OER accessible to a wider group of people.

In the OERcamp workshop, great focus is placed on the creation as well as the publication of OER. This is by design: through the concrete, hands-on use of OER, newcomers get a low-barrier start to working with OER. This very specific learning-by-doing lets them experience the culture of sharing first-hand. I find this experience a particularly important basis for continuing to engage with OER, and thus also for using the potential of the OER community. The fact that we promote this active engagement with OER through the OERcamp format is something special, in my opinion.

The members of the OER community are not merely participants at an OERcamp who consume one-sidedly; they are also central actors who give input and ensure exchange and networking, thereby driving the further development and dissemination of OER.

Are there comparable events at international level?

From an international point of view, OER has even greater relevance than in German-speaking countries. The CC Summit of Creative Commons, the Open Education Global Conference with the Open Education Week, the Open Education Policy Forum, the OER22 Conference, the Open Science Conference and OpenEd are just a few events that focus on OER. These events also make clear that OER plays a role on all continents. Even more value is placed on OER as an important step towards more educational fairness and, above all, equity and social inclusion.

What are your tips for employees in libraries who want to get started with OER?

There is already a wide range of materials available for getting started in OER. This includes the #OERklärt video series (German), which explains the OER basics. This is published via OERinfo (German), the information centre for Open Educational Resources. The platform iRights.info (German) focuses on OER from a legal perspective. And the OERcamps themselves have published diverse materials that can also be further used. The campus of the OERcamps (German), for example, offers 12 online courses with know-how on OER. These include “100 great sources for OER”, “videos and audios as OER” or “online courses with and as OER”.

For librarians in particular, I recommend the Fachstelle Öffentliche Bibliotheken NRW (German), which emerged from the “oebib” blog – a blog that had already been very active since 2015.

This text has been translated from German.


We were talking to:

Kristin Hirschmann is a cultural and educational scientist. She works as project manager for the J&K – Jöran und Konsorten (German) training agency – a “think and do tank” for contemporary training. In this context, she designs and organises educational events for all educational fields and works in a content-related capacity on the topic of Open Education/Open Training.
Portrait: Kristin Hirschmann©


Social Media in Libraries: Best Practice of the ETH Library in Zurich

In this interview, Lea Bollhalder, who is responsible for the social media channels of the ETH Library, gives us an insight into the work of the social media team.

Why do you think it is important for libraries and digital infrastructures to be active on social media?

Social media enable direct communication not only between the library and its customers, but also between the library’s customers themselves. This opens up a variety of perspectives. Libraries can use social media to increase their visibility, raise their profile and generate additional website traffic – just to name a few examples. Through social media, libraries can provide relevant, quality information to their target audiences and build relationships between the library, its customers and other stakeholders. The content complements a library’s existing marketing and communication channels.

With the ETH Library, you operate your own channels in various social networks. Why did you decide to use them? Who are your target groups there?

The ETH Library is active on LinkedIn, Twitter, Facebook and Instagram. We have either deleted other channels or no longer actively operate them.

ETH Centrum©, working stations

At the time we chose the channels, we took our goals, target groups and capacities into account. We are constantly checking which channels are becoming more relevant for customers of the ETH Library and which are becoming less popular, and we compare whether our goals can be achieved on the respective channel. For example, we are currently keeping a close eye on current developments on Twitter and Mastodon and are in contact with the ETH Zurich Communications Department.

One of the communication goals of the ETH Library, which also guides our social media strategy, is to sharpen our profile. We want to achieve this on social media by identifying specific target groups for each channel and focusing specifically on them there. On our channels, we address the scientific community of ETH Zurich (students and researchers), the interested public as well as other libraries and their employees, depending on the objective. We have subdivided these target groups even more precisely, defined personas and consistently focus on them when creating social media content.

What topics do you cover on your social media channels?

The topics are very broad. However, we always strive to provide relevant content for our target groups and continuously collect content ideas. For example, we share tips and tricks for studying and academic writing on Instagram and Twitter, industry news on LinkedIn and, preferably on Facebook, content related to our collections and archives. On social media, we promote our services, products, events and new blog articles, and we share curated content – but there should also be room for entertaining content. We regularly involve our followers and ask them about their wishes, e.g. regarding content that is useful and interesting for them.

To fill the social media channels for an institution with good content, you need people who think of the social media team and share information, insights and stories. How do you manage to activate other staff members to provide you with content ideas?

We work with an editorial plan that includes social media as well as all other communication and marketing channels. This makes content planning much easier and we know exactly what is coming up and when. In addition, we maintain a close exchange with various departments of the ETH Library and are included when new communication and marketing activities are planned.

Furthermore, the social media team curates content and asks the specialists at the ETH Library for their opinion regarding the quality and target group relevance of the source found. If we receive subject-specific questions on social media, the respective specialists provide us with the answers.

In addition, we are planning to set up an internal network that will enable us to spontaneously get in touch with the ETH Library staff. This should enable our colleagues to share content inspirations, ideas and images even more effectively with the social media team. Furthermore, employees should help with content curation by sharing interesting news articles, blogs, social media posts, etc. that they have come across. The idea behind this network is not only to curate content more efficiently and to create it faster, but also to allow employees to help shape the ETH Library‘s presence in social media.

Which topics or posting formats work particularly well for you?

Video content generally achieves better results than photo posts – with a few exceptions. On Instagram, we’ve been focusing heavily on Stories since mid-2021, and we’ve also been creating more Instagram Reels for a few months now. However, there are always surprises as to why a certain post was particularly popular or, on the contrary, met with no interest at all. Basically, any format can achieve good results as long as it generates added value for the relevant target group – regardless of whether the post provides useful information or is simply entertaining.

Has a content idea ever backfired?

Yes! In February 2022, we ran the course “How to use the ETH Library in 8 steps for new staff and doctoral students, or what you need to prepare for a zombie apocalypse”, which – from our point of view – finally had a really snappy title that we obviously wanted to use on social media. We promoted the course on the second day after the start of the war against Ukraine. The illustration of the course content with zombie images was rightly perceived by our followers as tasteless and inappropriate. We immediately deleted the Instagram Story and apologised. It was, of course, not intentional. This unfortunate incident occurred partly because the course content had been prepared particularly early and the social media manager was on vacation at the time of publication. During the preparation, we had not yet made any connection between the chosen zombie images and the war.

Do you have any good tips for libraries that want to get started with social media?

Always start with the goals and the target groups and consider how a post will generate added value for the relevant target group. The choice of channel should be secondary. A solid social media strategy can help to set the right goals and a plan for how to achieve them. It is also important to consider your own resources. If these are limited, it is better to restrict yourself to a few channels than to be present on all social networks without the capacity to provide content on them regularly.

Finally, a little peek into the magic box: what are your favourite social media tools?

  • Hootsuite – makes it easy for us to plan content in advance and analyse our social media activities.
  • Animoto – a simple tool to create video content quickly and without prior knowledge.
  • Canva – no longer an insider tip! With Canva you can create visually appealing content without any design knowledge.
  • Microsoft Excel – sounds boring, but the Excel editorial plan makes content planning and collaboration much easier.
  • ChatGPT – we are currently experimenting with this AI text tool. Just ask the AI and never sit in front of a blank page again.

This text has been translated from German.


We were talking to:

Lea Bollhalder has been working at the ETH Library since July 2018 and is responsible for the social media channels. She studied Human Biology at the University of Zurich and has an additional Master’s degree in Marketing, Service and Communication Management from the University of St. Gallen. She can also be found on LinkedIn.
Portrait: Lea Bollhalder©

Featured Image, ETH outside view: ETH Zürich© / Gian Marco Castelberg


Back to the Future: Amazing Discoveries in a Futurology Card Index from the 1980s

Guest article by Anna Kasprzik

The principle of knowledge organisation

Semantics is the study of meaning, and knowledge organisation is semantics made explicit. A great deal of the time I have spent studying, working on my Ph.D., and working in the field has been taken up with these two topics, and I particularly love eccentric examples of them, such as Luhmann’s card index (German).

During my librarian traineeship in Munich, our subject indexing lecturer Gabriele Meßmer left a lasting impression on me when she told us how, many years ago, as a junior member of staff at a large German library, she had to beg for three years (!) to be allowed to sort cards into the subject catalogue, because at that time only officially qualified subject indexers in the higher service were allowed to do so – I guess that at some point she wore down the resistance of her superiors. Later she followed eagerly as the library world moved into electronic data processing, initially with punch cards … and, after several decades of persistent work on the topic, she became one of the leading lights in the world of committees and education regarding subject indexing in the German library system. Now she is retired. Her unwavering passion for knowledge organisation moved and inspired me.

A few years ago, when I had recently arrived at ZBW, I was therefore delighted to be allowed to rummage around in the card indexes in Kiel, which contained the precursors of the Thesaurus for Economics (STW) … and was pleased to find a set of particularly interesting cards. I was all the more alarmed when I heard rumours this spring that these boxes were to “go over the ramp” (or something like that; I couldn’t help thinking of “walk the plank”) – “oh no! I absolutely have to save ‘futurology’!”

Photo 1: Futurology card index

Fortunately I was able to infect my superiors with my impetuosity and, in a cloak-and-dagger operation (OK, it was actually during a regular site trip to Kiel in broad daylight), I was allowed to “extract” the box and evacuate it via the internal mail to my office in Hamburg, where it is now safely stored. Since then, this box has been an endless source of amusement whenever I need a break from the more serious side of our work.

A melange of meta-levels resembling an Escher painting

Why was I so fascinated by “futurology” in particular? Or to put it the other way round: What’s not to like? For nerdy semantics enthusiasts, futurology and this box with cards dating from the 1960s to the 1980s represent a huge crazy bouquet of meta-levels and mind-blowing twists – a (not-at-all complete) list:

  • In bygone days people thought about the future, and I am now in their future, thinking about the people who in the past thought about the future …
  • Futurology (also known as future studies) isn’t merely concerned with the future, however, but with the science of thinking about the future …
  • The cards in this box deal with literature that represents the contemporary view of the science of how one should best think about the future and how people are thinking about the future …
  • … and the knowledge organisation system back then tried to use the keyword “futurology” to classify literature that represents the contemporary view of the science of how … and so on.

How many meta-levels is that already? Never mind, I’m feeling pleasantly dizzy.

I was also fascinated to discover how much one can glean from the titles alone – without having read the listed publications themselves! – about the attitude to life back then, about contemporary perspectives on the coming century, and about how shockingly visionary or, indeed, how shockingly current those perspectives still are. Some things were simply amusing; others made me choke on a cynical laugh.

I thought I would share a few examples with you in this blog post and add my two cents’ worth as well.

An astonishingly clairvoyant potpourri of visions

“Too stupid for the future? People from yesterday in the world of tomorrow” (German, publ. by Theo Löbsack, 1971)
Apparently, people were scared of being left behind 50 years ago as well…

 

“The desire for doom. Pessimistic future prognoses, a modern illness?” (German, Ivo Frenzel, special issue “Zukunft konkret”, 1978)
Now we can answer this question too: whether it’s an illness or a valid immune response, it is still rampant – you simply have to scroll through Twitter for half an hour.

 

“The most dangerous years since the Ice Age. Our future up to the year 2000” (German, Karl Deutsch, “Presentations in the context of the gala to celebrate the 50th anniversary of the Edeka juniors group”, 1980)
“The most dangerous years”? Wait until the year 2020, you ain’t seen nothing yet …

 

“Only one working day per week in the future? Microelectronics and its social consequences” (German, Frank Niess, 1981)
Also highly topical! From fears of being cast aside due to the progress of automation through to the pandemic-accelerated flexibilisation of our working methods and work locations with the help of “microelectronics” … although, to my knowledge, Google and other Silicon Valley companies have not yet managed to compress working hours to a single day per week – dear Civil Service, perhaps a chance for you to make a name for yourself as particularly innovative …?

 

“The problems with personal privacy in the year 2000” (German, Harry Kalven, 1968!!)
There’s almost nothing more I can say about this. They SAW IT COMING! ALL OF IT!

 

“Wrongly programmed. About the failure of our society in the present and for the future, and what actually needs to happen” (German, Karl Steinbuch, 1968)
Also has a ring of familiarity to it! I really wanted to know what “actually needs to happen”, so I started searching for the book and here you are: number 226604144 in EconBiz (German). From the single-view page, you at least get to an amusing review in the Frankfurter Allgemeine Zeitung (German, PDF, from 25.09.1968) – excerpts:

    “This book is aggressive: In an age of sensory overload, information no longer reaches its target without the vehicle of provocation.”
    No kidding.

    “The object of this aggression is the ‘hidden world’, by which the author means everything that prevents us from developing a scientific culture based on research and technology.”
    Science sceptics! We know about them!

    “A future superiority of computers over the human brain regarding all rational mental processes will make possible the establishment of a ‘cybernetic state’ superior to all previous forms of political organisation. […] “This requires a careful analysis of which principles are suitable for making human life possible and worth living in the engineered world of the future and dense mass society.” That gets to the heart of the matter. Who analyses? Who decides what is valuable and ethical?”
    The fear of “artificial intelligence” and all that “it” could come up with – that too is highly topical.

    “But: [this also requires a new faith.] In the quality of the future person, who […] must have not only the opportunity, ‘to develop patterns of thinking and behaviour in freedom, which were previously unknown’. Who has to be not merely […] an original personality, which, as might be expected, does indeed require a high level of optimism, but who must also be more humane, a ‘better person’ in the deepest sense of this hackneyed term. This kind of ‘moral mutation’ would be an innovation in the history of homo sapiens.”
    And this 300 years after the dawn of the age of the Enlightenment … oh well. Sic transit gloria mundi.

On that note: I hope that you enjoyed this trip “back to the future” and wish you all a pleasant turn of the year. Stay healthy and in good spirits!

P.S.: My journey down the rabbit hole with the previous publication went on – my parents’ two cents’ worth:

Parent 1: “I think ‘Wrongly programmed’ is on the book swap shelf. I’ll go and see if it’s still there.”
Me: “I wonder if the author ever dreamed that he would end up on the book swap shelf …”
Parent 1: “That was a bestseller. Basically everyone had that book. That means there still must be a lot of copies around that are now being thrown out.”
Parent 2: “Yes, Steinbuch was a technocrat – he didn’t understand much about society. Many people had the book on their shelves at that time, including your grandfather. He was unpopular with the 1968ers and the young Green Movement.”
Ho-hum.

Read more

More articles by and with Anna Kasprzik on ZBW MediaTalk

About the Author:

Dr Anna Kasprzik is the coordinator of the automation of subject indexing (AutoSE) at the ZBW – Leibniz Information Centre for Economics. Anna’s main focus is on the transfer of current research results from the areas of machine learning, semantic technologies, the semantic web and knowledge graphs into the productive operations of subject indexing at the ZBW. You can also find Anna on Mastodon.
Portrait: ZBW©, photographer: Carola Gruebner


EconStor Survey 2022: Repository Registers Satisfied Users, but More Marketing Efforts Needed

Guest article by Ralf Toepfer, Lisa Schäfer and Olaf Siegert

Back in 2009, the ZBW launched its disciplinary Open Access repository EconStor. Now, in 2022, it provides more than 240,000 academic papers from the economics and business studies disciplines, coming from over 600 institutions and about 1,000 individual authors worldwide. All papers are available in Open Access. After thirteen years of developing and connecting EconStor, we thought it was high time to hear from our research community what they think about the repository and its services.

In this short report, we would like to present the results of a user survey we conducted this year. First, we will give some background information about the survey and how it was conducted. Then we describe who the actual respondents were. Last but not least, we present some results on specific aspects of the survey.

Background information

The idea to conduct a survey came up in a brainstorming meeting of the EconStor team in 2020. Our aim was to address the following topics:

  • Satisfaction of the research community with EconStor,
  • Environment analysis (which other tools are used in economic research?),
  • Special look at our authors (who is uploading papers on EconStor?),
  • Suggestions for further development.

Since we are no survey experts, we knew we would have to rely on some external assistance. This was provided from two sides: first, by our ZBW marketing team, who had conducted surveys on other issues before; and second, by a course of students in library and information science at HAW Hamburg – Hamburg University of Applied Sciences (German). Their professor, Petra Düren, contacted us in spring 2021 to ask for practical examples regarding user surveys, and we decided to organise an international EconStor user survey together at the start of 2022. We developed the questionnaire and used LimeSurvey as our survey tool. After some pretesting, we were ready to start.

The survey was conducted between the 10th and the 24th of January 2022. We promoted it via the EconStor website and through mailings among researchers in Germany and abroad. Overall, we received 756 responses, of which 441 were fully completed questionnaires.

Profile of respondents

Most of the respondents came from Europe (87%): nearly half (45%) were based in Germany; other strongly represented European countries were Italy (16%), Spain (5%) and France (4%). From the rest of the world, about 3% came from the United States and 3% from Australia.

Regarding their affiliations, most respondents came from universities (78%), another 10% from universities of applied sciences and 6% from non-university research organisations. The rest mainly came from central banks or the private sector.

With regard to age, we received fairly even answers from the different age groups across the academic career span: 9% of participants were younger than 30 years, 45% were between 30 and 49 years and 46% were 50 years or older. Looking at their academic status, 58% were professors, 20% were researchers or postdocs and 18% were PhD students (see Illustration 1).

Illustration 1: EconStor User Survey 2022: Participants by academic status

Regarding the scientific disciplines, most of the respondents came from the field of economics (65%) or business studies (25%); the remaining 10% belonged to neighbouring disciplines such as sociology, political science, statistics or geography.

Results of the EconStor survey

The survey addressed various aspects of the use of EconStor, such as awareness of and familiarity with the services, usage of the searching and browsing options, evaluation of the services and suggestions for improvement. In the following, we briefly present some key results.

Usage & environment analysis

The majority of respondents have known EconStor for more than three years, but there are differences by discipline: researchers in economics have been aware of EconStor for longer than their colleagues in business studies (see Illustration 2).

Illustration 2: EconStor User Survey 2022: Awareness of EconStor

In terms of usage, almost half of the respondents use EconStor at least once a month and about 14% even weekly, indicating that the majority of respondents are quite familiar with the platform.

Illustration 3: EconStor User Survey 2022: Usage of EconStor

About one third of the respondents from the field of economics first discovered EconStor via RePEc, while most researchers from business studies became aware of EconStor via Google Scholar. This is in line with the answers concerning the other platforms researchers use to access economic papers, where, besides ResearchGate, also Google Scholar, RePEc and SSRN are mentioned.

Illustration 4: EconStor User Survey 2022: Usage of other platforms than EconStor to access economic papers

Correspondingly, ResearchGate and, to a lesser extent, RePEc and SSRN are also the most-used platforms for distributing research papers in economics and business studies. Researchers also consider their own research institution important for disseminating their papers. This suggests that institutional repositories are still relevant even where large disciplinary and interdisciplinary platforms exist.

Illustration 5: EconStor User Survey 2022: Platforms to distribute research papers

Searching & browsing

EconStor provides several options for navigating the website. Users can find papers by searching specifically for individual titles or by using the browsing function to view documents sorted by institution, type of document, author, etc. Judging by the answers given, however, the searching and browsing options are not very important: only about 50% of respondents use them at all. One of the respondents wrote: “I search for papers mostly via Google or Google Scholar, where I may find EconStor papers. It did not occur to me to search on EconStor itself, or to explore its functionality.” This answer seems to describe the typical use case of EconStor accurately. Our monthly usage statistics tell the same story: researchers use EconStor primarily as a source for the full text after searching databases or using search engines.

Illustration 6: EconStor User Survey 2022: Usage of browsing and searching functions

Self-upload & quality management

More than 600 institutions use EconStor to disseminate their publications. Authors can also upload their papers themselves, although this feature is reserved for Ph.D. researchers in economics and business studies at academic institutions. After registering, authors can upload working papers as well as journal articles, book chapters, conference proceedings, etc. A majority of about 55% of the respondents did not know about this option. However, of the authors who actively use the self-upload feature, more than 95% are satisfied or even very satisfied with the self-upload process.

Illustration 7: EconStor User Survey 2022: Satisfaction with the self-upload process

Once a paper is uploaded, the EconStor team checks several points for quality assurance: a plagiarism check, the personal requirements for registration, the document type, journal listing in the Directory of Open Access Journals (DOAJ) and formal checks of the paper. About two thirds of the authors appreciate these quality assurance measures; the plagiarism check and the formal check are most important to them.

Illustration 8: EconStor User Survey 2022: Importance of quality assurance checks

Evaluation of other EconStor services

EconStor provides more services than the self-upload feature and the searching and browsing options. To EconStor users, the most important of these are the distribution service and the provision of download statistics. The distribution service includes distribution to search engines like Google, Google Scholar or BASE (Bielefeld Academic Search Engine) and to academic databases like WorldCat, OpenAIRE and EconBiz. More than 90% of the respondents agree that these two services are important for their work. The possibility to export metadata and to link papers with their underlying research data is relevant too, but to a lesser extent.

Illustration 9: EconStor User Survey 2022: Importance of different services in EconStor
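As an aside for the technically inclined: the metadata export just mentioned is typically scripted via OAI-PMH, the standard harvesting protocol that DSpace-based repositories such as EconStor usually expose. Here is a minimal sketch using the third-party Python client sickle; note that the endpoint URL below is an assumption for illustration, so check the repository’s documentation for the real address:

    # Minimal harvesting sketch, assuming an OAI-PMH endpoint.
    # "sickle" is a third-party OAI-PMH client (pip install sickle).
    from sickle import Sickle

    BASE_URL = "https://www.econstor.eu/oai/request"  # assumed endpoint

    harvester = Sickle(BASE_URL)

    # Dublin Core ("oai_dc") is the minimal metadata format that every
    # OAI-PMH repository is required to support.
    records = harvester.ListRecords(metadataPrefix="oai_dc")

    # Print title and identifier of the first ten records.
    for i, record in enumerate(records):
        print(record.metadata.get("title"), record.metadata.get("identifier"))
        if i >= 9:
            break

Harvesting interfaces of this kind are also what aggregators such as BASE and OpenAIRE use to collect repository content in the first place.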

Overall evaluation of EconStor

More than 95% of the respondents are satisfied or very satisfied with EconStor and its services overall. This applies to both research areas, economics and business studies alike.

Illustration 10: EconStor User Survey 2022: Evaluation of EconStor and its services overall

About 67% of the respondents feel sufficiently informed about the services on the EconStor website. While this is by no means a bad result, there is room for improvement. Some respondents, for example, suggested a newsletter announcing new content indexed in EconStor.

Suggestions for improving EconStor

54 respondents were kind enough to share their views on possible improvements to EconStor. The suggestions ranged from the desire for greater visibility and awareness of EconStor, and for more information about the product, to proposals for improving individual functions.

Illustration 11: EconStor User Survey 2022: Suggestions for improving EconStor

Conclusions on the 2022 EconStor survey

Overall, the researchers evaluated EconStor very positively. In particular, users who have known the service for several years and those who actively use the self-upload feature are very satisfied with it. Users perceive EconStor primarily as a full-text source that can be discovered via search engines, while its own search and browsing functions are less well known. The environment analysis shows differences between researchers from economics on the one hand and business studies on the other, e.g. regarding the relevance of RePEc or ResearchGate. The potential for greater use could be tapped through stronger marketing (including promotion of the self-upload service) and through supplementary services.

The EconStor team very much appreciates the answers and opinions provided. This will help us to make EconStor even better. As a first response, we have created two promotional videos, one regarding EconStor in general, and the other one regarding the self-uploading process in particular. Other improvements will follow soon.


About the authors:

Ralf Toepfer works in the Publication Services Department of the ZBW – Leibniz Information Centre for Economics, where he is responsible for discipline-specific services for the management of economic research data, among other things. You can also find him on Mastodon.
Portrait: ZBW©, photographer: Sven Wied

Lisa Schäfer has been supporting various Open Access transformation projects at the ZBW – Leibniz Information Centre for Economics since 2020.
Portrait: Lisa Schäfer©

Olaf Siegert is head of the Publication Services department and Open Access Representative of the ZBW – Leibniz Information Centre for Economics. He is involved with open access as part of his work at the ZBW and is also active for the Leibniz Association, where he represents the Leibniz Open Access working group in external committees. He is involved in the Alliance of Science Organisations in the working group Scientific Publication System and at Science Europe for the Leibniz Association.
Portrait: ZBW©


Self-organised network: does Mastodon have what it takes to become the “scholarly-owned social network”?

by ZBW MediaTalk-Team

Ever since Elon Musk, holding a sink in his arms (“Let that sink in!”), entered the Twitter headquarters in San Francisco at the end of October, a sense of dark foreboding has been spreading in the online world. The richest man in the world had orchestrated a hostile takeover of the short message service: it is said to have cost him 44 billion US dollars to turn his hobby into a new enterprise to add to his business empire (Tesla, SpaceX, SolarCity, Neuralink and others).

The billionaire had previously assured the world that he is a “free speech absolutist”. His plan was now to make Twitter a place of uncensored freedom of speech. Those who had been sanctioned and blocked for violating the community rules would sooner or later receive a general absolution and be able to return to the platform. Even Donald Trump – former president of the United States and co-instigator of the most spectacular attempted coup in the USA to date – would have the red carpet rolled out for him.

Toxicity 2.0

Now, Twitter has never been a cosy refuge of mutual understanding, consideration and the cultured exchange of arguments; it has polarised opinions for years. But as social division has deepened, particularly in the West, hate and toxicity on the platform have grown constantly, expressed in threats, open racism, discrimination, fake news, doxing and cyber-bullying. More than a few German politicians have therefore recently pulled the plug and turned their backs on the network.

How Twitter will develop in the coming years is anyone’s guess. On the evidence of the few days since Elon Musk took the helm, however, it doesn’t look good. The new CEO appears nervously driven, almost erratic. His first act after taking the wheel was to fire the moderating forces within the company, and then to bark contradictory commands at the remaining workforce. In the meantime, Twitter Inc. has neither a press department nor a data protection officer, causing the data protection officers of German companies and organisations to break out in a collective sweat: operating a Twitter account in line with GDPR requirements can now only be justified with a great deal of good will.

Fear of loss of reach

Ministries and public authorities, but also the science sector, now face a dilemma. On one side, there is a strong moral obligation to pack up, shut down the account that you have been nurturing and maintaining for many years and bid farewell, softly but firmly, to Twitter. On the other, there is an understandable fear of losing reach: How can politics stay in touch with the public? How can universities, museums and libraries fulfil their public mandate if, at the same time, they abandon their online communities?

Wikipedia, CC BY-SA 4.0

It is questions like these that, since the dimming of Twitter, have led to one name in particular being floated around: “Mastodon”. At the moment it is mainly individuals who are looking for a new home – and the short messaging service alternative seems to hold a particular appeal for members of the science community.

Much has been written in recent days about this actually not-so-new platform. Started in 2016 by German software developer Eugen Rochko, it is a distributed micro-blogging service that lies completely in the hands of the community, thanks to its open source code. In contrast to Twitter, Mastodon is not a centrally organised entity but a network made up of hubs called “instances”. Every instance can function autonomously or reach out to the larger network, where it becomes part of the Fediverse – home these days not only to social networks but also to video streaming services, image sharing services and the like. Theoretically, every imaginable service and every kind of content can be added to the Fediverse using compatible open source communication protocols – the possibilities are boundless!
Theoretically, at least.

Crisis as chance

Although the developments surrounding the Twitter takeover are to be viewed critically, they were at the same time a collective wake-up call for openness in the digital sphere. The idea of decentralised systems in the hands of the communities – for scientific exchange and scholarly communication, for example – is closely aligned with the wish of many people for more openness in science. There is no gatekeeper; there are no paywalls, no entrenched, incomprehensible hierarchies; just the self-organisation of the community.

ZBW MediaTalk succumbed to the charm of Mastodon at a quite early stage. In 2019 we set up the account for the blog; a few months ago we really got going and since then we have been posting content regularly from the library and Open Science world.

And it’s working.

But after several months of operation, maybe it’s time for a stocktake – not a performance evaluation, though; it’s definitely too early for that. Rather, a summary of the experiences we have had to date. Because naturally even this much-lauded network (perhaps occasionally praised with too much uncritical euphoria) is not entirely free of problems. Let’s call them unusually deep puddles that lurk out of sight and that Mastodon newbies can easily step into. Because they do exist.

At the time, we decided to create our account on the Openbiblio instance. Purely theoretically, though, we could have picked any one of the dozens of official and hundreds of unofficial instances, or operated our own server. So why Openbiblio? This instance has been operated by the Berlin State Library (SBB) since 2019, and we therefore know the team behind it. There is a data protection statement, there are server rules, and thanks to maintenance by the SBB’s IT department, one can assume that the availability of the server is relatively reliable. All this is not necessarily a matter of course. As a result of its decentralised nature, Mastodon – and the Fediverse in general – was born with structural weaknesses that have still not been ironed out.
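Some of this due diligence can even be scripted: every Mastodon server exposes a public REST API that reveals, among other things, its title, contact address, software version and server rules. Here is a minimal sketch in Python – the hostname below is an assumption for illustration, and the rules endpoint exists on Mastodon 3.4 and later:

    # Minimal sketch: inspecting a Mastodon instance before joining it.
    # Uses two public endpoints of the Mastodon REST API:
    #   GET /api/v1/instance        -> title, contact e-mail, version
    #   GET /api/v1/instance/rules  -> server rules (Mastodon >= 3.4)
    import requests

    INSTANCE = "https://openbiblio.social"  # assumed hostname

    info = requests.get(f"{INSTANCE}/api/v1/instance", timeout=10).json()
    print("Title:  ", info.get("title"))
    print("Contact:", info.get("email"))    # is there a named contact?
    print("Version:", info.get("version"))  # are updates being installed?

    rules = requests.get(f"{INSTANCE}/api/v1/instance/rules", timeout=10).json()
    print("Server rules:")
    for rule in rules:
        print(" -", rule.get("text"))

Of course, an API response cannot tell you whether backups are made or how reports are actually handled – which is exactly where the three critical points below come in.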

Three critical points

1. Data protection

First, the topic of data protection. Unlike commercial platforms that track, log and process the behaviour of their users down to the smallest detail in order to sell targeted advertising, Mastodon instances are free of such blanket data-collection frenzy. Is data protection therefore automatically guaranteed in the Fediverse? Not at all. With one click, the administrator has an overview of everything at all times: on Mastodon, posts and direct messages are not even end-to-end encrypted, which is why most instances today pre-emptively warn anyone about to send a DM: “don’t share any sensitive information on Mastodon!” How well private user data is protected from the eyes of third parties is likewise left to the discretion of each administrator. On some servers there is no mention of a contact person for data protection issues; others neglect to provide a privacy policy worth mentioning at all.

2. Data security

Next keyword: data security. This too depends entirely on the know-how and commitment of the server administrator. It doesn’t take much to bring a Mastodon instance to life – but it doesn’t take much to destroy it again either. The founder of the Social.Bonn server found this out in 2017: while he was trying to install an update, the whole system crashed, and all postings and all the accounts that had previously been set up were irretrievably lost. There was no backup.

Do the administrators of the chosen instance handle it with care? Do they install critical fixes to the code in a timely manner? Do they install updates at all? Can they guarantee regular backups? From the outside, these questions can almost never be answered, which means that choosing an instance is reduced to a game of chance. The hint that you can change your instance at any time is no help here, because when would be the right time to do so? However much the world mistrusts the major commercial platforms, no one seriously worries about a complete loss of data there.

3. Moderation

A third point of criticism concerns the climate – the social discourse on the platform. How can it be ensured that the instance is a place of civilised discourse? Mastodon is by default equipped with features that allow members to report offensive or criminal content. But how, and whether, the administrators react to the reports is initially left solely up to them. The Fediverse does not have a common canon of values for evaluating content; there are no generally valid community guidelines and no overarching committee that members can call on for clarity if no action is taken or an accusation turns out to be false. What counts as bullying, what counts as fake news, where offensiveness stops and open hatred begins – all this is decided by the administrators of the respective server, initially on their own authority. Sometimes their rules are laid down explicitly; sometimes not. Factors such as the size of an instance and the resources available can also make content moderation more difficult. The large commercial networks rely on artificial intelligence and outsourced moderation teams to fish out the nasty, dirty and forbidden content from the timelines. How can a single person take on this task round the clock while maintaining an instance with thousands of members? And toxicity is only one element of supervision: we haven’t even mentioned how copyright-protected content is handled (German).

Cooperation is now called for

Data protection, data security and moderation – these are the three critical weak points to bear in mind when choosing a Mastodon instance. There is only ever an approximation of security (and at this point, thanks again to the SBB in Berlin), but no guarantees. If you want to play it safe, you logically have to rely on self-hosted instances.

Operating your own instances as an alternative to using the services of the major commercial players sounds like the promised land in a science environment that is becoming ever more open, transparent and independent. This is also true in the light of current efforts to place the operation of Open Science infrastructures completely in the hands of scientific communities (scholarly-owned) or at least under their control (scholarly-led). But for this plan to become a reality, institutions must cooperate more closely, come to agreements, and develop a common vision of what such a network could look like and the values it should reflect. And the time is now. Consolidation, clear responsibilities and transparency are required to minimise the three structural weak points. One idea could be to establish a consortium within which several scientific institutions join forces, either on an institutional or a target-group-specific basis, in order to jointly operate an instance that is secure for everyone. The fact that Mastodon is an open source project even offers the opportunity to actively push the development of the network forward or to support it in other ways.

Alternatively or additionally, a certification process could be developed for existing and new instances, such as those in the scientific sector. A joint criteria catalogue would be defined for this purpose, and compliance with it would offer registered users a certain degree of security. Are there specific contact persons? Is data protection maintained? Are the data pools backed up regularly? Does moderation take place, and if so, on the basis of which rules? If there were simply a seal, a formal certification, many of an outsider’s questions would be answered. Even today, timid attempts at initial regulation exist: the official Mastodon website, for example, currently only lists servers that fulfil certain criteria, although these tend to be merely rudimentary rules.

These are just a few suggestions. There are sure to be more clever ideas out there that could help to make Mastodon a viable alternative to Twitter – or perhaps much more. One thing is certain: the moment to start thinking about it is now.



Digital Long-term Archiving: Discovering Networks With the nestor Community Survey

Guest article by Svenia Pohlkamp, Stefan Strathmann and Monika Zarnitz

The idea of the nestor community survey

Digital preservation is a complicated and resource-intensive task. Cooperation, communication and mutual support are necessary to cope with its various challenges. nestor is active in all of these areas.

That’s why the idea emerged to conduct a survey among the national and international communities engaged in digital preservation. nestor set up a small working group that developed the questionnaire and analysed the results of the community survey. The aim of the survey is to create transparency about the international landscape of communities in this field and to collect information for all those who wish to collaborate.

The questionnaire for the survey, which was conducted online, consisted of 40 questions. We gathered 54 valid responses as the basis for our analysis.

The nestor community survey

The survey was distributed through multiple channels, such as mailing lists and direct contact with colleagues, from autumn 2019 until May 2020. The results of the survey were evaluated, edited and published in 2022 in the nestor materials series.

Besides this publication, another result of the survey was the development of so-called community profiles, which can be found on nestor’s website. These profiles are self-descriptions of the participating communities and may serve as a sort of registry of national and international communities. They provide the first ever overview of the various facets, resources and focal areas of long-term archiving networks worldwide. The aim is to improve transparency and facilitate cooperation between the different communities.

Of the 54 participants who completed the questionnaire in full, 32 have so far allowed us to publish their community profile. We hope that more will give their consent. The communities had the opportunity to update and/or correct their data while reviewing their profiles.

What is a community?

One basic decision during the project was how to define and delimit the term “community” in the context of the survey, since there are manifold ways of defining a community and the definition had to fit our object of investigation. Following intensive discussion, the working group agreed on the following definition:

  • An open community of persons and/or institutions that engages with the subject of long-term archiving. Digital long-term archiving can be one of several topics that the community deals with.
  • A community whose members are committed to digital long-term archiving in a manner that goes beyond pure self-interest. Its central or sole purpose is not to supply a product or provide a commercial service.
  • A platform for discussing the topic of digital long-term archiving and its advancement, including the development of tools and/or the provision of services.
  • It can be local, regional or international.
  • It does not matter how big the community is. It can be large or small.
  • Whether the community is product-related or not is also irrelevant.
In the following paragraphs, we present some selected results of the survey.

Digital preservation communities: Where are they situated?

In question 6 we asked in which country or part of the world the community is located. Several communities mentioned more than one country in the text entry field. We chose either the country where they are based or the first country they mentioned.

Figure: where the digital preservation communities are situated.

Interpretation: Almost all communities represented in this survey are situated in industrialised countries. Either we could not reach communities in other countries, or there are very few digital preservation communities in developing and less-developed countries. This may be due to a lack of resources, and it suggests that in most of these countries there is either little digital preservation activity or the actors in this field lack the resources to join a community and benefit from the exchange with colleagues in other countries. The latter aspect may not be so important, because communities increasingly communicate digitally and there is abundant literature and software freely accessible on the web.

Digital preservation communities: Are they silos or do they cooperate with each other?

In question 25, we asked how many cooperations with other communities the participating communities currently have. Four check boxes were provided. Only one answer could be given.

Figure: number of cooperations with other communities (in percent).

Interpretation: Our data clearly shows that communities are not silos and that they interact intensively with one another. Only 17 % of the communities do not cooperate with any other community, while 19 % of them cooperate with more than ten other communities. Institutions and persons who engage in digital preservation are often members of several communities, so there is a broad exchange of ideas, tools, publications and other results of community work. Digital preservation is too complicated a task to tackle alone – not only at the individual level but at the level of communities as well. This may be the reason for the intensive exchange between the communities.

Digital preservation communities: What kind of organisation are they and what kind of finance do they use?

In question 11, we enquired how the communities are organised. A majority of 93 % stated that they are non-profit organisations. In question 14, we asked how the communities finance themselves and their work. Six check boxes were provided and multiple answers were possible; the sixth check box was “Other” with a text entry field.

The entries for “Other” have been re-categorised and are shown in the table below alongside the five given response options. The re-categorised entries from “Other” are displayed in italics.

Table: types of organisation and sources of finance of the communities.

Interpretation: This table shows that the main sources of finance are membership fees, revenues from services, sponsoring, third-party funds/grants and in-kind contributions. No other source is of comparable importance for financing. This, together with the fact that communities are mainly non-profit organisations (see above), shows that digital preservation has no commercial aims and that the self-conception of these organisations is comparable to that of libraries, archives and museums as heritage organisations. Indeed, the persons active in the communities come from organisations such as these and carry the same mentality into the communities.

Digital preservation communities: What makes a community successful?

In question 40, we asked about the most important success factors of the community. Participants often entered several options into the text fields, which meant there were many different answers to this question. For this reason, we assigned the answers given in the text entry fields to categories (where possible) and displayed them in a word cloud.

Figure: word cloud of success factors, containing all categorised answers as well as those for which no category was found.

Interpretation: This word cloud shows the most important aspects for the success of a digital preservation community. Three aspects are particularly significant:

  1. Critical success factors are the engagement, the collaboration and the sharing of knowledge and resources.
  2. Communities support the creation of knowledge and technologies for digital preservation.
  3. The breadth of a community is important: digital preservation involves so many details that a manifold range of competencies and perspectives is necessary.
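For readers who want to reproduce this step of the methodology, here is a minimal Python sketch of how categorised free-text answers can be turned into a frequency-weighted word cloud. The category names are invented for illustration and are not taken from the survey data; the third-party packages wordcloud and matplotlib are assumed to be installed (pip install wordcloud matplotlib).

```python
from collections import Counter

import matplotlib.pyplot as plt
from wordcloud import WordCloud

# Hypothetical categorised answers to question 40 (one entry per mention).
categorised_answers = [
    "engagement", "collaboration", "knowledge sharing", "collaboration",
    "engagement", "openness", "broad membership", "knowledge sharing",
    "tools", "engagement",
]

# Word size is driven by how often a category was mentioned.
frequencies = Counter(categorised_answers)
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(frequencies)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```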

Conclusion on digital preservation communities

The nestor community survey offers a rich source of data that explains the behaviour of digital preservation communities. The examples we picked out in this blog post show that the communities cluster in industrialised countries and that they are in close contact and interaction with each other on several levels (the communities themselves, individual members, institutions, persons). The institutions that form part of the communities are mainly non-profit organisations with the typical sources of finance and the typical mentality of heritage organisations.

Repetition in 2023

We would like to repeat the survey in 2023 and hope to improve it based on our experiences from the first run. We aim to reduce the time between the start of the next survey and the publication of the results, and we will reformulate some questions so that they are clearer and easier to evaluate. We hope that the publication of the first survey will raise awareness for the second round and that more communities will then participate.

We invite all communities that are active in the field of digital preservation to suggest improvements to the survey and to take part in its upcoming repetition. If you are interested in participating, please contact us:


About the authors:

Svenia Pohlkamp works at the German National Library (DNB) and manages the nestor office there. She is responsible for the coordination between nestor’s partners and for organisational matters of the network. She is also a member of two of nestor’s working groups, Community Survey and Certification.

Stefan Strathmann works at the Göttingen State and University Library (SUB) in the Digital Library Department. He is responsible for SUB’s activities in the area of digital preservation. In particular, he represents the SUB at nestor, the German national network of excellence in digital preservation.

Dr Monika Zarnitz is an economist and Head of the Programme Area User Services & Collection Care at the ZBW – Leibniz Information Centre for Economics. She heads the nestor working group “Community Survey”.

Portrait Monika Zarnitz: photographer: Sven Wied, ZBW©

The post Digital Long-term Archiving: Discovering Networks With the nestor Community Survey first appeared on ZBW MediaTalk.

Social Media in Libraries: Best Practice From the Austrian National Library

An interview with Marlene Lettner, Claudia Stegmüller and Anika Suck, part of the social media team in the Communication and Marketing Department of the Austrian National Library.

The reach of the Austrian National Library is one of the widest on the social web among libraries in German-speaking countries. Whether it’s Facebook, Instagram, YouTube or LinkedIn – the institution keeps its public up to speed through text, photo and video, and it does it successfully! We asked Marlene Lettner, Claudia Stegmüller and Anika Suck, who are in charge of the channels, what the National Library’s social media goals are, which formats generate followers and what the workflow behind the scenes looks like.

Hello! In your opinion, why is it important for libraries and digital infrastructure institutions to be active on social media?

Firstly, to increase our visibility and secondly, because we want to reach our target groups where they like to hang out. Beyond this, as the Austrian National Library, we have a legal mandate to make our collections accessible to a wide public, and social media is perfect for this.

The Austrian National Library runs its own channels on Instagram, Facebook and YouTube. Why did you decide to use these specific networks and who are your target audiences there?

We cater to our target audiences on all of the channels they use. This means that on Facebook, we communicate with our older target groups who mainly visit our museums. Facebook still offers the best option when it comes to telling our visitors about events too. Instagram is most popular with the target group of 25- to 45-year-olds and it offers some playful features. We mostly use YouTube as a home base for our videos, which we then share on our website or via other social media channels.

What kind of topics do you feature on your social media channels?

We’re not just a library – we’re also home to six museum areas and eight collections – so we need to cover a wide range of topics.

From special exhibitions to events and current blog posts, offers for guided tours and seminars, follower reposts and bizarre discoveries in the archive – we do it all.

To create good content for an institution’s social media channels, you need people who remember the social media team and pass on information, insights and stories. How do you manage to motivate other employees to give you ideas for content?

We are a relatively large institution with almost 400 employees. Luckily, colleagues from the most varied of departments provide us with content on a regular basis. This includes special discoveries from the photo archives, from ANNO (Austrian Newspapers Online) and finds from the hashtag #AriadneFrauDesMonats (“#AriadneWomanOfTheMonth”).

What topics or posting formats work particularly well for you?

Our users like photos of our magnificent ceremonial hall the most, as well as old cityscapes of Vienna.

Antique bookshelves with ladders always work well, as does anything ‘behind-the-scenes’, in addition to unusual, particularly beautiful perspectives. Unusual finds from our collections are also popular.

Has a content idea ever backfired?

Fortunately, we haven’t had a shitstorm yet. And we’ve never had a real fail either. There are, however, some sensitive topics we deal with that might cause a stir. That’s why we try to stick to the facts, stay neutral and not get political. But sometimes people react to something when you’re not expecting it: we recently advertised an Austria-wide event that focuses on climate protection this year. Some people misunderstood it and reported the post.

In your opinion, what is a good tip that libraries should bear in mind if they want to get started on social media?

As it’s difficult to influence the algorithms, it’s important to experiment and find out what your target audience actually likes. In terms of content, you should aim for quality and stay true to your principles. So don’t share daily politics, polemical content and so on.

And finally, please tell us which formats go down particularly well – both with the public and with the editors.

Stories with GIFs, reels or short videos and anything that gets users interacting with you like exclusive Instawalks, reposts and quizzes. Recurring content like #staircasefriday is also good because the editing is faster, but it still keeps things interesting for users.

Thank you for the interview!
This text has been translated from German.



About the authors:

Marlene Lettner (LinkedIn), Claudia Stegmüller (LinkedIn and Xing) and Anika Suck (LinkedIn) are part of the social media team in the Austrian National Library’s Communication and Marketing department.

Portraits:
Anika Suck: private©, Claudia Stegmüller: FOTObyHOFER©

All other pictures: Austrian National Library©

The post Social Media in Libraries: Best Practice From the Austrian National Library first appeared on ZBW MediaTalk.

Social Media in Libraries: Best Practice and Tips for Successful Profiles From the Bayerische Staatsbibliothek

Especially when looking at the Facebook (around 11,000 followers) and Instagram channels (3,700 followers) of the Bayerische Staatsbibliothek (BSB), it quickly becomes clear that they are doing something pretty right on social media. In addition, the BSB is active on Twitter, YouTube and Flickr in various ways. We asked two members of staff about their target groups, recipes for success and topics that are doing particularly well.

An interview with Peter Schnitzlein and Sabine Gottstein from the press and public relations division of the Bayerische Staatsbibliothek in Munich.

Why do you think it is important for libraries and digital infrastructure institutions to be active on social media?

Here we can only refer to the interview published on ZBW MediaTalk on the seven “glorious” reasons: Why libraries have to be permanently active on social media!

Today, certain target groups can simply no longer be reached with “classic” communication channels such as press relations or a library magazine – regardless of whether they are published in analogue or digital form. These target groups are more likely to be reached – differentiated according to age and content – via the appropriate and corresponding social media channels. This does not mean that classic communication work will disappear in the foreseeable future – on the contrary. However, it can be stated that social media engagement is taking up an increasingly larger share of a library’s overall communication. We have to take this into account.

You are very active on social media at the Bayerische Staatsbibliothek. What are your goals and target groups on the different channels? And why did you choose these particular channels?

The aim of our social media engagement is primarily to provide information about the Bayerische Staatsbibliothek – its services, holdings and information and usage offers – to interest people in the library, to positively influence the perception of the library and, where appropriate, to strengthen the bond with the library through entertaining elements. The activities serve to make the library visible to the digital or virtual public as an internationally important general and research library as well as an important cultural institution on a local, regional and national level. Ideally, social media supports the strategic goal of the BSB to be perceived as Germany’s leading digital library with extensive, innovative digital usage offers and as a treasure house of written and visual cultural heritage. In our communication, we attach great importance to participation and networking with specialist communities and stakeholders.

The target groups that need to be considered and served are as diverse and varied as the Bayerische Staatsbibliothek’s fields of action are extensive and wide-ranging. We operate our own channels on Twitter, Instagram, Facebook, YouTube and Flickr. With these five social media channels, we hope to address the majority of our target groups in an appropriate manner. Roughly formulated, and certainly strongly generalised, we can state the following:

  1. Twitter primarily serves professional communities, thematically related institutions or multiplier groups such as press and media representatives.
  2. Instagram is intended to reach a younger target group (20 to 35 years of age).
  3. Facebook, by contrast, is aimed more at the 30 to 55 age group. Together, the two channels should appeal to users as well as to a broad audience with an affinity for culture and libraries.
  4. With YouTube, we want to address not exclusively, but primarily everyone over 16, actually everyone who is at home in the digital world. Explanatory videos on webinars, on how to use the library or a new app are just as much in demand here as the presentation of special library treasures. Video content is currently the measure of all things and we will pay special attention to this channel in the future.
  5. We use the photo portal Flickr less as a social media channel than as a documentation site, to offer important pictures of the building or of exhibition posters in one central place, and for external requests for pictures of the BSB.

In addition to the corporate channels, the Bayerische Staatsbibliothek also operates numerous specialist channels for individual departments, projects or specialist information services. The reason for this is the fact that certain (specialist) target groups cannot be successfully addressed through corporate channels. In view of the immense range of subject areas covered by the BSB, the central social media editorial team cannot have the professional expertise needed to cover all these topics in detail. Coordination processes would be too time-consuming and lengthy to successfully create content and to be able to act quickly and efficiently – a very important aspect in social media communication.

How long have you been present in social media?

The Bayerische Staatsbibliothek dedicated itself to this field of communication relatively early on. We have been active on Facebook, Twitter and YouTube since 2009, on Flickr since 2007 and on Instagram since 2016. At present, we have no plans to expand our activities further. In view of the short-lived nature and speed of innovation in this area, however, this may change at short notice. In this respect, this can only be a snapshot of the current status.

What topics do you feature on your social media channels?

The content that the BSB posts can be summarised well, as mentioned above, under “inform, interest, entertain”. The same content is often published on Facebook and Twitter, although more specialist topics that are primarily intended to interest the specialist community and multipliers tend to be published on Twitter. On Instagram, the decisive criterion is always the appealing picture, and recently video. In general, a certain entertainment factor plays just as much a role on Instagram as on Facebook as the primary approach of informing.

In order to “feed” the social media channels well for an institution like yours, you need people who think of the social media team and pass on information and stories, who are perhaps also willing to make an appearance themselves. How do you get other staff to provide you with information, stories and ideas for your channels?

Topics are gathered in close cooperation and constant exchange with our internal specialist departments. These departments have social media contacts who report relevant content to the central social media editorial team. The latter, in turn, also makes specific enquiries in the departments where necessary. Our directorate expressly supports and welcomes the active participation of the departments, project groups and working groups in the social media work of the house.

The social media team also actively establishes references to other cultural and academic institutions, picks up on library-relevant topics and comments on them. The creation of a thematic and editorial calendar with anniversaries, jubilees, events, etc. also facilitates the identification of suitable content for the social media channels.

In the press and public relations division, something like a central “newsroom” is currently being set up. This is also where information for press topics or content for library magazines should come in. The social media editorial team will thus automatically learn about topics that are primarily intended for other communication channels, and can then decide to what extent they should be included in the social media work.

Which topics or posting formats work particularly well for you and why?

In general, we can see that postings related to current events work well:

Tweet of the Bayerische Staatsbibliothek regarding the participation in the SUCHO (Search for Ukrainian Cultural Heritage) project (German)

For example, our tweets condemning the invasion of Ukraine (German) and about our participation in the SUCHO project (Search for Ukrainian Cultural Heritage, German) achieved a wide reach, as did a humorous tip to cool off in the hot summer month of July. A library exchange with colleagues from the German National Library (DNB) and the Staatsbibliothek zu Berlin (Berlin State Library; SBB), which has just begun in Munich, also triggered many interactions on Twitter.

On Facebook, the World Book Day post (German) on 22 April referring to the Ottheinrich Bible, one of our magnificent manuscripts, was very successful, as was a series of archive photos of Queen Elizabeth II (German) on the occasion of her death.

Appealing images on Twitter and Facebook – especially posts with three- or four-image compositions – are still crucial for success. Embedding videos on these two social networks, on the other hand, surprisingly does not achieve the desired result on our channels. On the contrary. These posts and tweets achieve low reach and popularity.

On Instagram, on the other hand, short videos in the form of reels are becoming more and more important alongside good picture posts in the feed (German), accompanied by casual, often humorous descriptions. We used this format successfully, especially for our exhibition #olympia72inbildern (#olympia72inpictures, German). Both formats also benefit from being referred to via stories.

Sometimes things go wrong in social media. What was your best fail?

Fortunately, nothing has ever really gone wrong – with one exception (see below). However, every now and then we are (justifiably) reminded that we should not forget to use gender-inclusive language in our tweets.

Have you ever had a shitstorm? What have you learned from it?

Yes, we have had one, at least to some extent – and we don’t like to think back on it. However, the incident taught us a lot about dealing with social media. The basic mistake at the time was not to have taken into account the specific requirements of each channel with regard to the wording, the approach to followers and fans, and the willingness to explain.

Tips & tricks: What are your tips for libraries that would like to get started with social media?

First of all, it is important to carry out an honest and thorough analysis. Social media ties up resources – quite a lot of them. Just doing it “on the side” will not lead to the desired result and harbours dangers. If you want to be active, you need staff with an affinity for the medium, the appropriate know-how and sufficient time. It is indispensable to define the target groups and to identify a permanently sufficient number of topics.

While social media was text-based in the early days, today there is no post or tweet without a picture. On some channels, video content is now the measure of all things, just think of the reels on Instagram, video platforms like YouTube or the omnipresent TikTok. They are currently becoming more and more popular and setting trends. These developments must be taken into account in all considerations of online communication.

If you want to use social media as a means of library communication, you have to check whether you can actually afford to operate all the channels that are currently important and which target groups you actually want to serve with which channels. Creating a written concept – even a short one if necessary – helps to answer these questions precisely. For example, concentrating on one channel, true to the motto “less is more”, may be an effective means of operating successfully with limited resources.

Finally, a little peek into the magic box: What are your favourite tools for social media?

With “Creator Studio”, feed posts for Instagram can also be posted conveniently from the computer and not only from the mobile phone, which makes work considerably easier. Then, of course, there is the editorial and topic plan mentioned above. It is the central working tool for keeping track of and working through topics and content across all channels. In addition to news from the management and the departments, it contains as many events, occasions, relevant (birth or death) anniversaries, etc. as possible. Finally, the apps “Mojo” and “Canva” should be mentioned. With their help, we create and edit Instagram stories, reels, social media posts and visual content. This even goes as far as adding royalty-free music to clips.

This text has been translated from German and is licensed under CC BY-NC-ND.




We were talking to:
Peter Schnitzlein qualified as a graduate librarian for research libraries (an upper-level degree from a specialised higher education institution) in 1993 and completed the modular qualification for the highest career bracket for civil servants in Germany (QE4) in 2018. He has been head of press and public relations and spokesman of the Bayerische Staatsbibliothek since 2007.
Portrait: BSB©, photographer: H.-R. Schulz

Sabine Gottstein studied language, economic and cultural area studies, worked in the field of communications in Germany and abroad and has been working for the Bayerische Staatsbibliothek since 2015. She is the head of the social media team in the press and public relations division.
Portrait: BSB©, photographer: H.-R. Schulz

The post Social Media in Libraries: Best Practice and Tips for Successful Profiles From the Bayerische Staatsbibliothek first appeared on ZBW MediaTalk.

Anniversary of re3data: 10 Years of Active Campaigning for the Opening of Research Data and a Culture of Sharing

Interview with Nina Weisweiler and Heinz Pampel – Helmholtz Open Science Office

The Registry of Research Data Repositories (re3data) was established ten years ago. Today, the platform is the most comprehensive source of information regarding research data – global and cross-disciplinary in scope – and is used by researchers, research organisations, and publishers around the world. In the present interview, Nina Weisweiler and Heinz Pampel from the Helmholtz Open Science Office report on its genesis and plans for the service’s future.

What were the most important milestones in ten years of re3data?

Heinz Pampel: I first introduced the idea of developing a directory of research data repositories in 2010 in the Electronic Publishing working group of the German Initiative for Networked Information (DINI). A consortium of institutions soon formed and submitted a proposal to the German Research Foundation (DFG) in April 2011 to develop the “re3data – Registry of Research Data Repositories”. The initiating institutions were the Karlsruhe Institute of Technology (KIT), the Humboldt-Universität zu Berlin, and the Helmholtz Open Science Office at the GFZ German Research Centre for Geosciences. The proposal was approved in September 2011 and we started developing the registry in the same year. As a first step, a metadata schema to describe digital repositories for research data was created. In spring 2012, we came into contact with a similar initiative at Purdue University in the USA, known as “Databib”.

Fig. 1. Number of research data repositories indexed per year in re3data. [CC BY 4.0]

The idea of combining both projects soon developed in dialogue with Databib. After the conception and implementation phase, this cooperation and internationalisation was decisive for re3data, and many stakeholders at the international level supported it. After Databib and re3data merged, the service was continued as a partner of DataCite. To this day, various third-party-funded projects have supported the continuous development of the service – currently, for example, “re3data COREF”, a project Nina Weisweiler manages here at the Helmholtz Open Science Office.

What makes re3data so unique for you?

Nina Weisweiler: re3data is the largest directory of research data repositories and is used and recommended by researchers, funding organisations, publishers and scientific institutions as well as other infrastructures around the world. It does not cover only individual research fields or regions; it aims at a holistic mapping of the repository landscape for research data.

With re3data, we are actively supporting a culture of sharing and transparent handling of research data management, thereby encouraging the realisation of Open Science at an international level. re3data ensures that the sharing of data and the infrastructural work in the field of research data management receives more visibility and recognition.

In terms of Open Science, why is re3data so important?

Heinz Pampel: The core idea of re3data was always to support scientists in their handling of research data. re3data helps researchers to search for and to identify suitable infrastructures for storage and for making digital research data accessible. For this reason, many academic institutions and funding organisations, but also publishers and scholarly journals, have firmly anchored re3data in their policies. Furthermore, diverse stakeholders reuse data from re3data for their community services, for example regarding the European Open Science Cloud (EOSC) and the National Research Data Infrastructure (NFDI). The data retrieved from re3data are also increasingly used to monitor the landscape of digital information structures. Particularly in information science, researchers use re3data for analyses relating to the development of Open Science.

In your birthday post on the DataCite blog, you write that inclusivity is one of your aims. How do you want to achieve it? How do you manage, for example, to record repositories in other regions of the world? Isn’t the language barrier a problem?

Nina Weisweiler: Yes, the language barrier is a challenge of course. We responded to this challenge early on by establishing an international editorial board. There are experts on this board who check the entries in re3data, and who kindly support the service and promote it in their respective region. Furthermore, re3data collaborates with numerous stakeholders to improve the indexing of repositories outside Europe and the United States.

Happy 10th Anniversary, re3data! Witt, M., Weisweiler, N. L., & Ulrich, R. (2022). DataCite, [CC BY 4.0]

We are active members of the internationally focussed Research Data Alliance (RDA) and regularly exchange information with national initiatives as well as other services and stakeholders with whom we develop and intensify partnerships. For example, we are currently working with the Digital Research Alliance of Canada, in order to improve the quality of the entries of Canadian repositories.

Are you planning to offer re3data in other languages apart from English?

Nina Weisweiler: In the comprehensive metadata schema, which is used in re3data for the description of research data repositories, the names and descriptions can be added in any language. Basically, the team discusses the topic of multilingualism a lot. We try to design the service as openly and as internationally as possible. In this, we depend on the languages our editors speak in order to guarantee the quality of the datasets. Thanks to our international team, we were able to incorporate many infrastructures that are being operated in China or India for example.

How can the success of re3data be measured?

Nina Weisweiler: We consider the numerous recommendations and the wide reuse of our service as the central measurement factors for the success of re3data. Important funding organisations such as the European Commission (PDF), the National Science Foundation (NSF) or the Deutsche Forschungsgemeinschaft (German Research Foundation, DFG) recommend that researchers use the service to implement these organisations’ Open Science requirements. re3data also provides information to the Open Science Monitor of the European Commission as well as to OpenAIRE’s Open Science Observatory. The European Research Council (ERC) also refers to re3data in its recommendations for Open Science.

Furthermore, on the re3data website, we also document references that mention or recommend the service. Based on this collection, our colleague Dorothea Strecker from the Humboldt-Universität zu Berlin has made an exciting analysis that we have published in the re3data COREF project blog.

Do you know if there are also companies like publishers that use re3data as a basis for chargeable services?

Heinz Pampel: Yes. We decided on an Open Data policy when starting the service. re3data metadata are available for reuse in the public domain, via CC0. Any interested party can use them via the API. Various publishers and companies in the field of scholarly information are already using re3data metadata for their services. Without this open availability of re3data metadata, several commercial services would certainly be less advanced in this field. We are sure that the advantages of Open Data ultimately outweigh the disadvantages.
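As an illustration of this openness, here is a minimal Python sketch that retrieves the repository list via the public v1 endpoint of the re3data API. The endpoint path and the XML response structure reflect the publicly documented API, but should be checked against the current documentation before being relied on.

```python
import requests
import xml.etree.ElementTree as ET

BASE = "https://www.re3data.org/api/v1"

# Fetch the full list of registered repositories (returned as XML).
resp = requests.get(f"{BASE}/repositories", timeout=30)
resp.raise_for_status()
root = ET.fromstring(resp.content)

# Tag names may carry an XML namespace, so match on the local name only.
repositories = [el for el in root.iter() if el.tag.split("}")[-1] == "repository"]
print("Repositories listed:", len(repositories))

# Print the first few repository names as a smoke test.
for repo in repositories[:5]:
    for child in repo:
        if child.tag.split("}")[-1] == "name":
            print("-", child.text)
```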

re3data has many filters and functions. Which of them is your personal favourite?

Nina Weisweiler: I like the diverse browsing options, particularly the map view, which visualises the countries where institutions that are involved in the operation of the repositories are located. We have published a blog post on this topic that is well worth reading.

I am also enthusiastic about the faceted filter search, which allows for targeted searches across the almost 3,000 repository entries. At first glance, this search mode appears very detailed and perhaps somewhat challenging, but thanks to the exact representation of our comprehensive metadata schema in the filter facets, users can search for and find a suitable repository according to their individual criteria and needs.

For technically savvy users who would like to reuse our data for their own analyses, we have developed a special “treat” in the context of COREF. Colleagues at the Humboldt-Universität zu Berlin and KIT have designed inspiring examples for the use of the re3data API, which are published in our GitHub repository as Jupyter Notebooks. If anyone has any queries about these examples, we would be delighted to help!

What’s more, re3data can also display metrics that provide a clear overview of the current landscape of research data repositories.

In a perfect world, where will re3data be in the year 2032?

Nina Weisweiler: I have the following vision: re3data is a high-quality and complete global directory for research data repositories from all academic disciplines. The composition of our team and our partners reflects this internationalism. We are thereby able to continue to increase coverage in regions from which not many infrastructures have yet been recorded.

Researchers, funders, publishers, and scientific institutions use the directory to reliably find the most suitable repositories and portals for their individual requirements. re3data is closely networked with other infrastructures for research data. In this way, it supports an interconnected worldwide system of FAIR research data. Scientific communities use re3data actively and help to ensure that the entries are current and complete.

Through greater awareness of the importance of Open Research Data and a corresponding recognition of activities in the field of research data management, more scientists are motivated to research and publish in line with Open Science principles.

What’s more: In re3data, datasets can be very easily updated via the link “Submit a change request” in a repository entry. We are also always delighted to receive information about new repositories. Simply fill out the “Suggest” form on our website.

This text has been translated from German.

This might also interest you:

We were talking to:
Nina Weisweiler, Open Science Officer at the Helmholtz Open Science Office, where she is working on the re3data COREF project. You can also find her on Twitter, ORCID and LinkedIn.
Portrait: Nina Weisweiler©

Dr Heinz Pampel, Open Science Officer & Assistant Head of the Helmholtz Open Science Office. You can also find him on Twitter, ORCID and LinkedIn.
Portrait: Heinz Pampel©

The post Anniversary of re3data: 10 Years of Active Campaigning for the Opening of Research Data and a Culture of Sharing first appeared on ZBW MediaTalk.

Open Science in Economics: Selected Findings From the ZBW Awareness Analysis 2022

by Doreen Siegfried

From 1 March to 10 May 2022, the ZBW – Leibniz Information Centre for Economics carried out a wide-ranging awareness analysis among economics and business studies researchers. 401 researchers were surveyed online using a stratified sample of ten defined subgroups. The aim was to obtain a representative picture of the total population of researchers in the field of business studies and economics – both in terms of status groups and specialist discipline. Research assistants and professors from the fields of economics and business administration at universities, universities of applied sciences (UoAS) and non-university research institutions in Germany were surveyed.

Part of the representative study deals with the topic of Open Science. We have summarised selected findings that are not specific to ZBW here.

Open Science: general relevance in economics and business studies research

Question: Research funding organisations (for instance the German Research Foundation, the Federal Ministry of Education and Research and the EU) are increasingly more urgently demanding free access to academic publications and research data from funded projects (keyword: Open Science) (German). Open Science includes for instance Open Access Publications, Open Research Data and disclosure of the entire research process. Has academic policy already had an impact on your work?

Of all of the parties surveyed, 47 percent said that Open Science currently already plays an important role in their work. 77 percent believe that Open Science will play an important role in the future. Only 16 percent can’t really relate to Open Science (see Fig. 1).

Taking a look at the ZBW 2019 Open Science Study (PDF, German), the proportion of business studies and economics researchers who are unaware of the term ‘Open Science’ has reduced slightly. In 2019, one in five business studies and economics researchers had never heard of the term “Open Science” before.

Looking at the different subgroups, the following picture emerges (see Fig. 2):

In economics, Open Science already plays an important role in the current work routine for almost two thirds (64 percent) of those surveyed. By contrast, this figure is less than half for business administration, at just 45 percent. The picture is similar for future projections: whereas 85 percent of economics academics say that Open Science will play a role for them in the future, this figure is just 76 percent for business administration academics. A logical consequence of this is that fewer economists (9 percent) have no connection to the topic of Open Science than business economists (17 percent).

There are also disparities between the status groups. Open Science already plays a more important role for research assistants (54 percent) than for professors (38 percent), and the same holds for the future (80 percent versus 74 percent). Research assistants can thus relate to the topic of Open Science better than professors (see Fig. 2).

Relevance of Open Science Practices

Question: How important are the following Open Science Practices for you personally and/or your own academic work? This includes the use of openly shared research as well as actively sharing your own research.

The researchers who rated Open Science as important now and in the future (see Fig. 1) were asked how important specific Open Science Practices are to them. Open Access Publications play the most important role – they are very important to 44 percent of those surveyed and fairly important to 35 percent.

The ZBW 2019 Open Science Study already showed that Open Access plays a very important role for business studies and economics researchers, scoring an average of 2.5 on a scale of 1=very important role to 5=no role at all. In 2019, 23 percent of economists in Germany confirmed that the concept of Open Access played a very important role. Furthermore, in 2019, 62 percent considered Open Access to be important for them personally. In 2022, this figure was 79 percent.

Open Research Data (see Fig. 3) also appears to be key for business studies and economics researchers. It is a very important topic for a quarter of those surveyed (25 percent) and fairly important for another quarter (27 percent) – Open Research Data thus plays a role for 52 percent of those surveyed. Let’s compare this with the findings of 2019: the provision and publication of research data in line with open principles played a very important role for 11 percent and a fairly important role for 31 percent in 2019. That is 42 percent combined, meaning its importance has increased compared to 2019.

Disclosing the research process is very important for 16 percent of those surveyed and fairly important for 13 percent, meaning a total of 29 percent find it to be important. This is less than a third of those surveyed. For the majority, disclosing the research process currently does not play a key role.

Open Science Services: importance for business studies and economics researchers

Question: And what about the following services in the field of Open Science…how important are these services for you personally?

A well-structured search function for research data plays an important role for business studies and economics researchers. 38 percent find it very important, a further 35 percent find it fairly important – a total of 73 percent, almost three quarters of all those surveyed in all specialist disciplines. By way of comparison, the ZBW 2019 Open Science Study showed similar values. At this time, 77 percent of all people working in business studies and economics wanted information on how to locate Open Research Data more easily.

The ZBW’s 2022 awareness study also shows that support in locating Open Access Publications is very important for 29 percent and fairly important for 34 percent; the total of 63 percent shows the relevance of this field. Comparing with 2019 again: three years ago, 76 percent wanted information on Open Access Publication.

Subject-specific information and guidelines on Open Science Practices currently seem to be relevant for 47 percent in total, that is almost half of all those surveyed. 14 percent find it to be very relevant; 32 percent find it to be fairly relevant. By way of comparison, over three quarters of economics researchers wanted an overview of platforms, tools and applications that support Open Science Practices in 2019. These figures indicate that this need is diminishing.

Tangible subject-specific seminars and workshops on how to handle Research Data represent an exciting offer for two fifths of all those surveyed.

Open Science Services: use by business studies and economics researchers

Question: Have you already tangibly used these services in the field of Open Science?

Let’s now take a look at the difference between the importance ascribed to Open Science Services and their actual use. Whereas 73 percent of those surveyed said that they find a well-structured search function for business studies and economics research data important, only 32 percent said that they had already used such a search function. Among employees of universities of applied sciences, this figure was 49 percent.

Almost two thirds (63 percent) said that they find it important to have support for Open Access Publications. By contrast, less than a third (26 percent) use such a service – calculated across all subgroups surveyed. Looking at the subgroups, it is noticeable that 31 percent of economists and as many as 44 percent of researchers at non-university research institutions (usually economists too) have already tangibly used this kind of support at least once.

There is also a difference for subject-specific information and guidelines on Open Science Practices and Tools (see Fig. 4). A fifth (19 percent) of researchers use this offering – among researchers at non-university research institutions, the figure is a third (33 percent; see Fig. 5). Among those who find subject-specific seminars and workshops on how to handle Research Data important, half have also already used these kinds of educational services.

Archiving publication and research data: trustworthiness of different providers

Question: With respect to archiving publications and research data, how trustworthy do you find the following providers?

We then asked business studies and economics researchers in Germany how trustworthy they consider various archiving providers to be. Public institutions are the most trusted, with approval from 87 percent in total. Interestingly, this figure is even higher among employees at universities of applied sciences, where 94 percent trust public institutions. Publishers, including the publishing companies Elsevier and Springer, also enjoy a high level of trust at 74 percent: around two fifths of all those surveyed (39 percent) believe publishers to be very trustworthy and a further 35 percent believe them to be trustworthy. Here too, researchers at universities of applied sciences are ahead, with 87 percent saying that they trust this group of providers. Big tech companies, on the other hand, are trusted by only 14 percent, while a fifth (21 percent) of business studies and economics researchers say that big tech companies are not trustworthy at all. Most of those surveyed answered “neither trustworthy nor untrustworthy”.

Awareness of the German National Research Data Infrastructure

Question: The National Research Data Infrastructure (NFDI) should be used to systematically access, network and secure valuable data from academia and research – which today are often stored only temporarily and in a decentralised way – for the long term, while making them accessible across disciplines and countries. In one place. For the entire research system. It should be possible to easily locate and use many types of data (including social media data, representative population data and much more). The NFDI is being developed module by module, through various consortia, on a subject-specific basis. In business studies and economics, such consortia include the Consortium for Business, Economic and Related Data (BERD@NFDI) and the Consortium for the Social, Educational, Behavioural and Economic Sciences (KonsortSWD). Have you heard of this NFDI project, or the BERD and/or KonsortSWD consortia?

The NFDI pie chart (see Fig. 7) is self-explanatory: the National Research Data Infrastructure (NFDI) is not yet widely known. Then again, this is hardly surprising, since these infrastructure projects are still in development.

NFDI: relevance to economists’ work

Question: How important will the new National Research Data Infrastructure (NFDI) and/or the two economics consortia BERD and KonsortSWD be in the future for your work?

Compared to the current familiarity of the NFDI among economics researchers in Germany (see Fig. 7), the expected future importance of the NFDI and/or of the two economics consortia BERD@NFDI and KonsortSWD is relatively high. Around half of those surveyed (53 percent) view it as relevant for their own work, and for 9 percent the NFDI is actually very important (see Fig. 8). But as the NFDI is still unknown to 84 percent, a large proportion of those surveyed did not answer the question (31 percent). Only 4 percent are critical and say that the NFDI is not important to them.

The survey has shown that Open Science, and the NFDI in particular, are regarded as important or potentially important – but more so for the future. It is the responsibility of the consortia to make their work and the progress made in developing their infrastructures transparent and well known, and to communicate this on a continuous basis. Furthermore, the survey shows that academic libraries’ work in relation to publishers and publishing corporations needs to become a focus of communication.

Conclusion: status quo of Open Science in business studies and economics

So how can these findings be summarised? Has Open Science already made its mark on economists or has interest plateaued somewhat? In which areas should we – the library and Open Science community – now take action?

Not only research funding organisations but also top economics research journals now demand that academics share their data and code. For this reason, numerous collaborative research centres and postgraduate programmes have integrated training in Open Science Practices into their curricula. It is almost impossible to ignore the discussion surrounding Open Science. It is therefore not surprising that over three quarters of those surveyed believe that Open Science will play a major role in the future.

It is, however, very clear that younger researchers – that is, research assistants – are more interested in Open Science than professors. An awareness of the need for future skills in academic work and a creative drive to change the research system (at least in part) combine to form a “young avantgarde”.

The high level of trust in publishing corporations is noteworthy. Critical scrutiny of power structures and independent community-owned infrastructures has not yet taken place to a sufficient degree.

Libraries can play a role here: it would be good if they could be vocal in communicating their own skills and services for networked and digitally independent academia. The times of libraries quietly working away unnoticed are definitely over.

This text has been translated from German.


About the Author:

Dr Doreen Siegfried is Head of Marketing and Public Relations at the ZBW – Leibniz Information Centre for Economics. She can also be found on LinkedIn and Twitter.
Portrait: ZBW©

The post Open Science in Economics: Selected Findings From the ZBW Awareness Analysis 2022 first appeared on ZBW MediaTalk.

AI in Academic Libraries, Part 3: Prerequisites and Conditions for Successful Use

Interview with Frank Seeliger (TH Wildau) and Anna Kasprzik (ZBW)

We recently had a long talk with experts Anna Kasprzik (ZBW – Leibniz Information Centre for Economics) and Frank Seeliger (Technical University of Applied Sciences Wildau – TH Wildau) about the use of artificial intelligence in academic libraries. The occasion: Both of them were involved in two wide-ranging articles: “On the promising use of AI in libraries: Discussion stage of a white paper in progress – part 1” (German) and “part 2” (German).

In their working context, both of them have an intense connection and great interest in the use of AI in the context of infrastructure institutions and libraries. Dr Frank Seeliger is the director of the university library at the TH Wildau and has been jointly responsible for the part-time programme Master of Science in Library Computer Sciences (M.Sc.) at the Wildau Institute of Technology. Anna Kasprzik is the coordinator of the automation of subject indexing (AutoSE) at the ZBW.

This slightly shortened, three-part series has emerged from our spoken interview; the other two articles are also part of the series.

What are the basic prerequisites for the successful and sustainable use of AI at academic libraries and information institutions?

Anna Kasprzik: I have a very clear opinion here and have already written several articles about it. For years, I have been fighting for the necessary resources and I would say that we have manoeuvred ourselves into a really good starting position by now, even if we are not out of the woods yet. The main issue for me is commitment – right up to the level of decision makers. I’ve developed an allergy to the “project” format. Decision makers often say things like, “Oh yes, we should also do something with AI. Let’s do a project, then a working service will develop from it and that’s it.” But it’s not that easy. Things that are developed as projects tend to disappear without a trace in most cases.

We also had a forerunner project at the ZBW. We deliberately raised it to the status of a long-term commitment together with the management. We realised that automation with machine learning methods is a long-term endeavour. This commitment was essential. It was an important change of strategy. We have a team of three people here and I coordinate the whole thing. There’s a doctoral position for a scientific employee who is carrying out applied research, i.e. research that is very much focused on practice. When we received this long-term commitment status, we started a pilot phase. In this pilot phase, we recruited an additional software architect. We therefore have three positions for this, which correspond to three roles and I regard all three of them as very important.

The ZBW has also purchased a lot of hardware, because machine learning experiments require serious computing power. We then started to develop the corresponding software infrastructure. This system is already in production, but it is continually being developed further based on the results of our in-house applied research. What I am trying to say is this: the commitment is important, and the resources must reflect this commitment.

Frank Seeliger: This is naturally the answer of a Leibniz institution that is well endowed with research professorships. For most libraries, apart from some national state libraries and larger libraries, this is difficult to achieve: they have neither a corresponding research mandate nor the personnel resources to finance such projects on a long-term basis. Nevertheless, there are technologies that smaller institutions need to invest in, such as cloud-based services or infrastructure as a service. But they need to commit to this beyond the project phases – anchored, for instance, in an Agenda 2025/30 – as a long-term commitment within the context of the automation that is coming anyway. This has been boosted by the coronavirus pandemic in particular, when people saw how well things can function even when they take place online. People need to regard this as a task and seek out information about it accordingly. The mandate is to explore the technology deliberately. Only in this way can people at the working or management level see not only the degree of investment required, but also what successes they can expect.

But it’s not only libraries that have begun to explore the topic of AI relatively recently, i.e. in the last ten years. The situation is comparable with small and medium-sized businesses or other public institutions that deal with the Online Access Act and similar issues. They too are exploring these kinds of algorithms, so there are allies to be found; libraries are not alone here. This is very important, because many of the measures, particularly those at the level of the German federal states, were not necessarily designed with libraries in mind when it comes to the distribution of AI tasks or funding.

That’s why we intended our publication (German) also as a political paper. Political in the sense of informing politicians and decision-makers about funding opportunities, and making clear that we also need the financial framework to be able to apply for them. Only then can we test things, decide whether we want to use indexing or other tools such as language tools permanently in the library world, and network with other organisations.

The task for smaller libraries that cannot maintain research groups of their own is definitely to explore the technology and to develop their position for the next five to ten years. This requires counterpoints to what is commonly covered by meta-search engines or by Wikipedia. Especially as libraries have a completely different lifespan than companies in terms of their way of thinking and sustainability: libraries are designed to last as long as the state or the university exists. Our lifecycles are therefore measured differently, and we need to position ourselves accordingly.

Not all libraries and infrastructure institutions have the capacity to develop a comprehensive AI department with corresponding personnel. So does it make sense to bundle competences and use synergy effects?

Anna Kasprzik: Yes and no. We are in touch with other institutions such as the German National Library. Our scientific employee and developer is working on the further development of the Finnish toolkit Annif with colleagues from the National Library of Finland, for example. This toolkit is also interesting for many other institutions to use themselves. I think it’s very good to exchange ideas, also regarding our experiences with toolkits such as this one.

However, I discover time and again that there are limits to this when I advise other institutions; just last week, for example, I advised some representatives from Swiss libraries. You can’t do everything for the other institutions. If they want to use these instruments, institutions have to train them on their own data. You can’t just train the models and then transplant them one-to-one into other institutions. For sure, we can exchange ideas, give support and try to develop central hubs where at least structures or computing power resources are provided. However, nothing will be developed in this kind of hub that is an off-the-shelf solution for everyone. This is not how machine learning works.
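To make this point concrete: with a toolkit like Annif, each institution trains projects on its own vocabulary and metadata; only the querying step looks the same everywhere. Below is a minimal sketch, assuming the annif-client Python library and the public Annif demo instance; the project ID and sample text are made-up examples, not ZBW settings.

```python
# A minimal sketch of querying an Annif instance with annif-client.
# Assumptions: the public demo API at api.annif.org, a project ID "yso-en"
# that exists there, and an invented sample text.
from annif_client import AnnifClient

# An institution would point this at its own deployment, whose projects
# have been trained on its own data beforehand.
client = AnnifClient(api_base="https://api.annif.org/v1/")

suggestions = client.suggest(
    project_id="yso-en",
    text="The impact of monetary policy on small business lending in Europe",
)
for hit in suggestions[:5]:
    print(f"{hit['score']:.3f}  {hit['label']}  ({hit['uri']})")
```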

Frank Seeliger: The library landscape in Germany is like a settlement, not a skyscraper. In the past, there was a German Library Institute (DBI) that tried to bundle many matters of academic libraries in Germany across all sectors. This kind of central unit no longer exists; there are merely several library networks for institutions and library associations for individuals. So a central library structure that could take on the topic of AI doesn’t exist. There was an RFID working group (German) (or also the Special Interest Group RFID at the IFLA), and there should actually also be a working group for robots (German), but of course someone has to run it, usually alongside their actual job.

In any case, there is no central library infrastructure that could take up this kind of topic the way a lobby organisation such as Bitkom does and break it down for the individual institutions. The route that we are pursuing is broadly based. This is related to the fact that we operate in very different ways in the different German federal states, owing to the relationship between the national government and the federal states. The latter have sovereignty in many areas, meaning that we have to work together on a project basis. It will be important to locate cooperation partners and not try to work alone, because it is simply too much. There is definitely not going to be a central contact point. The German Research Center for Artificial Intelligence (DFKI) does not have libraries on its radar either. There’s no one to call. Everything is going to run on a case-by-case and interest-related basis.

How do you find the right cooperation partners?

Frank Seeliger: That’s why there are library congresses where people can discuss issues. Someone gives a presentation about something they have done and then other people are interested: they get together, write applications for third-party funding or articles together, or try to organise a conference themselves. Such conferences already exist, and thus a certain structure of exchange has been established.

I am the conservative type. I read articles in library journals, listen to conference news or attend congresses. That’s where the informal exchange happens – you meet other people. Alongside social media, which is also important. But if you don’t reach people via social media channels, there is (hopefully soon to return) physical exchange on site, at section meetings for example. Next week we have another Section IV meeting of the German Library Association (DBV) in Dresden, where 100 people will get together. The chances of finding colleagues who have similar issues or are dealing with a similar topic are high. Then you can exchange ideas – the traditional way.

Anna Kasprzik: But there are also smaller workshops for specialists. For example, the German National Library has been organising a specialist congress of the network for automated subject indexing (German) (FNMVE) for those who are interested in automated approaches to subject indexing.

I also enjoy networking via social media. You can also find most people who are active in the field on the internet, e.g. on Twitter or Mastodon. I started using Twitter in 2016 and deliberately developed my account by following people with an interest in semantic web technologies. These are individuals, but they represent an entire network. I can’t name individual institutions; what is relevant are individual community members.

And how did you get to know each other? I’m referring to the working group that compiled this non-white paper.

Anna Kasprzik: It’s all Frank’s fault.

Frank Seeliger: Anna came here once. I had invited Mr Puppe in the context of a digitalisation project in which AI methods supported optical character recognition (OCR) and image identification of historical works. That was exactly the traditional route I’ve just described, i.e. via a symposium; this was how the first people were invited.

Then the need to position ourselves on this topic developed. Shortly before, I had spoken with a colleague from the Netherlands at a conference. He said that they had been too late with their AI white paper, meaning that policymakers had not taken them into account and libraries had not received any special funding for AI tools. That was the wake-up call for me, and I thought: here in Germany, too, there is nothing I am aware of that is specifically for information institutions. I then researched who had published on the topic. That’s how the network, which is still active, developed. We are working on the English translation at the moment.

What is your plea to the management of information institutions? At the beginning, Anna, you already spoke about commitment, also from “the very top”, being a crucial factor. But going beyond this: what course needs to be set now and which resources need to be built up, to ensure that libraries don’t lose out in the age of AI?

Anna Kasprzik: For institutions that can, it’s important to develop long-term expertise. But I completely understand Frank’s point of view: it is valid to say that not every institution can afford this. So two aspects are important for me. One is to cluster expertise and resources at certain central institutions. The other is to develop communication structures across institutions, or to share a cloud infrastructure or something similar: to create a network that enables dissemination, i.e. the sharing of these experiences for reuse.

Frank Seeliger: Perhaps there is a third aspect: to reflect on the business processes that you are responsible for, so that you can identify whether they are suitable for AI-supported automation, for example. To reflect on this yourself, but also to encourage your colleagues to reflect on their own workflows, as to whether routine tasks can be taken over by machines and thereby relieve them of some of the workload. For example, in our library association, the Kooperativer Bibliotheksverbund Berlin-Brandenburg (KOBV), we would have liked to set up a lab. Not only to play around, but also to see together how we can technically support tasks that are really very close to real life. I don’t want to say that the project failed, but the problem was that first you needed the ideas: What can you actually tackle with AI? What requires a lot of time? Is it the indexing? Which other work processes are done over and over again, like a routine with a high degree of similarity? We wanted the lab to look at exactly these processes and check whether we could automate them, independently of what library management systems or all the other tools we work with can do.

It’s important to initiate the process of self-reflection on automation and digitalisation in order to identify fields of work. Some have expertise in AI, others in their own fields, and they have to come together. The path leads from one’s own reflection into conversation, to sound out whether solutions can be found.

And to what extent can the management support?

Frank Seeliger: Leadership is about bringing people together and giving impetus. The coronavirus pandemic and digitalisation have put a lot of pressure on many people. There is a saying by Angela Merkel: she once said that she only got around to thinking during the Christmas period. However you want to interpret that, it points to a real problem. Out of habit, and because you want to clear the pile of work on your desk during working hours, it’s often difficult to reflect on what you are doing and whether there isn’t already a tool that could help. It is then the task of the management level to look at these processes and, where appropriate, to say: yes, maybe the person could be helped with this. Let’s organise a project and take a closer look.

Anna Kasprzik: Yes, that’s one of the tasks, but for me the role of management is above all to take the load off the employees and clear a path for them. This brings another buzzword into play: agile working. It’s not only about giving an impetus, but also about supporting people by giving them some leeway so that they can work in a self-dependent manner. In the spirit of the agile manifesto, so to speak: creating space for experimentation and allowing for occasional failure. Otherwise, nothing will come to fruition.

Frank Seeliger: We will soon be doing a “Best of Failure” survey, because we want to ask what kind of error culture we really have – a subject that is usually treated as taboo. This will also be the topic of the Wildau Library Symposium (German) from 13 to 14 September 2022, where we will explore this error culture more intensively. And rightly so: even in IT projects, you simply have to allow things to go wrong. Of course, they don’t have to be taken on as a permanent task if they don’t go well. But sometimes it’s good to just try, because you can’t predict whether a service will be accepted or not. What do we learn from these mistakes? We talk about this relatively little – mostly about successful projects that go well and attract large amounts of funding. But the other side also has to come into focus, so that we can learn from it and utilise aspects of it for the next project.

Is there anything else that you would like to say at the end?

Frank Seeliger: AI is not just a task for large institutions.

Anna Kasprzik: Exactly, AI concerns everyone. Even so, AI should not be pursued just for its own sake, but rather to develop innovative new services that would otherwise not be possible.

Frank Seeliger: There are naturally other topics, no question about that. But you have to address it and sort out the various topics.

Anna Kasprzik: It’s important that we get the message across to people that automated approaches should not be regarded as a threat. This digital jungle exists anyway by now, so we need tools to find our way through it. AI therefore represents new potential and added value, not a threat that will be used to eliminate people’s jobs.

Frank Seeliger: We have also been asked the question: What is the added value of automation? Of course, you spend less time on routine processes that are very manual. This creates scope to explore new technologies, to do advanced training or to have more time for customers. And we need this scope to develop new services. You simply have to create that scope, also for agile project management, so that you don’t spend 100% of your time clearing some pile of work or other from your desk, but can instead use 20% for something new. AI can help give us this time.

Thank you for the interview, Anna and Frank.

Part 1 of the interview on “AI in Academic Libraries” is about areas of activity, the big players and the automation of indexing.
In part 2 of the interview on “AI in Academic Libraries” we explore interesting projects, the future of chatbots and the problem of discrimination through AI.

We were talking to:

Dr Anna Kasprzik, coordinator of the automation of subject indexing (AutoSE) at the ZBW – Leibniz Information Centre for Economics. Anna’s main focus lies on the transfer of current research results from the areas of machine learning, semantic technologies, semantic web and knowledge graphs into productive operations of subject indexing of the ZBW. You can also find Anna on Twitter and Mastodon.
Portrait: Photographer: Carola Gruebner, ZBW©

Dr Frank Seeliger (German) has been the director of the university library at the Technical University of Applied Sciences Wildau since 2006 and has been jointly responsible for the part-time programme Master of Science in Library Computer Sciences (M.Sc.) at the Wildau Institute of Technology since 2015. One module explores AI. You can find Frank on ORCID.
Portrait: TH Wildau

Featured Image: Alina Constantin / Better Images of AI / Handmade A.I / Licensed by CC-BY 4.0

AI in Academic Libraries, Part 2: Interesting Projects, the Future of Chatbots and Discrimination Through AI

Interview with Frank Seeliger (TH Wildau) and Anna Kasprzik (ZBW)

We recently had an intense discussion with Anna Kasprzik (ZBW) and Frank Seeliger (Technical University of Applied Sciences Wildau – TH Wildau) on the use of artificial intelligence in academic libraries. Both of them were recently involved in two wide-ranging articles: “On the promising use of AI in libraries: Discussion stage of a white paper in progress – part 1” (German) and “part 2” (German).

Dr Anna Kasprzik coordinates the automation of subject indexing (AutoSE) at the ZBW – Leibniz Information Centre for Economics. Dr Frank Seeliger (German) is the director of the university library at the Technical University of Applied Sciences Wildau and is jointly responsible for the part-time programme Master of Science in Library Computer Sciences (M.Sc.) at the Wildau Institute of Technology.

This slightly shortened, three-part series has been drawn up from our spoken interview. These two articles are also part of it:

What are currently the most interesting AI projects in libraries and infrastructure institutions?

Anna Kasprzik: Of course, there are many interesting AI projects. Off the top of my head, the following two come to mind. The first one is interesting if you are concerned with optical character recognition (OCR), because before you can even start to think about automated subject indexing, you have to create metadata, i.e. “food” for the machine: segmenting digital texts into their structural fragments, extracting an abstract automatically. To do this, you first run OCR on the scanned text. Qurator (German) is an interesting project in which machine learning methods are used for this as well. The Staatsbibliothek zu Berlin (Berlin State Library) and the German Research Center for Artificial Intelligence (DFKI) are involved, among others. This is interesting because at some point in the future it might give us the tools we need to obtain the data input required for automated subject indexing.
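To make the preprocessing step tangible, here is a minimal OCR sketch. It does not use the Qurator components themselves but the open-source Tesseract engine via pytesseract; the file name is a placeholder, and the German language model is assumed to be installed.

```python
# A minimal OCR sketch with pytesseract (illustrative only; Qurator uses
# its own machine learning pipeline, and the file name is a placeholder).
from PIL import Image
import pytesseract

# Recognise the text of one scanned page of a historical work
page = Image.open("scan_page_001.png")
text = pytesseract.image_to_string(page, lang="deu")

# Segment the raw text into rough structural fragments for later steps,
# e.g. feeding an automated subject indexing pipeline
paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
print(f"{len(paragraphs)} paragraphs recognised")
```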

The other project is the Open Research Knowledge Graph (ORKG) of the TIB Hannover. The Open Research Knowledge Graph is a way of representing scientific results no longer as a document, i.e. as a PDF, but rather in an entity-based way. Author, research topic or method – all nodes in one graph. This is the semantic level and one could use machine learning methods in order to populate it.

Frank Seeliger: Just one project: it is running at the ZBW and the TH Wildau and explores the development of a chatbot with new technologies. The idea of chatbots is actually relatively old: a machine conducts a dialogue with a human being, and in the best case the human being does not recognise that a machine is running in the background – the Turing test. Things are not quite this advanced yet, but the issue we are all concerned with is that libraries are being consulted – in chat rooms, for example. Many libraries aim to offer a high level of service at the times when researchers and students work, i.e. round the clock. This can only happen if procedures are automated, via chatbots for example, so that difficult questions can also be answered outside opening hours, at weekends and on public holidays.

I am therefore hoping, firstly, that the input we receive on chatbot development will turn it into a high-quality standard service that offers fast orientation and provides information about a library or special services with excellent predictive quality. This would create the starting point for other machines, such as mobile robots. Many people are investing in robots, playing around with them and trying out various things. The expectation is that you will be able to go up to them and ask, “Where is book XY?” or “How do I find this and that?”, and that these robots can deal with such questions usefully, orient themselves, and literally point to the answer. That’s one thing.

The second thing that I find very exciting is winning people over to AI at an early stage. Not just treating AI as a buzzword, but looking behind the scenes of this technology complex. We tried to offer a certificate course (German); demand was too low for us to run it, but we will try again. The German National Library provides a similar course that was well attended. I think it’s important to make a low-threshold offer across the board, i.e. for a one-person library or for small municipal libraries that are set up on a communal basis, as well as for larger university libraries, so that people get to grips with the subject matter and find their own way: where they can reuse something, where there are providers or cooperation partners. I find this kind of project very interesting and important for the world of libraries.

But this too can only be the starting point for many other offers of special workshops, on Annif for example, or on other topics that can be discussed at a level that non-computer-scientists can understand as well. It’s an offer to colleagues who are concerned with the technology, but not necessarily at an in-depth level. As with a car: they don’t manufacture the vehicle themselves, but want to be able to repair or fine-tune it sometimes. At this level, we definitely need more dialogue with the people who are going to have to work with it, for example as system administrators who set up or manage such projects. The offers must also be aimed at the management level – the people who are in charge of budgeting, i.e. those who sign third-party funding applications.

At both institutions, the TH Wildau and the ZBW, you are working on the use of chatbots. Why is this AI application area for academic libraries so promising? What are the biggest challenges?

Frank Seeliger: The interesting perspective for me is that we can pursue the development of a chatbot together with other libraries. It is good when the knowledge base in the background is not fed by one library alone, at least for the typical questions. This is not possible with locally specific information such as opening hours or spatial conditions. Nevertheless, many synergy effects are created. Bringing the data together puts us in a position to generate as large a quantity of data as possible, so that the quality of the automatically generated answers is simply better than if we were to set everything up individually. The output quality has a lot to do with the data quality. It is not simply true that more data means better answers; other factors also play a role. But generally, small solutions tend to fail because of the small quantity of data.

Especially in view of the fact that a relatively high number of libraries are keen to invest in robot solutions that “walk” through the library outside opening hours and offer services, like a robot librarian. If the service is used, it makes twice as much sense to offer something online and also to make it available via a machine that rolls through the premises. This is important, because the personal approach from the library to its clients is a decisive differentiating feature compared with the large commercial platforms. Seeking dialogue and paying attention to the special requirements of the users: this is what makes the difference.

Anna Kasprzik: Even though I am not involved in the chatbot project at ZBW, I can think of three challenges. The first is that you need an incredible amount of training data. Getting hold of that much data is relatively difficult. Here at ZBW we have had a chat feature for a long time – without a bot. These chats have been recorded but first they had to be cleaned of all personal data. This was an immense amount of editorial work. That is the first challenge.
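To illustrate the kind of work involved (at the ZBW this cleaning was largely manual editorial work): a first automated pass over chat logs might mask obvious identifiers before human review. The patterns below, including the card-number format, are invented for the example.

```python
# A minimal sketch of masking obvious personal data in recorded chat logs
# before reuse as training data. Illustrative only: real anonymisation
# needs far more than a few regular expressions.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d /-]{6,}\d"),
    "card_id": re.compile(r"\b[A-Z]{2}\d{6}\b"),  # assumed card-number format
}

def scrub(line: str) -> str:
    """Replace recognisable personal identifiers with placeholders."""
    for label, pattern in PATTERNS.items():
        line = pattern.sub(f"<{label.upper()}>", line)
    return line

print(scrub("Hi, my card is AB123456, mail me at jane.doe@example.org"))
# -> Hi, my card is <CARD_ID>, mail me at <EMAIL>
```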

The second challenge: it’s a fact that relatively trivial questions, such as the opening hours, are easily answered. But as soon as things become more complex, i.e. when there are specialised questions, you need a knowledge graph behind the chatbot. And setting this up is relatively complex.

Which brings me to the third challenge: during the initial runs, the project team established that quite a few users had reservations and were quick to conclude, “It doesn’t understand me”. So there were reservations on both sides. We therefore have to be mindful of the quality aspect and also of the “trust” of the users.

Frank Seeliger: Interaction is also moving in the direction of speech, particularly among the younger generations who are now entering libraries as students. This generation communicates via voice messages: the students speak with Siri or Alexa, and they are informal when speaking to these technologies. FIZ Karlsruhe attempted to run search queries via Alexa. That worked well in itself, but it failed because of the European General Data Protection Regulation (GDPR), the privacy of information and the fact that data was processed somewhere in the USA. Naturally, that is not acceptable.

That’s why it is good that libraries are doing their own thing – they have data sovereignty and can therefore ensure that the GDPR is maintained and that user data is treated carefully. But it would be a strategic mistake if libraries did not adapt to the corresponding dialogue. Very simply because a lot of these interactions no longer take place with writing and reading alone, but via speech. As far as apps and features are concerned, much is communicated via voice messages, and libraries need to adapt to this fact. It starts with chatbots, but the question is whether search engines will be able to cope with (voice) messages at some point and then filter out the actual question. Making a chatbot functional and usable in everyday life is only the first step. With spoken language, this then incorporates listening and understanding.

Is there a timeframe for the development of the chatbot?

Anna Kasprzik: I’m not sure when the ZBW is planning to put its chatbot online; it could take one or two years. The real question is: when will such chatbots become viable solutions in libraries globally? This may take at least ten years or longer – without wanting to crush hopes too much.

Frank Seeliger: There are always unanticipated revivals popping up, for which a certain impetus is needed. For example, I was in the IT section of the International Federation of Library Associations and Institutions (IFLA), working on statistics. We considered whether we could collect library statistics clearly and globally and depict them as a portfolio. Initially it didn’t work – it was limited to one continent, Latin America. Then the section received a huge surprise donation from the Bill and Melinda Gates Foundation, and with it the project IFLA Library Map of the World could be implemented.

It was therefore a very special impetus that led to something we would normally not have achieved with ten years’ work. And when this kind of impetus exists – through tenders, funding or third-party donors that accelerate exactly this kind of project, perhaps also from a long-term perspective – the whole thing takes on a new dynamic. If the development of chatbots in libraries continues to stagnate like this, libraries will not use them on a market-wide scale. There was a comparable movement with contactless object recognition via radio waves (Radio-Frequency Identification, RFID). It started in 2001 in Siegburg, then Stuttgart and Munich; now it is used in 2,000 to 3,000 libraries. I don’t see this impetus with chatbots at all. That’s why I don’t think that, in ten or 15 years, chatbots will be used in 10% to 20% of libraries. It’s an experimental field. Maybe some libraries will introduce them, but it will be a handful, perhaps a dozen. However, if a driving force emerges owing to external factors such as funding or a network initiative, the whole concept may receive new momentum.

The fact that AI-based systems make discriminatory decisions is often regarded as a general problem. Does this also apply to the library context? How can this be prevented?

Anna Kasprzik: That’s a very tricky question. Not many people are aware that potential difficulties almost always arise from the training data, because training data is human data. These data sources contain our prejudices. In other words, whether or not the results have a discriminatory effect depends on the data itself and on the knowledge organisation systems that underpin it.

One movement that is gathering pace is known as decolonisation. People are taking a close look at the vocabularies they use, at thesauri and ontologies. The problem has come up for us as well: since we also provide historical texts, terms that have racist connotations today appeared in the thesaurus. Naturally, we primarily incorporate terms that are considered politically correct today, but these assessments can shift over time. The question is: what do you do with historical texts where such a word occurs in the title? The task is then to find ways to keep them as hidden elements of the thesaurus without displaying them in the interface.
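SKOS, the W3C standard that many thesauri use, has a mechanism for exactly this: a hidden label that search indexes can match but interfaces never display. A minimal sketch with the rdflib Python library follows; the namespace, concept URI and labels are placeholders, not actual thesaurus entries.

```python
# A minimal sketch of hiding an outdated term in a SKOS vocabulary with
# rdflib. The namespace, concept URI and labels are made-up placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/thesaurus/")
g = Graph()
concept = EX["c123"]

# The preferred label is what the interface shows ...
g.add((concept, SKOS.prefLabel, Literal("currently accepted term", lang="en")))
# ... while the historical term stays searchable but invisible
g.add((concept, SKOS.hiddenLabel, Literal("outdated historical term", lang="en")))

# A search index would ingest hidden labels; a display layer would not
for label in g.objects(concept, SKOS.hiddenLabel):
    print("indexed but not displayed:", label)
```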

There are knowledge organisation systems that are very old and developed in times very different from ours. We urgently need to restructure them. It’s always a balancing act if you want to represent texts from earlier periods with the structures that were in use at the time: I must neither falsify the historical context nor offend anyone who searches in these texts and wants to feel represented, or at least not discriminated against. This is a very difficult question, particularly in libraries. People often think that’s not an issue for libraries, that it’s only relevant in politics or that sort of thing. But on the contrary, libraries reflect the times in which they exist, and rightly so.

Frank Seeliger: Everything that can be used can also be misused. This applies to every object. For example, I was very impressed by what I saw in Turkey. They are working with a large-scale Koha approach (library software), meaning that more than 1,000 public libraries are using the open source solution Koha as their library management software. They therefore know, among other things, which book is borrowed most often in Turkey. We have no such information in Germany via the German Library Statistics (DBS, German). This doesn’t mean that this knowledge discredits the other books, as if they were automatically “leftovers”. You can do a lot with knowledge. The bias that exists with AI is certainly the best-known example, but the same applies to all information: should monuments be pulled down or left standing? We need to find a path through the various moral phases that we live through as a society.

In my own studies, I specialised in pre-Columbian America. To name one example, the Aztecs never referred to themselves as Aztecs. If you searched in library catalogues from before 1763, the term “Aztec” did not exist; they called themselves Mexi‘ca. Or take the Kerensky Offensive, on which search engines do not have much to offer: it was a military offensive that was only given that name afterwards; it used to be called something else. It is the same challenge: to refer to both terms, even if the terminology has changed or it is no longer “en vogue” to work with a certain term.

Anna Kasprzik: This is also called concept drift, and it is generally a big problem. It’s why you always have to retrain the machines: concepts are continually developing, new ones emerge or old terms change their meaning. Even if there is no discrimination involved, terminology is constantly evolving.

And who does this work?

Anna Kasprzik: The machine learning experts at the institution.

Frank Seeliger: The respective zeitgeist and its intended structure.

Thank you for the interview, Anna and Frank.

Part 1 of the interview on “AI in Academic Libraries” is about areas of activity, the big players and the automation of indexing.
Part 3 of the interview on “AI in Academic Libraries” focuses on prerequisites and conditions for successful use.
We will share the link here as soon as the post is published.

This text has been translated from German.

We were talking to:

Dr Anna Kasprzik, coordinator of the automation of subject indexing (AutoSE) at the ZBW – Leibniz Information Centre for Economics. Anna’s main focus lies on the transfer of current research results from the areas of machine learning, semantic technologies, semantic web and knowledge graphs into productive operations of subject indexing of the ZBW. You can also find Anna on Twitter and Mastodon.
Portrait: Photographer: Carola Gruebner, ZBW©

Dr Frank Seeliger (German) has been the director of the university library at the Technical University of Applied Sciences Wildau since 2006 and has been jointly responsible for the part-time programme Master of Science in Library Computer Sciences (M.Sc.) at the Wildau Institute of Technology since 2015. One module explores AI. You can find Frank on ORCID.
Portrait: TH Wildau

Featured Image: Alina Constantin / Better Images of AI / Handmade A.I / Licensed by CC-BY 4.0

INCONECSS 2022 Symposium: Artificial Intelligence, Open Access and Data Dominate the Discussions

by Anastasia Kazakova

The third INCONECSS – International Conference on Economics and Business Information – took place online from 17 to 19 May 2022. The panels and presentations focused on artificial intelligence, Open Access and (research) data. INCONECSS also addressed collaboration in designing services for economics research and education and how these may have been influenced by the corona crisis.

Unleash the future and decentralise research!

Prof. Dr Isabell Welpe, Chair of Business Administration – Strategy and Organisation at the Technical University of Munich, gave the keynote address “The next chapter for research information: decentralised, digital and disrupted”. With it, she wanted to inspire the participants to “unleash the future” and decentralise research. The first part of her presentation was about German universities. Isabell Welpe took us on a journey through three questions:

  1. What happens at universities?
  2. What does the work of students, researchers and teachers and the organisation at universities look like?
  3. How can universities and libraries be made future-proof?

In her lecture, she pointed out that hierarchically organised teaching is currently often unable to cope with the rapid social changes and new developments in the world of work. Isabell Welpe therefore suggested opening up teaching and organising it “bottom up”. This means relying on the decentralised self-organisation of students, offering (digital) spaces for exchange and tailoring teaching to their needs. Through these changes, students can learn while actively participating in research, which simultaneously promotes their creativity and agility. This is a cornerstone for disruptive innovation; that is, innovation that breaks and radically changes existing structures.

Prof. Dr Isabell Welpe, Chair of Business Administration – Strategy and Organisation at the Technical University of Munich, drawing: Karin Schliehe

Libraries could support and even drive the upcoming changes. In any case, they should prepare themselves for enormous changes due to the advancing digitisation of science. Isabell Welpe observed the trend towards “digital first” in teaching – triggered by the coronavirus situation. In the long term, this trend will influence the role of libraries as places of learning, but will also determine interactions with libraries as sources of information. Isabell Welpe therefore encouraged libraries to become a market-place in order to promote exchange, creativity and adaptability. The transformation towards this is both a task and an opportunity to make academic libraries future-proof.

In her keynote speech, Isabell Welpe also focused on the topic of decentralisation. One of the potentials of decentralisation is that scientists exchange data directly and share research data and results with each other, without, for example, publishers in between. Keywords were: Web 3.0, Crypto Sci-Hub and Decentralisation of Science.

In the Q&A session, Isabell Welpe addressed the image of libraries: they could be places where people go to do things, exchange ideas and be creative – places where innovation takes place. She sees libraries as a Web 3.0 ecosystem with different services and encouraged them to be more responsive to what users need. Her credo: “Let the users own a part of the library!”

How can libraries support researchers?

Following on from the keynote, many presentations at INCONECSS dealt with how libraries can succeed even better in supporting researchers. On the first day, Markus Herklotz and Lars Oberländer from the University of Mannheim presented their ideas on this topic with a poster (PDF, partly in German). The focus was on the interactive virtual assistant (iVA), which enables data collaboration by imparting legal knowledge. Developed by the BERD@BW and BERD@NFDI initiatives, the iVA helps researchers to understand the applicable basic data protection regulations and thereby to evaluate their legal options for data use. The self-directed assistant is an open-source learning module and can be extended.

Paola Corti from SPARC Europe introduced the ENOEL toolkit with her poster (PDF). It is a collection of templates for slides, brochures and Twitter posts to help communicate the benefits of Open Education to different user groups. The aim is to raise awareness of the importance of Open Education. It is openly designed, available in 16 language versions and can be adapted to the needs of the organisation.

On the last day of INCONECSS, Franziska Klatt from the Economics and Management Library of the TU Berlin reported in her presentation (PDF) on another toolkit, one that supports researchers in applying the Systematic Literature Review Method (SLRM). Originating in the medical field, the method was adapted to the economic context. SLRM helps researchers to reduce bias and redundancy in their work by following a formalised, transparent and reproducible process. The toolkit provides a collection of information on the stages of this process, as well as SLR sources, tutorial videos and sample articles. Using the toolkit and the information on the associated website can improve the media competence of young researchers. An online course is also planned.

Field reports: How has the pandemic changed the library world?

The coronavirus is not yet letting go of the world, which also applies to the world of the INCONECSS community: In the poster session, Scott Richard St. Louis from the Federal Reserve Bank of St. Louis presented his experiences of onboarding in a hybrid work environment. He addressed individual aspects of remote onboarding, such as getting to know new colleagues or the lack of a physical space for meetings.

The poster (PDF) is worth a look, as it contains a number of suggestions for new employees and management, e.g.:

  • “Be direct, and even vulnerable”,
  • “Be approachable” or
  • “What was once implicit or informal needs to become explicit or conscious”.

Arjun Sanyal from the Central University of Himachal Pradesh (CUHP) reported in his presentation (PDF) on a project of his library team. They observed that the long absence from campus triggered a kind of indifference towards everyday academic life and an “informational anxiety” among students. The latter manifests itself in a reluctance to use information resources for studying, out of a fear of searching for them. To counteract this, the librarians used three types of measures: mind-map sessions, an experimental makerspace and supportive motivational events. In the mind-map sessions, for example, the team collected ideas for improving library services together with the students. The effort paid off: after a while, they noticed that the campus, and the libraries in particular, were once again popular. In addition, the makerspace and motivational events helped students to rediscover the joy of learning, reports Arjun Sanyal.

Artificial Intelligence in Libraries

One of the central topics of the conference was without doubt the use of artificial intelligence (AI) in the library context. On the second day of INCONECSS, the panel participants from the fields of research, AI, libraries and thesaurus/ontology looked at aspects of the benefits of AI for libraries from different perspectives. They discussed the support of researchers through AI and the benefits for library services, but also the added value and the risks that arise through AI.

Discussion, drawing: Karin Schliehe

The panellists agreed that new doors would open up through the use of AI in libraries, such as new levels of knowledge organisation or new services and products. In this context, it was interesting to hear Osma Suominen from the National Library of Finland say that AI is not a game changer at the moment: it has the potential, but is still too immature. In the closing statements, the speakers took up this idea again: They were optimistic about the future of AI, yet a sceptical approach to this technology is appropriate. It is still a tool. According to the panellists, AI will not replace librarians or libraries, nor will it replace research processes. The latter require too much creativity for that. And in the case of libraries, a change in business concepts is conceivable, but not the replacement of the institution of the library itself.

It was interesting to observe that the topics that shaped the panel discussion kept popping up in the other presentations at the conference: Data, for example, in the form of training or evaluation data, was omnipresent. The discussants emphasised that the quality of the data is very important for AI, as it determines the quality of the results. Finding good and usable data is still complex and often related to licences, copyrights and other legal restrictions. The chatbot team from the ZBW also reported on the challenges surrounding the quality of training data in the poster session (PDF).

The question of trust in algorithms was also a major concern for the participants. On the one hand, it was about bias, which is difficult to remove from AI systems and requires great care. Again, data was the main issue: if the data is biased, it is almost impossible to remove the bias from the system. Sometimes this even leads to systems not going live at all. On the other hand, it was about trust in the results that an AI system delivers. Because AI systems are often non-transparent, it is difficult for users and information specialists to trust the search results provided by an AI system for a literature search. These are two of the key findings from the presentation (PDF) by Solveig Sandal Johnsen from AU Library, The Royal Library, and Julie Kiersgaard Lyngsfeldt from Copenhagen University Library, The Royal Library. The team from Denmark investigated two AI systems designed to assist with literature searches. The aim was to investigate the extent to which different AI-based search programmes support researchers and students in academic literature search. During the project, information specialists tested the functionality of the systems using the same search tasks. Among other results, they concluded that the systems could be useful in the exploratory phase of a search, but that they function differently from traditional systems (such as classic library catalogues or search portals like EconBiz) and, according to the presenters, challenge the skills of information specialists.

This year, the conference took place exclusively online. As the participants came from different time zones, it was possible to attend the lectures asynchronously and after the conference. A selection of recorded lectures and presentations (videos) is available on the TIB AV portal.

Links to INCONECSS 2022:

  • Programme INCONECSS
  • Interactive Virtual Assistant (iVA) – Enabling Data Collaboration by Conveying Legal Knowledge: Abstract and poster (PDF)
  • ENOEL toolkit: Open Education Benefits: Abstract and poster (PDF)
  • Systematic Literature Review – Enhancing methodology competencies of young researchers: Abstract and slides (PDF)
  • Onboarding in a Hybrid Work Environment: Questions from a Library Administrator, Answers from a New Hire: Abstract and Poster (PDF)
  • Rethinking university librarianship in the post-pandemic scenario: Abstract and slides (PDF)
  • „Potential of AI for Libraries: A new level for knowledge organization?“: Abstract Panel Discussion
  • The EconDesk Chatbot: Work in Progress Report on the Development of a Digital Assistant for Information Provision: Abstract and slides (PDF)
  • AI-powered software for literature searching: What is the potential in the context of the University Library?: Abstract and slides (PDF)

About the Author:

Anastasia Kazakova is a research associate in the department Information Provision & Access and part of the EconBiz team at the ZBW – Leibniz Information Centre for Economics. Her focus is on user research, usability and user experience design, and research-based innovation. She can also be found on LinkedIn, ResearchGate and XING.
Portrait: Photographer: Carola Grübner, ZBW©

Open Access: Is It Fostering Epistemic Injustice?

by Nicki Lisa Cole and Thomas Klebel

One of the key aims of Open Science is to foster equity with transparent, participatory and collaborative processes and by providing access to research materials and outputs. Yet the academic context in which Open Science operates is unequal. Core-periphery dynamics are present, with researchers from the Global North dominating authorship and collaborative research networks. Sexism is present, with women experiencing underrepresentation within academia (see also) and especially within senior career positions (PDF); and racism manifests within academia, with white people being over-represented among higher education faculty. Inequality is the water in which we swim; we therefore cannot be naive about the promises of Open Science.

In light of this reality, the ON-MERRIT project set out to investigate whether Open Science policies actually worsen existing inequalities by creating cumulative advantage for already privileged actors. We investigated this question within the contexts of academia, industry and policy. We found that, indeed, some manifestations of Open Science are fostering cumulative advantage and disadvantage in a variety of ways, including epistemic injustice.

Miranda Fricker defines epistemic injustice in two ways. She explains that testimonial injustice “occurs when prejudice causes a hearer to give a deflated level of credibility to a speaker’s word,” while hermeneutical injustice “occurs at a prior stage, when a gap in collective interpretive resources puts someone at an unfair disadvantage when it comes to making sense of their social experiences”. Here, we take a look at ways in which Open Access (OA) publishing, as it currently operates, is fostering both kinds of epistemic injustice.

APCs and the stratification of OA publishing

Research shows that article processing charges (APCs) lead to unequal opportunities for researchers to participate in Open Access publishing. The likelihood of US researchers publishing OA, especially when APCs are involved, is higher for male researchers from prestigious institutions who have received federal grant funding. Similarly, APCs are associated with lower geographic diversity of authors within journals, suggesting that they act as a barrier for researchers from the Global South in particular. In our own research, specifically investigating the role of institutional resources, we found that authors from well-resourced institutions both publish and cite more Open Access literature and, in particular, publish in journals with higher APCs than authors from less-resourced institutions. Disparities in policies that promote and fund OA publication are likely a significant driver of these trends.

While these policies are obviously helpful to those who benefit from them, they are reproducing existing structural inequalities within academia, by fuelling cumulative advantages of already privileged actors, and further side-lining the voices of those with fewer resources. This form of testimonial injustice is historically rooted and widespread within academia, with research from the Global South often deemed less relevant and less credible (see also). With the rise of APC-based Open Access, actors with fewer resources face additional barriers to contributing to the most recognised outlets hosting scientific knowledge, since journal prestige and APC amounts have been found to be moderately correlated. Given that scientific research is expected to aid in tackling urgent societal challenges, it is alarming that current trends in scholarly communications are exacerbating the marginalisation of research and knowledge from the Global South and from less-resourced scholars more generally.

Access Isn’t Enough

One of the arguments in support of Open Access is that it fosters greater use of science by societal actors. This is a commonly cited refrain in the literature, but we found that OA has virtually no impact in this way. Rather, we heard from policy-makers that they rely on existing personal relationships with researchers and other experts when they seek expert advice. Moreover, we heard from researchers that when disseminating research to lay audiences, it is far more important that scientific outputs be cognitively accessible, i.e. understandable.

Communicating scientific results to lay audiences requires time, resources, and a particular skill set, and failing to account for this reality limits the pool of actors able to do it (to those already well-resourced and ‘at the table’) and inhibits the potential for science to impact policy-making and to be useful to impacted communities. In this way, Open Access absent understandability creates hermeneutical injustice among any population that would benefit from understanding research and how it impacts their lives, but especially among those who are marginalised, who may have participated in research or been the subjects of study, and to whom the outcomes of research could provide a direct benefit. People cannot advocate for their rights and for their communities if they are not provided with the tools to understand social, environmental and economic problems and possible solutions. In this way, the concept of Open Access must go beyond removing a paywall to readership and provide understandability, aligning with the “right to research”, as articulated by Arjun Appadurai.

What We Can Do About It

In response to these and other equity issues within Open Science, the ON-MERRIT team worked with a diverse stakeholder community from across the EU and beyond to co-create actionable recommendations aimed at funders, leaders of research institutions, and researchers. We produced and published 30 consensus-based recommendations, and here we spotlight a few that can respond to epistemic injustice and that may be actionable by libraries.

  • Supporting alternative, more inclusive publishing models without author-facing charges and the use of sustainable, shared and Open Source publishing infrastructure could help to ameliorate the inequitable stratification of Open Access publishing.
  • Supporting researchers to create more open and understandable outputs, including in local languages when appropriate, could help to ameliorate the hermeneutical injustice that results from the inaccessibility of academic language. In conjunction, supporting partnerships with other societal actors in the translation and dissemination of understandable research findings could also help to achieve this.
  • We believe that librarians could be especially helpful by supporting (open and sustainable) infrastructure that enables the findability and understandability of research by lay audiences.

Visit our project website to learn more about ON-MERRIT and our results, and click here to read our full recommendations briefing.

About the Authors:

Nicki Lisa Cole, PhD is a Senior Researcher at Know-Center and a member of the Open and Reproducible Research Group. She is a sociologist with a research focus on issues of equity in the transition to Open and Responsible Research and Innovation. She was a contributor across multiple work packages within ON-MERRIT. You can find her on ORCID, ResearchGate and LinkedIn.
Portrait: Nicki Lisa Cole: Copyright private, Photographer: Thomas Klebel

Thomas Klebel, MA is a member of the Open and Reproducible Research Group and a Researcher at Know-Center. He is a sociologist with a research focus on scholarly communication and reproducible research. He was project manager of ON-MERRIT, as well as investigating Open Access publishing, and opinions and policies on promotion, review and tenure. You can find him on Twitter, ORCID and LinkedIn.
Portrait: Thomas Klebel: Copyright private, Photographer: Stefan Reichmann©
