SPARC Europe organised a face-to-face event for The Council of National Open Science Coordinators (CoNOSC) in June 2022 at the inspirational Delft University of Technology. National policymakers from over ten countries […]
Interview with Frank Seeliger (TH Wildau) and Anna Kasprzik (ZBW)
We recently had a long talk with experts Anna Kasprzik (ZBW – Leibniz Information Centre for Economics) and Frank Seeliger (Technical University of Applied Sciences Wildau – TH Wildau) about the use of artificial intelligence in academic libraries. The occasion: Both of them were involved in two wide-ranging articles: “On the promising use of AI in libraries: Discussion stage of a white paper in progress – part 1” (German) and “part 2” (German).
In their day-to-day work, both have a close connection to and great interest in the use of AI at infrastructure institutions and libraries. Dr Frank Seeliger is the director of the university library at the TH Wildau and is jointly responsible for the part-time programme Master of Science in Library Computer Sciences (M.Sc.) at the Wildau Institute of Technology. Anna Kasprzik is the coordinator of the automation of subject indexing (AutoSE) at the ZBW.
This slightly shortened, three-part series emerged from our spoken interview. The other two articles in the series are:
- Part 1: Areas of Activity, Big Players and the Automation of Indexing
- Part 2: Interesting Projects, the Future of Chatbots and Discrimination Through AI
What are the basic prerequisites for the successful and sustainable use of AI at academic libraries and information institutions?
Anna Kasprzik: I have a very clear opinion here and have already written several articles about it. For years, I have been fighting for the necessary resources and I would say that we have manoeuvred ourselves into a really good starting position by now, even if we are not out of the woods yet. The main issue for me is commitment – right up to the level of decision makers. I’ve developed an allergy to the “project” format. Decision makers often say things like, “Oh yes, we should also do something with AI. Let’s do a project, then a working service will develop from it and that’s it.” But it’s not that easy. Things that are developed as projects tend to disappear without a trace in most cases.
We also had a forerunner project at the ZBW. We deliberately raised it to the status of a long-term commitment together with the management. We realised that automation with machine learning methods is a long-term endeavour. This commitment was essential. It was an important change of strategy. We have a team of three people here and I coordinate the whole thing. There’s a doctoral position for a scientific employee who is carrying out applied research, i.e. research that is very much focused on practice. When we received this long-term commitment status, we started a pilot phase. In this pilot phase, we recruited an additional software architect. We therefore have three positions for this, which correspond to three roles and I regard all three of them as very important.
The ZBW has also purchased a lot of hardware because machine learning experiments require serious computing power. We have then started to develop the corresponding software infrastructure. This system is already productive, but will be continually developed based on the results of our in-house applied research. What I’m trying to say is this: the commitment is important and the resources must reflect this commitment.
Frank Seeliger: This is naturally the answer of a Leibniz institution that is well endowed with research professors. Apart from some national and state libraries and larger institutions, however, this is usually difficult to achieve. Most libraries have neither a corresponding research mandate nor the personnel resources to finance such projects on a long-term basis. Nevertheless, there are technologies that smaller institutions also need to invest in, such as cloud-based services or infrastructure as a service. But they need to commit to this beyond the project phases. A long-term commitment of this kind is anchored in the Agenda 2025/30, within the context of the automation that is coming anyway. The coronavirus pandemic in particular gave this a boost, when people saw how well things can function even when they take place online. What matters is that people regard this as a task and seek out information about it accordingly. The mandate is to explore the technology deliberately. Only in this way can people at working or management level see not only the degree of investment required, but also what successes they can expect.
But libraries are not the only institutions that have begun, in the last ten years or so, to explore the topic of AI. They are comparable with small and medium-sized businesses, or with other public institutions that deal with the Online Access Act and similar issues. These are also exploring such algorithms, and libraries can find allies among them. This is very important because many of the measures, particularly those at the level of the German federal states, were not necessarily designed with libraries in mind with respect to the distribution of AI tasks or funding.
That’s why we also intended our publication (German) as a political paper: political in the sense of informing politicians and decision-makers about funding possibilities, and about the framework we need in order to apply for them. Only then can we test things, decide whether we want to use indexing or other tools such as language tools permanently in the library world, and network with other organisations.
The task for smaller libraries that cannot maintain research groups of their own is definitely to explore the technology and to develop their position for the next five to ten years. This requires counterpoints to what is commonly covered by search engines and platforms such as Wikipedia. Especially as libraries have a completely different lifespan than companies, in terms of their way of thinking and sustainability: libraries are designed to last as long as the state or the university exists. Our lifecycles are therefore measured differently, and we need to position ourselves accordingly.
Not all libraries and infrastructure institutions have the capacity to develop a comprehensive AI department with corresponding personnel. So does it make sense to bundle competences and use synergy effects?
Anna Kasprzik: Yes and no. We are in touch with other institutions such as the German National Library. Our scientific employee and developer is working on the further development of the Finnish toolkit Annif with colleagues from the National Library of Finland, for example. The toolkit is also interesting for many other institutions to use directly. I think it’s very good to exchange ideas, also regarding our experiences with toolkits such as this one.
However, I discover time and again that there are limits to this when I advise other institutions; for example, just last week I advised some representatives from Swiss libraries. You can’t do everything for the other institutions. If they want to use these instruments, institutions have to train them on their own data. You can’t just train the models and then transplant them one-to-one into other institutions. For sure, we can exchange ideas, give support and try to develop central hubs where at least structures or computing power resources are provided. However, nothing will be developed in this kind of hub that is an off-the-shelf solution for everyone. This is not how machine learning works.
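As a concrete illustration of the institution-specific setup Anna describes, here is a minimal sketch of how a library might query a locally trained Annif instance over its REST API. The base URL, project ID and score threshold are assumptions for illustration only; they depend entirely on the local installation and on the vocabulary the model was trained on.

```python
import json
from urllib import parse, request

# Hypothetical local Annif instance and project ID -- both depend on
# the institution's own installation and its own training data.
ANNIF_URL = "http://localhost:5000/v1"
PROJECT_ID = "stw-en"  # e.g. a project trained against the STW thesaurus

def filter_suggestions(suggestions, threshold=0.2):
    """Keep only subject suggestions whose confidence score reaches
    the (locally tuned) threshold."""
    return [s for s in suggestions if s.get("score", 0.0) >= threshold]

def suggest_subjects(text):
    """Ask Annif for subject suggestions for a piece of text.
    The suggest endpoint returns JSON with a 'results' list of
    {uri, label, score} entries."""
    data = parse.urlencode({"text": text}).encode()
    url = f"{ANNIF_URL}/projects/{PROJECT_ID}/suggest"
    with request.urlopen(url, data=data, timeout=10) as resp:
        payload = json.load(resp)
    return filter_suggestions(payload["results"])

if __name__ == "__main__":
    for s in suggest_subjects("Monetary policy responses to inflation"):
        print(s["label"], round(s["score"], 3))
```

The point of the threshold function is exactly the one made in the interview: what counts as a usable suggestion is a local decision, tuned against the institution’s own indexed corpus, and cannot simply be copied from another library.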
Frank Seeliger: The library landscape in Germany is like a settlement and not like a skyscraper. In the past, there was a German library institute (DBI) that tried to bundle many matters in the academic libraries in Germany across all sectors. This kind of central unit no longer exists, merely several library groups relating to institutions and library associations relating to personnel. So a central library structure that could take on the topic of AI doesn’t exist. There was an RFID working group (German) (or also Special Interest Group RFID at the IFLA), and there should actually also be a working group for robots (German), but of course someone has to do it, usually alongside their actual job.
In any case, there is no central library infrastructure that could take up this kind of topic as a lobby organisation, such as Bitkom, and break it down into the individual companies. The route that we are pursuing is broadly based. This is related to the fact that we operate in very different ways in the different German federal states, owing to the relationship between national government and federal states. The latter have sovereignty in many areas, meaning that we have to work together on a project basis. It will be important to locate cooperation partners and not try to work alone, because it is simply too much. There is definitely not going to be a central contact point. The German Research Center for Artificial Intelligence (DFKI) does not have libraries on its radar either. There’s no one to call. Everything is going to run on a case-by-case and interest-related basis.
How do you find the right cooperation partners?
Frank Seeliger: That’s why there are library congresses where people can discuss issues. Someone gives a presentation about something they have done and then other people are interested: they get together, write applications for third-party funding or articles together, or try to organise a conference themselves. Such conferences already exist, and thus a certain structure of exchange has been established.
I am the conservative type. I read articles in library journals, listen to conference news or attend congresses. That’s where you have the informal exchange – you meet other people. Alongside social media, which is also important. But if you don’t reach people via the social media channels, then there is (hopefully soon to return) physical exchange on site via certain section days, for example. Next week we have another Section IV meeting of the German Library Association (DBV) in Dresden where 100 people will get together. The chances of finding colleagues who have similar issues or are dealing with a similar topic are high. Then you can exchange ideas – the traditional way.
Anna Kasprzik: But there are also smaller workshops for specialists. For example, the German National Library has been organising a specialist congress of the network for automated subject indexing (German) (FNMVE) for those who are interested in automated approaches to subject indexing.
I also enjoy networking via social media. You can also find most people who are active in the field on the internet, e.g. on Twitter or Mastodon. I started using Twitter in 2016 and deliberately developed my account by following people with an interest in semantic web technologies. These are individuals, but they represent an entire network. I can’t name individual institutions; what is relevant are individual community members.
And how did you get to know each other? I’m referring to the working group that compiled this non-white paper.
Anna Kasprzik: It’s all Frank’s fault.
Frank Seeliger: Anna came here once. I had invited Mr Puppe in the context of a digitalisation project in which AI methods supported optical character recognition (OCR) and image identification of historical works. Exactly via the traditional route that I’ve just described, i.e. via a symposium; this was how the first people were invited.
Then the need to position ourselves on this topic developed. I had spoken with a colleague from the Netherlands at a conference shortly before. He said that they had been too late with their AI white paper, meaning that politics had not taken them into account and libraries had not received any special funding for AI tools. That was the wake-up call for me and I thought, here in Germany there is also nothing I am aware of that is specifically for information institutions. I then researched who had publications on the topic. That’s how the network, which is still active, developed. We are working on the English translation at the moment.
What is your plea to the management of information institutions? At the beginning, Anna, you already spoke about commitment, also from “the very top”, being a crucial factor. But going beyond this: what course needs to be set now and which resources need to be built up, to ensure that libraries don’t lose out in the age of AI?
Anna Kasprzik: For institutions that can, it’s important to develop long-term expertise. But I completely understand Frank’s point of view: it is valid to say that not every institution can afford this. So two aspects are important for me. One is to cluster expertise and resources at certain central institutions. The other is to develop communication structures across institutions, or to share a cloud structure or something similar: to create a network that spreads expertise around and enables dissemination, i.e. the sharing of these experiences for reuse.
Frank Seeliger: Perhaps there is a third aspect: to reflect on the business process that you are responsible for so that you can identify whether it is suitable for an AI-supported automation, for example. To reflect on this yourself, but to encourage your colleagues to reflect on their own workflows too, as to whether routine tasks can be taken over by machines and thereby relieve them of some of the workload. For example, in our library association, the Kooperativer Bibliotheksverbund Berlin-Brandenburg (KOBV), we had the problem that we would have liked to set up a lab. Not only to play, but also to see together how we can technically support tasks that are really very close to real life. I don’t want to say that the project failed, but the problem was that first you needed the ideas: What can you actually tackle with AI? What requires a lot of time? Is it the indexing? Other work processes that are done over and over again like a routine with a high degree of similarity? We wanted the lab to look at exactly these processes and check if we could automate them, independently of what library management systems do or all the other tools with which we work.
It’s important to initiate the process of self-reflection on automation and digitalisation in order to identify fields of work. Some have expertise in AI, others in their own fields, and they have to come together. The path leads through one’s own reflection to enter into conversation and to sound out whether solutions can be found.
And to what extent can the management support?
Frank Seeliger: Leadership is about bringing people together and giving impetus. The coronavirus pandemic and digitalisation have put a lot of pressure on many people. There is a saying by Angela Merkel: she once said that she only got around to thinking during the Christmas period; however you want to interpret that now. Out of habit, and because you want to clear the pile of work on your desk during working hours, it’s often difficult to reflect on what you are doing and whether there isn’t already a tool that could help. Then it’s the task of the management level to look at these processes and, where appropriate, to say: yes, maybe the person could be helped with this. Let’s organise a project and take a closer look.
Anna Kasprzik: Yes, that’s one of the tasks, but for me the role of management is above all to take the load off the employees and clear a path for them. This brings another buzzword into play: agile working. It’s not only about giving an impetus, but also about supporting people by giving them some leeway so that they can work in a self-dependent manner. The agile manifesto, so to speak, which also leads to the fact that one creates space for experimenting and allows for failure sometimes. Otherwise, nothing will come to fruition.
Frank Seeliger: We will soon be doing a “Best of Failure” survey, because we want to ask what kind of error culture we really have, as the topic is so often treated as sacrosanct. This will also be the topic of the Wildau Library Symposium (German) from 13 to 14 September 2022, where we will explore this error culture more intensively. And it’s true: even in IT projects, you simply have to allow things to go wrong. Of course, they don’t have to be taken on as a permanent task if they don’t go well. But sometimes it’s good to just try, because you can’t predict whether a service will be accepted or not. What do we learn from these mistakes? We talk about this relatively little; mostly we talk about successful projects that go well and attract crazy amounts of funding. But the other part also has to come into focus in order to learn from it and be able to utilise aspects of it for the next project.
Is there anything else that you would like to say at the end?
Frank Seeliger: AI is not just a task for large institutions.
Anna Kasprzik: Exactly, AI concerns everyone. Even though AI should not be dealt with just for the sake of AI, but rather to develop new innovative services that would otherwise not be possible.
Frank Seeliger: There are naturally other topics, no question about that. But you have to address it and sort out the various topics.
Anna Kasprzik: It’s important that we get the message across to people that automated approaches should not be regarded as a threat. This digital jungle exists anyway, so we need tools to find our way through it. AI therefore represents new potential and added value, not a threat that will be used to eliminate people’s jobs.
Frank Seeliger: We have also been asked the question: What is the added value of automation? Of course, you spend less time on routine processes that are very manual. This creates scope to explore new technologies, to do advanced training or to have more time for customers. And we need this scope to develop new services. You simply have to create that scope, also for agile project management, so that you don’t spend 100% of your time clearing some pile of work or other from your desks, but can instead use 20% for something new. AI can help give us this time.
Thank you for the interview, Anna and Frank.
This might also interest you:
- On the Promising Use of AI in Libraries: Discussion Status of a White Paper in Progress – Part 1 and Part 2 (both in German)
- Establishment of a Productive Service for Automated Content Indexing at the ZBW – a Status and Experience Report (German), presentation slides from the Library Congress in Leipzig
- Get everybody on board and get going – the automation of subject indexing at ZBW (PDF), presentation slides by Anna Kasprzik from the IFLA Satellite Event „New Horizons on Artificial Intelligence in Libraries“
- Automating subject indexing at ZBW – the costs of the digital transformation and why we need less projects, presentation slides from the LIBER Conference 2022
- 14th Wildau Library Symposium on 13 & 14 September 2022, including “Best of Failure” (German)
- On the working focus “Automation of subject indexing using methods from artificial intelligence“ of the ZBW
- Discrimination Through AI: To What Extent Libraries are Affected and how Staff Can Find the Right Mindset
- Libraries: Books in Clouds (German)
- Artificial Intelligence: New Opportunities for Citizen Science, Research and Libraries
- Science Checker: Open Access and Artificial Intelligence Help Verify Claims
- Digital Open Science Tools: How to Achieve More Openness Through an Inclusive Design
- Horizon Report 2020: AI, XR and OER Begin to Catch on in Higher Education
- DGI Day Practicum: How Does Artificial Intelligence Change the World of Information Professionals?
- Hackathon Coding.Waterkant: How to Improve Library Services Through Chatbots and Artificial Intelligence
- Revival of Chatbots: Revolution of Customer Contact, Disruption of the Software Market?
- Google Engineer Put on Leave After Saying AI Chatbot has Become Sentient
Dr Anna Kasprzik, coordinator of the automation of subject indexing (AutoSE) at the ZBW – Leibniz Information Centre for Economics. Anna’s main focus lies on the transfer of current research results from the areas of machine learning, semantic technologies, semantic web and knowledge graphs into productive operations of subject indexing of the ZBW. You can also find Anna on Twitter and Mastodon.
Portrait: Photographer: Carola Gruebner, ZBW©
Dr Frank Seeliger (German) has been the director of the university library at the Technical University of Applied Sciences Wildau since 2006 and has been jointly responsible for the part-time programme Master of Science in Library Computer Sciences (M.Sc.) at the Wildau Institute of Technology since 2015. One module explores AI. You can find Frank on ORCID.
Portrait: TH Wildau
The post AI in Academic Libraries, Part 3: Prerequisites and Conditions for Successful Use first appeared on ZBW MediaTalk.
Awards are coming your way!
Arts and humanities researchers tend to be multitasking heroes and versatility buffs. This is probably not a matter of choice. Whether we work on digital editions of literary works, analyze historical events by creating and exploiting corpora of digitized newspapers, or model archaeological sites in 3D, our research processes are often quite complex: they involve multiple steps, different tools and a combination of methods. We are no strangers to heterogeneous datasets, modular system architectures, metadata crosswalks and software pipelines. And we are increasingly aware of the importance of data sharing and the notion of reproducible research in the age of Open Science. A scholarly process may start with identifying and collecting data and end with the publication of some research outputs, but the very beginning and the very end never tell the full story of the research data lifecycle.
In this year’s DARIAH Theme Call, we are looking for proposals and projects that will explore, assess, analyze and embody the challenges of designing, implementing, documenting and sharing digitally-enabled workflows in the context of arts and humanities research from a technical, methodological, infrastructural and conceptual point of view.
What is the state of the art in research workflows in the digital arts and humanities? What are we doing well, and what should we do better? How can we evaluate the appropriateness of a workflow or assess its efficiency? What makes a workflow innovative? What does it mean for a workflow to be truly reproducible? Are there modeling or standardization frameworks that make this job easier? What kind of documentation is necessary and at what level of granularity? What are the hidden costs of our workflows? What should DARIAH do – in addition to treating workflows as a particular content type on the SSH Open Marketplace – to help researchers develop, deploy and disseminate better workflows?
This position is responsible for facilitating community consultation and engagement for the international Open Access eBook Usage (OAeBU) Data Trust effort, funded initially through the Andrew W. Mellon Foundation-funded project, “OAeBU Data Trust: Advancing to Launch by Developing IDS Governance Building Blocks.” This project is a collaboration led by the University of North Texas, with co-PIs from OPERAS, OpenAIRE, and Educopia Institute. This position will work under the supervision of the Canadian-American Executive Director of the OAeBU Data Trust effort to develop and manage mechanisms to engage community partners and solicit community input for the work-packages and projects related to the global OAeBU Data Trust effort. Based in Europe to provide the Data Trust with increased staff capacity to attend meetings within the Eastern Hemisphere, the position will be staffed through the OPERAS international not-for-profit association (AISBL).
As the second of two full-time positions working for the Data Trust, this individual will be responsible for developing and managing engagement strategies for OA book usage metrics stakeholder constituencies. This position is highly international and interdisciplinary in scope; the manager must have a positive record of communicating and engaging professionally with commercial, academic, and non-profit audiences worldwide. The individual recruited for this position must also have professional experience in scholarly communication and must be a reliable, independent worker who appreciates the importance of open access policies to global knowledge distribution.
CLARIN ERIC, the central organisation of the European Research Infrastructure for language resources, has a vacancy for an experienced management assistant to work as member of the central organisation. We are looking to hire somebody who feels at home in an (international) scientific environment, who is a team player but also able to work independently and who is interested in joining the team that is responsible for the operational and administrative aspects of the central organisation of CLARIN.
Activities & tasks include (all or part of the following, depending on the eventual size of the appointment):
running general front office tasks, such as adding content to the website, handling emails arriving in the general mailbox (e.g., answering questions, forwarding specific requests to the relevant CLARIN representative(s) or other staff members);
preparing the logistics of virtual and physical meetings;
collecting information for management overviews and memos, and digital archiving;
managing mailing lists and registrations;
supporting committee meetings (agenda preparation, minute taking, etc.);
supporting the layout and production of printed material.
Our team works in a hybrid office with short lines of communication and flexible working hours, and the possibility to work remotely for short periods of time (in mutual agreement).
Wikimedia UK is seeking a Head of Finance and Operations.
The overall purpose of the role is to lead the finance and operational function of Wikimedia UK. Reporting to the Chief Executive, you will work closely with other members of the Senior Management Team and the Honorary Treasurer, and line-manage the Finance and Operations Coordinator.
Dear OABN members,
On behalf of the H2020 project reCreating Europe and COMMUNIA, we would like to invite the OABN community to the expert workshop on “Copyright Flexibilities: mapping, explaining, empowering”, which will be held in a hybrid format at the Institute for Information Law (IViR), University of Amsterdam and online (Zoom) on 21 September 2022, from 9:00 to 17:00 CEST.
The workshop will bring together the core research teams that have developed three websites/databases devoted to users’ rights and copyright flexibilities (www.copyrightexceptions.eu, www.copyrightflexibilities.eu and www.copyrightuser.eu), national copyright experts who contributed to the mapping, and stakeholders representing various groups of beneficiaries.
The aims of the workshop are (i) to launch the three platforms, gather feedback on their functionalities and plan their future; (ii) to discuss the state of copyright flexibilities and necessary policy actions at the EU and national level, with three expert panels on (a) teaching and research, (b) freedom of expression and (c) cultural uses and preservation; and (iii) to present and test reCreating Europe’s best practices on copyright flexibilities with interested stakeholders.
We would be delighted if OABN members would join us for the entire event or any panel of interest, and particularly to discuss reCreating Europe’s best practices at the dedicated roundtable that will take place after the lunch break (ca. 14.30 CEST). Please note that the event is not restricted to citizens of EU member countries.
“One of the big advantages of citizen science is the fact that it promotes open data practices. In this way, the approach contributes to science innovation by opening science up to society and advancing collaborations between various actors, including citizens, which helps to make science more participatory and inclusive….”
“In spring 2020, COPIM Work Package 3 started work on devising a new revenue model for university presses and open access books. Through a series of fact-finding meetings, workshops and reports the team gathered lots of information on the business models of scholarly presses with the aim of creating a sustainable revenue stream that would allow presses to publish their books openly, without using unaffordable book processing charges.
That research led to us devising and launching an innovative revenue model called Opening the Future in October 2020 with our first partner publisher Central European University (CEU) Press. In essence, it is a library subscription membership programme whereby the press provides term access to portions of their (closed) backlist books at a special price, and then uses the revenue from members’ subscriptions to allow the frontlist to be OA from the date of publication. This model presents a potential route for the mass and sustainable transition to OA of many small-to-mid sized university presses. Liverpool University Press (LUP) joined as our second project partner with their own Opening the Future initiative in June 2021. The programme is proving to be a success and, to date, the two presses have together accrued enough library funding to produce 10+ new OA monographs. Opening the Future continues to grow with both publishers. …”
“ChemRxiv was launched on August 15, 2017 to provide researchers in chemistry and related fields a home for the immediate sharing of their latest research. In the past five years, ChemRxiv has grown into the premier preprint server for the chemical sciences, with a global audience and a wide array of scholarly content that helps advance science more rapidly. On the service’s fifth anniversary, we would like to reflect on the past five years and take a look at what is next for ChemRxiv.”
“Public discussion of the Nelson Memo has already begun on the listservs, and various stakeholder groups including the Association of Research Libraries, SPARC, and the Association of American Publishers have issued public responses. But many questions remain to be answered. Among them:
Where the Holdren Memo laid out requirements, the Nelson Memo seems only to advance suggestions. Is this difference in language meaningful in practice?
What is the nature of the OSTP’s statutory authority over federal agencies? If an agency opts not to follow the terms laid out in the Nelson Memo, will there be any consequences? If so, what would they be?
Does the Nelson Memo wholly supersede the Holdren Memo, or are terms of the latter not directly altered by those of the Nelson Memo – such as, for example, the requirement that agencies incorporate in their plans a “strategy… for fostering public-private partnerships with scientific journals relevant to the agency’s research” – still in force?…”
“A scenario that has been on the minds of publishers over the past decade (and incorporated into strategic planning scenarios by many publishers) is the possibility of “zero embargo.” In 2013, the White House Office of Science and Technology Policy (OSTP) issued policy guidance to agencies in the form of the OSTP memorandum on “Increasing Access to the Results of Federally Funded Scientific Research” (the “2013 Memorandum,” also widely referred to as the “Holdren Memo” because it was issued by John Holdren, at the time the Director of OSTP). The Holdren Memo directed federal agencies in the US with annual research and development budgets of more than $100 million to develop access policies to ensure public access to federally funded research. While the Holdren Memo provided wide latitude to agencies on many of the specifics, the memo put forth a 12-month post-publication embargo period as a guideline. By “post-publication embargo period,” the Holdren Memo was referring to the period between publication of an article resulting from funded research in a journal and the freely accessible public release of that journal article in the form of either the author accepted manuscript (AAM) or the final published version of record (VOR).
OSTP is an office of the White House and as such sets policy on behalf of the US President. The US federal agencies—including the National Institutes of Health (NIH), Department of Energy (DOE), Department of Defense (DOD), National Science Foundation (NSF), Department of Agriculture (USDA), National Aeronautics and Space Administration (NASA), and so on—are part of the Executive Branch and therefore under White House oversight. Any OSTP policy can be revised by a subsequent administration, and one possibility has always been that the 12-month post-publication embargo could be shortened, potentially to zero. Indeed, such a scenario almost occurred during the Trump administration when such a memorandum was drafted, though it was never ultimately issued.
Rumors have been circulating for months that the Biden administration has been reviewing the Holdren Memo as part of a wider review of open science policy. Last week, Alondra Nelson (currently heading OSTP) issued a memorandum titled “Ensuring Free, Immediate, and Equitable Access to Federally Funded Research” (the “2022 Memorandum” or the “Nelson Memo”). The Nelson Memo is accompanied by an impact statement titled “Economic Landscape of Federal Public Access Policy” (the “2022 Impact Statement”), which was submitted to Congress pursuant to the Consolidated Appropriations Act, 2022. The 2022 Memorandum directs federal agencies to develop policies that will require free public release of research articles upon publication, and that all supporting research data behind the articles be similarly made immediately and freely available.
The zero embargo scenario has arrived….”
University of Twente and the ING Group have put their signatures to a five-year collaboration agreement covering artificial intelligence and data science in the financial world. The partnership was celebrated at FinanceCom 2022, a leading-edge conference hosted by UT. It marks the first time that this international congress in finance and fintech has been held in the Netherlands.
“The FAIR data principles are rapidly becoming a standard through which to assess responsible and reproducible research. In contrast to the requirements associated with the Interoperability principle, the requirements associated with the Accessibility principle are often assumed to be relatively straightforward to implement. Indeed, a variety of different tools assessing FAIR rely on the data being deposited in a trustworthy digital repository. In this paper we note that there is an implicit assumption that access to a repository is independent of where the user is geographically located. Using a virtual private network (VPN) service we find that access to a set of web sites that underpin Open Science is variable from a set of 14 countries; either through connectivity issues (i.e., connections to download HTML being dropped) or through direct blocking (i.e., web servers sending 403 error codes). Many of the countries included in this study are already marginalized from Open Science discussions due to political issues or infrastructural challenges. This study clearly indicates that access to FAIR data resources is influenced by a range of geo-political factors. Given the volatile nature of politics and the slow pace of infrastructural investment, this is likely to continue to be an issue and indeed may grow. We propose that it is essential for discussions and implementations of FAIR to include awareness of these issues of accessibility. Without this awareness, the expansion of FAIR data may unintentionally reinforce current access inequities and research inequalities around the globe.”
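The measurement described in the abstract can be approximated with a short script: request each resource and record whether the server answers normally, refuses the request outright (e.g. HTTP 403), or drops the connection. This is only a sketch under stated assumptions: the URL list is a placeholder, and the study itself additionally routed requests through VPN exit nodes in 14 countries, which this sketch does not reproduce.

```python
import urllib.error
import urllib.request

def classify_status(status):
    """Map an HTTP status (or None for a dropped/failed connection)
    onto the access categories described in the abstract."""
    if status is None:
        return "connection failed"
    if status == 403:
        return "blocked"            # server explicitly refuses the request
    if 200 <= status < 300:
        return "accessible"
    return f"other ({status})"

def probe(url, timeout=10):
    """Fetch a URL and return its access category."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as e:
        return classify_status(e.code)   # e.g. 403 -> "blocked"
    except (urllib.error.URLError, OSError):
        return classify_status(None)     # dropped connection, DNS failure, ...

if __name__ == "__main__":
    # Placeholder list -- the study probed a larger set of Open Science sites.
    for url in ["https://zenodo.org", "https://orcid.org"]:
        print(url, "->", probe(url))
```

Running such a probe from different network locations is what surfaces the geographic variability the paper reports: the same URL can classify as “accessible” from one country and “blocked” or “connection failed” from another.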