SPARC Europe to facilitate high-level European OS policymaker group CoNOSC

SPARC Europe is honoured to support the Council for National Open Science Coordination (CoNOSC) in their efforts to advance national European Open Science policies. The CoNOSC mission is to help countries […]

The post SPARC Europe to facilitate high-level European OS policymaker group CoNOSC appeared first on SPARC Europe.

European Open Science Cloud: small projects, big plans and 1 billion EUR

by Claudia Sittner

Prof. Dr Klaus Tochtermann is Director of the ZBW – Leibniz Information Centre for Economics, Member of the German Council for Scientific Information Infrastructures (RfII) and board member of the recently established European Open Science Cloud Association (EOSC Association). He was a member of the EOSC’s High Level Expert Group and the EOSC working group for sustainability for many years. He also founded, in 2012, the Leibniz Research Alliance Open Science, the international Open Science Conference and the associated Barcamp Open Science.

Recently, he was interviewed by host Dr Doreen Siegfried (ZBW) in the ZBW podcast “The Future is Open Science” on the future of the European Open Science Cloud and the complexity of the landscape for research data. This blog post is a shortened version of the podcast episode “European Open Science Cloud – Internet of FAIR Data and Services” with Klaus Tochtermann. You can listen to the entire episode (35 minutes) here (German).

Why the name European Open Science Cloud never fitted

Something that will surprise many people: “The terminology of the EOSC was never appropriate – even in 2015”, according to Tochtermann. Back then – as the initial ideas for the EOSC were being developed and small projects were commencing – it was neither European, nor Open, nor Science nor a Cloud:

“It isn’t European – because research doesn’t stop at the regional borders of Europe; instead, many research groups are internationally networked. It isn’t open – because even in science there is data that requires protection, such as patient data. It isn’t science – because many scientific research projects also use data from the economy. And it isn’t a cloud – because the point is not to deposit all data centrally in one cloud solution”, explains Klaus Tochtermann. The term was specified by the European Commission at the time and is now established. Among experts, the term “Internet of FAIR Data and Services” (IFDS) is preferred, says Tochtermann.

Preparatory phase 2015 to 2020

The EOSC started in 2015 with the aim “to provide European researchers, innovators, companies and citizens with a federated and open multi-disciplinary environment where they can publish, find and re-use data, tools and services for research, innovation and educational purposes.” (European Commission).

Since then, 320 million EUR have been deployed to fund 50 projects relating to research data management. However, these have only shed light on individual aspects of the EOSC. “In fact, we are still a long way from being able to offer the EOSC operationally in the scientific system”, says Tochtermann.

Owing to the way the European Commission functions and funds research, the money was channelled into a research framework programme that only financed smaller projects at a time. That’s why there was never one big EOSC project, but many small individual projects. These examined issues such as: “What would a search engine for research data look like? How can identifiers for research data be managed?”, explains the ZBW director.

Large projects EOSC Secretariat and EOSC Future

Then the EOSC entered the next phase with two large projects: EOSC Secretariat and EOSC Future. Running time: 30 months. Budget: 41 million EUR. Both are intended to bring together all previous EOSC-related projects, i.e. to enable convergence and actually draw up an “EOSC system”. All the puzzle pieces from the earlier small projects are now being put together to form one large EOSC blueprint.

Founding of the EOSC Association

The EOSC Association was founded in 2020. It is a formal institution and a foundation under Belgian law. It is headquartered in Brussels and will consolidate all activities. A board of directors has been appointed to coordinate the activities, made up of the president Karl Luyben and a further eight members, including Klaus Tochtermann.

In February 2021, the Strategic Research and Innovation Agenda (SRIA, PDF) laid down what the EOSC Association should achieve over the next few years. From now on, all EOSC projects must be aligned with these SRIA guidelines.

Initial time plan for the European Open Science Cloud

The Strategic Research and Innovation Agenda anticipates various development stages with precisely defined timetables. Basic functionalities are classified as “EOSC Core”, a level that should be implemented by 2023. Here, elements such as search, storage and log-in functions will be realised. This will be followed by the launch of “EOSC Exchange”, which deals with more complex functionalities and services for special analyses of research datasets.

Collaboration between the EOSC Association and the European Commission

On the question of how the European Open Science Cloud Association and the European Commission cooperate with each other, Tochtermann emphasises the good relationship with the Commission. The so-called partnership model, which is new for everyone and first has to be put into practice, forms the framework for this. However, the time windows in which the Commission wants reactions from the EOSC Association are sometimes very narrow. “I’m glad we have a very strong president of the EOSC Association, who also has the backbone to ensure that we are not always confronted with such short time windows, where reactions are sometimes simply not possible because the subject matter is too complex. But overall it works well”, Tochtermann sums up.

Financing the EOSC Association: 1 billion EUR

For the next ten years, 1 billion EUR is being made available for the development of the EOSC – half from the European Commission and half from the 27 EU member states. This was negotiated between the European Commission and the EOSC Association from December 2020 to July 2021 and laid down in an agreement (PDF): the Memorandum of Understanding for the Co-programmed European Partnership on the European Open Science Cloud.

The EOSC Association also raises further funds through membership fees. According to Klaus Tochtermann: “Members are not individuals, but organisations such as the ZBW or the NFDI Association in Germany. (…) Members can choose between full membership, meaning they can take part in all votes and currently pay a contribution of 10,000 EUR per year. Or they can be an observer, where (…) they have a less active role and are not allowed to vote in the annual general meeting. As an observer, you pay 2,000 EUR.” The contributions of the 200 current members generate a budget of around 1.5 million EUR for the EOSC Association. Among other things, this is being used to build up staff in the association’s office.
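As a back-of-the-envelope check, these figures can be reproduced directly. The exact split between full members and observers is not stated in the interview, so the split below is a hypothetical one chosen to match the stated total of roughly 1.5 million EUR from 200 members:

```python
# Illustrative membership-budget calculation for the EOSC Association.
# The fee levels come from the interview; the split between full
# members and observers is a hypothetical assumption.

FULL_FEE = 10_000     # EUR per year, voting members
OBSERVER_FEE = 2_000  # EUR per year, non-voting observers

def membership_budget(full_members: int, observers: int) -> int:
    """Annual budget from membership fees, in EUR."""
    return full_members * FULL_FEE + observers * OBSERVER_FEE

# One plausible split of the ~200 members:
budget = membership_budget(full_members=138, observers=62)
print(budget)  # 1504000 EUR, i.e. around the 1.5 million stated
```

Any other mix of 200 members can be plugged in the same way; only the two fee levels are taken from the interview.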

EOSC, NFDI and Gaia-X: a confusing mishmash?

As well as the EOSC, there are further projects in Germany and Europe aimed at implementing large research data infrastructures. The most well-known from a German perspective are the National Research Data Infrastructure (NFDI) and Gaia-X. All three projects – EOSC, NFDI and Gaia-X – are technical infrastructures and are linked with one another. But how do they differ?

  • National Research Data Infrastructure

    As well as the European EOSC, there is the NFDI (German) in Germany, which was founded by the German Council for Scientific Information Infrastructures (RfII).

    The NFDI – similarly to the EOSC – deals with the technical infrastructure for research data, but is also concerned with networking people, i.e. the scientific community, says Tochtermann. The NFDI focuses on individual disciplines such as economics, social sciences, material sciences or chemistry.

    The NFDI directorate, a central coordinating body, brings the individual NFDI initiatives together so that they interact. This takes place through working groups and applies above all to cross-discipline or discipline-independent topics. Klaus Tochtermann gives the following examples:

    • digital long-term archiving of research data,
    • allocation of unique identifiers for a data set,
    • a single login (single sign-on) for the research data infrastructure NFDI,
    • interoperability of systems,
    • uniform metadata standards and
    • uniform protocols.
  • Gaia-X

    On the other hand, there is Gaia-X: “Gaia-X is an initiative which aims to offer companies in Germany and Europe a European infrastructure for the management, i.e. storage, of their data, for example because many of them opt for services from America or China”, explains Tochtermann. Gaia-X differs from the EOSC and the NFDI not only in its target group (industry and companies), but also in the major role that the topic of data sovereignty plays in the project. Klaus Tochtermann summarises this as follows: “Data sovereignty means that when I generate data, I can follow who is using my data for what purposes at any time. And if I don’t want this, then I can also say, ’I don’t want my data to go there.’”
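Tochtermann’s description of data sovereignty – the owner can always see who uses the data for what purpose, and can refuse unwanted uses – can be sketched as a small access-control model. The class and field names below are illustrative assumptions, not part of Gaia-X:

```python
# Minimal sketch of the data-sovereignty idea: the data owner attaches
# a usage policy to a dataset, every access request is logged, and
# disallowed uses are refused. Names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Dataset:
    owner: str
    allowed_uses: set            # e.g. {"research"}
    access_log: list = field(default_factory=list)

    def request_access(self, who: str, purpose: str) -> bool:
        """Grant access only for allowed purposes; log every request
        so the owner can always see who wanted the data and why."""
        self.access_log.append((who, purpose))
        return purpose in self.allowed_uses

data = Dataset(owner="Company A", allowed_uses={"research"})
print(data.request_access("University X", "research"))  # True
print(data.request_access("Broker Y", "advertising"))   # False
print(data.access_log)  # both requests are recorded
```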

How can you learn more about the EOSC?

The EOSC Portal is an information platform that gives details about the services that will play a role in the EOSC later on. These include services such as European research data repositories. It’s a good place to start if you want to find out more about the EOSC.

Take part in the development of the EOSC

Anyone who wants to get involved in the EOSC can do so in the Advisory Groups. Six of these have been set up initially, to explore topics such as curricula in the field of research data, FAIR data and metadata standards. There was an open call to participate in these groups, for which around 500 applications were received. Most of them came from France (18 percent) and Germany (17 percent), which shows how much the EOSC has already caught on in both countries, says Tochtermann. A selection from these 500 applications will now be used to fill the six advisory groups.

On the website of the EOSC Association, you will also find regular “Calls and Grants” that people can apply for, as well as job openings (https://www.eosc.eu/careers). For up-to-date information, you can subscribe to the monthly newsletter (https://www.eosc.eu/newsletter) or follow the EOSC Association on Twitter @eoscassociation.

This blogpost is a translation from German.

Related Links

This might also interest you:

  • Episode 12 of the ZBW podcast “The Future is Open Science” with Prof. Dr Klaus Tochtermann on the European Open Science Cloud (German)
  • The post European Open Science Cloud: small projects, big plans and 1 billion EUR first appeared on ZBW MediaTalk.

    Your Input Needed: Barcamp Open Science

    The Barcamp Open Science is running a survey on how to shape its future for you. Please take a moment to fill out this short questionnaire. The Barcamp Open Science has run several partnership events over 2021 as well as its annual barcamp accompanying the Open Science Conference in Berlin, which took place in February 2021. You can read about the Berlin Open Science Barcamp…


    Speedy Literature Reviews Using Wikidata and Mining Tools

    Image: Indian National Young Academy of Sciences (INYAS), India, 2021-08-13. Open Science Principles and Practice, slide 38. Peter Murray-Rust, Ayush Garg and Shweata N. Hegde. CC BY 4.0. By Shweata N. Hegde and the CEVOpen community. Hashtag: #cevopen CEVOpen is an open research project developing open-source tools for searching Open Access repositories. The project has a prototype…


    Open Science: How to Implement It in a Multidisciplinary Faculty – 7 Recommendations

    An interview with Ari J. Asmi

    Ari J. Asmi is research infrastructure coordinator at the University of Helsinki, Faculty of Science, a multidisciplinary faculty. There he has been involved in the process of developing common and workable Open Science recommendations with all stakeholders for the medium-sized science-oriented university faculty. The result is seven recommendations, which he already presented at the Open Science Conference 2021 in a poster presentation.

    Poster Seven Recommendations presented at the Open Science Conference 2021.

    In the interview, he reports on how the recommendations came about, why it is so important to also have Open Science sceptics involved, what the biggest challenges were and still are in developing and implementing them, and what he would advise others who would themselves like to create suitable recommendations for a more Open Science practice at their own faculty.

    Ari, you accompanied the process of developing a common Open Science policy at the Faculty of Science at the University of Helsinki. What was the outcome of this process?

    We created a working group with the help of the dean and the faculty administration. This group included representatives from all divisions of the Faculty of Science at the University of Helsinki, and importantly did not have only “Open Science advocates”, but mostly normal scientists and research coordinators from different divisions. We agreed that the Open Science recommendations should be easy to implement, on a relatively quick time-scale (from months to a few years), not resource demanding, and above all – acceptable to the science community in the faculty. This led to a suitable ambition level for the recommendations, which in turn helped their acceptance in the faculty. We agreed on seven key recommendations:

    1. Set the overall faculty policy on science products: “as open as possible, as closed as necessary”.
    2. Value the Open Science products in the staff annual development discussions.
    3. Consider Open Science products in unit, department and tenure track evaluations.
    4. Require listing of Open Science products in recruiting.
    5. Create a short, clear and well documented knowledge base of Open Science best practices in the faculty.
    6. Organise structured staff training on the best practices, facilitate peer support, and Open Science culture in the faculty.
    7. Develop Open Science content for the curricula of MSc and doctoral programmes.

    Open Science recommendations should be easy to implement, on a relatively quick time-scale (from months to a few years), not resource demanding, and above all – acceptable to the science community in the faculty.
    – Ari J. Asmi

    The first recommendation is more of a statement; the second to fourth are based on a long-term change in internal science evaluation towards openness; the fifth and sixth on helping faculty staff adjust to Open Science; and the seventh looks towards future generations. A more detailed version of the recommendations can be found in this document: Open Science Recommendations for the Faculty of Science.


    A key point was also to always take a holistic view of scientific end products, not only of scientific journal articles. This then includes, e.g., software, datasets, teaching material, etc. Another key point for us was not to prescribe a value difference between open and closed scientific products, but instead simply to highlight openness in all activity and to ask for justification of closed products where needed.

    (How) Have you practically implemented the seven recommendations in the faculty?

    The recommendations were accepted with enthusiasm at the faculty board level and by the dean, which made including the development discussion and staff recruitment changes in principle easy. They were faculty decisions, however, so I am not sure how well they have yet been implemented by the divisions’ administrations. The knowledge base is clearly the part requiring more effort, and even with some level of resources, it is very dependent on finding proper contact points in each division (and even in individual groups) to provide information on domain-specific repositories, journals, etc. I am now personally trying to recruit semi-volunteers to do this. The training part depends heavily on the knowledge base, and the inclusion of new Open Science courses in curricula will most likely happen in the next round of MSc and PhD programme development.

    What were your biggest challenges? What were the biggest concerns from the faculty and researchers? How were you able to overcome the obstacles and convince the persons concerned?

    The main issue came from the time and resource limitations of researchers. There is already a lot of “extra” work added for researchers, as administrative staff has been reduced, and some of the Open Science-relevant tasks (e.g. data management plans) are seen by some researchers as an additional burden. This was somewhat reflected in the response to the plan, and in how we developed it. The idea of having a common knowledge base was a direct response to the need to reduce the time required for these tasks. There were also worries about too-rapid changes in how research and researchers are evaluated, making career planning challenging. We responded to this by specifically avoiding assigning any specific value to Open Science products in comparison to traditional evaluation criteria.

    To what extent were and are libraries involved in this process?

    We had some contact with the university library on a few occasions, and I am personally well connected to parts of their Open Science team. The recommendations themselves did not go through any close evaluation with them, but their services will of course be extremely important for the knowledge base, the training and potentially even career-advancement follow-up, i.e. for tracking the publication of Open Science products.

    What are your tips for other faculties that would like to anchor these principles and put them into practice?

    An important part was to have a working group with enough sceptical people alongside the Open Science enthusiasts. It is easy to come up with very idealistic approaches that cannot then be implemented. Ambition is good, but a realistic short-to-medium time frame and minimal resource needs worked well, at least for us. Support from the top level (faculty dean and university strategy) is important, but these things have to be supported from the bottom as well – so having diversity is an excellent addition.

    We were talking to Ari J. Asmi.

    This article emerged from the Open Science Conference 2021. The next International Open Science Conference (#OSC2022) will be held on March 08-09, 2022. Stay tuned for more information on the conference website.

    You may also find this interesting:

  • Open Science Recommendations for the Faculty of Science (University of Helsinki).
  • Open science recommendations for a multidisciplinary Faculty – goals, process & challenges.
  • Open Science Conference 2021: On the Way to the “New Normal”.
  • Open Science: Grassroots Initiative from Students for Students at the University of Amsterdam.
  • Barcamp Open Science 2021: Opening up new perspectives.
  • Research Data Management Project bw2FDM: Best Practice for Consultations and Training Seminars.
  • Open Science Podcasts: 7 + 3 Tips for Your Ears.
  • The post Open Science: How to Implement It in a Multidisciplinary Faculty – 7 Recommendations first appeared on ZBW MediaTalk.

    The Munin Conference on Scholarly Publishing

    The Munin Conference is an annual conference on scholarly publishing and communication, primarily revolving around open access, open data, and open science. The 16th annual Munin Conference on Scholarly Publishing will […]

    The post The Munin Conference on Scholarly Publishing appeared first on SPARC Europe.

    GenR and Co-Producing Guides for Open Science Communities

    Image: GenR’s [guide needed] – in the style of Wikipedia’s popularised slogan [citation needed]; sources here: PNG and SVG, CC BY SA 4.0. #guideneeded GenR invites you to join it in a new editorial direction for 2022. The plan is to co-produce short, actionable guides to support and promote Open Science communities, and Open Science values and culture. Many Open Science communities have projects and…


    Open Economics Guide: New Open Science Support for Economics Researchers

    by Birgit Fingerle and Guido Scherp

    Open Science represents the best practice for academic work and is a toolkit for “good scientific practice”. In addition to the general benefits of Open Science for the scholarly system and society, Open Science offers many individual benefits for researchers. Among them are a higher visibility of research work and a greater impact in research and society.

    Nevertheless, many researchers in economics and business studies see hurdles and are discouraged from practicing Open Science: A lack of time and of appropriate support are the main reasons for their hesitation. This was revealed by the 2019/2020 study “Die Bedeutung von Open Science in den Wirtschaftswissenschaften – Ergebnisbericht einer Online-Befragung unter Forschenden der Wirtschaftswissenschaften an deutschen Hochschulen 2019” (“The Importance of Open Science in Economics – Result Report of an Online Survey among Researchers in Economics at German Universities 2019”) conducted by the ZBW. See our blog post Open Economics: Study on Open Science Principles and Practice in Economics, which reports the study’s main findings. Furthermore, the survey on which the study was based expressed a strong desire for support in the form of online materials, especially with regard to Open Science platforms, tools and applications.

    With the new Open Economics Guide (German), the ZBW aims to address these wishes and to support economics and business studies researchers in implementing open practices.

    Support for open science practice

    The Open Economics Guide addresses the challenges and support needs identified in the study. It is based on the perspective and the needs of economics and business studies researchers. It takes into account, for example, that for them a lack of time is the top obstacle to Open Science, which is why the texts of the Guide are concise and clear. The Open Economics Guide therefore starts with concrete benefits for researchers, for example by recommending first steps for getting started with Open Science that are easy and quick to implement.

    Where necessary, the content reflects the specifics of economics and business studies research. The Open Economics Guide also builds on systematically reviewed existing content, which it incorporates, refers to or recommends where appropriate. Since the range of information, tutorials and tools related to Open Science is constantly growing, the Open Economics Guide offers researchers useful orientation and takes up current developments.

    The ZBW has thus designed the Open Economics Guide as the central entry point specifically for Open Science in economics and business studies, initially for German-speaking countries. In the Open Economics Guide, economists can discover how openness enriches their research and how they can benefit from the advantages of open research.

    Quick start, tool overview and knowledge base

    The Open Economics Guide supports economics and business studies researchers with practical tips, methods and tools to practice Open Science independently and successfully and thus to promote their academic career. To this end, the Guide contains, among other things:

    • easy-to-understand quick-start guides to Open Science topics (currently Open Science, Open Access, Open Data and Open Tools),
    • a comprehensive overview of more than 70 tools (German), subdivided by the phases of the research workflow,
    • a growing knowledge database with currently about 100 entries (German) with extensive background information and practical tips on how to proceed,
    • a clear glossary (German), which answers comprehension questions about the most important terms related to open research at a glance.

    Content under open license and further expansion

    The content of the Open Economics Guide is offered under an open license as far as possible. Thus, it can be reused in other contexts according to the principles of Open Science, for example by other libraries for their researchers.

    The Open Economics Guide will be continuously expanded and extended. For instance, further focal points, such as Open Educational Resources and Open Research Software, will be added. All aspects of Open Science relevant to economics and business studies research will be covered. In doing so, close communication and cooperation with economics and business studies researchers will be sought, in order to also develop new content jointly. In addition, the guide will address an international target group in the future.

    Visit the Open Economics Guide now

    Featured Image: Mockup created by freepik – www.freepik.com

    The post Open Economics Guide: New Open Science Support for Economics Researchers first appeared on ZBW MediaTalk.

    IsoBank – Stable Isotope Research + Open Data


    The use of stable isotopes (the non-radioactive forms of an element) has become increasingly prevalent in a wide variety of scientific research fields. The fact that many elements have stable isotopes, which exhibit unique properties, allows their distribution and ratios in natural environments to be measured. These data can be used to shed light on the history, fate and transport of elements in water, soil and even archaeological specimens. Our curated collection of research using stable isotopes highlights the diversity of fields that utilize these invaluable measurements.

    To meet the needs of this growing research community, and to facilitate accessibility and data sharing, the US National Science Foundation has funded the IsoBank project – a common repository for stable isotope data.

    Here, we chat with some of the IsoBank organizers about the importance of the project, and how they use stable isotopes in their own research.


    Jonathan Pauli is an Associate Professor in the Department of Forest and Wildlife Ecology at University of Wisconsin-Madison. His research explores the response of mammal populations and communities to human disturbance, particularly as it relates to developing effective conservation strategies. He works in diverse ecosystems and employs a variety of techniques, from traditional ones like live capture, radiotelemetry and observation to more advanced ones involving molecular markers, stable isotopes and population modeling to answer questions relating to mammalian ecology and conservation.


    Gabriel Bowen is a Professor of Geology and Geophysics and member of the Global Change and Sustainability Center at the University of Utah, where he leads the Spatio-temporal Isotope Analytics Lab (SPATIAL) and serves as co-director of the SIRFER stable isotope facility. His research focuses on the use of spatial and temporally resolved geochemical data to study Earth system processes ranging from coupled carbon and water cycle change in geologic history to the movements of modern and near-modern humans. In addition to fundamental research, he has been active in developing cyberinformatics tools and training programs supporting the use of large-scale environmental geochemistry data across a broad range of scientific disciplines, including the waterisotopes.org and IsoMAP.org web sites and the Inter-University Training for Continental-scale Ecology training program.


    Brian Hayden is an Assistant Professor in Food Web Ecology at the University of New Brunswick, Canada, where he leads the Stable Isotopes in Nature Laboratory. His research focuses on the trophic responses to environmental change, predominantly in aquatic systems — he considers himself extremely fortunate to collaborate with researchers around the globe addressing these issues.


    Seth Newsome is an animal ecologist and eco-physiologist whose research blends biochemical, morphometric, and phylogenetic analyses to provide a holistic understanding of the role of energy transport in the assembly and maintenance of biological communities. He is the Associate Director of the University of New Mexico (UNM) Center for Stable Isotopes and an Associate Professor in the UNM Biology Department. Besides science and fixing mass spectrometers, he enjoys mountain biking, rafting, and fly fishing.


    Oliver Shipley is an applied ecologist at the University of New Mexico, with training in a suite of laboratory and field techniques. He is broadly interested in food-web dynamics and animal ecophysiology and employs a suite of chemical tracer and biotelemetry approaches to investigate these processes, with a strong focus on marine ecosystems. His research can be defined by three interconnected themes: 1) defining the drivers and food-web implications of ecological niche variation at various levels of biological organization; 2) applying ecophysiological principles to predict the timing of important biological events; and 3) investigating the fitness consequences of niche variation for food-web and broader ecosystem dynamics.

    Research using stable isotopes spans a wide array of fields, from the geosciences to ecology to archeology – has organizing the IsoBank group highlighted the different forms that isotopic research can take? Have there been any challenges in communication with scientists of such varied backgrounds?

    BH: This is one of the main challenges we faced when developing IsoBank. Isotopes have a huge diversity of applications, and researchers working in environmental, ecological, and archaeological isotope systems have developed metadata relevant to their specific discipline. Our goal was to build a single large database capable of serving all of these disciplines, which meant we needed to somehow combine all of the distinct metadata into a single framework. This can be challenging even within a field; for example, most of my research involves freshwater fish, but much of the information I use to describe a datapoint (e.g., habitat, organism size, tissue type) may or may not be relevant to ecologists studying birds, insects or plants. Working across disciplines exacerbates things considerably. For example, ‘date’ means very different things to ecologists, archaeologists, and paleoecologists, despite us all using the same techniques. We tried to address this by developing core metadata terms which are common to all disciplines and therefore required in order for a datapoint to be uploaded to IsoBank, and discipline-specific optional metadata terms which can be selected by the user.

    JP: Indeed, one of the greatest assets of IsoBank is also one of its greatest challenges. Because isotopes span so many different disciplines – e.g., environmental, geological, archaeological, biomedical, ecological, physiological – a variety of discipline-specific metadata is needed. To accommodate these different needs, we have convened a number of working group meetings to bring together experts within these disciplines to identify what metadata are necessary, and to fold them into a single, operational framework. I’ve been impressed, though, by how effectively our discussions with scientists of such varied interests and backgrounds have communicated what is needed. I’d even offer that these discussions with other people employing isotopes for different questions have been a highlight of this project for me personally, and have expanded my thinking and generated new ideas for applying isotopes in my own work.
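The core-plus-optional metadata framework Hayden and Pauli describe can be sketched as a simple validation step: core terms are required for every record, while each discipline contributes its own optional terms. The field names and discipline terms below are hypothetical illustrations, not IsoBank’s actual schema:

```python
# Sketch of a core/optional metadata check in the spirit of IsoBank.
# Core terms are required for all records; discipline-specific terms
# are optional. All term names here are illustrative assumptions.

CORE_TERMS = {"sample_id", "isotope_system", "delta_value", "date"}

DISCIPLINE_TERMS = {
    "ecology": {"organism", "tissue_type", "habitat"},
    "archaeology": {"site", "context", "material"},
}

def validate_record(record: dict, discipline: str) -> list:
    """Return a list of problems: missing core terms and terms not
    known to either the core or the chosen discipline."""
    problems = sorted(CORE_TERMS - record.keys())
    allowed = CORE_TERMS | DISCIPLINE_TERMS.get(discipline, set())
    problems += [f"unknown term: {t}" for t in sorted(set(record) - allowed)]
    return problems

record = {
    "sample_id": "FW-042",
    "isotope_system": "d15N",
    "delta_value": 8.7,
    "date": "2021-06-01",
    "tissue_type": "muscle",  # ecology-specific optional term
}
print(validate_record(record, "ecology"))  # [] – record is acceptable
```

An archaeologist would upload the same core fields but pick terms like `site` or `material` instead; the core set is what makes records comparable across disciplines.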

    Tell us about how you use stable isotopes in your own research.

    BH: I think I am drawn to isotopes because of the diversity of the applications of the technique; it’s such a useful tool that the only limit is our imagination. I am an aquatic ecologist at heart – my research focuses on understanding how aquatic ecosystems, especially food webs, respond to environmental change. Initially I used isotopes to improve our understanding of the trophic ecology of specific species, but over time this has shifted to a community-level perspective.

    GB: Isotopes are incredibly powerful tracers of the flow of matter (including organisms!) through the environment. Many of the applications in my research group leverage this potential in one way or another. We use isotopes in water to understand hydrological connectivity – how rain falling in different seasons or weather systems contributes to water resources or plant water uptake and transpiration. We use isotope values of solutes to better understand biogeochemical cycles – sources of carbon stored in soils or how mineral weathering in different systems contributes to global geochemical cycling. We use isotope values measured in human and animal tissues to map the movement of individuals – migration pathways, sources of potentially poached game, or the childhood residence location of the victims of violent crime.

    SN: As an animal ecologist and eco-physiologist, I’m interested in tracing the flow of energy within and among organisms, which is governed by species interactions and food web structure. To do so, I meld isotopic, morphometric, and phylogenetic analyses to provide a holistic understanding of the role of energy transport in the assembly and maintenance of ecological communities. I use lab-based feeding experiments in which the stable isotope composition and concentrations of dietary macromolecules are varied to understand how animals process dietary macromolecules to build and maintain tissues. I use this information to quantify niche breadth from the individual to the community level to better understand the energetic basis of community assembly and structure. Finally, I adopt a broad temporal perspective by comparing species interactions in modern versus ancient ecosystems, capturing the full range of behavioral and ecological flexibility, which is important for designing effective management strategies and assessing a species’ sensitivity to environmental change.

    JP: I am a community ecologist and conservation biologist, and am interested in the biotic interplay between organisms that ultimately shapes community structure and dynamics, and in how we can predict these interactions into the future and within emerging novel environments. To that end, I use isotopes to understand animal foraging and trophic identities and combine these data with fieldwork studying animal behavior, movement and space use as well as species distributions and abundances. After developing a better understanding of contemporary community structure and interactions, I use this information to explore past communities and to project what future communities will look like and how they will behave.

    You recently organized the IsoEcol workshop to provide researchers in the Ecology community with training on sharing their data through IsoBank. How has IsoBank allowed for better collaboration in the ecological sciences community? Are there any particular themes or questions that have arisen?

    OS: We were extremely excited to host the first IsoBank workshop for the broader research community at this year’s IsoEcol – this was held in an online format through Zoom. The workshop provided participants with a brief history of IsoBank’s development but focused heavily on the metadata structure and the data ingest process. Since the workshop we have received many new modern and historical datasets across terrestrial, freshwater and marine systems. As we continue to ingest a growing number of datasets, the collaborative potential of IsoBank becomes increasingly realized. This moves us closer to the exciting questions that can be addressed using the big-data model IsoBank will soon support. At the last IsoBank workshop we identified several potential research priorities that could be addressed in the coming years; these include, but are by no means limited to, 1) the development of novel isoscapes (spatial interpolations of stable isotope data) and 2) broad-scale patterns in animal trophic interactions and broader food-web dynamics.

    Oliver, for Early Career Researchers, being part of a robust and supportive research community can be instrumental to growth as a scientist and to career success. How has your involvement in the IsoBank project led to opportunities that you may not have otherwise had?

    OS: As a postdoctoral research fellow, it has been an extremely rewarding experience serving as the project manager for IsoBank. One of the primary reasons I was excited to work on IsoBank was the potential for collaborative and networking opportunities facilitated by the project’s diverse user base. Since I began working with the IsoBank team and its extended user base, I have formed new collaborations with researchers across the US and Europe. For example, working closely with Drs Seth Newsome (University of New Mexico, USA) and Bailey McMeans (University of Toronto Mississauga, CA), we are using stable isotopes of individual amino acids to understand how energy flow mediates nutritional condition in lake trout. Further, in collaboration with PhD student Lucien Besnard (University of Western Brittany, France), we are building mercury stable isotope clocks to quantify the age at which scalloped hammerhead sharks migrate from inshore nurseries to offshore foraging grounds. These exciting opportunities have been made possible by working with IsoBank’s advisory committee and the repository’s diverse user base.

    Gabe, you were one of the first people to use the term “isoscape”, which has since become a hallmark of numerous scientific studies. What is an isoscape, and how do they feature in your research?

    GB: Isoscapes are quantitative models representing spatiotemporal isotopic variation in any natural or anthropogenic system – in short, they are isotopic maps. And I think they embody the biggest reason we need IsoBank. Isoscapes are useful because almost any isotopic measurement needs to be interpreted in the context of reference data. We can use isotope values of animal tissues to understand an individual’s diet, but only if we know the isotope values of the foods it might eat. We can use isotope values of groundwater to assess where and when recharge occurred, but only if we know the isotopic compositions of those potential sources. Isoscapes are generated by combining isotopic datasets with statistical or process models to predict the values we would expect for sources at different locations and times, and we can make isoscapes for different substrates. Whether it is used to support the development of isoscapes or more directly as reference data for a local study, the limited accessibility of the vast wealth of isotopic data that our different communities have produced is a critical limitation for most isotopic studies.
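    As a toy illustration of what Gabe describes – combining point measurements with a model to predict values at unsampled locations – here is a deliberately minimal isoscape built with inverse-distance weighting. Real isoscapes typically use regression or process models with environmental covariates; all coordinates and isotope values below are made up:

```python
# Minimal isoscape sketch: interpolate scattered isotope measurements onto a
# grid using inverse-distance weighting (IDW). Illustrative only; real
# isoscapes use far richer statistical or process models.
import math

# ((lat, lon), measured isotope value) – fabricated example data.
samples = [((46.0, 8.0), -9.5), ((47.5, 9.0), -11.2), ((45.0, 10.5), -8.1)]

def idw(lat, lon, power=2.0):
    """Predict an isotope value at (lat, lon) by inverse-distance weighting."""
    num = den = 0.0
    for (slat, slon), value in samples:
        d = math.hypot(lat - slat, lon - slon)
        if d == 0:
            return value  # exact hit on a sample site
        w = 1.0 / d ** power  # nearer samples get larger weights
        num += w * value
        den += w
    return num / den

# Evaluate on a coarse grid – each cell of the "map" gets a predicted value.
grid = [[idw(lat, lon) for lon in (8.0, 9.0, 10.0)] for lat in (45.0, 46.0, 47.0)]
```

    The resulting grid is the "map": each cell holds a predicted isotope value that could serve as reference data for samples from that location.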

    In some environments, stable isotope ratios alone do not provide sufficiently detailed information. What combination of techniques or analytical methods do you use to yield more conclusive results and to elucidate unseen patterns or trends?

    BH: As isotope ecologists, we are often drawn to techniques which have worked well for us in the past, but it’s always important to remember that isotope analysis is just another tool in our kit. In my work, I typically use isotopes to understand trophic interactions. They can fill in a lot of the gaps that other methods of diet analysis leave open, but they still just provide one piece of the puzzle. Isotopes are a really nice way of getting a broad idea of what a specific consumer is doing or which sources of primary production are most important to a food web, but for questions which require a more detailed answer, such as whether consumers are feeding on specific species of prey, isotopes may be limited. We typically use isotopes in combination with diet analyses, fatty acid analysis or even mercury analysis to get a more complete understanding of the community we are interested in. Sometimes the best insights come when different techniques give contrasting results; that can really help us to understand the complexity of the ecological systems we are studying.

    SN: Stable isotope analysis has become a standard tool in animal ecology because it can provide time-integrated measures of diet composition, albeit at a limited taxonomic resolution. As such, a new frontier is combining isotope analysis with proxies that can identify the taxonomic composition of animal diets, such as fecal DNA metabarcoding. The advantage of combining these two dietary proxies is that their respective strengths complement the weaknesses of the other. Specifically, fecal metabarcoding provides high-resolution taxonomic information for recently consumed (~24 hours) resources, but estimating the proportional consumption and assimilation of individual resources is confounded by assumptions about the relative digestibility of different foods. In contrast, isotope analysis provides a time-integrated measure of resource assimilation with low taxonomic resolution often only capable of discriminating between plant functional groups (e.g., C3 or C4) and providing an estimate of relative trophic level for consumers. Such multi-proxy metrics will transform how animal ecologists use diet composition data to understand foraging strategies, species interactions, and food web structure.

    PLOS is dedicated to Open Science, which expands upon the notion of Open Access to include concepts such as Open Data. Do you envision IsoBank changing data sharing and transparency amongst the stable isotopes community? – And what impact will this have on scientific research?

    BH: This was one of the driving forces behind our desire to develop IsoBank. Jon Pauli, Seth Newsome, and another colleague, Dr. Shawn Stefan, wrote an opinion article in Bioscience in 2014 highlighting how isotope ecology was at a similar position to where molecular ecology stood when GenBank was developed. We had all seen how crucial GenBank had become to molecular ecology by facilitating new science from old data and felt that IsoBank could have a similar effect on the ecological, geological, and anthropological sciences. So much of our work is still being done in relative isolation; the knowledge gained from our research is available through our papers, but unless the data are readily available in a usable and publicly accessible format, they will end up stored on a hard drive on someone’s computer. This limits our ability to do large-scale meta-analyses or continental- to global-scale spatial studies using isotopes. Our hope is that IsoBank will allow us to generate new insights by combining many small datasets.

    The post IsoBank – Stable Isotope Research + Open Data appeared first on EveryONE.

    Open Science: Grassroots Initiative from Students for Students at the University of Amsterdam

    The Student Initiative for Open Science (SIOS) at the University of Amsterdam was initiated and is still run by students. The grassroots movement wants to introduce students as early as possible, voluntarily and sometimes playfully, to the often quite abstract subject area of open practices and thus make university life easier for them. The motto: good academic practice should be learned and internalised as early as possible.

    Marla Dressel and Franziska Nippold from SIOS presented the project at this year’s Open Science Conference. We have now spoken with them and asked them about their motivation – after all, there are no credit points for this commitment. In the interview, they also tell us how their university environment reacted to the grassroots initiative and how academic libraries can support them. At the end, there are starting points and links for anyone who would like to establish a similar movement at their university.

    An Interview with Marla Dressel and Franziska Nippold

    Your grassroots initiative is very interesting as it targets Open Science education for students at the University of Amsterdam. What was your motivation?

    A course called “Good research practices” in our Psychology master’s programme was an important motivating factor. The course teaches students how to conduct reliable science and discusses current research practices. Fellow students of ours found it quite disappointing that they learned about Open Science only during their master’s degree, and many programmes do not offer such courses at all. Besides, most Open Science initiatives mainly target PhDs, post-docs, and professors while not adapting resources and materials to students’ needs.

    They felt that students were being overlooked in the Open Science movement.

    We think that this could be fatal because students are the researchers of tomorrow.
    This is why SIOS (Student Initiative for Open Science) was born. We wanted to involve students in the movement and to provide them with open education on Open Science.

    We are Data Sharing. We are Open Access.
    We are Reproducibility.
    We are Open Science, from students for students.

    What are your activities?

    Our event team organises a broad range of activities. We host lectures on Open Science topics (e.g., the difference between exploratory vs confirmatory research, Bayesian statistics, pre-registered reports), workshops to provide students with practical tools (e.g., how to pre-register your thesis, version control with GitHub, power analysis, JASP), and more fun activities to get students together, such as Open Science movie nights, pub quizzes, or discussion panels. We also have a communication team at SIOS that is pretty active on social media, especially on Twitter, but also on Instagram and Facebook. Here, we share our events and resources with other students, scholars, universities and everyone else who is interested. At the same time, we attend conferences and write grant applications. We also provide materials and resources to students on our website and our Slack channel, where students can also ask questions and debate current issues. Besides the purely educational part, we are currently running a study on research practices among students.

    How did your environment (e.g. profs, lecturers …) react to it?

    We have received immense support from our study coordinators, profs and lecturers. Many of them have offered to give lectures themselves and help us share our endeavours. For us, it is extremely rewarding to see the resonance in the community but at the same time we also know that we are lucky that our university is very method-conscious and that it may be different at universities outside the Netherlands. More importantly, students find our events helpful, and we receive a lot of positive feedback from them. 

    Are any of your activities part of the university curriculum, so that students get credits for them? Would that even be a goal for you?

    Besides the course we talked about before (Good research practices), students can get credit points for visiting our lectures. That is at least a start, and so our goals are more focused on spreading our message and helping to set up other SIOSs at different universities. However, we just heard from a newly founded SIOS that they will definitely focus on integrating Open Science into their curriculum because they do not even have a course on good research practices there. We hope that someday every research student will have access to Open Science materials if they wish.

    How do you ensure that your efforts and projects are sustainable and long-lasting?

    An easy answer would be that we are currently digitising all our projects (thanks, COVID!). That means we record all our lectures and provide our whole range of resources for free on our website and social media. We also created a step-by-step guide to creating your own initiative for Open Science, and we pitch this guide at other universities. At the same time, we really think about what students need. That is why most of our lectures are very introductory.

    We think that this is a general problem in the Open Science movement – that everyone who does not know so much about it yet will have problems organising all the information and debates that are currently going on.

    That is why PhD students and researchers at other career levels often visit our lectures – we offer comprehensive, well-organised introductions.

    We also believe that it is best to start as early as possible to teach students Open Science practices. Take pre-registrations, for example: If you already do this for your very first research project, the bachelor thesis in most cases, it will become normal for students to follow these practices. In this way, you are teaching students and building awareness as early as possible to integrate Open Science practices in the long run. 

    How can academic libraries support initiatives like yours?

    We think that there is a lot that can be done. The most important step is to help us share our endeavours. That can be on social media and on your website. Libraries could also always ask us for collaboration and especially now it is easier to just organise workshops together online. Libraries can also ask their students to create their own SIOS. And more generally, they can provide all kinds of resources themselves and participate in our Slack Channel.

    Do you have any tips for other students who want to start such an initiative? (How) Can they get any support from you?

    We have actually created a step-by-step guide to creating your own SIOS. These are just guidelines, of course, not necessarily a rulebook. We think that creating a SIOS is not super easy, but you can get a lot of support if you ask for it. That can mean asking us at SIOS Amsterdam (we will always find time to have a meeting with you and give you some recommendations), but also lecturers and other people at your university. Also, creating such an initiative has many incentives. From learning a lot about Open Science and current debates, through networking, to doing something worthwhile alongside your studies – creating such an initiative is inherently very rewarding.

    We were talking to Franziska Nippold and Marla Dressel


    The post Open Science: Grassroots Initiative from Students for Students at the University of Amsterdam first appeared on ZBW MediaTalk.

    Horizon Report 2021: Focus on Hybrid Learning, Microcredentialing and Quality Online Learning

    by Claudia Sittner

    The 2021 EDUCAUSE Horizon Report Teaching and Learning Edition was published at the end of April 2021 and looks at what trends, technologies and practices are currently driving teaching and learning and how they will significantly shape its future.

    The report runs through four different scenarios of what the future of higher education might look like: growth, constraint, collapse or transformation. Only time will tell which scenario prevails. With this in mind, we looked at the Horizon Report 2021 to see what trends it suggests for academic libraries and information infrastructure institutions.

    Artificial Intelligence

    Artificial intelligence (AI) has progressed so rapidly since the last Horizon Report 2020 that people can hardly keep up with testing the technical advances of machines in natural language processing. Deep learning has evolved into self-supervised learning, where AI learns from raw or unlabelled data.

    Artificial intelligence has a potential role to play in all areas of higher education where learning, teaching and success are concerned: support for accessible apps, student information and learning management systems, examination systems and library services, to name but a few. AI can also help analyse learning experiences and identify when students seem to be floundering academically. The much greater analytics opportunities that have emerged as the vast majority of learning events take place online, leaving a wide trail of analysable data, can help to better understand students and adapt learning experiences to their needs more quickly.

    But AI also remains controversial: for all its benefits, questions about privacy, data protection and ethical aspects often remain unsatisfactorily answered. For example, there are AI-supported programmes that paraphrase texts so that other AI-supported programmes do not detect plagiarism.

    Open Educational Resources

    For Open Educational Resources (OER), the pandemic has not changed much: many OER offerings are “born digital” anyway. However, advantages of OER such as cost savings (students have to buy less literature), social equality (free and accessible from everywhere) and the fact that materials are updated faster are gaining importance. Despite these obvious advantages and the constraints that corona brought with it, only a few teachers have switched to OER so far, as the report „Digital Texts in Times of COVID” (PDF) shows: 87% of teachers still recommend the same paid textbooks.

    OER continue to offer many possibilities, such as teachers embedding self-assessment questions directly into pages alongside text, audio and video content, and students receiving instant feedback. In some projects, libraries and students are also involved in the development of materials as OER specialists, alongside other groups from the academic ecosystem, helping to break down barriers within the discipline and redesign materials from their particular perspective.

    In Europe, for example, ENCORE+ – the European Network for Catalyzing Open Resources in Education – is working to build an extensive OER ecosystem. Also interesting: the „Code of Best Practices in Fair Use for Open Educational Resources”. It can be a tool for librarians who want to create OER using third-party materials, including copyrighted ones.

    Learning Analytics

    Online courses generate lots of data: How many learners have participated? When did they arrive? When did they leave? How did they interact? What works and what doesn’t? In higher education, learning data analysis should help make better, evidence-based decisions to best support the increasingly diverse group of learners. Academic libraries also often use such data to better understand and interpret learner needs, respond promptly and readjust.

    The Syracuse University Libraries (USA), for example, have transmitted their user data via an interface to the university’s own learning analytics programme (CLLASS). A library profile was developed for this purpose, consistent with the library’s values, ethics, standards, policies and practices. This enabled responsible and controlled transmission of relevant data, and a learner profile could be created from different campus sources.

    Just as with the use of artificial intelligence, there are many objections in this area regarding moral aspects and data protection. In any case, the handling of such learning data requires sensitisation and special training so that teachers, advisors and students can use data sensibly and draw the right conclusions. In the end, students could also receive tailored virtual support throughout the entire process from enrolment to graduation. Infrastructures for data collection, analysis and implementation are essential for this.

    Microcredentials

    Microcredentials are new forms of certification or proof of specific skills. They are also better suited to the increasingly diverse population of learners than traditional degrees and certificates. Unlike these, they are more flexible, designed for a shorter period of time and often more thematically focused. The spectrum of microcredentials spans six areas from short courses and badges, to bootcamps and the classic degree or accredited programmes.

    Microcredentials are becoming increasingly popular and can also be combined with classic certifications. The Horizon Report 2021 sees particular potential for workers who can use them to retrain and further their education. It is therefore hardly surprising that companies like Google are also appearing on the scene with Google Career Certificates. For many scientific institutes, this means that they will have to further develop and rethink the architecture, infrastructure and work processes of their traditional certification systems.

    Blended and Hybrid Course Models

    Due to the corona pandemic, diverse blended and hybrid course models mushroomed, especially in the summer of 2020. “It is clear that higher education has diversified quickly and that these models are here to stay”, the report says. Hybrid courses allow more flexibility in course design; institutions can ramp up capacity as needed and cater even more to the diverse needs of students. However, most students still prefer face-to-face teaching.

    Newly learned technical skills and technical support have played a predominant role. In some places, new course models have been developed together with the learners. On the other hand, classic practices (such as frequent assessments, breakout groups during live course meetings, and check-in messages to individual students) remain high on the agenda. However, corona has brought mental and social health of all participants into sharper focus; it should also receive even more attention according to the Horizon Report.

    Quality Online Learning

    The coronavirus came along and everything suddenly had to take place online. So it is little wonder that the need to design, meaningfully evaluate and adapt high-quality online learning opportunities has increased enormously. Some were surprised to find that teaching online involved more effort than simply offering the on-site event via Zoom. In order to achieve learning success, online quality assurance became an issue of utmost relevance.

    Early in the pandemic, therefore, institutes began to develop online portals or hubs that included materials and teaching strategies adapted to the situation: for content delivery, to encourage student participation and to rethink assessment mechanisms.

    A positive example is the twelve-module course “Quickstarter Online-Lehre” (Quickstarter Online Teaching, German) by the Hochschulforum Digitalisierung – German Forum for Higher Education in a digital age and the Gesellschaft für Medien in der Wissenschaft (Society for media in science) from Germany. This course aims to support teachers with no or little online experience.

    This text has been translated from German.

    This might also interest you:

    The post Horizon Report 2021: Focus on Hybrid Learning, Microcredentialing and Quality Online Learning first appeared on ZBW MediaTalk.

    Research Data Management Project bw2FDM: Best Practice for Consultations and Training Seminars

    We were talking to Elisabeth Böker and Peter Brettschneider

    Research data management (RDM, known as FDM in German) is an essential topic regarding Open Science. Baden-Württemberg’s support and development project for research data management (German, bw2FDM) is dedicated to this issue. One of the aims of the bw2FDM project is to create a multifaceted educational programme to drive forward sustainability and networking within the entire research data management community. bw2FDM also operates the information platform forschungsdaten.info (Forschungsdaten means research data), which offers a wide-ranging collection of articles on RDM topics. None of these programmes is limited to a specific institution; rather they are aimed at the entire German-speaking community. Elisabeth Böker and Peter Brettschneider, who are involved in the project, explain how it works in detail, which topics are particularly popular within the RDM community, and what role libraries and information infrastructure institutions can play.

    Please introduce the project in three sentences: What is the mission / aim / vision of bw2FDM?

    Elisabeth Böker: bw2FDM is an initiative for research data management, funded by the Baden-Württemberg Ministry for Science, Research and Art. We follow four primary aims:

    • The coordination of the interdisciplinary issues of the four Science Data centres (German) in Baden-Württemberg.
    • We want to develop the information platform forschungsdaten.info further to become the central RDM platform for the German-speaking countries.
    • We offer consultations and training seminars on the topic of research data management, primarily for researchers from Baden-Württemberg.
    • bw2FDM is responsible for the planning and implementation of the E-Science-Tage conference.

    Fig. 1: Overview of the bw2FDM project areas / Axtmann and Reifschneider / CC BY 4.0

    We are particularly interested in the bw2FDM consultations and training seminars on research data management, which you also presented at the Open Science Conference 2021. What was your approach? Who is your target group? What do you offer, specifically? And who can participate in the training seminars and consultations?

    Elisabeth Böker: That differs slightly, depending on the format: we focus our training seminars primarily on researchers from Baden-Württemberg. The students of the University of Konstanz are the target audience for our Open Science course. We want to introduce them to the topic of Open Science during their studies and are delighted at the considerable interest it has already drawn. The course “Open Science: From Data to Publications” is subject to a CC BY licence – reuse is most welcome! Following the principle of openness, we have also published the videos as Open Educational Resources (OER) on Zenodo (German) and on the central OER repository of the Institutions of Higher Education in Baden-Württemberg (ZOERR, German), as well as in the material collection of the sub-working group for training seminars / continuing education of the DINI/nestor AG research data (German) and on the website of the Konstanz Open Science team.
    By way of contrast, forschungsdaten.info live (German) focuses on all interested persons – both researchers and RDM officers – throughout the German-speaking countries.

    Peter Brettschneider: We try to consider research data management in a holistic sense. This also means that we focus on different target groups, and it also ensures that our training activities never become boring.

    To what extent do you specifically address Open Science topics, for example in the field of Open Data?

    Elisabeth Böker: Research data management is our central focus point. The guiding principle of the EU data strategy “as open as possible, as closed as necessary” is extremely important to us, which is why we emphasise it continually.

    Libraries fit wonderfully into a data-based academic world.
    – Peter Brettschneider

    (How) Are academic libraries and other digital infrastructure institutions integrated into the field of training seminars and consultations?

    Peter Brettschneider: Libraries fit wonderfully into a data-based academic world. Their core business is collecting information and making it available to users. Research data have been part of the digital inventory of libraries for a considerable time. That is why, for example, universities will usually task their library or IT department with the implementation and operation of a research data repository. However, such services should be accompanied by consultation and training programmes. Once again, our aim is to approach research data management holistically: it is not sufficient to provide just the hardware. There is also a need for people who explain and promote these services and are ready to assist researchers who may require help.

    The project runs from 2019 to 2023. This means that you are just about halfway through. Could you draw some interim conclusions? What are the most important lessons that you have learned over the past two years?

    Elisabeth Böker: “RDM is a team sport” – this is what we wrote in a publication (German) about our project. In this spirit, I would say it is crucial to approach issues collaboratively, use synergies and then progress towards the target. That works wonderfully. It is particularly gratifying to see this happening with the forschungsdaten.info platform. Even before “half-time”, we have been able to bring colleagues from Austria and Switzerland into the team – and we intend to build on this even more intensively in the second phase. There is an enormous demand for legal topics, and we are very lucky to have a legal specialist, Peter Brettschneider, on the team.

    RDM is a team sport.
    – Elisabeth Böker

    Peter Brettschneider: Indeed, there is a lot of uncertainty regarding legal issues. In our training seminars, we like to combine legal topics with fundamental RDM know-how. This is important to us, because we can’t emphasise enough the central messages on research data and its management – such as FAIR data. But on the other hand, research data management is not an end in itself. It’s not our task to proselytise. Rather, it is our intention to support researchers and to make their research visible and reusable.

    What has the feedback to your RDM consultation and training seminar programmes been like so far?

    Elisabeth Böker: We are extremely satisfied. With forschungsdaten.info live in particular, we have averaged over a hundred participants. The demand is definitely there!

    Which of your programme’s RDM topics or formats are particularly popular?

    Elisabeth Böker: The forschungsdaten.info live format has been very popular – in part due to its focus on the entire German-speaking RDM community. Moreover, events that explore legal topics are always sure-fire successes.

    The bw2FDM project can definitely be called a success so far in the area of training and consulting. Are there plans to expand your project throughout Germany?

    Elisabeth Böker: We are already active throughout the German-speaking countries with forschungsdaten.info live. However, we intend to advertise our other training seminars primarily to researchers in Baden-Württemberg – at both universities and other higher education institutions. The reason for this is that our funding comes from the Baden-Württemberg Ministry for Science, Research and Art. Moreover, other federal states have their own RDM initiatives that offer great training opportunities.

    Are there already similar projects in other federal states? To what extent do you collaborate with them?

    Elisabeth Böker: Yes, many other federal states have comparable RDM projects or dedicated initiatives. They introduce themselves on the platform forschungsdaten.info (German). We collaborate closely and very fruitfully with our colleagues, both within a joint discussion forum and via the editorial network of forschungsdaten.info.

    From a legal point of view, we ensure sustainability by systematically releasing the project results under free licences in order to promote reuse.
    – Peter Brettschneider

    Sustainability plays an important role in your project. How do you ensure it?

    Peter Brettschneider: Sustainability has several dimensions: Structurally, we try to safeguard our programmes in the long-term through partnerships with other institutions. For example, our project team does not run forschungsdaten.info on its own, but rather relies on an editorial network of approximately 20 institutions.
    From a legal point of view, we ensure sustainability by systematically releasing the project results under free licences in order to promote reuse. This means that all training materials are licensed under Creative Commons BY 4.0. The contents of the forschungsdaten.info page can be reused completely without any restrictions, as we waive our rights by using CC0 1.0. Perhaps the most difficult thing is securing sustainability in terms of personnel. Currently, research data infrastructures are sustained primarily by project employees – the National Research Data Infrastructure (NFDI) provides a good example. That is a real issue, since RDM is a long-term task.

    This text has been translated from German.


    The post Research Data Management Project bw2FDM: Best Practice for Consultations and Training Seminars first appeared on ZBW MediaTalk.

    Pluralism vs. Monoculture in Scholarly Communication, Part 2

    Calls for a monoculture of scholarly communication keep multiplying. But wouldn’t a continued diversity of models be healthier?

    The post Pluralism vs. Monoculture in Scholarly Communication, Part 2 appeared first on The Scholarly Kitchen.

    Brainstorming Capacity Building for Citizen Science and Open Science in Research Libraries

    A co-creation session taking place online on 6 July 2021, 14:00-15:30, to look at creating a roadmap for capacity building of Citizen Science and Open Science in research libraries, not only for their staff but also for academic staff, researchers, and students. The session is organised by the INOS project, which aims to support Open Science and Citizen Science in higher education.
