WorldFAIR Project (D13.1) Cultural Heritage Mapping Report: Practices and policies supporting Cultural Heritage image sharing platforms | Zenodo

Abstract:  Deliverable 13.1 for the WorldFAIR Project’s Cultural Heritage Work Package (WP13) outlines current practices guiding online digital image sharing by institutions charged with providing care and access to cultural memory, in order to identify how these practices may be adapted to promote and support the FAIR Principles for data sharing.

The report has been compiled by the Digital Repository of Ireland as a key information resource for developing the recommendations forthcoming in Deliverable 13.2. The DRI is Ireland’s national repository for the arts, humanities and social sciences. A Working Group of cultural heritage professionals has been invited to contribute feedback.

There are well-established standards and traditions driving the various approaches to image sharing in the sector, both local and global, which influence everything from the creation of digital image files, their intellectual organisation and level of description, to statements of rights governing use. Additionally, there are technological supports and infrastructures that have emerged to facilitate these practices which have significant investment and robust community support. These practices and technologies serve the existing communities of users well, primarily the needs of government, business and higher education, as well as the broader general public. Recommendations for adapting established collections delivery mechanisms to facilitate the use of cultural heritage images as research data would ideally not supersede or duplicate processes that also serve these other communities of users, and any solutions proposed in the context of the WorldFAIR Project must be made in respect of these wider contexts for image sharing.

New from WorldFAIR! Cultural Heritage Mapping Report: ‘Practices and policies supporting Cultural Heritage image sharing platforms’ – out now – CODATA, The Committee on Data for Science and Technology

“New WorldFAIR Project Deliverable 13.1 ‘Cultural Heritage Mapping Report: Practices and Policies supporting Cultural Heritage image sharing platforms’ outlines current practices guiding online digital image sharing by institutions charged with providing care and access to cultural memory, in order to identify how these practices may be adapted to promote and support the FAIR principles for data sharing.

This report looks closely at the policies and best practices endorsed by a range of professional bodies and institutions representative of Galleries, Libraries, Archives and Museums (the ‘GLAMs’) which facilitate the acquisition and delivery, discovery, description, digitisation standards and preservation of digital image collections. The second half of the report further highlights the technical mechanisms for aggregating and exchanging images that have already produced a high degree of image interoperability in the sector with a survey of six national and international image sharing platforms: DigitalNZ, Digital Public Library of America (DPLA), Europeana, Wikimedia Commons, Internet Archive and Flickr….”

Core Router Update | The OA Switchboard

“On the first working day of 2023, we shared our plans for the coming year. Building on the successes and lessons learned from 2022, we reconfirmed that our overarching focus will continue to be on:  authoritative data from source; interoperability of existing systems; and, connecting the dots of existing PIDs.

With this in mind, our first development iteration of 2023 involves a core router update, which is built on feedback from our participants.

Research institutions asked us to further develop the existing ‘auto-cc’ feature, which delivers alerts and metadata on publications from non-corresponding authors via a P1-PIO message (Public Information Only). What is now added, with today’s release, is the ability to also deliver these alerts and metadata in the case of non-primary affiliations. This means that if an author has more than one affiliation in the version of record, and the institution is not the first affiliation listed, that institution now also receives a copy of the P1-PIO message….”
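
The extended auto-cc rule can be sketched in a few lines. This is an illustrative sketch only: the field names (`authors`, `affiliations`, `is_corresponding`) and the in-memory representation are assumptions for illustration, not the actual OA Switchboard message schema.

```python
# Illustrative sketch (not the actual OA Switchboard schema): deciding which
# institutions should receive a copy of a P1-PIO alert for a publication.

def pio_recipients(publication):
    """Return institutions owed a P1-PIO copy under the extended auto-cc rule."""
    recipients = set()
    for author in publication["authors"]:
        for position, institution in enumerate(author["affiliations"]):
            # Originally only non-corresponding authors triggered an auto-cc;
            # the update also covers non-primary (position > 0) affiliations.
            if not author["is_corresponding"] or position > 0:
                recipients.add(institution)
    return sorted(recipients)

pub = {
    "authors": [
        {"is_corresponding": True,  "affiliations": ["Univ A", "Univ B"]},
        {"is_corresponding": False, "affiliations": ["Univ C"]},
    ],
}
print(pio_recipients(pub))  # ['Univ B', 'Univ C']
```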

PLOS Adopts CCC Ringgold Identify Database as its PID Solution – The Official PLOS Blog

“CCC, a leader in advancing copyright, accelerating knowledge, and powering innovation, today announced The Public Library of Science (PLOS) has adopted the industry-leading Ringgold Identify Database as its Persistent Identifier (PID) solution to streamline organizational data, helping power its Open Access (OA) publishing process with reliability and inclusivity.

A critical aspect leading to the decision was the precision with which PLOS could match accepted papers to institutional funding under its Community Action Publishing (CAP) program….

With over 600,000 Ringgold PIDs and metadata records, Ringgold Identify Database provides a curated view of organization data to help stakeholders improve data quality, drive strategic decision-making, and support data interoperability across the scholarly communications ecosystem. Used by intermediaries, funders, and a growing list of leading publishers, Ringgold Identify Database is the only solution to offer structured organizational hierarchies and consortia connections to help stakeholders quickly understand complex relationships. The database also includes rich metadata and additional identifiers, including the ISNI ID, an ISO Standard open ID to support wider interoperability….”

Data Sharing Across Sectors Creates Better Early Warning Systems – data.org

“The public sector’s existing early warning systems for infectious disease and climate events are commonly disconnected; there are limited mechanisms in place that relate the two. In other words, there is a lack of data that helps us understand and predict the impacts of extreme weather events and environmental changes on disease risk.

Attempting to find and connect climate and health data proves next to impossible with the current infrastructure in developing countries. For instance, when faced with an outbreak of dengue fever in Peru, the health minister has data on only health and demographics. If you wanted to combine that with climate data you would need to ask the minister of the environment. Want to relate economic data? Ask the minister of the economy and finance….

The Harmonize Project seeks to build a digital infrastructure of harmonized databases to feed early warning systems for epidemics exacerbated by climate change in the LAC region.

In collaboration with the Barcelona Supercomputing Center (BSC)—and a network in Brazil, Colombia, and the Dominican Republic—and supported by Wellcome, the project will bring together ministries, universities, private companies, social impact organizations, and more to create a complex data infrastructure and collect real longitudinal data on the ground. These new data sets will provide valuable information on seasonal variation in land use and human behavior given climate hazards, which are generally assumed to be unchanging in health impact models.

The outcome of such an infrastructure? Actionable knowledge to inform local risk mapping and create strong early warning systems to drive resilience in low-resource communities….”

Rethinking data and rebalancing digital power | Ada Lovelace Institute

“This report highlights and contextualises four cross-cutting interventions with a strong potential to reshape the digital ecosystem:

Transforming infrastructure into open and interoperable ecosystems.
Reclaiming control of data from dominant companies.
Rebalancing the centres of power with new (non-commercial) institutions.
Ensuring public participation as an essential component of technology policymaking….”

New from WorldFAIR: Cross-national Social Sciences survey FAIR implementation case studies report – CODATA, The Committee on Data for Science and Technology

“New from the WorldFAIR project (https://worldfair-project.eu/), this report provides an overview of the data harmonisation practices of comparative (cross-national) social surveys, through case studies of: (1) the European Social Survey (ESS) and (2) a satellite study, the Australian Social Survey International – European Social Survey (AUSSI-ESS).  To do this, we compare and contrast the practices between the Australian Data Archive and Sikt.no, the organisations responsible for the data management of ESS and AUSSI-ESS.

The case studies consider the current data management and harmonisation practices of study partners in the ESS, including an analysis of the current practices with FAIR data standards, particularly leveraging FAIR Information Profiles (FIPs) and FAIR Enabling Resources (FERs).

The comparative analysis of the two case studies considers key similarities and differences in the management of the two data collections. Core differences in the use of standards and accessible, persistent registry services are highlighted, as these impact on the potential for shared, integrated reuse of services and content between the two partner organisations.

The report concludes with a set of recommended practices for improved management and automation of ESS data going forward—setting the stage for Phase 2 of WorldFAIR Work Package 6—and outlines the proposed means for implementing this management in the two partner organisations.

These recommendations focus on three areas of shared interest:
• Aligning standards
• Establishing common tools
• Establishing and using registries
in order to advance implementation of the FAIR principles, and to improve interoperability and reusability of digital data in social sciences research….”

SciLake: A Scientific Lake for the Research Ecosystem

“SciLake is a Horizon Europe project that aims to introduce and establish the concept of the scientific lake, a research ecosystem where scientific knowledge is contextualised, connected, interoperable, and accessible, overcoming challenges related to the heterogeneity and large interconnectivity of the underlying data….”

An iterative and interdisciplinary categorisation process towards FAIRer digital resources for sensitive life-sciences data | Scientific Reports

Abstract:  For life science infrastructures, sensitive data generate an additional layer of complexity. Cross-domain categorisation and discovery of digital resources related to sensitive data present major interoperability challenges. To support this FAIRification process, a toolbox demonstrator has been developed, aimed at supporting discovery of digital objects related to sensitive data (e.g., regulations, guidelines, best practice, tools). The toolbox is based upon a categorisation system developed and harmonised across a cluster of 6 life science research infrastructures. Three different versions were built and tested in successive pilot studies, finally leading to a system with 7 main categories (sensitive data type, resource type, research field, data type, stage in data sharing life cycle, geographical scope, specific topics). The 109 resources tagged in pilot study 3 were used as the initial content for the toolbox demonstrator, a software tool that allows searching of digital objects linked to sensitive data, with filtering based upon the categorisation system. Important next steps are a broad evaluation of the usability and user-friendliness of the toolbox, extension to more resources, broader adoption by different life-science communities, and a long-term vision for maintenance and sustainability.

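As a rough illustration of how such tag-based filtering might work, the sketch below searches a toy catalogue against the paper's seven top-level categories. The resource entries and the `search` helper are invented for illustration; they do not reflect the toolbox's actual implementation.

```python
# Hypothetical sketch of tag-based filtering in the style of the toolbox
# demonstrator; the resource entries below are invented examples.

CATEGORIES = [
    "sensitive data type", "resource type", "research field", "data type",
    "stage in data sharing life cycle", "geographical scope", "specific topics",
]

resources = [
    {"name": "GDPR guidance note",
     "tags": {"resource type": "guideline", "geographical scope": "EU"}},
    {"name": "Pseudonymisation tool",
     "tags": {"resource type": "tool", "data type": "genomic"}},
]

def search(items, filters):
    """Return items whose tags match every requested category/value pair."""
    return [r for r in items
            if all(r["tags"].get(cat) == val for cat, val in filters.items())]

hits = search(resources, {"resource type": "tool"})
print([r["name"] for r in hits])  # ['Pseudonymisation tool']
```
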
Giving students everywhere up-close access to a world of art – Harvard Gazette

“Since its inception, the database of cultural heritage images available for free online with IIIF capability has continued to grow. In 2022, the IIIF community estimated that, across all their participating cultural heritage institutions, they’ve made available more than 1 billion items.

“With IIIF, we’re investing in the cultural heritage image community,” Snydman said. “Our goal is global, universal, as open as possible. It’s not just about Harvard’s images; it’s about enabling students and faculty to interact in the very same way with images at Oxford, the Library of Congress, or the Vatican that they do with images held at Harvard. The code word for this is interoperability.”

Of the 1 billion IIIF-compatible items, about 6 million are held in Harvard’s library collections. Everything from 500-year-old maps to modern photographs is viewable in high resolution by anyone with an internet connection. Emily Dickinson’s pencil strokes can be magnified and examined, and Persian manuscripts like the one studied by Kim’s class can be compared with illustrations from the same region and period held at the Library of Congress….

“The fact that IIIF has been able to become a universal standard, and that it’s all open-source — that has exciting implications for democratized learning,” said Snydman. “Students and scholars of all ages have the opportunity to learn with images — not just in a physical classroom or library, not just during certain hours, and not just on Harvard’s campus. This is a great example of how technology can be used to minimize inequalities in education and give open access to knowledge.” …”
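
The interoperability Snydman describes rests on the IIIF Image API's fixed URL pattern: any compliant server answers requests of the form `{identifier}/{region}/{size}/{rotation}/{quality}.{format}`. A minimal sketch of building such a request follows; the server and identifier are placeholders, not real endpoints.

```python
# The IIIF Image API defines a fixed URL pattern, which is what makes images
# from different institutions interchangeable to a viewer:
#   {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
# The base URL and identifier below are placeholders, not real endpoints.

def iiif_image_url(base, identifier, region="full", size="max",
                   rotation="0", quality="default", fmt="jpg"):
    """Build a IIIF Image API request URL from its five path parameters."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# e.g. request a zoomed detail: the top-left 1000x1000 pixel region
url = iiif_image_url("https://iiif.example.org/image", "manuscript-42",
                     region="0,0,1000,1000")
print(url)
# https://iiif.example.org/image/manuscript-42/0,0,1000,1000/max/0/default.jpg
```

The same pattern works against any IIIF-compliant server, which is why one viewer can magnify images held at Harvard, Oxford, or the Library of Congress alike.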

Linux Foundation Announces Overture Maps Foundation to Build Interoperable Open Map Data

“The Linux Foundation, a global nonprofit organization enabling innovation through open source, today announced the formation of the Overture Maps Foundation, a new collaborative effort to develop interoperable open map data as a shared asset that can strengthen mapping services worldwide. The initiative was founded by Amazon Web Services (AWS), Meta, Microsoft, and TomTom and is open to all communities with a common interest in building open map data.

Overture’s mission is to enable current and next-generation map products by creating reliable, easy-to-use, and interoperable open map data. This interoperable map is the basis for extensibility, enabling companies to contribute their own data. Members will combine resources to build map data that is complete, accurate, and refreshed as the physical world changes. Map data will be open and extensible by all under an open data license. This will drive innovation by enabling a network of communities that create services on top of Overture data….”

Linux, Amazon, Meta, and Microsoft want to break the Google Maps monopoly | Ars Technica

“Google Maps is getting some competition. The Linux Foundation has announced Overture Maps, a “new collaborative effort to develop interoperable open map data as a shared asset that can strengthen mapping services worldwide.” It’s an open source mapping effort that includes a list of heavy hitters: Amazon Web Services (AWS), Meta, Microsoft, and TomTom, with the foundation adding that the project is “open to all communities with a common interest in building open map data.”…

If you’re saying, “Wait! Isn’t there already an open source map community out there?” There is, and it’s called “OpenStreetMap,” the Wikipedia of maps that anyone can edit. The Overture press release says, “The project will seek to integrate with existing open map data from projects such as OpenStreetMap and city planning departments, along with new map data contributed by members and built using computer vision and AI/ML techniques to create a living digital record of the physical world.” …”