“The purpose of this call for tenders is to carry out a study to map parameters, formats, standards, benchmarks, methodologies and guidelines relating to 3D digitisation of tangible cultural heritage to the different potential purposes or uses, i.e. preservation, reconstruction, reproduction, research, and general-purpose visualisation, by type of tangible cultural heritage, i.e. immovable or movable, and by degree of complexity of tangible cultural heritage, e.g. low, medium, high, and very high (reference VIGIE-2020-654)….”
“Apply for funding to explore ways to archive arts and humanities research data….
Your proposal could focus on one of the following:
large or complex 3D objects
‘born-digital’ material and complex digital objects
practice research, including performance and visual arts….”
“INASP believes there is an opportunity to leverage new technologies in service of Southern knowledge systems, and we seek partners to work with us to identify possibilities and to test and build new tools.
We are inviting proposals from Africa, Asia and Latin America for small grants of approximately $3,000 (£2,100) to enable groups to organise and host a series of discovery workshops to explore these ideas further….”
“The Academic Network on the Right to Research in International Copyright is calling for research relevant to the development of global norms on copyright policy in its application to research. Text and data mining research, for example, is contributing insights to respond to urgent social problems, from combatting COVID to monitoring hate speech and disinformation on social media. Other technologies make it possible to access the materials of libraries, archives and museums from afar – an especially necessary activity during the COVID pandemic. But these and other research activities may require reproduction and sharing of copyright protected works, including across borders. There is a lack of global norms for such activities, which may contribute to uncertainty and apprehension, inhibiting research projects and collaborations.
We seek to partner with researchers interested in exploring the means and ends of recognizing a “right to research” in international copyright law. In our initial conception, there are at least three overlapping dimensions to the concept:
The first dimension relates to the work of academic and other investigators, whose success depends on their ability to access and analyze information that may be subject to copyright protection, and to make their findings available.
The second dimension points toward the audience that learns from, applies, and further disseminates research findings. It sounds in the human right to “receive and impart information,” as well as the right to “benefit from” creativity and scientific progress.
The third dimension focuses on institutions. Researchers and consumers alike rely on institutions that can collect, preserve, and assure the results of research over time….”
“The European Science Foundation (ESF), on behalf of the cOAlition S members, is seeking to contract with a supplier to build a secure, authentication-managed web-based service which will enable:
academic publishers to upload data, in accord with one of the approved cOAlition S Price and Service Transparency Frameworks;
approved users to log in to this service and, for a given journal, determine what services are provided and at what price;
approved users to be able to select several journals and compare the services offered and prices charged by the different journals selected;
the Journal Checker Tool (JCT) – via an API call – to determine whether a journal has (or has not) provided data in line with one of the approved Price and Service Transparency Frameworks.
Given that some of the data that will be made accessible through this service is considered sensitive, it is imperative that suppliers can build a secure service such that data uploaded by a publisher, and intended by them for approved users only, cannot be accessed by any other publisher.
This service must be functional – in terms of allowing publishers to upload their “Framework Reports” – by 1 December 2021. The service must be accessible to all approved users – including the JCT – by 1 June 2022….”
“The archive’s catalog currently holds more than 120 million digital records, as well as “archival metadata and other types of records, including electronic databases.” However, the system has “an unsophisticated search” function, according to a request for information.
While NARA employees add metadata tags to digital records, “There is a delta between what NARA has been able to describe and the specific information that users want from our records,” the RFI states, asking, “Can AI fill the gap?”
During an informational day held in early April, NARA executives outlined some of the challenges, including a single search returning a flood of results from the same source—making it difficult to sift through to find multiple sources—and difficulty distinguishing between records with similar names, such as a search for “Truman” the president versus “Truman” the aircraft carrier.
The current search function is also unable to return accurate results if the search term does not exactly match the text of the metadata.
The RFI is seeking feedback on automated solutions that can analyze how users search the digital archives and associate those search terms with the appropriate record….”
“The [National Archives] Catalog currently has a large data set (over 100 million digital pages of records, plus archival metadata and other types of records, including electronic databases) and an unsophisticated search. The archival hierarchy of the records is intended to assist the user in discovery, but in the digital realm, users find it difficult to use. The metadata that we have entered manually cannot provide the granular information for users to get the search results they want, and it has taken NARA decades to produce. There is a delta between what NARA has been able to describe and the specific information that users want from our records. Can AI fill the gap?…”
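The exact-match limitation NARA describes is one that even lightweight approximate string matching can narrow. A minimal sketch using only Python's standard library, with invented record titles for illustration (these are not real NARA catalog entries):

```python
from difflib import SequenceMatcher

# Invented catalog titles for illustration only; not real NARA records.
titles = [
    "Harry S. Truman Presidential Papers",
    "USS Harry S. Truman (CVN-75) Deck Logs",
    "Truman Doctrine Address, 1947",
    "Eisenhower Administration Correspondence",
]

def fuzzy_score(query, title):
    """Best similarity between the query and any single word of the title,
    so a near-miss like 'Trumann' still scores highly against 'Truman'."""
    return max(SequenceMatcher(None, query.lower(), word.lower()).ratio()
               for word in title.split())

def fuzzy_search(query, titles, cutoff=0.8):
    """Return titles whose best word-level similarity meets the cutoff,
    most similar first."""
    scored = [(fuzzy_score(query, t), t) for t in titles]
    return [t for score, t in sorted(scored, reverse=True) if score >= cutoff]

# An exact-match search for the misspelled 'Trumann' would return nothing;
# fuzzy matching still surfaces the Truman-related records.
print(fuzzy_search("Trumann", titles))
```

A production catalog would pair this kind of fuzzy scoring with ranking and entity disambiguation (separating Truman the president from Truman the carrier), which is the harder problem the RFI raises.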
“The Activating Smithsonian Open Access Challenge (ASOA) from Cooper Hewitt’s Interaction Lab aims to support creative technology teams in designing engaging interactive experiences with Smithsonian Open Access collections for people all over the globe. Made possible by Verizon 5G Labs, this open call for proposals seeks to stimulate new ideas for inspiring digital interactions with over 3 million 2D and 3D objects in the Smithsonian’s Open Access collections, all available under a Creative Commons Zero (CC0) license for download, re-use, alteration, and even commercialization.
From these proposals, up to six finalists will receive $10,000 to develop their ideas into functioning prototypes to be presented and used by the public. A significant goal of the program is to identify compelling projects that the Interaction Lab might explore for wider use in the future. Creators will own all intellectual property they create in ASOA, subject to the Smithsonian’s license as set forth in the ASOA Participation Rules and Guidelines….”
“This call for proposals is for post-secondary instructors and faculty in British Columbia to develop sets of content types in H5P that support open textbooks in the B.C. Open Textbook Collection (see below for a list of suggested books). The intent of these grants is to develop activities in which students can practice applying new concepts and skills that align with content within the selected open textbook.
Eligible grantees include individual instructors, groups of faculty or instructors, departments, institutions, or external working groups made up of instructors and faculty connected to B.C. post-secondary institutions (e.g., articulation groups or other working groups). Collaboration between individuals and departments at different institutions is not only allowed, but encouraged.
The maximum value of each grant is $10,000….”
“NSF seeks to establish a Center fueled by open and freely available biological and other environmental data to catalyze novel scientific questions in environmental biology through the use of data-intensive approaches, team science and research networks, and training in the accession, management, analysis, visualization, and synthesis of large data sets. The Center will provide vision for speeding discovery through the increased use of large, publicly accessible datasets to address biological research questions through collaborations with scientists in other related disciplines. The Center will be an exemplar in open science and team science, fostering development of generalizable cyberinfrastructure solutions and community-driven standards for software, data, and metadata that support open and team science, and role-modeling best practices. Open biological and other environmental data are produced by NSF investments in research and infrastructure such as the National Ecological Observatory Network (NEON), the Ocean Observatories Initiative (OOI), the Long-Term Ecological Research (LTER) network, National Center for Atmospheric Research (NCAR), Critical Zone Observatories (CZOs), Integrated Digitized Biocollections (iDigBio), and the Global Biodiversity Information Facility (GBIF), as well as by many other public and private initiatives in the U.S. and worldwide. These efforts afford opportunities for collaborative investigation into, and predictive understanding of life on Earth to a far greater degree than ever before. The Center will help develop the teams, concepts, resources, and expertise to enable inclusive, effective, and coordinated efforts to answer the broad scientific questions for which these open data were designed, as well as key questions that emerge at interfaces between biology, informatics, and a breadth of environmental sciences. 
It will engage scientists diverse in their demography, disciplinary expertise, and geography, and in the institutions that they represent in collaborative, cross-disciplinary, and synthetic studies. It is expected that this new Center will build on decades of experience from NSF’s prior investments in other synthesis centers, while providing visionary leadership and advancement for data-intensive team science in a highly connected and increasingly virtual world. It will serve as an incubator for team-based, data-driven, and open research that includes cyberinfrastructure, tools, services, and application development and innovative and inclusive training programs. The Center is also expected to spur collaborative interactions among the facilities and initiatives that produce open biological and other environmental data, and cyberinfrastructure efforts that support the curation and use of those data, such as Biological and Chemical Oceanography Data Management Office (BCO-DMO), CyVerse, Environmental Data Initiative (EDI), DataOne, EarthCube, and Cyberinfrastructure (CI) Centers for Excellence, to address compelling research questions and to enable training and data product and tool development. The new Center will further enable data-driven discovery through immersive education and training experiences to provide the advanced skills needed to maximize the scientific potential of large volumes of available open data.”
“This funding supports researchers to develop and test incentives for making health research more open, accessible and reusable….”
“Now, thanks to support from the Andrew W. Mellon Foundation, Smarthistory is able to offer thirty additional $1,000 honoraria to emerging scholars who have suffered financial hardship due to the pandemic.
These honoraria are available for the successful publication, on Smarthistory, of a short, accessible essay of general interest and in the scholar’s area of specialization (the topic will be determined in consultation with the editors at Smarthistory). The opportunity is open to active Ph.D. students who are ABD, as well as those who have earned a Ph.D. in art history within the past two years. Smarthistory essays are aimed at non-specialist, undergraduate learners….
Authors will retain intellectual property rights to their work and will grant the right for Smarthistory to publish the resulting essay with a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License across all of its channels including Smarthistory.org and Khanacademy.org. Essays must be submitted before March 1, 2020. The acceptance of essays and the awarding of honoraria will be at the sole discretion of Smarthistory….”
“We are thrilled to announce a Rapid Response Fund in support of the JROST community and open technology and infrastructure projects. Awards will be given in amounts of $5,000 and $10,000 with the possibility of other gift amounts at the discretion of the program committee….”
“The HathiTrust Research Center (HTRC) requests proposals for a special funded round of its Advanced Collaborative Support (ACS) program, with support from the Andrew W. Mellon Foundation for HTRC’s “Scholar-Curated Worksets for Analysis, Reuse & Dissemination (SCWAReD)” project.
ACS is a scholarly service offering collaboration between researchers and HTRC staff to solve challenging problems related to computational analysis of the HathiTrust corpus. In this special cycle of ACS, we seek to collaborate with scholars to recover volumes in HathiTrust that tell the story of historically under-resourced and marginalized textual communities, and to identify gaps in the HathiTrust collection where such communities are not represented in the digital library. …”
“Today, in partnership with the Open Data Institute (ODI), we are delighted to announce an open call for participation in a new Peer Learning Network for Data Collaborations. Peer learning networks are an important tool to foster the exchange of knowledge and help participants learn from one another so they can more effectively address the challenges they face.
In April, with the launch of Microsoft’s Open Data Campaign, we committed to putting open and shared data into practice by addressing specific challenges through data collaborations. For a data collaboration to achieve its goals, there are many factors that must come together successfully. Oftentimes, this process can be incredibly challenging. From aligning on key outcomes and data use agreements to preparing datasets for use and analysis, these considerations require time and extensive coordination….
Awardees will have the opportunity to:
receive up to £20,000 for their time over the six months of the peer learning network
learn about and receive guidance from the ODI and Microsoft on different technical approaches, governance mechanisms, and other means for managing data collaborations
connect with peers also working on these challenges
For the purpose of the Peer Learning Network, data collaborations are defined as:
involving a collaboration of companies, research institutions, non-profits, and/or government entities
addressing a clear societal or business-related challenge
working to make their data as open as possible in the context of the collaboration (collaborations working with restrictions related to privacy or commercial sensitivity are encouraged to apply)
ultimately demonstrating increased access to, and/or meaningful use of, data in reaching the specific goal …”