Making the Case for a PID-Optimized World

In the second of two posts on persistent identifiers in scholarly communications, Phill Jones and Alice Meadows share information about a new cost-benefit analysis showing the value of widespread PID adoption

The post Making the Case for a PID-Optimized World appeared first on The Scholarly Kitchen.

The Openness Profile of Knowledge Exchange: What can infrastructure providers do?

by Claudia Sittner

Knowledge Exchange (KE), a cooperative partnership of six national research-supporting organisations in Europe, has explored the development of an Openness Profile in an 18-month project on research evaluation for Open Science. In the report, the broader term “open scholarship” is used instead of Open Science. The final project report, “Openness Profile: Modelling research evaluation for open scholarship”, has recently been published. Over the course of the project, 80 people from 48 organisations at all levels of the “open scholarship ecosystem” were involved and surveyed.

In January 2020, the group published preliminary results on the concept of the Openness Profile. In the blog post “Openness Profile Interim Report: What Libraries Could Take Away”, we explored what libraries and infrastructure providers could learn from it.

Here, we briefly introduce the concept of the Openness Profile and look at which recommendations could be of interest to libraries and information infrastructures seeking to promote open research practices and their acknowledgement, thereby supporting the Open Science community.

Why a global Openness Profile is a good idea

The concluding report ultimately addresses a well-known problem of Open Science: open activities are often invisible and unacknowledged, and therefore play hardly any role in researchers’ career planning. The same applies to the activities of partly non-scientific staff that are important for Open Science but are not considered in the scientific evaluation system at all – for example, curating research data, developing infrastructures or running training on open practices. As a result, such qualified specialists tend to migrate from science to industry or commercial sectors, owing to the lack of recognition and incentives.

Science increasingly takes place at a global, interconnected level. For Open Science to ultimately gain acceptance, a comprehensive global reform of the scientific incentive system is required, in which more stakeholders and open activities play a (larger) role.

Making open activities and stakeholders visible: the Openness Profile

This is where the Openness Profile comes into play. The Openness Profile is a kind of portfolio that makes activities in the field of Open Science visible, thereby raising awareness among the scientific community and all participants of the current lack of recognition for Open Science activities and stakeholders in the scientific evaluation system. As a first step, the Openness Profile should build upon existing persistent identifiers (PIDs), initially ORCID (Open Researcher and Contributor ID). The advantage is that many researchers already have an ORCID iD anyway.

An ORCID record would then be supplemented by the Openness Profile, making visible open activities and further stakeholders – such as data stewards or project managers – who remain unacknowledged, and therefore unremunerated, for open activities in the current scientific system. This simultaneously creates an incentive for open activities. The Openness Profile is therefore not only useful for individuals, who would need to maintain it themselves – it can also be taken up by funders who have grants to award or institutes that have vacancies to fill.

Open activities can be recorded and linked in a structured way in the Openness Profile using existing identifiers such as DOIs, organisation IDs or grant IDs, but manual entries with URLs and descriptive text are also possible. The Openness Profile is thus intended to become the central hub for collecting and linking Open Science activities and results.
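To make the idea of PID-linked entries concrete, here is a minimal sketch of what one Openness Profile entry might look like as structured data. The field names and the sample DOI are illustrative assumptions – the report does not publish a schema – but the kinds of links (ORCID iD, DOI, organisation ID, grant ID, plain URL) follow the paragraph above.

```python
# Hypothetical sketch of a single Openness Profile entry linking an open
# activity to existing persistent identifiers. Field names are assumptions
# for illustration, not a published schema.

from dataclasses import dataclass
from typing import Optional

@dataclass
class OpennessProfileEntry:
    contributor_orcid: str          # ORCID iD of the person credited
    activity_type: str              # e.g. "data curation", "training"
    description: str                # free-text description of the activity
    doi: Optional[str] = None       # DOI of a linked output, if any
    org_id: Optional[str] = None    # identifier of the host organisation
    grant_id: Optional[str] = None  # identifier of the funding grant
    url: Optional[str] = None       # manual entry: a plain URL

entry = OpennessProfileEntry(
    contributor_orcid="0000-0002-1825-0097",  # ORCID's documented example iD
    activity_type="data curation",
    description="Curated and documented a survey dataset for reuse",
    doi="10.1234/example-dataset",            # hypothetical DOI
)
```

An activity without its own DOI would simply leave that field empty and fall back to the manual URL-plus-description route the report mentions.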

General recommendations on realising the Openness Profile

At the end of the report, KE provides recommendations for joint activities that are required to actually implement the Openness Profile, for four different groups of stakeholders:

  1. Research funders,
  2. national research organisations,
  3. institutes and
  4. infrastructure providers.

Below we take a closer look at the general recommendations as well as those for the infrastructure providers.

The general recommendations are:

  1. All pull in the same direction: Diverse stakeholders are involved at all levels of the scientific system. They often pursue their own goals and interests. In order to implement the Openness Profile, it is often necessary to subordinate individual interests to the common goal. All those involved have to declare their willingness to do this. The aim is to make open projects interoperable and sustainable, leading to increased transparency, reproducibility and ultimately, a higher research quality.
  2. Bring all participants together (stakeholder summit): To keep an eye on the interests and experiences of all involved, KE suggests a summit of all stakeholders for the purpose of productive exchange and collaboration. ‘All participants’ refers to, for example: science policy-makers, institute managements, technologists, providers of research information systems, researchers at all career levels and infrastructure experts.
  3. Establish a permanent working group: This working group (WG) should be made up of all stakeholders and deal with five topic areas:
    1. community governance model,
    2. validation of the OP reference model,
    3. taxonomy for contributors and contributions,
    4. technical facilitation of research management workflows,
    5. infrastructures survey and gap analysis.

    The integration of persistent identifiers and the interoperability of systems through the use of APIs are emphasised for the technical implementation. In terms of the analysis of the infrastructure landscape, KE finds that much is already in place that could support the Openness Profile. It would be a good idea if employees from libraries or other infrastructure providers became part of this permanent working group.

  4. Finding sponsors: To implement the Openness Profile, it is necessary to find one or more sponsors who can guarantee long-term financing and thereby the sustainability of the project. In addition to the financial support, these would have a variety of tasks such as the development of software to connect information systems using PID metadata or the coordination of training programmes for Open Science communities. This role would certainly be well suited to infrastructure providers, who could integrate persistent identifiers into their systems themselves (in-house Open Access repositories, for example) or expand and share their often already existing training programmes.

Recommendations for infrastructure providers

KE sees the role of infrastructure providers in relation to the Openness Profile primarily in increasing and ensuring interoperability between research systems, which can be achieved through persistent identifiers. This would also make those systems more sustainable and support the further development of the Openness Profile. In the most recent Jisc report on persistent identifiers (PIDs), five major players were identified: ORCID, Crossref, DataCite, ARDC (RAiD) and ROR. Libraries and infrastructure providers could therefore focus on ensuring the interoperability of their existing systems through PIDs.

Furthermore, the following recommendations are made expressly for infrastructure providers in the concluding report:

  • They should assume an active role in the development of research infrastructure and corresponding workflows, while closely collaborating with other stakeholders on a national level – such as research organisations, publishers or funders.
  • As the Openness Profile is to be integrated via ORCID, promoting ORCID’s use must become a sharper focus. To encourage the use of ORCID records and application programming interfaces (APIs), it is recommended that they be more closely integrated into institutional research information and funding systems, and that capacities be increased where necessary.
  • Another recommendation is to review governance structures to ensure that they are genuinely responsive to community needs rather than to individual interests.
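As a concrete starting point for the ORCID integration recommended above, an institutional system can read public ORCID records through ORCID’s public REST API. The sketch below only builds the request (endpoint shape per the v3.0 public API; check ORCID’s documentation for the current version before relying on it) rather than performing a network call.

```python
# Sketch: building a request against ORCID's public API (pub.orcid.org)
# to read a researcher's public record. The v3.0 endpoint shape is shown;
# verify against ORCID's API documentation before production use.

def orcid_record_request(orcid_id: str) -> tuple[str, dict]:
    """Return the URL and headers for fetching a public ORCID record."""
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/record"
    headers = {"Accept": "application/json"}  # request JSON instead of XML
    return url, headers

url, headers = orcid_record_request("0000-0002-1825-0097")  # ORCID's example iD
```

A research information system would issue this GET request and then map the works, funding and peer-review sections of the response onto its own records.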

The report also proposes expanding and intensifying collaborations between national research organisations and infrastructure providers, thereby driving Open Science forward.

Conclusion: Openness Profile and libraries – will it be a match?

The Openness Profile is an ambitious project to make Open Science and all its participating stakeholders visible. A far-reaching reform of the monoculturally oriented scientific incentive system is long overdue. Whether the Openness Profile will actually be realised depends heavily on whether there are enough sponsors among the stakeholders who are willing to invest in the project – both financially and in terms of personnel.

Libraries and infrastructure providers would be important stakeholders here owing to their expertise; and their own (open) activities and contributions could also be better captured and recognised through inclusion in an Openness Profile. They should also ensure that they are represented at the stakeholder summit and send committed Open Science enthusiasts to the working group to be established in the long term – so that their interests are represented and their comprehensive know-how can be used. On a practical level, they can already ensure the integration of persistent identifiers into their systems, thereby making them interoperable and sustainable.


This text has been translated from German.

The post The Openness Profile of Knowledge Exchange: What can infrastructure providers do? first appeared on ZBW MediaTalk.

Online Platforms for Recruiting and Motivating Reviewers

Authors and publishers have easily understandable motivations for participating in scholarly publishing, but the motivation of reviewers is less clear. This post highlights the need to recognize and reward reviewers, and describes how online platforms can help achieve this objective while also serving as a source for recruiting reviewers and recording review activity. A description and comparison of the main online platforms available today is also provided.

The academic publishing process is driven by four main actors: authors, editors, publishers and reviewers, each of whom plays a vital role in ensuring that high standards are maintained throughout the process of preparing an article, reviewing it and finally publishing it. Each actor needs some motivation that drives their participation and the quality of their contribution to the publishing process. I would like to summarize what I think are the main motivations of each party in the review process. Authors are driven by their wish to make the results of their investigations public. Besides that, the production of high-quality scientific content is a highly valued merit in academia and research: researchers whose curricula vitae boast a long list of high-quality publications are well respected and have easier access to funding.
When it comes to editors, becoming a member of the editorial board of a scientific journal is in itself considered a merit. Editors normally serve in an “altruistic” mode, without expecting financial reward; they view being an editor as a means of giving back to the scientific and academic community. However, some editors are perhaps not as altruistic as one may think, since they also gain recognition from the role, which enhances their reputations and therefore their access to funding. In addition, it is noteworthy that some publishers do provide some sort of compensation to editors for their work, which can be an additional motivation.
Scientific publishers mainly rely on two models of publication: 1. the traditional model, in which the full text of articles is accessible only to subscribers (individual or institutional); and 2. the open access model, in which publishers charge authors a fee for publishing articles whose full text is available to all readers. In one way or another, major publishers manage to generate large amounts of revenue from the publishing process; the scientific publishing industry alone generates billions of dollars every year (1-4). Besides this, there is also a large group of non-profit and association/institutional publishers who make very little (if any) financial gain from their journals, but publish them as part of their mission to serve members and academia. The motivation of this last type of non-profit journal is thus radically different from that of publishers working as traditional for-profit companies.
While the motivation for three of the four actors in the publishing process can be clearly identified, the reason why reviewers participate is not so clear. There is no “material” reward for reviewers; rather, it is scientific altruism, or commitment to the scientific model, that motivates them. Reviewers are encouraged by the belief that they play an important role in ensuring that good-quality research reaches the community, and the fact that reviewers are themselves authors makes them more aware of the importance of good reviewing. In recent decades the numbers of scientific journals and published articles have multiplied, with a growth rate of approximately 3%-10% per year depending on the research area (5-8), resulting in a true “explosion” of manuscripts submitted to publishers. As journals receive more and more manuscripts and the number of journals continues to grow, reviewers become saturated with requests and invitations. It is therefore easy to understand “reviewer fatigue”, although many other factors may influence a reviewer’s decision to decline invitations to review manuscripts (9). As a consequence, editors often cannot find appropriate reviewers, the various phases of the review process are delayed, and authors often have to wait months until their manuscripts get reviewed.
Getting more reviewers, and making them more committed to providing good review reports on time, is the main reason why it is necessary to increase reviewers’ motivation. Indeed, it seems only fair to reward reviewers for their work in a sector that generates significant revenue, and several voices worldwide insist on this need again and again (10-15). Some journals and publishers are experimenting with direct payment of reviewers, although this remains an exception; several arguments can be made against direct monetary compensation, in particular that paying reviewers would break the independence between editors/publishers and reviewers, which is one of the pillars of the academic publishing process. Most publishers acknowledge reviewers in front-matter summary pages, in lists of reviewers, or in letters upon request. Others, such as Frontiers, make the names of the reviewers (and of the editor in charge) public in a footnote of every published article. Others still, such as Elsevier, are launching their own recognition platforms that provide reviewers with a personalized profile page where their reviewing history is documented and certificates can be downloaded; authors and editors can also evaluate the quality of the reviews, providing feedback that may improve the review process. Nature, for example, recognizes reviewers with payment in kind: reviewers receive free journal access, tools and services, or vouchers for research supplies (16).
In recent years, independent communities have developed online platforms offering review services for the scientific community. These platforms demonstrate that it is possible to create an independent system in which reviewers get recognition and reward for the effort they put into ensuring that quality research reaches the scientific community. One of their main features is that they are “third-party companies”, independent of publishers. This reduces bias, since editors and publishers cannot influence reviewers: even when they have a role in the workflow, the platforms are designed to prevent direct communication among the different actors.
Basically, these platforms provide authors and publishers with appropriate reviews, and give reviewers an extra motivation that makes them more willing to review manuscripts and to complete the task in shorter periods (10, 11). They reward reviewers using two major strategies: 1. credit, through certificates or other elements that reviewers can add to their curricula vitae; and 2. other benefits, such as monetary rewards or the right to have their own manuscripts reviewed.
In this update, we describe and compare the main features of five of these platforms: Rubriq, Peereviewers, Publons, Peerage of Science and Academic Karma (Table 1).

| Platform | Service/s | Review protocol | Fee (valid in 2015) | Acknowledgment to reviewers |
|---|---|---|---|---|
| Rubriq | Clients choose: review of contents + statistics, or review of contents + suggestion of suitable journals | Closed: all manuscripts go through the same protocol (Scorecard) | Several options depending on the services, from $500 to $650 (3 reviewers included) | Monetary ($100) |
| Peereviewers | Database of reviewers | Open: clients can customize the review protocol | $100 per reviewer | Monetary ($50), certificate |
| Publons | Record of reviewers, journals and reviews | – | – | Online record |
| Peerage of Science | Reviews and publishing offers | Open (Peerage Essay) | – | Online record, ability to submit own articles for review |
| Academic Karma | Exchange of services | Open: clients can customize the review protocol | – | Online record, ability to submit own articles for review |

Table 1. Comparison between third-party platforms offering reviewer services

To start with, we compare Rubriq (17) and Peereviewers (18). Both operate similarly, but some points distinguish them (Table 1). In both cases, reviewers must register on the platform (registration is restricted to academics and researchers with a given expertise) and declare their expert profile, so that they can be invited to review manuscripts that match it. Selected reviewers receive an email containing a summary of the manuscript and instructions on how to complete the process. If the reviewer agrees, he or she gets access to the full text and the review form. When the review is finished, a report is sent to the client and the reviewer is rewarded. The identity of the reviewer is “anonymised” towards the clients.
Another platform offering rewards to reviewers is Publons (19). Publons has a different objective: it offers no service to authors or publishers, but keeps a record of reviewers, journals and reviews. It maintains a list of journals and creates an account for each reviewer. All reviews conducted by a reviewer are listed in the reviewer’s account after being verified, next to the title of the journal to which each review belongs. Reviewers can claim the reviews they made in several ways, including online forms or email. These data generate statistics that place each reviewer in the corresponding activity percentile compared with all registered reviewers. The profile of each reviewer is public, so reviewers can use the website to provide evidence of their activity.
Peerage of Science offers a tripartite system in which authors, reviewers and editors each have a role (20) (Table 1). Authors submit manuscripts to Peerage of Science before submitting to any journal. Once a manuscript is submitted, any qualified peer reviewer can choose to review it. The peer review process is available concurrently to all participating editors, with automated event tracking. If authors receive publishing offers from editors, they may accept one of them, or accept none and use their reviews in non-participating journals. A positive aspect of Peerage of Science is that the peer reviewers are themselves peer reviewed: reviewers are notified that they can evaluate the reviews sent by other reviewers, an extra twist that contributes to increasing the quality of peer review. From the reviewer’s point of view, Peerage of Science offers credit for curricular purposes only, as an externally verifiable measure of the reviewer’s expertise in their scientific field.
An innovative approach comes from Academic Karma (21). Academic Karma is both the name of a currency and a platform for peer review. Instead of exchanging money, authors and reviewers exchange karma: reviewers earn 50 karma per reviewed manuscript, and the authors of the manuscript collectively spend 50 karma per reviewer (Table 1). Reviewers may then use their karma to pay for reviews when they author manuscripts themselves. Editors are also involved, since they receive the reviewer’s report at the same time as the authors.
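The karma accounting just described can be sketched as a simple ledger. The 50-karma figures follow the description above; the class itself is an illustration of the bookkeeping, not Academic Karma’s actual implementation, and the even split among co-authors is an assumption.

```python
# Toy ledger for the karma model described above: the reviewer earns 50
# karma per reviewed manuscript, and the manuscript's authors collectively
# spend 50 karma per reviewer (split evenly here, as an assumption).

KARMA_PER_REVIEW = 50

class KarmaLedger:
    def __init__(self):
        self.balances: dict[str, int] = {}

    def review_completed(self, reviewer: str, authors: list[str]) -> None:
        """Credit the reviewer and debit the authors for one review."""
        self.balances[reviewer] = self.balances.get(reviewer, 0) + KARMA_PER_REVIEW
        share, remainder = divmod(KARMA_PER_REVIEW, len(authors))
        for i, author in enumerate(authors):
            cost = share + (1 if i < remainder else 0)  # spread any leftover
            self.balances[author] = self.balances.get(author, 0) - cost

ledger = KarmaLedger()
ledger.review_completed("reviewer_a", ["author_x", "author_y"])
```

Negative balances are allowed in this sketch; a real system would decide whether authors must earn karma (by reviewing) before spending it.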
An important point is how reviewers’ identities and their expertise are verified, and how the attribution of merits can be recorded and tracked. The Working Group on Peer Review Service (created to develop a data model and citation standard for peer review activity that can support both existing and new review models) stresses the need for standardized citation structures for reviews, which would enable the inclusion of peer review activity in personal recognition and evaluation, as well as the ability to refer to reviews as part of the scholarly literature (6). In this regard, all the platforms described here are using, or starting to use, ORCID identifiers for both authors and reviewers, and DOIs as identifiers for published reviews (22). ORCID itself also offers the option of adding reviews to ORCID profiles: researchers with a profile on these platforms can link it to their ORCID iD so that the reviews they have recorded are added to their ORCID page (23). In turn, these identifiers will ease future research on peer review and will probably allow us to measure the impact of these platforms on the academic publishing process.
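Since the platforms above hinge on ORCID iDs, a cheap first step in verifying one is its built-in check digit: ORCID iDs carry an ISO 7064 MOD 11-2 checksum, where the final character validates the preceding 15 digits ('X' stands for the value 10). A minimal validator:

```python
# ISO 7064 MOD 11-2 check-digit validation for ORCID iDs. This only
# confirms the iD is well-formed, not that it belongs to a real person.

def orcid_checksum_ok(orcid_id: str) -> bool:
    digits = orcid_id.replace("-", "")
    if len(digits) != 16:
        return False
    total = 0
    for ch in digits[:-1]:
        if not ch.isdigit():
            return False
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    expected = "X" if result == 10 else str(result)
    return digits[-1] == expected

# ORCID's documented example iD, 0000-0002-1825-0097, passes this check;
# corrupting any digit makes it fail.
```

Full verification would follow this with a lookup against ORCID’s public API.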
In conclusion, motivating and rewarding reviewers is a need that can be addressed both by publishers and by third-party organizations. Online platforms are good tools for giving credit to reviewers and conveying monetary rewards, while also offering a way of recording review activity.

References and Notes
1. The Wellcome Trust (2003) Economic analysis of scientific research publishing: A report commissioned by the Wellcome Trust, revised ed. http://www.wellcome.ac.uk/stellent/groups/corporatesite/@policy_communications/documents/web_document/wtd003182.pdf Accessed 10th July 2015.
2. The Wellcome Trust. Costs and business models in scientific research publishing: A report commissioned by the Wellcome Trust. http://www.wellcome.ac.uk/stellent/groups/corporatesite/@policy_communications/documents/web_document/wtd003184.pdf
3. The National Academies (US) Committee on Electronic Scientific, Technical, and Medical Journal Publishing. Electronic Scientific, Technical, and Medical Journal Publishing and Its Implications: Report of a Symposium. http://www.ncbi.nlm.nih.gov/books/NBK215820/
4. Ware, Mark and Mabe, Michael (2015) An overview of scientific and scholarly journal publishing. International Association of Scientific, Technical and Medical Publishers. http://www.stm-assoc.org/2015_02_20_STM_Report_2015.pdf Accessed 20th October 2015.
5. Walker R, Rocha da Silva P. (2015) Emerging trends in peer review – a survey. Frontiers in Neuroscience 9:169.
6. Paglione LD, Lawrence RN. (2015) Data exchange standards to support and acknowledge peer-review activity. Learned Publishing 28(4):309-316.
7. Van Noorden, R. (2014) Global scientific output doubles every nine years. Nature.com NewsBlog, 7 May 2014. http://blogs.nature.com/news/2014/05/global-scientific-output-doubles-every-nine-years.html
8. The Wellcome Trust (2015) Scholarly Communication and Peer Review: The Current Landscape and Future Trends. http://www.wellcome.ac.uk/stellent/groups/corporatesite/%40policy_communications/documents/web_document/wtp059003.pdf Accessed 12 November 2015.
9. Breuning M, Backstrom J, Brannon J, Gross BI, Widmeier M. (2015) Reviewer Fatigue? Why Scholars Decline to Review their Peers’ Work. PS: Political Science & Politics 48(4):595-600. http://dx.doi.org/10.1017/S1049096515000827
10. Björk B, Hedlund T. (2015) Emerging new methods of peer review in scholarly journals. Learned Publishing 28(2):85-91.
11. Thomson Reuters (2010) Increasing the Quality and Timeliness of Scholarly Peer Review: A report for Scholarly Publishers. http://scholarone.com/media/pdf/peerreviewwhitepaper.pdf
12. Taylor & Francis (2015) Peer review in 2015: A global view. http://authorservices.taylorandfrancis.com/peer-review-in-2015/ Accessed 20th October 2015.
13. Meadows, Alice (2015, January 7th) Recognition for peer review and editing in Australia – and beyond? Blog post in Exchanges. http://exchanges.wiley.com/blog/2015/01/07/recognition-for-peer-review-and-editing-in-australia-and-beyond/ Accessed 20th October 2015.
14. Trounson, Andrew. Journals should credit editors, says ARC. The Australian. http://www.theaustralian.com.au/higher-education/journals-should-credit-editors-says-arc/story-e6frgcjx-1227201178857 Accessed 20th October 2015.
15. Alberts, B., Hanson, B., and Kelner, K.L. (2008) Reviewing peer review. Science 321(5885):15. http://dx.doi.org/10.1126/science.1162115
16. Review rewards. (2014) Nature 514(7522):274. http://dx.doi.org/10.1038/514274a
17. http://www.rubriq.com/
18. http://www.peereviewers.com/
19. http://www.publons.com
20. https://www.peerageofscience.org
21. http://academickarma.org/
22. Gasparyan AY, Akazhanov NA, Voronov AA, Kitas GD. (2014) Systematic and open identification of researchers and authors: focus on open researcher and contributor ID. J Korean Med Sci 29(11):1453-6. doi: 10.3346/jkms.2014.29.11.1453
   

Online Platforms for Recruiting and Motivating Reviewers

Authors and publishers have easily understandable motivations for participating in scholarly publishing, but there is less clear motivation for reviewers. This post highlights the need of recognizing and rewarding reviewers and describes how online platforms can ease achieving this objective at the time of being a source for recruiting reviewers and recording review activity. A description and a comparison of the main online platforms available today are also provided.

The academic publishing process is driven by four main actors: authors, editors, publishers and reviewers, each of whom play a vital role in ensuring that high standards are maintained throughout the process of preparing the article, reviewing it and finally publishing it. Each main actor needs to have some motivation that drives participation and the quality of their contribution to the publishing process. I would like to summarize what I think are the main motivations of each party in the review process. Authors are driven by their wish to make public the results of their investigations. Besides that, the production of high quality scientific content is a highly valued merit in academia and research. Researchers whose curricular vitae boast of a large list of high-quality publications are well respected and have easier access to funding.
When it comes to editors, becoming a member of editorial board of scientific journals is in itself considered to be a merit. Editors normally serve in an “altruistic” mode, without expecting financial reward. They view being an editor as a means by which they can give back to the scientific and academic community. However, some editors are perhaps not as altruistic as one may think since they also gain recognition from the role which enhances their reputations and therefore access to funding. In addition, it is noteworthy that some publishers do provide some sort of compensation to editors for their work, which can be an additional motivation.
Scientific publishers are mainly based on two models of publication: 1. The traditional model, in which access to the full text of the articles is only accessible to subscribers (individual or institutional) 2. Open access model, in which publishers charge authors a fee for publishing articles with the full text available to all readers. In one way or other, major publishers manage to generate large chunks of revenue from the publishing process. The scientific publishing industry alone generates billions of dollars every year (1-4). Besides this, there is also a large group of non-profit and association/institutional publishers who make very little (if any) financial gain from their journals, but publish them as part of their mission to serve members and academia. Thus, the motivation of this last type of non-profit journal is radically different of that of publishers working as traditional for-profit companies.
While the motivation for three of the four actors in the publishing process can be clearly identified, the reason why reviewers participate in the publishing process is not so clear. There is no “material” reward for reviewers. Rather, it is the scientific altruism or commitment to the scientific model that motivates them to work. Reviewers are encouraged by the belief that they play an important role in ensuring that good quality research work reaches the community. The fact that reviewers themselves are also authors makes them more aware of the importance of good reviewers. In recent decades the number of scientific journals and the number of published articles has multiplied with a growth rate of approximately 3%-10% per year depending on the research area  (5-8), resulting in a true “explosion” of manuscripts that are submitted to publishers. As journals receive more and more manuscripts and the number of journals continues to grow, reviewers get saturated with multiple requests and invitations.  Thus, it is easy to understand “reviewer fatigue”, although many other factors may influence the reviewer’s decision to decline invitations to review manuscripts   (9). As a consequence editors often cannot find appropriate reviewers for manuscripts and this may result in delayed times for the various phases of the review process, and authors often have to wait months until their manuscripts get reviewed.
Getting more reviewers and making them more committed with providing good review reports on time is the main reason why it is necessary to increase the motivation of the reviewers. And indeed it seems fair to reward authors for their work in a sector that generates significant benefits. Several voices insist on this need again and again worldwide (10-15). Some journals/publishers are experimenting with direct payment of reviewers, although this is an exception. Anyway several arguments can be made against direct monetary compensation, in particular because paying reviewers would break the independence between editors/publishers and reviewers, which is one of the pillars of the academic publishing process. Most publishers acknowledge reviewers in front-matter summary pages or lists of reviewers or in letters upon request. Some others, such as Frontiers, make public the names of reviewers (and the name of the editor in charge) of all published articles including the names of the reviewers in a footnote in every published article. Others, such as Elsevier, are launching their own recognition platforms providing their reviewers with a personalized profile page where their reviewing history is documented and where they can download certificates. Authors and editors can also evaluate the quality of reviews done, providing feedback that may result in better quality of the review process. Nature, for example, recognizes reviewers with payment in kind, where reviewers receive free journal access, tools and services or vouchers for research supplies (16).  
In recent years, independent communities have developed online platforms offering review services to the scientific community. These platforms demonstrate that it is possible to create an independent system in which reviewers get recognition and reward for the effort they put into ensuring that quality research reaches the scientific community. One of their main features is that they are "third-party companies" independent of publishers. This limits bias: editors and publishers cannot influence reviewers, even when they have a role in the workflow, because the platforms are designed to prevent direct communication among the different actors.
Essentially, these platforms provide authors and publishers with appropriate reviews, and give reviewers extra motivation to accept review requests and complete them in shorter periods (10, 11). They reward reviewers using two major strategies: (1) credit, through certificates or other elements that reviewers can add to their curriculum vitae, and (2) other benefits, such as monetary reward or the right to have their own manuscripts reviewed.
In this update, we report the global features of five of these platforms at the time of comparison: Rubriq, Peereviewers, Publons, Peerage of Science, and Academic Karma (Table 1).

| | Rubriq | Peereviewers | Publons | Peerage of Science | Academic Karma |
|---|---|---|---|---|---|
| Service/s | Clients choose: review of contents + statistics, or review of contents + suggestion of suitable journals | Database of reviewers | Record of reviewers, journals and reviews | Reviews and publishing offers | Exchange of services |
| Review protocol | Closed. All manuscripts go through the same protocol (Scorecard) | Open. Clients can customize the review protocol | — | Open (Peerage Essay) | Open. Clients can customize the review protocol |
| Fee (valid in 2015) | Several options depending on the services, from $500 to $650 (3 reviewers included) | $100 per reviewer | — | — | — |
| Type of acknowledgment to reviewers | Monetary ($100) | Monetary ($50), certificate | Online record | Online record, ability to submit own articles for review | Online record, ability to submit own articles for review |

Table 1. Comparison between third-party platforms offering reviewer services

To start with, we compare Rubriq (17) and Peereviewers (18). Both perform similarly, but some points distinguish them (Table 1). In both cases, reviewers must register on the platform (registration is restricted to academics and researchers with a given expertise) and declare their expert profile, so that they can be invited to review manuscripts that match it. Reviewers selected for a manuscript receive an email containing a summary of the manuscript and instructions on how to complete the process. If the reviewer agrees, he/she gets access to the full text and the review form. When the review is finished, a report is sent to the client and the reviewer is rewarded. The reviewer's identity is "anonymised" to the clients.
Another platform offering rewards to reviewers is Publons (19). Publons has a different objective: it does not offer any service to authors or publishers, but keeps a record of reviewers, journals and reviews. It maintains a list of journals and creates an account for each reviewer. All reviews conducted by a reviewer are listed in the reviewer's account after verification, next to the title of the journal to which each review belongs. Reviewers can claim the reviews they made in several ways, including online forms or email. These data generate statistics that place each reviewer in the corresponding activity percentile compared with all registered reviewers. The profile of each reviewer is public, so reviewers can use the website to provide evidence of their activity.
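The percentile statistic described above can be sketched as follows. This is a minimal illustration, not Publons' actual algorithm: it simply ranks one reviewer's verified review count against the counts of all registered reviewers.

```python
from bisect import bisect_left

def reviewer_percentile(all_counts, reviewer_count):
    """Place one reviewer's verified review count within the
    distribution of all registered reviewers (0-100 scale)."""
    ranked = sorted(all_counts)
    below = bisect_left(ranked, reviewer_count)  # reviewers with fewer reviews
    return 100 * below / len(ranked)

# Hypothetical verified review counts for five registered reviewers
counts = [2, 5, 8, 12, 30]
print(reviewer_percentile(counts, 12))  # 60.0 -- more reviews than 3 of 5 peers
```

The same idea extends naturally to other activity metrics (e.g. reviews per year, or reviews weighted by editor feedback).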
Peerage of Science offers a tripartite system in which authors, reviewers and editors all have a role (20) (Table 1). Authors submit manuscripts to Peerage of Science before submitting to any journal. Once a manuscript is submitted, any qualified peer reviewer can choose to review it. The peer-review process is available concurrently to all participating editors, with automated event tracking. If authors receive publishing offers from editors, they may accept one of them, or accept none and use their reviews in non-participating journals. A positive aspect of Peerage of Science is that peer reviewers are themselves peer reviewed: reviewers are notified that they can evaluate the reviews sent by other reviewers, an extra twist that contributes to increasing the quality of peer review. From the reviewer's point of view, Peerage of Science offers credit for curricular purposes only, as an externally verifiable measure of the reviewer's expertise in their scientific field.
An innovative approach comes from Academic Karma (21). Academic Karma is both the name of a currency and a platform for peer review. Instead of exchanging money, authors and reviewers exchange karma: reviewers earn 50 karma per reviewed manuscript, and the authors of the manuscript collectively spend 50 karma per reviewer (Table 1). Reviewers may then spend their karma to pay for reviews when authoring their own manuscripts. Editors are also involved, since they receive the reviewer's report at the same time as the authors.
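The karma accounting just described can be sketched in a few lines. This is an illustrative simplification, not Academic Karma's implementation: we assume balances start at zero, may go negative, and that the per-reviewer cost is split evenly among the authors.

```python
REVIEW_KARMA = 50  # karma earned per reviewed manuscript

def settle_review(balances, reviewers, authors):
    """Each reviewer earns 50 karma; the manuscript's authors
    collectively spend 50 karma per reviewer, split evenly."""
    for reviewer in reviewers:
        balances[reviewer] = balances.get(reviewer, 0) + REVIEW_KARMA
    cost_per_author = REVIEW_KARMA * len(reviewers) / len(authors)
    for author in authors:
        balances[author] = balances.get(author, 0) - cost_per_author
    return balances

balances = settle_review({}, reviewers=["r1", "r2"], authors=["a1", "a2"])
print(balances)  # {'r1': 50, 'r2': 50, 'a1': -50.0, 'a2': -50.0}
```

The symmetry of the ledger is the point: karma earned by reviewing is exactly the karma needed to have one's own work reviewed, so sustained authorship requires sustained reviewing.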
An important point is how reviewers' identities and expertise are verified, and how the attribution of merit can be recorded and tracked. The Working Group on Peer Review Service (created to develop a data model and citation standard for peer-review activity that can support both existing and new review models) stresses the need for standardized citation structures for reviews, which would enable the inclusion of peer-review activity in personal recognition and evaluation, as well as the ability to refer to reviews as part of the scholarly literature (6). In this regard, all the platforms described here are using, or starting to use, ORCID identifiers for both authors and reviewers, and DOIs as identifiers for published reviews (22). ORCID itself also offers the option of adding reviews to ORCID profiles: researchers with a profile on these platforms can link it to their ORCID iD so that the reviews they have recorded are added to their ORCID page (23). In turn, these identifiers will ease future research on peer review and will probably allow us to measure the impact of these platforms on the academic publishing process.
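To make the idea of a citable, identifier-backed review record concrete, here is a hypothetical sketch in the spirit of such a data model. The field names, the placeholder DOIs, and the journal name are illustrative assumptions, not the Working Group's actual standard; the ORCID iD is ORCID's published example iD.

```python
# Hypothetical peer-review activity record (illustrative fields only)
review_record = {
    "review_doi": "10.0000/example-review",   # placeholder DOI minted for the review
    "reviewer_orcid": "0000-0002-1825-0097",  # ORCID's documented example iD
    "journal": "Example Journal of Peer Review",
    "manuscript_doi": "10.0000/example-article",
    "completed": "2015-10-20",
    "verified": True,
}

def cite_review(rec):
    """Render a simple human-readable citation for a verified review."""
    assert rec["verified"], "only verified reviews are citable"
    return (f"Reviewer {rec['reviewer_orcid']}, review of "
            f"{rec['manuscript_doi']} for {rec['journal']} "
            f"({rec['completed']}). doi:{rec['review_doi']}")

print(cite_review(review_record))
```

Because both the reviewer (ORCID iD) and the review (DOI) carry persistent identifiers, such records can be aggregated across platforms, claimed on an ORCID profile, and cited like any other scholarly output.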
In conclusion, motivating and rewarding reviewers is a need that can be addressed both by publishers and by third-party organizations. Online platforms are good tools for giving credit to reviewers and for conveying monetary rewards, while at the same time offering a way of recording review activity.

References and Notes
1. The Wellcome Trust (2003) Economic analysis of scientific research publishing: A report commissioned by the Wellcome Trust, revised ed. http://www.wellcome.ac.uk/stellent/groups/corporatesite/@policy_communications/documents/web_document/wtd003182.pdf Accessed 10th July 2015.
2. Costs and business models in scientific research publishing: A report commissioned by the Wellcome Trust. http://www.wellcome.ac.uk/stellent/groups/corporatesite/@policy_communications/documents/web_document/wtd003184.pdf
3. The National Academies (US) Committee on Electronic Scientific, Technical, and Medical Journal Publishing. Electronic Scientific, Technical, and Medical Journal Publishing and Its Implications: Report of a Symposium. http://www.ncbi.nlm.nih.gov/books/NBK215820/
4. Ware M, Mabe M (2015) An overview of scientific and scholarly journal publishing. International Association of Scientific, Technical and Medical Publishers. http://www.stm-assoc.org/2015_02_20_STM_Report_2015.pdf Accessed 20th October 2015.
5. Walker R, Rocha da Silva P. (2015) Emerging trends in peer review—a survey. Frontiers in Neuroscience 9:169
6. Paglione LD, Lawrence RN. (2015) Data exchange standards to support and acknowledge peer-review activity. Learned Publishing, 28 (4):309-316(8)
7. Van Noorden R (2014) Global scientific output doubles every nine years. Nature.com NewsBlog, 7 May 2014. http://blogs.nature.com/news/2014/05/global-scientific-output-doubles-every-nine-years.html
8. The Wellcome Trust (2015) Scholarly Communication and Peer Review: The Current Landscape and Future Trends. http://www.wellcome.ac.uk/stellent/groups/corporatesite/%40policy_communications/documents/web_document/wtp059003.pdf Accessed 12 November 2015.
9. Breuning M, Backstrom J, Brannon J, Gross BI, Widmeier M (2015) Reviewer Fatigue? Why Scholars Decline to Review their Peers' Work. PS: Political Science & Politics 48(4):595-600. http://dx.doi.org/10.1017/S1049096515000827
10. Björk B, Hedlund T (2015) Emerging new methods of peer review in scholarly journals. Learned Publishing 28(2):85-91
11. Thomson Reuters (2010) Increasing the Quality and Timeliness of Scholarly Peer Review: A report for Scholarly Publishers. http://scholarone.com/media/pdf/peerreviewwhitepaper.pdf
12. Taylor & Francis (2015) Peer review in 2015: A global view. http://authorservices.taylorandfrancis.com/peer-review-in-2015/ Accessed 20th October 2015.
13. Meadows A (2015, January 7th) Recognition for peer review and editing in Australia – and beyond? Blog post in Exchanges. http://exchanges.wiley.com/blog/2015/01/07/recognition-for-peer-review-and-editing-in-australia-and-beyond/ Accessed 20th October 2015.
14. Trounson A. Journals should credit editors, says ARC. Post in The Australian. http://www.theaustralian.com.au/higher-education/journals-should-credit-editors-says-arc/story-e6frgcjx-1227201178857 Accessed 20th October 2015.
15. Alberts B, Hanson B, Kelner KL (2008) Reviewing peer review. Science 321(5885):15. http://dx.doi.org/10.1126/science.1162115
16. Review rewards (2014) Nature 514(7522):274. http://dx.doi.org/10.1038/514274a
17. http://www.rubriq.com/
18. http://www.peereviewers.com/
19. http://www.publons.com
20. https://www.peerageofscience.org
21. http://academickarma.org/
22. Gasparyan AY, Akazhanov NA, Voronov AA, Kitas GD (2014) Systematic and open identification of researchers and authors: focus on open researcher and contributor ID. J Korean Med Sci 29(11):1453-6. doi: 10.3346/jkms.2014.29.11.1453