Intellectual Property Donor Sticker (real? art? humor?)

“Why let all of your ideas die with you? Current copyright law prevents anyone from building upon your creativity for 70 years after your death. Live on in collaboration with others. Make an intellectual property donation. By donating your IP into the public domain you will “promote the progress of science and useful arts” (U.S. Constitution). Ensure that your creativity will live on after you are gone and make a donation today.”

A Not-For-Profit Publisher’s Perspective on Open Access – E-LIS repository

Abstract: Recent legislative activity in the US House of Representatives and the UK House of Commons has added fuel to a debate over electronic access to the Scientific, Technical and Medical (STM) literature that was initiated in 1999 with the introduction of E-Biomed. On-going efforts to change the landscape of STM publishing involve moving it away from a subscription basis to an author-pays model. This article chronicles the swift evolution of electronic access to the scientific literature and asks whether the scholarly community will really be better off with government-mandated open access (OA) publishing.

Director of Digital Scholarship Commons

“The University Library at the University of California, Santa Cruz (UCSC) invites applications for the position of Director, Digital Scholarship Commons. The Digital Scholarship Commons at McHenry Library will play a key role in fostering the advancement of technologically driven interdisciplinary scholarship and asserting the University Library’s commitment to supporting new modes of digital research across the campus….”

AstraZeneca and Sanofi exchange over 200,000 chemical compounds

“AstraZeneca and Sanofi today announced a direct exchange of 210,000 compounds from their respective proprietary compound libraries. The swap represents a novel open innovation model between pharmaceutical companies. It enhances the chemical diversity of the compound collections of both companies and allows each to screen a broader, more diverse chemical space as the starting point in the search for new small-molecule medicines.

 

AstraZeneca and Sanofi have each selected the compounds to exchange based on differences from those in their own libraries. Chemical structures and synthetic procedures will be shared to facilitate the use of these compounds. The compounds will be exchanged in sufficient quantity to enable the receiving company to carry out high throughput screening for several years to determine whether they are active against specific biological targets. If a compound matches a target, it will go through several modifications to optimise its structure before being classified as a ‘lead compound’ to be taken forward to development….”

Announcing the Authors Alliance Guide to Understanding Open Access! | Authors Alliance

“[T]his guide is largely geared to the needs of authors working for academic institutions or under funding mandates. However, many chapters are suitable for authors who write in other contexts, and we encourage all authors interested in open access to read those sections relevant to their needs. This guide will help you determine whether open access is right for you and your work and, if so, how to make your work openly accessible. This primer on open access explains what “open access” means, addresses common concerns and misconceptions you may have about open access, and provides you with practical steps to take if you wish to make your work openly accessible….”

ContentMining: My Video to Shuttleworth about our proposed next year

I have had two very generous years of funding from the Shuttleworth Foundation to develop TheContentMine. Funding is in yearly chunks and each Fellow must reapply if s/he wants another year (up to 3). The mission is simple: change the world. As with fresh applicants, we write a 2-page account of where the world is at, and what and how we want to change.

TL;DR I have reapplied and submitted a 7-minute video (https://vimeo.com/146552838).

These two years have been a roller-coaster – they have seriously changed my life. I can honestly say that the Fellowship is one of the most wonderful organizations I know. We meet twice a year with about 20 fellows/alumni/team committed to making sure the world is more just, more harmonious, and that humanity and the planet have a better chance of prospering.

There’s no set domain of interest for applying, but Fellows have a clear sense of something new that could be done or something that badly needs mending. Almost everyone uses technology, but as a means, not as an end. And almost everyone is in some way building or enhancing a community. I can truly say that my fellow Fellows have achieved amazing things. Since we naturally live our lives openly you’ll find our digital footprints all over the Internet.

I’m not going to describe all the projects – you can read the web site and you may know several Fellows anyway.

  • Some are trying to fill a vacuum – do something exciting that is truly visionary – and I’ll highlight Dan Whaley’s https://hypothes.is/. This project (and ContentMine is proud to be an associate) will bring annotation to documents on the Web. That sounds boring – but it’s as exciting as what TimBL brought with HTML and HTTP (which changed the world). Annotation can create a read-write web where the client (that’s YOU!) can alter/enhance our existing knowledge, and it’s so exciting it’s impossible to see where it will go. The web has evolved to a server-centric model where organizations pump information at dumb clients and build walled gardens where you are trapped in their model of the world. Annotation gives you the freedom to escape, either individually or in subcommunities.
  • Others are challenging injustice – I’ll highlight two. Jesse von Doom (https://cashmusic.org/) is changing the way music is distributed – giving artists control over their careers. Johnny West (https://openoil.net/) is bringing transparency to the extractive industries. Did you know that “BP” consists of over 1000 companies? Or where the fracking contracts in the UK are?

So when I launched TheContentMine as a project in 2014 we were in the first category. Few people were really interested in ContentMining and fewer were doing it. We saw our challenge as training people, creating tools, running workshops, and that was the theme of my first application (https://vimeo.com/78353557 ). Our vision was to create a series of workshops which would train trainers and expand the knowledge and practice of mining. And the world would see how wonderful it was and everyone would adopt it.

Naive.

In the first year we searched around for likely early adopters, and found a few. We built a great team – where everyone can develop their own approaches and tools – and where we don’t know precisely what we want for the future. And gradually we got known. So for the second year our application centred on tools and mining the (Open) literature (vimeo.com/110908526). It’s based on the idea that we’d work with Open publishers, show the value, and systematically extend the range of publishers and documents that we can mine. And that’s now also part of our strategy.

But then in 2014 politics…

The UK has already pushed for and won a useful victory for mining. We are allowed to mine any documents we have legal access to for “non-commercial research”. There was a lot of opposition from the “rights-holders” (i.e. mainstream TollAccess publishers to whom authors have transferred the commercial rights of their scientific papers). They had also been fighting in Europe under “Licences for Europe” to stop the Freedom to mine. Indeed I coined the phrase “The Right to Read is the Right to Mine” and the term “Content Mining”. So perhaps, when the UK passed the “Hargreaves” exception for mining, the publishers would agree that it was time to move on.

Sadly no.

2015 has seen the eruption of a full-scale conflict in the EU over the right to mine. In 2014 Julia Reda MEP was asked to create a proposal for reform of copyright in Europe’s Digital Single Market. (The current system is basically unworkable – laws are different in every country and arcanely bizarre [1].) Julia’s proposal was very balanced – it did not ask for copyright to be destroyed – and preserved rights for “rights-holders” as well as for re-users.

ContentMining (aka Text and Data Mining, TDM) has emerged as a totemic issue. There was massive publisher pushback against Julia’s proposal, epitomised in the requirement for licences [2]. There were over 500 amendments, many being simply visceral attacks on any reform. And there has been huge lobbying, with millions of Euros. Julia could get a free dinner several times over every night!

There is no dialogue and no prospect of reconciliation. There is simply a battle. (I am very sad to have to write this sentence)

So ContentMine is now an important resource for Freedom. We are invited to work with reforming groups (such as LIBER who have invited us to be part of FutureTDM, an H2020 project to research the need for mining). And we accept this challenge by:

  • Advocacy. This includes working with politicians, legal experts, reformers, etc.
  • Software. Our software is unique, Open, and designed to help people discover and use ContentMining either with our support or independently.
  • Science. We are tackling real problems such as endangered species, and clinical trials.
  • Hands-on. We’ve developed training modules and also run hands-on workshops to explore scientific and technical challenges.
  • Partners. We’re working with university and national libraries, open publishers, and others.

So I’ve put this and more into the video. [3] This tells you what we are going to do and with whom. And I’ll explain the detail of what we are going to do in a future post.

 

[1] Read https://euobserver.com/justice/126375 and laugh, then weep. You cannot publish photos of the Eiffel Tower taken at night….

[2] Licensing effectively means that the publishers have complete control over who is allowed to mine content, and when, where and how (and we have seen Elsevier forbidding Chris Hartgerink to do research without their permission, see http://blogs.ch.cam.ac.uk/pmr/2015/11/22/content-mining-why-do-universities-agree-to-restrictive-publisher-contracts/ and earlier blog posts).

[3] It’s a non-trivial amount of work. Approximately 1 PMR-day per minute of final video. It took time for the narrative to evolve (thanks to Jenny Molloy and Richard Smith-Unna for the polar bear theme). And it’s CC-BY.

 

Books and articles across borders and languages (1990-2015) [PDF] | Marie Lebert

“After Tim Berners-Lee invented the World Wide Web and gave his invention to the world, books and articles could be accessed more easily. New media, new bookstores and new libraries helped cross national borders. Authors and journalists started working together at a distance. Internet users who didn’t have English as a mother tongue reached 5 percent in summer 1994, 20 percent in summer 1998, 50 percent in summer 2000, and 75 percent in summer 2015. Some of them could read English, others just got the gist of what they read, and a number of them couldn’t read English at all. The web saw the rise of linguistic democracy and the development of “language nations”, both large and small. Many dedicated people helped promoting their own language and culture, or the language and culture of others, while using English as a lingua franca. In a short time, they made the web truly multilingual, with bilingual or trilingual websites, language-related resources, reference dictionaries, multilingual encyclopedias and translation software. These people were linguists, authors, librarians, teachers, professors, researchers, computer programmers, marketing consultants, and so on. This book is based on many interviews conducted for several years in Europe, in the Americas, in Africa and in Asia….”

Content-mining; Why do Universities agree to restrictive publisher contracts?

[I published a general blog post about the impasse between digital scholars and the Toll-Access publishers: http://blogs.ch.cam.ac.uk/pmr/2015/11/22/content-mining-rights-versus-licences/ . This is followed by a series of detailed posts which look at the details and consequences. This is the second.]

If you have read these earlier posts you will know that the issue is whether I and others are allowed to use machines to read publications we have legal access to read with our eyes.

The (simplified) paradigm for Content-mining scholarly articles consists of:

  • finding links to papers (articles) we may be interested in (“crawling”). The papers may be on publishers’ web sites (visible or behind a paywall) or in repositories (visible). Most of this relates to paywalled articles.

  • downloading these papers (“scraping”) from (publisher) servers onto local machines (clients). If paywalled, this requires paid access (a subscription) which is only available to members of the subscribing institution. Thus I can read thousands of articles to which Cambridge University has a subscription.

  • running software to extract useful information from the papers (“mining”). This information can be chunks of the original or reworked material.

  • (for responsible scientists – including me) publishing the results in full.

This is technically possible. Messy, if you start from scratch, but we and others have created Open Source tools and services to help.
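To make those four steps concrete, here is a minimal, purely illustrative sketch in Python. It is not our actual tooling (we use quickscrape and related Open Source tools), and the URL, link pattern and “fact” regex below are invented placeholders; real use would respect each site’s terms, robots.txt and rate limits.

```python
# Illustrative content-mining pipeline: crawl -> scrape -> mine -> publish.
# All URLs, patterns and delays are hypothetical placeholders.
import re
import time
import urllib.request

def crawl(query_url):
    """'Crawling': find links to papers we may be interested in."""
    with urllib.request.urlopen(query_url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    # naive link extraction; a real crawler would parse the HTML properly
    return re.findall(r'href="(https?://[^"]+/article/[^"]+)"', html)

def scrape(article_url):
    """'Scraping': download one paper onto the local machine (politely)."""
    time.sleep(5)  # throttle so we load the server no more than a human reader
    with urllib.request.urlopen(article_url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def mine(fulltext):
    """'Mining': extract useful facts, here (hypothetically) binomial species names."""
    return sorted(set(re.findall(r"\b[A-Z][a-z]+ [a-z]{3,}\b", fulltext)))

def publish(facts):
    """Responsible science: publish the extracted results in full."""
    for fact in facts:
        print(fact)

if __name__ == "__main__":
    for url in crawl("https://example.org/journal/latest-issue"):
        publish(mine(scrape(url)))
```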

The problem is that Toll-Access publishers don’t want us to do it (or only under unworkable restrictions). So what stops us?

THE LAW STOPS US

What follows is simplistic and IANAL (I am not a lawyer) though I talk with people who are. I am happy to be corrected by people more knowledgeable than me.

There are two main types of law relevant here:

  • Copyright law. https://www.copyrightservice.co.uk/copyright/p01_uk_copyright_law . TL;DR any copying may infringe copyright and allow the “rights-holder” to sue. The burden of proof is lower: “However, in a civil case, the plaintiff must simply convince the court or tribunal that their claim is valid, and that on balance of probability it is likely that the defendant is guilty”. Copyright law varies between countries, can be extraordinarily complex, and it is often difficult to get clear answers. The simple, and sad, default assumed by many people and promoted by many vendors is that readers have no rights. (The primary method of removing these restrictions is to add a licence (such as CC-BY) which is compatible with copyright law and explicitly gives rights to the reader/user.)

  • Contract law.
    Here the purchasers of goods and services (e.g. Universities) may agree a contract with the vendors (Publishers) that gives rights and responsibilities to both. In general these contracts are not publicised to users like me and may even be secret. Therefore some of what follows is guesswork. There are also hundreds of vendors and a wide variation in practice. However we believe that the main STMPublishers have roughly similar contracts.

    In general these contracts are heavily weighted in favour of the publisher. They are written by the publisher and offered to the purchaser to sign. If the University doesn’t like the conditions it has to “negotiate” with the publisher. Because there is no substitutability of goods (you can’t swap Nature for J. Amer. Chem. Soc.) the publisher often seems to have the advantage.

    The contracts contain phrases such as “you may not crawl our site, index it, spider it, mine it, etc.” These are introduced by the publisher to stop mining. (There is already copyright law to prevent the republishing of material without permission, so the new clauses are not required.) I queried a number of UK Universities as to what they had signed – some were constructive in their replies but many were – unfortunately – unhelpful.

    However, there is no legal reason why a University has to sign the contract put in front of it. But they do, and they have signed clauses which restrict what I and Chris Hartgerink and other scientists can do. And they do it without apparent internal or external consultation.

    And this was understood by the Hargreaves reform which specifically says that text-miners can ignore any contracts which stop them doing it. Presumably they reasoned that vendors pressure Universities into signing our rights away, and this law protects us. And, indeed it’s critically important for letting us proceed.

But this law doesn’t (yet) apply in the Netherlands (NL) and so can’t help Chris (except when he comes to the UK). We want it changed, and library organizations such as LIBER, RLUK, the BL etc. want it changed.

So this mail is to ask Universities – and I expect their libraries will answer:

PLEASE REFUSE TO SIGN ANY CONTRACTS WHICH CONTAIN CLAUSES FORBIDDING CONTENT-MINING.

OR:

EXPLAIN WHY YOU HAVE TO SIGN OUR RIGHTS AWAY.

And then we’ll work out how to help.

Content-mining; Why do Publishers insist on APIs and forbid screen scraping?

[I published a general blog about the impasse between digital scholars and the Toll-Access publishers http://blogs.ch.cam.ac.uk/pmr/2015/11/22/content-mining-rights-versus-licences/ . This is the first of a number of posts which look at the details and consequences]

Chris Hartgerink described how Elsevier have stopped him doing content-mining: http://onsnetwork.org/chartgerink/2015/11/16/elsevier-stopped-me-doing-my-research/

and

http://onsnetwork.org/chartgerink/2015/11/20/why-elseviers-solution-is-the-problem/

There is a lot of comment on both of these, to which I may refer but will not reproduce in detail. It informs my comments. The key issue is “APIs”, as commented on by Elsevier’s Director of Access & Policy (EDAP):

Dear Chris,

We are happy for you to text mine content that we publish via the ScienceDirect API, but not via screen scraping. You can get access to an API key via our developer’s portal (http://dev.elsevier.com/myapikey.html). If you have any questions or problems, do please let me know. If helpful, I am also happy to engage with the librarian who is helping you.

With kind wishes,
Alicia

Dr Alicia Wise
Director of Access & Policy
Elsevier
a.wise@elsevier.com
@wisealic

The TAPublishers wish contentmining to be done through their APIs and forbid (not merely discourage) screenscraping. On the surface this may look like a reasonable request – and many of us use APIs – but there are critically important and unacceptable aspects.

What is screen scraping and what is an API?

Screen scraping simulates the action of a human reading web pages via a browser. You feed the program (ours is “quickscrape”) a URL and it retrieves the HTML “landing page”. Then it finds links in the landing page which refer to additional documents and downloads them. If this is done responsibly (as quickscrape does) it causes no more load on the server than a human reader. Any publisher who anticipates large numbers of human readers has to implement software which must be robust. (I run a server, and the only time it has had problems is when I have attracted the interest of Slashdot or Reddit, which are multi-human sites.) A well-designed, polite screen scraper like “quickscrape” will not cause problems for modern sites.
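For illustration only, here is a small Python sketch of that pattern: fetch the landing page, find links to the additional documents, and download each with a generous delay. It is not quickscrape itself (which uses per-journal scraper definitions), and the example URL and file-type pattern are invented.

```python
# Illustrative "polite" screen scraper: landing page -> follow links -> download.
# URLs and the link pattern are hypothetical placeholders.
import os
import re
import time
import urllib.parse
import urllib.request

DELAY_SECONDS = 10  # stay well below the request rate of a human reader clicking links

def fetch(url):
    time.sleep(DELAY_SECONDS)
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def scrape_article(landing_page_url, out_dir="scraped"):
    os.makedirs(out_dir, exist_ok=True)
    html = fetch(landing_page_url).decode("utf-8", errors="replace")
    # find links to the additional documents referenced from the landing page
    links = re.findall(r'href="([^"]+\.(?:pdf|xml|csv))"', html)
    for link in links:
        absolute = urllib.parse.urljoin(landing_page_url, link)
        name = os.path.basename(urllib.parse.urlparse(absolute).path)
        with open(os.path.join(out_dir, name), "wb") as fh:
            fh.write(fetch(absolute))

if __name__ == "__main__":
    scrape_article("https://example.org/journal/article/12345")
```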

Screen-scraping can scrape a number of components from the web page. These differ for every publisher or journal, and for science they MAY include:

  • the landing page
  • article metadata (often in the landing page)
  • abstract (often in the landing page)
  • fulltext HTML
  • fulltext PDF
  • fulltext XML (often only on Open Access publishers’ websites, otherwise behind paywall)
  • references (citations),
  • required files (e.g. lists of contributors, protocols)
  • supporting scientific information / data (often very large). A mixture of TXT, PDF, CSV, etc.
  • images
  • interactive data, e.g. 3D molecules

An excellent set of such files is in Acta Crystallographica journals (e.g. http://scripts.iucr.org/cgi-bin/paper?S2056989015020885 ) where the buttons represent such files.
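In practice, quickscrape drives this from declarative, per-journal scraper definitions (JSON files of selectors). The Python dictionary below is only a sketch of the idea; the field names and CSS selectors are invented for illustration and are not the real definition format.

```python
# Sketch of a per-journal "scraper definition": a declarative map from the
# components listed above to (invented) CSS selectors on that journal's pages.
EXAMPLE_JOURNAL_SCRAPER = {
    "url_pattern": r"https?://journals\.example\.org/article/.*",
    "elements": {
        "title":         {"selector": "meta[name='citation_title']", "attribute": "content"},
        "abstract":      {"selector": "div.abstract"},
        "fulltext_html": {"selector": "a.fulltext-html", "attribute": "href", "download": True},
        "fulltext_pdf":  {"selector": "meta[name='citation_pdf_url']", "attribute": "content", "download": True},
        "supplementary": {"selector": "a.supp-material", "attribute": "href", "download": True},
        "figures":       {"selector": "img.figure", "attribute": "src", "download": True},
    },
}
```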

I and colleagues at Cambridge have been screen-scraping many journals in this way for about 10 years to get crystallographic data for research, and we have never been told we have caused a problem. We have contributed our output to the excellent Free/Open Crystallography Open Database (www.crystallography.net).

So I reject the idea that screenscraping is a problem, and regard the EDAP’s argument as FUD. I say that because, despite the EDAP’s assertion that they are trying to help us, the reverse is true. I have spent 5 years of my life batting emails back and forth and got nowhere (https://blogs.ch.cam.ac.uk/pmr/2011/11/27/textmining-my-years-negotiating-with-elsevier/), and you should prepare for the same.

An API (https://en.wikipedia.org/wiki/Application_programming_interface#Web_use_to_share_content) allows a browser or program to request specific information or services from a server. It’s a precise software specification which should be documented precisely, and on which the client can rely for what the server will provide. At EuropePMC there is such an API, and we use it frequently, including in our “getpapers” tool. Richard Smith-Unna in Plant Sciences (Univ Cambridge) and in ContentMine has written a “wrapper” which issues queries to the API and stores the results.
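As a rough illustration of what such a wrapper does, the sketch below queries the Europe PMC REST search endpoint from Python. The endpoint and parameter names follow the public Europe PMC documentation as I understand it, and the example query is invented; check the current documentation before relying on them.

```python
# Minimal sketch of querying the Europe PMC REST API, roughly what the
# "getpapers" wrapper automates. Endpoint/parameters assumed from public docs.
import json
import urllib.parse
import urllib.request

def search_europepmc(query, page_size=25):
    params = urllib.parse.urlencode({
        "query": query,
        "format": "json",
        "pageSize": page_size,
    })
    url = "https://www.ebi.ac.uk/europepmc/webservices/rest/search?" + params
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    results = search_europepmc('"content mining" AND OPEN_ACCESS:y')
    for hit in results.get("resultList", {}).get("result", []):
        print(hit.get("pmcid"), hit.get("title"))
```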

When well written, and where there is common agreement on rights, APIs are often, but not always, a useful way to go. Where there is no common agreement they are unacceptable.

Why Elsevier and other TAPublishers’ APIs are unacceptable.

There are several independent reasons why I, Chris Hartgerink and others will not use TAPublisher APIs. This is unlikely to change unless the publishers change the way they work with researchers and accept that researchers have fundamental rights.

  • An API gives total control to the server (the publisher) and no control to the client (reader/user).

That’s the simple, single feature that ultimately decides whether an API is acceptable. The only condition under which I would use one, and would urge you to consider one, is:

  • is there a mutually acceptable public contract between the publisher and the researcher?

In this case, and the case of all STMPublishers, NO. Elsevier has written its own TandC. It has done this without even the involvement of the purchasing customer. I doubt that any library, any library organization, any university, or any university organization has publicly met with Elsevier or the STMPublishers Association and agreed mutually satisfactory terms.

All the rest is secondary. Very important, but secondary, and I’ll discuss it. None of this can be mended without giving researchers their rights.

Some of the consequences (which have already happened) include:

  • It is very heavily biased towards Elsevier’s interests, with virtually nothing about the user’s interests.
  • The TandC can change at any time (and do so) without negotiation
  • The API can change at any time.
  • There is no guaranteed level of software design or service. When (not if) it breaks we are expected to find and report Elsevier bugs. There is no commitment to mend them.
  • The API is designed and built by the publisher without the involvement of the researcher. Quite apart from the contractual issues, this is a known way of producing bad software.
  • The researcher has no indication of how complete or correct the process is. The server can give whatever view of the data they wish.
  • The researcher has no privacy.
  • (The researcher probably has no legal right to sign the TandC for the API – it is the University that contracts with the publisher.)
  • The researcher contracts only to publish results as CC-NC, which debars them from publishing in Open Access journals.
  • The researcher contracts not to publish anything that will harm Elsevier’s marketplace. This immediately rules me out as publishing chemistry will compete with Elsevier database products.

So the Elsevier API is an instrument for control.

To summarise, an API:

  • Allows the server to control what, when, how, how much, and in what format the user can access the resource. It is almost certain that this will not fit how researchers work. For example, the Elsevier API does not serve images. That already makes it unusable for me. I doubt it serves supplemental data such as CIFs either. If I find problems with the EuropePMC API I discuss them with the European Bioinformatics Institute. If I have problems with the Elsevier API I …
  • Can monitor all the traffic. I trust EuropePMC to behave responsibly as it has a board of governance (one that I have served on). It allows anonymity. With Elsevier I … In general no large corporation can be trusted with my data, which here includes what I did, when, and what I was looking at, and allows a complete history of everything I have done. From that, machines can work out a great deal more, and sell it to people I don’t even know exist.

And…

  • APIs can be well written or badly written. Do you, the user, have an involvement?
  • Their use can be voluntary or mandatory. Is the latter a problem?
  • Is there a guarantee of privacy and non-use of data?
  • Do you know whether the API gives the same information as the screen-scraper? (Almost certainly not, but how would you tell?)
  • What do you have to sign up to? Was it agreed by a body you trust?

So…

APIs are being touted by Elsevier and other STMPublishers as the obvious, friendly answer to Mining. In their present form, and with their present Terms and Conditions, they are completely unacceptable and very dangerous.

They should be absolutely rejected. Ask your library/university to cancel all clauses in contracts which forbid mining by scraping. They have the legal right to do so.

 

Content-mining; Rights versus Licences

[I intend to follow with several more detailed posts.]

Last week was a critical point for those who regard the scholarly literature as a public good, rather than a business. Those who care must now speak out, for if they do not, we shall see a cloud descend over the digital century where we are disenfranchised and living in enclosures and walled gardens run by commercial mega-corporations.

Chris Hartgerink, a statistician at the University of Tilburg NL, was using machines to read scholarly literature to do research (“content-mining”). Elsevier, the mega-publisher, contacted the University and required them to stop Chris. The University complied with the publisher and Chris is now forbidden to do research using mining without Elsevier’s permission.

Some reports include:

The issues are simple:

  • Chris has rightful access to the literature and can read it with his eyes.

  • His research is serious, valuable and competent.

  • Machines can save Chris weeks of time and prevent many types of error.

What Chris has been doing has been massively resisted by mainstream “TAPublishers” [1]. This resistance includes:

  • lobbying to reject proposed legislation (often by making it more restrictive).

  • producing FUD (“Fear Uncertainty and Doubt”) aimed at politicians, libraries and researchers such as Chris. Note that “stealing” is now commonly used in TAPublisher-FUD.

  • physically preventing mining (e.g. through CAPTCHAs).

  • Preventing mining though contractual or legal means (as with Chris).

Many of us met in The Hague last year to promote this type of new and valuable research, and wrote The Hague Declaration. A wide range of organisations and individuals – universities, libraries, and liberal publishers – have signed. This is often represented by my phrase “The Right to Read is the Right to Mine”.

Many reformers, led initially by Neelie Kroes (European Commissioner till 2014) and now by Julia Reda (MEP) have pushed for reforms of copyright to allow and promote mining. The European Parliament and the Commission have produced in-depth proposals for liberalising European law.

The reality is that reformers and the Publishers have little common ground on mining. Reformers are campaigning for their Rights; TAPublishers are trying to prevent this. The conflict is often encapsulated in the additional mining “Licences” proposed by TAPublishers, epitomised by the STMPublisher-lobbied “Licences for Europe” process in the 2013 Commission discussions, which broke down completely because the reformers were not prepared to accept licences in place of rights.

The TAPublishers are trying to coerce the scholarly and wider community into accepting Licences; we are challenging this by asserting our Rights.

Unfettered Access to Knowledge is as important in the Digital Century as food, land, water, and freedom from slavery have been over the millennia.

The issue for Chris and others is:

  • Can I read the literature I have access to

    1. in the way I want,

    2. for any legal purpose

    3. using machines when appropriate

    4. without asking for further permission

    5. or telling corporations who I am and what I am doing

    6. and publishing the results in the open literature without constraints

Chris has the moral right to do 1-6, but not the legal right, because the TAPublishers have added restrictions to the subscription contracts, and his University has signed them. He is therefore (probably) bound by NL contract law.

In the UK the situation is somewhat better. Last year a copyright Exception was enacted which allows me to do much of this. (2) has to be for “non-commercial research” and (6) would only be permissible if I don’t break copyright in publishing the results. So I can do something useful (although not nearly as much as I want to do, and as responsible science requires). I know also that I will have constant opposition from publishers, probably including lobbying of my institution.

European reformers are pushing for a similar legal right in Europe and many propose removing the “non-commercial” clause. There is MASSIVE opposition from publishers, primarily through lobbying, where key politicians and staff are constantly fed the publishers’ story. There is no public forum (such as a UK Select Committee) where we can show the fallaciousness of TAPublisher arguments. (This is a major failing of European democracy – much of it happens in personal encounters with unelected representatives who have no formal responsibility to the people of Europe.) The fight – and it is a fight – is therefore hugely asymmetric. If we want to represent our views we have to travel to Brussels at our own expense – TAPublishers have literally billions.

The issue is RIGHTS (not APIs, not bandwidth, not cost, not FUD about server-load, not convenience)

Just OUR RIGHTS.

I hope you feel that this is the time to take a stand.

What can we do?

Some immediate and cost-free tasks:

  • Sign the Hague Declaration. Very few European universities or libraries have so far done so

  • Write to your MEP. Feel free to take this mail as a basis, but personalise it

  • Write to Commissioner Oettinger (“Digital Single Market”)

  • Write to your University and your University Library. Use Freedom of Information to require that they reply. Challenge the current practice

  • Alert your learned society to the muzzling of science and scholarship.

  • Alert organizations who are campaigning for Rights in the Digital age.

  • Tweet this post, and push for retweets

And think about what ContentMining could do for you. And explore with us.

And what are PMR and colleagues going to do?

Because I have the legal right to mine the Cambridge subscription literature for non-commercial purposes, I and colleagues are going to do that. Ross Mounce and I have already shown that totally new insights are possible (see http://contentmine.org/2015/09/contentmine-tools-produce-a-supertree/). We’ve developed a wide range of tools and we’ll be working on our own research and also with the wider research community in areas that we can contribute to.

[1]. There is a spectrum of publishing, ranging from large conventional, often highly profitable, publishers through learned societies to new liberal startups of the last 10 years. I shall use “TAPublisher” (TollAccess publisher) to refer to publishers such as (but not limited to) Elsevier, Wiley, Springer, Macmillan, and Nature. They are represented by an association (the STMPublishers Association) which effectively represents their interests and has been active in developing and promoting licences.

SEANOE

“Seanoe (SEA scieNtific Open data Edition) is a publisher of scientific data in the field of marine sciences. It is operated by Sismer within the framework of the Pôle Océan. 

Data published by SEANOE are available free. They can be used in accordance with the terms of the Creative Commons license selected by the author of the data. Seanoe contributes to the Open Access / Open Science movement for free access for everyone to all scientific data financed by public funds, for the benefit of research. 

An embargo limited to 2 years on a set of data is possible; for example to restrict access to data of a publication under scientific review. 

Each data set published by SEANOE has a DOI which enables it to be cited in a publication in a reliable and sustainable way. 

The long-term preservation of data filed in SEANOE is ensured by Ifremer infrastructure.”

Research & Development Co-Ordinator job with SPRINGER NATURE | Guardian Jobs

“We have an exciting opportunity to join the research and development team in the Open Research Group at Nature Publishing Group and Palgrave Macmillan. This role would suit a graduate with an interest in open access publishing who is looking to gain experience in this rapidly developing area of scholarly communications….”
