Sharing Science

Open access can take many forms and make use of a variety of new social media tools to share research across disciplines. WeShareScience was developed as a new concept for sharing research in new ways. By mashing together Pinterest, TED Talks, and YouTube (with a focus on scientific research), we created a unique place for researchers to share what they are doing and learn from others in many other disciplines, offering an exciting new perspective on research.

WeShareScience has also developed the first online global science fair, with more than $11,000 in prizes that will go directly to researchers. Entries (short five-minute videos about research) are due by June 1, 2014 to qualify. The science fair is not intended to “dumb down” research but rather to provide another entry point for those interested in research to learn what others are doing (without having to subscribe to high-priced journals).

You can help us spread the word about the science fair by asking your colleagues (faculty, students, professional researchers, and others) to post a video about their work.  And on April 8th we are asking everyone to post about the science fair on their Twitter or Facebook page.  You can learn more about this effort to create “buzz” about the science fair at: http://thndr.it/1jQk9AM

HEFCE/REF Adopts Optimal Complement to RCUK OA Mandate

There are two essential components to an effective “Green” OA mandate (i.e., one that generates as close to 100% compliance as possible, as soon as possible):

(1) The mandate must uncouple the date of deposit from the date the deposit is made OA, requiring immediate deposit, with no exemptions or exceptions. How long an OA embargo it allows is a separate matter, but on no account must the date of deposit be allowed to be contingent on publisher OA embargoes.

This is exactly what the New HEFCE policy for open access in the post-2014 Research Excellence Framework has done.

(2) Eligibility for research assessment (and funding) must be made conditional on immediate deposit (date-stamped by the journal acceptance letter). Again, this is in order to ensure that deposits are not made months or years after publication: no retrospective deposit.

The deposit requirement for eligibility for research assessment and funding is not itself an OA requirement; it is merely a procedural requirement: for eligibility, papers must be deposited in the institutional repository immediately upon acceptance for publication. Late deposits are not eligible for consideration.

This engages each university (always extremely anxious to comply fully with REF, HEFCE and RCUK eligibility rules) in ensuring that deposit is timely, with the help of the date-stamped acceptance letter throughout the entire 6-year REF cycle, 2014-2020.

These two conditions are what have yielded the most effective of all the Green OA mandates to date (well over 80% compliance and growing) at the University of Liege and FRS-FNRS (the Belgian Francophone research funding council); other mandates are upgrading to this mandate model, and Harvard FAS has already adopted immediate deposit as one of its conditions. And now RCUK, thanks to HEFCE/REF, will reap the benefits of the immediate-deposit condition as well (see ROARMAP).

OA embargoes are another matter, and HEFCE/REF is wisely leaving that to others (RCUK, EU Horizon2020, and university mandates) to stipulate maximal allowable embargo length and any allowable exceptions. What HEFCE/REF is providing is the crucial two components for ensuring that the mandate will succeed: (1) immediate deposit as a (2) condition for REF-eligibility.

But let me add something else that will become increasingly important once the HEFCE/REF immediate-deposit requirement begins to propagate worldwide (as I am now confident it will: the UK is at last back in the lead on OA, instead of being the odd man out, as it has been since Finch):

The immediate-deposit clause and the contingency on eligibility for research assessment and funding also ensure that the primary locus of deposit will be the institutional repository rather than institution-external repositories. (Deposits can be exported automatically to external repositories once deposited and once the embargo has elapsed; they can also be imported from external repositories, as in the case of the physicists and mathematicians who have already been faithfully depositing in arXiv for two decades.)

But besides all that, many of the EPrints and DSpace institutional repositories already have (and, with the HEFCE mandate model propagating, almost all of them will soon have) the email-eprint-request Button:

This Button makes it possible for users who reach a closed-access deposit to click once to request a copy for research purposes; the repository software emails an automatic eprint request to the author, who can click once to comply with the request; the repository software then emails the requester the eprint. (Researchers have been requesting and sending reprints by mail, and lately by email, for decades, but with immediate deposit and the Button this is greatly accelerated and facilitated. So even during any allowable embargo period, the Button will enhance access and usage dramatically. I also predict that immediate deposit and the Button will greatly hasten the inevitable and well-deserved demise of publisher OA embargoes.)
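The two-click flow just described can be sketched in a few lines. This is a sketch only: the function and field names are invented, and real repository software (EPrints, DSpace) handles request tracking and email templating itself.

```python
# A minimal sketch of the Button's two-click flow. All names are
# invented for illustration; "outbox" stands in for an SMTP connection.

def request_eprint(deposit, requester_email, outbox):
    """Click 1: a reader hits the Button on a closed-access deposit;
    the repository emails the author an automatic eprint request."""
    outbox.append({
        "to": deposit["author_email"],
        "subject": "Eprint request: " + deposit["title"],
        "requester": requester_email,  # lets the author comply with one click
    })

def approve_request(deposit, request, outbox):
    """Click 2: the author complies; the repository emails the
    requester the eprint."""
    outbox.append({
        "to": request["requester"],
        "subject": "Requested eprint: " + deposit["title"],
        "attachment": deposit["file"],
    })
```

The point of the design is that the author, not the publisher, remains the bottleneck, and the bottleneck is one click wide.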

Sale, A., Couture, M., Rodrigues, E., Carr, L. and Harnad, S. (2014) Open Access Mandates and the “Fair Dealing” Button. In: Dynamic Fair Dealing: Creating Canadian Culture Online (Rosemary J. Coombe & Darren Wershler, Eds.)

Let me close by noting another important feature of the new HEFCE/REF policy: The allowable exceptions do not apply to the immediate-deposit requirement! They only apply to the allowable open-access embargo. To be eligible for REF2020, a paper must have been deposited immediately upon acceptance for publication (with a 3-month grace period).
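The eligibility condition reduces to a simple date comparison. A minimal sketch, approximating the 3-month grace period as 90 days (the function and constant names are mine, not HEFCE's):

```python
from datetime import date, timedelta

# The 3-month grace period, approximated here as 90 days.
GRACE = timedelta(days=90)

def ref_eligible(accepted: date, deposited: date) -> bool:
    """True only if the deposit was made within the grace window
    following the date-stamped acceptance letter."""
    return accepted <= deposited <= accepted + GRACE
```

A deposit two months after acceptance qualifies; one made five months after acceptance does not, regardless of any OA embargo the repository then applies.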

(No worries about HEFCE’s optional 2 year start-up grace period either: Institutions will almost certainly want their REF procedures safely and systematically in place as early as possible, so everything can go simply and smoothly and there is no risk of papers being ineligible.)


Postscript. Expect the usual complaints from the usual suspects:

(i) “This is a sell-out of OA! It’s just Green Gratis OA, not Libre OA: what about the re-use rights? And if it’s embargoed, it isn’t even Green OA!”

Reply: Relax. Patience. A compromise was needed, to break the log-jam between the Finch/Wellcome Fool’s-Gold profligacy and publisher embargoes on Green OA. The HEFCE immediate-deposit compromise is what will break up that log-jam, and it is not only the fastest and surest (and cheapest) way to get to 100% Green Gratis OA, but also the fastest, surest and cheapest way to get from Green Gratis OA to Libre Fair-Gold OA.

(ii) “This is a sell-out to publishers and their embargoes.”

Reply: Quite the opposite. It will immediately detoxify embargoes (thanks to the Button) and at the same time plant the seeds for their speedy extinction, by depriving publishers of the power to delay access-provision with their embargoes. It also moots the worries of the most timorous or pedantic IP lawyer.

It thereby provides a mandate model that any funder or institution can adopt, irrespective of how it elects to deal with publisher OA embargoes.

And a mandate that can be simply and effectively implemented and monitored by institutions to ensure compliance.

Write to your MEPs to vote to safeguard Open Internet in Europe

I am proud to be a Fellow of the OpenForumAcademy, which promotes openness in IT standards and procurement. We are very concerned about the pressures leading towards two-tier (or many-tier) Internet access, and we urge “Net Neutrality”. Read this and then write to your MEP.

Don’t know who s/he is? Or how to write?

Simple, in the UK: go to writetothem.org – it will tell you everything. Don’t just copy the letter below – make it a bit personal.

  • About how a free Internet generates wealth for your region.
  • About how it encourages your constituents to keep in touch with MEPs.
  • About the ability to share culture across Europe.

You get the idea? Now tell them how to vote.

From: Maël Brunet <mael@openforumeurope.org>
Date: 31 March 2014 12:33
Subject: A chance to safeguard the Open Internet in Europe


Dear Member of the European Parliament,

On April 3rd, you will have the opportunity to vote on the Commission’s Telecoms Package proposal. As you are surely aware, the ITRE committee adopted its report, with proposed amendments for the EP, on March 18th. We are disappointed with the final outcome of this vote, which we believe is detrimental to an open Internet, and would like to take this opportunity to address the issue with you.

We are an independent, not-for-profit industry organisation that aims to promote an open and competitive ICT market. As such, we would like to draw your attention to the vague definition of ‘specialised services’ adopted by the ITRE members in the aforementioned report. We believe that this is a dangerous loophole. In fact, this provision opens a space for these services to be used to exploit the Internet in a way that is deeply detrimental to innovation and to EU citizens as end users.

We fear that the wording as it stands would allow Internet Service Providers (ISPs) to prioritise content/application providers that can comply with the ISPs’ financial conditions. This would undoubtedly lead to service monopolies, hindering competition as a direct consequence. In addition, ISPs would lose any incentive to invest in the open Internet, and its services would slowly deteriorate. Moreover, end users would be confined to the services, content and/or applications of providers that can pay for prioritised accessibility under this ‘specialised services’ loophole. As research indicates, we need to guarantee that investment continues to be made in the ‘open’ part of the network in order to avoid a ‘dirt road’ effect whereby ‘specialised services’ become the norm rather than the exception.

The success of the global Internet and the World Wide Web has been built on the concept of openness, with access guaranteed to all without favour to any individual, organisation or commercial company. This would no longer be the case should the definition of ‘specialised services’ be maintained in the text as recommended in the report. We urge you not to miss this opportunity, and to use your mandate to secure the full impact of the advances to innovation introduced by the package. In this regard, we strongly welcome and support the alternative amendments to the regulation proposed by the ALDE, S&D, Greens/ALE and GUE/NGL groups. Europe is at a crossroads and needs to decide whether it will maintain a leadership position in the digital age. At this very moment, Brazil is successfully pushing its own ‘net neutrality’ law through the legislative process, and it is only a question of time before other countries follow.

“The moment you let neutrality go, you lose the web as it is. You lose something essential – the fact that any innovator can dream up an idea and set up a website at some place and let it just take off from word of mouth,” said Tim Berners-Lee, the inventor of the World Wide Web.

Please take the time and interest to consider what is at stake. There is still a possibility to correct this shortcoming and introduce a text that truly safeguards net neutrality in the EU.

Yours sincerely, 

Maël Brunet (Mr)

Director, European Policy & Government Relations
OpenForum Europe

Journal of Emerging Investigators

“The Journal of Emerging Investigators is an open-access journal that publishes original research in the biological and physical sciences that is written by middle and high school students.  JEI provides students, under the guidance of a teacher or advisor, the opportunity to submit and gain feedback on original research and to publish their findings in a peer-reviewed scientific journal. Because grade-school students often lack access to formal research institutions, we expect that the work submitted by students may come from classroom-based projects, science fair projects, or other forms of mentor-supervised research. JEI is a non-profit group run and operated by graduate students at Harvard University. JEI also provides the opportunity for graduate students to participate in the editorial, review, and publication process. Our hope is that JEI will serve as an exciting new forum to engage young students in a novel kind of science education that nurtures the development and achievements of young scientists throughout the country….”

UK Copyright reforms set to become Law: Content-mining, parody and much more

I have been so busy over the last few days, and the world has changed so much, that I haven’t managed to blog one of the most significant pieces of news – the UK government has tabled its final draft of the review of copyright. See http://www.ipo.gov.uk/copyright-exceptions.htm .

This is fantastic. It is set to reform scientific knowledge. It means that scientific Facts can be extracted and published without explicit permission. The new law will give us that. I’m going to comment in detail on the content-mining legislation, but first a few important general comments:

  • The UK is among the world leaders here. I understand Ireland is following, and the EU process will certainly be informed by the UK. Let’s make it work so well and so valuably that it will transform the whole world.
  • This draft still has to be ratified before it becomes law on June 1st. It’s very likely to happen, but could be derailed by (a) Cameron deciding to go to war, (b) the LibDems splitting from the government, (c) freak storms destroying Parliament, or (d) content-holder lobbyists killing the bill in underhand ways.
  • It’s not just about content-mining. It’s about copying for private re-use (e.g. CD to memory stick), and parody. Reading the list of new exceptions makes you realise how restrictive the law has become. Queen Anne in 1710 (http://en.wikipedia.org/wiki/Statute_of_Anne) didn’t even consider format-shifting between technologies. And e-books for disabled people?

So here’s guidance for the main issues in simple language:

and here are the details (I’ll be analysing the “data analytics” in detail in a later post):

And here’s the initial announcement – includes URLs to the IPO and government pages.

From: CopyrightConsultation
Sent: 27 March 2014 15:06
To: CopyrightConsultation
Subject: Exceptions to copyright law – Update following Technical Review

The Government has today laid before Parliament the final draft of the Exceptions to Copyright regulations. This is an important step forward in the Government’s plan to modernise copyright for the digital age. I wanted to take this opportunity to thank you for your response to the technical review and to tell you about the outcome of this process and documents that have been published.

As you will recall, the technical review ran from June to September 2013 and you were invited to review the draft legislation at an early stage and to provide comments on whether it achieved the policy objectives, as set out in Modernising Copyright in December 2012.

We found the technical review to be a particularly valuable process. Over 140 organisations and individuals made submissions and we engaged with a wide range of stakeholders before and after the formal consultation period. The team at the IPO have also worked closely with Government and Parliamentary lawyers to finalise the regulations.

No policy changes have been made, but as a result of this process we have made several alterations to the format and drafting of the legislation. To explain these changes, and the thinking behind them, the Government has published its response to the technical review alongside the regulations. This document sets out the issues that were raised by you and others, gives the Government’s response, and highlights where amendments have been made.

It is common practice for related regulations such as these to be brought forward as a single statutory instrument. However, the Government is committed to enabling the greatest possible scrutiny of these changes and the nine regulations have been laid before parliament in five groups.  In deciding how to group the regulations, we have taken account of several factors, including any relevant legal interconnections and common themes. The rationale behind these groupings is set out in the Explanatory Memorandum.

The Government has also produced a set of eight ‘plain English’ guides that explain what the changes mean for different sectors. The guides explain the nature of these changes to copyright law and answer key questions, including many raised during the Government’s consultation process.  The guides cover areas including disability groups, teachers, researchers, librarians, creators, rights-holders and consumers. They also explain what users can and cannot do with copyright material.

The response to the Technical Review and the guidance can be accessed through the IPO’s website: www.ipo.gov.uk/copyright-exceptions.htm. This also provides links to the final draft regulations, explanatory memorandum and associated documents that appear on www.legislation.gov.uk.

It is now for Parliament to consider the regulations, which will be subject to affirmative resolution in both Houses. If Parliament approves the regulations they will come into force on 1 June 2014.

Thank you again for your contribution.

Yours sincerely,

John Alty

Elseviergate today: LIBER says to Libraries: DON’T sign Elsevier’s click-through licence for Content Mining (TDM)

A month or so ago Elsevier published a “click-through” licence “allowing” researchers to use Elsevier content for Text and Data Mining (TDM) – more widely, content mining. Nature News rejoiced and suggested everyone could start mining. I read the licence carefully and wrote several blog posts showing the great danger of anyone signing it. Effectively: DON’T.

LIBER, the European association of research libraries, flagged these and said it would do a thorough analysis, which has now been published: http://www.libereurope.eu/news/liber-response-to-elsevier’s-text-and-data-mining-policy . I’ll show most of it below with my comments. It’s necessarily long, so, to summarise:

  • DON’T SIGN
  • TELL EVERYONE ELSE NOT TO SIGN
  • CROSS OUT ANY CLAUSES RESTRICTING MINING

Other publishers and publisher syndication services – e.g. DOI resolvers – may develop their own TDM “licences”.

  • DON’T SIGN THEM EITHER

So here’s why (summarised):

  • The licences add additional restrictions and no freedoms
  • Researchers could find themselves in legal trouble
  • Libraries could find themselves in trouble
  • Legislation is coming in the UK and elsewhere which renders these licences unnecessary. You will simply be signing away your rights
  • Publishers’ APIs are worse than using the standard access to research papers
  • You do NOT need publishers’ software. There is better Open Access software that is free.

So, if an Elsevier rep approaches you with a shiny new contract with a TDM clause, strike it out. YOU have the power. Tell the world.

Now the TL;DR bit. I reproduce much of LIBER and comment.

LIBER believes that the right to read is the right to mine, and that licensing will never bridge the gap in the current copyright framework, as it is unscalable and resource-intensive. Furthermore, as this discussion paper highlights, licensing has the potential to limit the innovative potential of digital research methods by:

  1. restricting the tools that researchers can use
  2. limiting the way in which research results can be made available
  3. impacting on the transparency and reproducibility of research results.

The full text of the discussion paper is included below or can be downloaded here.

PMR: Yes. LIBER and many others (JISC, the BL, etc.) walked out of the attempt to force licences on us. My highlighting.

Over the last twelve months LIBER has devoted a considerable amount of effort to making the case for changes to copyright legislation to allow researchers to employ digital research methods to extract facts and data from content. We believe that this will exponentially speed up scientific progress and innovation in Europe. Having explored the issue of TDM with our members and other stakeholders in the research community, we have come to the conclusion that licensing will never bridge the gap in the current copyright framework, as it is unscalable and resource-intensive.

In the current vacuum left by a legal framework that is unfit for the digital age, and with the ensuing lack of legal clarity, it is unavoidable that libraries or researchers will have to agree to further licences for the mining of content to which they already have access. The terms of such licences, however, should be such that they reinforce the position that the right to read is the right to mine, and not impose restrictions on how researchers apply research methods or disseminate their research.

UK members should exercise particular caution when considering TDM licence terms, since an exception in UK law for text and data mining is imminent and, dependent on the wording in this new exception, TDM licence terms may undermine what researchers will be permitted to do under this update to UK copyright law. Ireland is also considering such an exception.

PMR: This has now been tabled (I shall blog it) and is substantially what has been drafted for the last year. It gives all the rights we felt we could ask for. Signing Elsevier’s contract, or any other contract, will simply restrict your rights.

This paper has been released in response to the recent launch of the new Elsevier text and data mining policy and API. It is understood that Science Direct licences will be amended to include language around access for TDM. Many libraries may be considering signing, or have even already signed up to the terms and conditions laid out under this new licence.

PMR: DON’T sign. Much of what libraries have signed has restricted scholarship for no gain. STOP HERE.

Other publishers may also be considering following in the footsteps of Elsevier by introducing similar terms for the licensing of text and data mining activities into their licence agreements. LIBER is concerned that some of the licence’s terms and conditions relating to content mining may be unnecessarily restrictive and that systematic and widespread adoption of such terms and conditions will severely hamper the progress and dissemination of data-driven research.

PMR: DON’T EVEN LET THEM TRY.

The institutional licence agreement for text and data mining

In order for a researcher within a subscribing institution to gain access to Elsevier content for the purpose of mining, it is necessary for the institution to update its licence agreement to allow text-mining access. Note that within this agreement “text mining access” does not mean access to the content on the Elsevier website that universities subscribe to. Access to content for the purpose of mining is limited to access via an API. The licence explicitly prohibits the use of robots, spiders, crawlers or other automated programs or algorithms to download content from the website itself, which are the most common ways of performing content mining. Although the new Elsevier policy claims that it “enshrines text- and data-mining rights” in subscription agreements, in reality, under these terms, it compels institutions to agree to very restrictive conditions in order to gain very narrowly defined “access” to content for the purpose of mining.

PMR: Elsevier’s API is constructed solely to reduce the view of the content, control the way it is accessed, and monitor what is done. It is not necessary and serves no beneficial purpose. (PLoS and BMC provide all that is necessary without APIs.)

Access via an API

An application program interface (API) is a set of programming instructions and standards for accessing a web-based software application. In the case of the API offered by Elsevier, the API provides full-text content in XML and plain-text formats. The use of APIs for the mining of metadata is not uncommon. However, article content is much richer, potentially containing images, figures, interactive content, and videos. For researchers in many different disciplines there is as much value in the images and figures contained in an article as there is in the text. In fact, for researchers in disciplines such as the humanities, genetics, and chemistry, these may be the most valuable content elements. The Elsevier API allows access to the text only. And the access limit is an arbitrary and proportionally tiny 10,000 articles per week.
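To make that cap concrete, here is a deliberately hypothetical sketch; the class, method, and stand-in response are invented for illustration and do not reproduce Elsevier's actual API.

```python
# Hypothetical sketch only: shows what a hard cap of 10,000 full-text
# fetches per week means in practice for a mining run. All names and
# the stand-in response are invented.
class QuotaLimitedClient:
    WEEKLY_CAP = 10_000

    def __init__(self):
        self.fetched_this_week = 0

    def fetch_fulltext(self, doi):
        # The quota is enforced server-side in reality; modelled here
        # as a simple counter.
        if self.fetched_this_week >= self.WEEKLY_CAP:
            raise RuntimeError("weekly quota exhausted; wait for next week")
        self.fetched_this_week += 1
        return "<xml>stand-in full text for %s</xml>" % doi
```

At this rate, mining a modest 100,000-article corpus takes ten weeks before counting retries or re-runs, which is why the cap is "proportionally tiny" for any serious corpus-scale research.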

PMR: In the ContentMine we are already extracting data from images and expect to handle millions of figures a year.

Crucially, researchers develop their own tools for handling and exploiting this rich and diverse variety of content and formats. In order for students and academics to be able to perform research freely, in the way that makes sense for their own studies, they must have the freedom to interrogate, query and structure content in ways that fit with their own needs, technologies and requirements. The requirement to use pre-defined publisher technologies hampers academic freedom, learning, and data driven innovation.

PMR: Innovation is critical. Publishers have failed to innovate and held back innovation. We are innovating.

Even for those researchers for whom the API is sufficient, the licence does not guarantee sustained access to the API, as the following clause indicates:

3.4 Elsevier reserves the right to block, change, suspend, remove or disable access to the APIs and any of its services at any time.

PMR: Were you pleased when Elsevier or Nature tightened their policies on Green OA recently? They can do that on TDM.

Use of robots

The Elsevier policy expressly forbids the use of robots for content mining on the grounds that it would place too much strain on their infrastructure. Open-access publishers, whose infrastructure is exposed to all web users on the open web, have reported that the demand placed on their infrastructure by robots for content mining is negligible and that any increase in demand will be easy to manage. For subscription services such as those provided by Elsevier, the demand placed on their infrastructure should be even less, as only users registered at subscribing institutions will have access.

PMR: I can mine the whole literature on my laptop. That’s probably 0.00001% of daily usage. If that crashes Elsevier they shouldn’t be in the business. This argument is FUD.

Control of outputs

Under the terms and conditions of the updated licence agreement the outputs are controlled in the following ways:

1.    Outputs can contain “snippets” of up to 200 characters of the original text

This is an arbitrary limit. Because this is essentially a limit on the amount of text that can be quoted from the original source, it could potentially result in misquotation or, at the very least, an inaccurate representation of the original research.

PMR: Some chemical names are more than 200 characters long. Truncating these could KILL PEOPLE.
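A toy demonstration of the danger: the name below is synthetic, built by repetition for illustration, but systematic IUPAC names of this length do occur, and a hard 200-character cut slices through the middle of the name, leaving a string that no longer denotes the same compound.

```python
# Synthetic example: a systematic-style chemical name longer than 200
# characters. A hard 200-character "snippet" cap cuts it mid-token,
# silently corrupting the name.
name = "2-[[(2S)-" * 30 + "aminopropanoyl]amino]propanoic acid"
snippet = name[:200]

assert len(name) > 200   # the full name exceeds the cap
assert snippet != name   # ...so the snippet is not the name at all
```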

2.    Licensed as CC-BY-NC

In signing up to the Elsevier licence agreement, researchers are asked to agree to make their output available under a CC-BY-NC licence. The outputs of TDM are very often facts and data, which are not subject to copyright; however, the Elsevier licence agreement stipulates that this non-copyright information should be put under a licence for copyright works.

In addition, the definition of “non-commercial” is highly ambiguous and open to interpretation. In effect, a CC-BY-NC licence prevents downstream use of the results, and may also put researchers working under a grant agreement that mandates openly available data in a difficult position. Universities are also increasingly engaging in business partnerships with private business, and are being encouraged by governments to do so. This is known as the “knowledge transfer agenda”. We recommend that universities and researchers decide, before signing the Elsevier licence, whether there is a possibility that the outputs of the research they wish to undertake are commercial. As facts and data are not copyrightable, LIBER’s position is that they should be made available under a CC0 licence.[1]

PMR: The only reasonable way to publish scientific Facts is CC0. We enshrined this in the Panton Principles, which are endorsed by BMC, for example, and Cameron Neylon of PLoS is a co-author.

Registration and click-through licences

In order for an individual researcher to gain access to the Elsevier content that their institution subscribes to, he/she must register directly with the Elsevier developers portal, provide details about the research they wish to undertake, and agree to the terms of a click-through licence. LIBER is particularly concerned about making such demands of researchers for the following reasons:

1.    We want to protect the privacy of our users.

Libraries have a strong track record of putting measures in place to protect the personal details and reading habits of our patrons. By requiring researchers to register individually and to provide details of their research project, Elsevier is circumventing the protections that libraries have put in place. The reason given by Elsevier for this requirement is that the publisher needs to check the credentials of the individual accessing the content. However, in authenticating individual user accounts the institution has already established the bona fide nature of the researcher. Further verification should not be necessary. We object to data about the research being performed by our users in our institutions being collected by an external third party. It is not the job of a publisher to control, monitor and vet what research takes place at a university.

2.    We want to protect our researchers from undue liability.

Many institutions employ full-time experts to negotiate the terms and conditions of licence agreements on their behalf. This process can take months, and yet a researcher is expected to agree to the Elsevier click-through licence in a matter of seconds. The terms of this click-through licence are extremely complex, in many places unclear[2], and could have serious down-stream implications for the outputs of the research. We also note that there is no cap on liabilities for the researcher:

2.3 The User will be solely responsible for all costs, expenses, losses and liabilities incurred, and activities undertaken by the User in connection with TDM Service. [BOLD here is from LIBER]

What is more, Elsevier retains the right to amend the terms without notice, and the changes will be deemed accepted by the researcher immediately. This is unacceptable.

Many of the responsibilities that are placed on the researcher by the click-through licence will be difficult to implement in practice, e.g. the licence states that copyright notices may not be changed from how they appear in the dataset. This means that in a dataset derived from 10,000 articles there may be at least 10,000 appearances of the word “copyright”. A normal way of dealing with this “noise” would be to remove these irrelevant data from the dataset, but this would contravene the terms of the licence.
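The "normal way" of dealing with this noise is a one-line clean-up step. A sketch with invented sample lines shows how routine the forbidden operation is:

```python
# The routine clean-up step the licence would forbid: stripping
# boilerplate copyright lines out of mined text before analysis.
# Sample lines are invented for illustration.
mined_lines = [
    "Aspirin inhibits cyclooxygenase.",
    "Copyright (c) 2014 Example Publisher B.V. All rights reserved.",
    "The melting point was 135 degrees C.",
]

# Keep only lines that are not copyright boilerplate.
cleaned = [ln for ln in mined_lines if "copyright" not in ln.lower()]
```

Under the licence terms as described, even this trivial filter alters the copyright notices in the dataset and so would breach the agreement.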

The click-through licence also makes it impossible to ensure the transparency and reproducibility of research results as the researcher may not share the dataset used for the research project and must delete it after use. The researcher is also expressly prohibited from depositing this dataset in their institutional repository.

Lastly, the licence is silent on post-termination use of the results of content mining. The licence will be terminated if the subscribing university “does not maintain a subscription to the book and journal content in the ScienceDirect® database”. If a researcher has mined thousands of articles, how do they check that each and every one is still being subscribed to? If one or many are cancelled, what does this mean for the results, categorisations and hypotheses contained in data they have invested time and effort to produce?

PMR: Can anyone suggest that these terms are good for science?

Outlook

We estimate that European universities spend in the region of €2 billion a year on Scientific, Technical and Medical (STM) published content, the vast majority of which goes on e-journal subscriptions. The new Elsevier licence terms, and the added requirement of an additional licence for each and every researcher who wishes to mine the content, raise questions about what institutions are actually purchasing when subscribing to digital information. The implication of the Elsevier TDM policy is that institutions only purchase the right to cache, look at, print out, and do a word-search on a PDF. We believe that universities should be able to employ computers to read and analyse content they have purchased and to which they have legal access. An e-subscription fee is paid so that universities can appropriately and proportionately use the content they subscribe to. For what other purpose is a university buying access to information?

Research and innovation is best encouraged in a free-thinking and enabling environment where researchers can fully exploit the content they have access to through their library. Going forward, it is important that libraries can ensure that the scientific freedom of their researchers is not eroded, and the impact of their scientific outputs undermined, by limits imposed through licences.


[1] This licence is recommended so that reuse is not prevented under the sui generis Database Directive.

[2] Terms used in the licence such as “recognition” and “classification” (2.1.1) are unclear. Another crucial term, “integration” (3.3), has been left undefined.

PMR: In summary, the ONLY reason for Elsevier’s licence is to give them a stranglehold over this new technology. Libraries gave away authors’ rights (they should have flagged this and communally refused to let it happen).

Any library that signs a publisher’s TDM clause will destroy the new information-led science.

Even if you aren’t in the UK, it is very probable that extracting facts is legally allowed. The only thing stopping you doing it is the additional clause you have agreed to with the publisher.

Kill the restrictive clauses you sign with the publisher. You don’t have to.


Policy Analyst and Intern, OpenForum Europe

OFE is a not-for-profit industry organization which was originally launched in 2002 to accelerate and broaden the use of Open Source Software (OSS) among businesses, consumers and governments. OFE’s role has since evolved and its primary role now is to promote the use of open standards in ICT as a means of achieving full openness and interoperability of computer systems throughout Europe. It continues to promote open source software, as well as openness more generally, as part of a vision to facilitate open, competitive choice for IT users….”

An Open Future for Higher Education (EDUCAUSE Quarterly) | EDUCAUSE.edu

Key Takeaways:

  • As the world becomes more open, universities have the opportunity to embrace openness in how they carry out their operations, teaching, and research.
  • Open educational resources can provide the catalyst for different forms of learning, linking formal and informal aspects and splitting up the functions of content, support, assessment, and accreditation.
  • Models from research suggest that an open approach is likely to encourage the crossing of boundaries between inside and outside the classroom, games and tools for learning, and the amateur and the expert.
  • A new attitude toward research and scholarship is needed to work with the data of openness and to use it as an approach to gather evidence, share thoughts, and disseminate results….”

Squirrels – Nut Sleuths or Just Nuts?


Crazed squirrels: we’ve all seen them. Some dashing toward you only to stop short long enough to glare with beady eyes before fleeing, others dive-bombing the dirt, coming up with their heads waving back and forth. They’re the butt of many a joke on college campuses, providing endless amusement with their antics. Some UC Berkeley students even think that the resident campus squirrels may have gobbled up substances left over from the wilder moments of Berkeley’s past, leaving them permanently crazed. However, according to a recently published PLOS ONE article from UC Berkeley, these squirrels’ seemingly odd behavior may actually have a purpose. We’ve long known that scatter-hoarders will store food they find to prepare for periods when it’s less abundant, but there is little information on the hoarding process. Turns out these squirrels might actually have a refined evaluation method based on economic variables like food availability and season. To eat now, or cache for later?

Researchers interacted with 23 fox squirrels, a species well-habituated to humans, in two sessions during the summer and fall of 2010 on the Berkeley campus, evaluating food collection behavior during both lean (summer) and bountiful (fall) seasons. The authors engaged the squirrels with calls and gestures to attract their attention, and the first squirrel to approach was the focus of that round of testing.

Each squirrel was given a series of 15 nuts, either peanuts or hazelnuts, in one of two sequences. Some were offered five peanuts, followed by five hazelnuts, then five more peanuts (PHP). Others were given five hazelnuts, five peanuts, then five hazelnuts (HPH). The purpose of this variation was to evaluate how squirrels would respond to offers of nuts with different nutritional and “economic” values at different times. Hazelnuts are, on average, larger than peanuts, and their hard shell prevents spoiling when stored long term, but peanuts tend to have more calories and protein per nut.  Researchers videotaped and coded each encounter to calculate variables, like the number of head flicks per nut, time spent pawing a nut, and time spent traveling or caching nuts. See the video below for a visual example of these behaviors.

https://www.youtube.com/watch?v=MKKgpZoZ0Y0&feature=youtu.be

The results showed that season and nut type significantly affected the squirrel’s response, and the squirrel’s evaluation of the nut could forecast its course of action. Predictably, the fall trial showed squirrels quickly caching most of their nuts, likely taking advantage of the season’s abundance. Squirrels ate more nuts in the summer, though they still cached the majority of hazelnuts (76% vs. 99% cached in the fall) likely due to their longer “shelf life”.

The squirrels who head-flicked at least one time in response to a nut cached it nearly 70% of the time, while those who spent more time pawing the nut tended to eat it (perhaps searching for the perfect point of entry?). The time spent caching and likelihood of head flicking were clearly linked to the type of nut received and to the trial number, with time spent evaluating a nut decreasing as the trials continued for a squirrel. The authors suggest that the changes in food assessment strategies in response to resource availability provide an example of flexible economic decision making in a nonhuman species.

So, now that squirrels are possibly making economically prudent decisions when evaluating nuts, I guess we have to give them a break when we see them running around like crazy on campus. Doesn’t mean we’ll stop laughing.

Citation: Delgado MM, Nicholas M, Petrie DJ, Jacobs LF (2014) Fox Squirrels Match Food Assessment and Cache Effort to Value and Scarcity. PLoS ONE 9(3):e92892. doi:10.1371/journal.pone.0092892

Image: Squirrel by likeaduck

Video: Video S1 from the article

The post Squirrels – Nut Sleuths or Just Nuts? appeared first on EveryONE.

HOW TO RIG AN ELECTORAL LANDSLIDE, HUNGARIAN-STYLE

(partial list, to be updated: please provide corrections and additions):

– Start with a 2/3 supermajority, generated by a smear campaign and inciting mobs to violence

– Gerrymander the electoral districts

– Adopt laws to control the media

– Buy up the media

– Recruit and buy up corrupt oligarchs

– Re-write the constitution

– Adopt new laws and amendments whenever desired

– Retire the judiciary and appoint your own

– Take over the national bank presidency

– Take over private pensions

– Nationalise businesses and properties, then re-privatize to cronies

– Conduct press and police campaigns to smear the opposition

– Use EU subsidies to fund government electoral campaign

– Limit electoral campaigning in media

– Fund private foundations to do limitless media promotion of government

– Use taxes and subsidies to lower utility costs to disguise economic decline

– Blame all economic ills on opposition

– Oblige tenement owners to advertise utility savings

– Enfranchise non-citizens in adjoining countries to vote; facilitate their voting

– Make it as difficult as possible for citizens living abroad to vote (misinformation, red tape)

– Fund the fraudulent creation of many bogus opposition parties to create confusion in the ballot box

– Have oligarchs buy up all poster campaign space for government posters

– Adopt laws restricting campaign posting in public view

– Use media control to foster a popular climate of hatred toward the opposition and xenophobia toward the outside world

– Borrow bail-out funds at extortionate rates from Russia for nuclear plant building

– Use the loan to fund the “Hungary is Performing Better” campaign

– Leak innuendos and initiate criminal proceedings against the opposition weekly, dropping them once they prove groundless and have already done their damage

[Please re-post this list amended and expanded: Maybe there’s hope to get it to go viral before the elections]

The Wellcome Trust APC spreadsheet (ed. Michelle Brook and community) adds massive crowdsourced value to Open Access. YOU can help

Last week The Wellcome Trust published its list of ca. 2000 articles for which it had paid Article Publishing Charges (APCs). It spent about 3 million GBP.

Those publications are a valuable investment. On Monday Mark Walport told us at the EuropePMC young scientist writers’ awards that publishing was as valuable as test tubes. Well-communicated science is of great value; science behind paywalls loses hugely. My rough guess is that publishing is ca. 1–2% of the cost of a grant, so I’d guess this represents about 200 million GBP of overall investment. [See below how to avoid the guessing.]

But what the Wellcome Trust list offers is just the beginning. Michelle Brook, who runs Science at the Open Knowledge Foundation, immediately saw the potential. With great energy (and loss of sleep) she coordinated volunteers to curate this list. The result is at https://docs.google.com/spreadsheets/d/1RXMhqzOZDqygWzyE4HXi9DnJnxjdp0NOhlHcB5SrSZo/edit#gid=0

This isn’t the “version of record”. It’s a snapshot. Get used to the idea that in the Digital Century everything is snapshotted. There is often no “final version”. There may be intermediate versions used for specific purposes – for example checking that Elsevier has published what it got paid to publish. But everything is capable of revision and enhancement – in so many ways. I’ll give some below.

Michelle is using Google Spreadsheets, which allows anyone to view the exact state of the spreadsheet. When she first prepared the spreadsheet it could be a bit confusing, because if anyone sorts a column it alters everyone’s view. But we solve that by social, not technical, means. We know who is there – they are all friends (by definition, you are part of the community) – and we let each other know what we are doing.
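One way to work with the sheet programmatically is to pull a dated CSV snapshot. A minimal sketch follows; the export endpoint is an assumption about Google Sheets' CSV export URL pattern, and the sheet key is taken from the link above:

```python
import csv
import io
import urllib.request

# Sheet key from the public link above; the export?format=csv endpoint
# is an assumed Google Sheets feature for fetching a CSV snapshot.
SHEET_KEY = "1RXMhqzOZDqygWzyE4HXi9DnJnxjdp0NOhlHcB5SrSZo"
CSV_URL = ("https://docs.google.com/spreadsheets/d/"
           + SHEET_KEY + "/export?format=csv&gid=0")

def fetch_snapshot(url=CSV_URL):
    """Download one dated snapshot; there is no 'final version'."""
    with urllib.request.urlopen(url) as resp:
        return list(csv.reader(io.TextIOWrapper(resp, encoding="utf-8")))

# Network access required:
# rows = fetch_snapshot()
# print(len(rows) - 1, "article records in this snapshot")
```

Because every fetch is just a snapshot, dating each download is the natural way to track how the crowdsourced curation evolves.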

The result is mind-blowing. It’s a human-machine synthesis of a section of scholarly publishing. So here’s a rough roll of honour:

  • Mark Walport, Robert Terry for making Wellcome the most dynamic force in Open Access and providing the funding
  • Robert Kiley and colleagues
  • Michelle Brook (and OKFN) for pulling this together, and in no order (and maybe with omissions)
  • Stuart Lewis
  • Theo Andrew
  • Nic Weber
  • Jackie Proven
  • Fiona Wright
  • Stuart Lawson
  • Jenny Molloy
  • Yvonne Budden
  • SM
  • Rupert Gatti
  • Peter Murray-Rust
  • ck

That’s 13 contributors in less than a week. That’s how crowdsourcing works. About half the entries have names, so there’s lots of opportunity for you. You don’t need any specialist knowledge – it’s open to all. It would make a good high-school project. The Open Access Button could be involved, for example.

I think this spreadsheet has added a million GBP to Wellcome’s output.

What???!!! That’s an absurd amount to claim for one week of crowdsourcing. OK, I’ll revise it below…

Yes. There is 200 million GBP of investment. If no-one knows about it, its value is small (we can count people trained, buildings kept up, materials, etc.). But the major outcome of research funding, apart from people and institutions, is KNOWLEDGE.

If the knowledge is 100 million, that’s a bad investment. If it’s 200 million, it’s marginal. To be useful the knowledge must be at least 300 million. [I’ll claim a multiplier of 5 for the mean of Open Knowledge and I’ll write a separate post…].

So what can this spreadsheet be used for?

  • we can download all the full text and search it. [“Some of this isn’t CC-BY, so you can’t do that”… Well, I’m going to mine it for Facts, and that’s legal – and anyway, if you want to take me to court and claim that copyright stops people doing research that stops people dying, I’ll see you there. It’s Open – Wellcome Trust has paid huge amounts of its own money and we have a moral right to that output.] So expect the Content Mine to take this as a wonderful resource.
  • we can teach with it. For most science the publishers forbid teaching without paying them an extra ransom. Well, there’s enough here that we can find masses of useful examples for teaching: cells, sequences, species, phylogenetic trees, metabolism, chemical synthesis, etc. When you are creating teaching resources, one of the first places you will look will be the WT-OKFN spreadsheet.
  • we can make science better. There’s enough here to create books of recipes (how-tos), typical values, etc. We can develop FRAUD-detection tools.
  • we can engage citizens. [“Hang on – you’re going too far. Ordinary people can’t be exposed to science.” Tell that to cyclists in Cambridge – there’s a paper on the “health benefits of cycling in Cambridge”. I think they’ll understand it. And I think they may be more knowledgeable than many paywall-only readers.]
  • we can detect papers behind paywalls. And the hints are that it’s not just Elsevier…
  • we can develop the next generation of tools. This spreadsheet is massive for developing content-mining. It’s exactly what I want: a collection of papers from all the biomedical publishers, and I know I can’t be sued.
  • a teaching resource. If I were teaching Library and Information Science, I would start a modern course with this spreadsheet. It’s a window onto everything that’s valuable in modern scientific information.
  • an advocacy and awareness aid.
  • a tool to fundamentally change how we communicate science. This is where the future is, and it’s just the beginning: information collected and managed by new types of organisation, such as the Open Knowledge Foundation. Democracy and bottom-up rather than top-down authoritarianism. If you are in conventional publishing and you don’t understand what I have just said, then you are in trouble. (Unless of course you have good lawyers and rich lobbyists who can stop the world changing.) We haven’t even put it into RDF yet, and that will be a massive step forward.
  • a community-generator. We’ve already got 13 people in a week. That’s how OpenStreetMap started; it’s now got half a million. WT-Brook could expand to the whole of enlightened scientific communication. Think Wikipedia. Think Mozilla. Think Geograph. Think OpenStreetMap. Think MySociety. Think Crowdcrafting. Think Zooniverse. These can take off within weeks or months.
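As a toy illustration of the fact-mining idea (not the actual Content Mine pipeline), a naive pattern can pull species-like binomials out of full text; the pattern, function name and sample text are all hypothetical:

```python
import re

# Crude pattern for Latin binomials: a capitalised genus followed by a
# lowercase epithet. Real pipelines filter candidates against curated
# taxonomies, because patterns like this over-match ("We observed" below).
BINOMIAL = re.compile(r"\b([A-Z][a-z]+ [a-z]{3,})\b")

def find_species_candidates(text):
    """Return sorted, de-duplicated binomial-shaped phrases."""
    return sorted(set(BINOMIAL.findall(text)))

paper = ("We observed Sciurus niger caching hazelnuts. "
         "Unlike Rattus norvegicus, fox squirrels ...")
print(find_species_candidates(paper))
# → ['Rattus norvegicus', 'Sciurus niger', 'We observed']  (note the false positive)
```

Even this naive sketch shows why a legally safe corpus matters: the value is in running such extractors over thousands of papers, not one.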

So it was silly to suggest this spreadsheet liberates a million pounds of value. I’ll be conservative and settle for ten million.


Elseviergate: Elsevier is STILL charging for Open Access even after I have told them. Wellcome should take them to court

Someone needs to take formal action against Elsevier – like taking them to court. In this case, Wellcome.

Two days ago I posted https://blogs.ch.cam.ac.uk/pmr/2014/03/24/today-at-elseviergate-more-potholes-and-bumps-on-the-shared-journey-please-help-us-find-paywalled-openaccess-elsevier/ where I mentioned an APC-paid Open Access article behind a paywall. In response, Elsevier lifted the paywall.

Prompted by a tweet from Ross Mounce I looked again. Now they have put the article back behind the paywall, requiring non-subscribers to pay for Open Access. Unethical, immoral and, I suspect, a clear breach of contract law.

Here’s today’s screenshot:

[screenshot: elsevier20a]

I simply don’t know what to say. Does anyone care? Or do we continue to pour public funds into an arrogant, avaricious, unprincipled company?

UPDATE: I’ve checked the earlier paywalled Open Access articles and they are not accessible to anyone (“we are experiencing technical difficulties”).


Open Access as an important means of promoting trust in science

25 March 2014 – Sander Dekker sees open access as an important means of promoting trust in science. As a citizen you cannot get at all that research in the journals, because you run into a paywall, Dekker argues. “While all the nonsense from the quacks is freely accessible.”

At NEMO in Amsterdam, Sander Dekker discussed two societal cases that have caused a great deal of unrest – shale gas and vaccination against cervical cancer – with scientists and policymakers. How can they jointly ensure that science is not misused? And when is research still reliable and independent? These were the most pressing questions on the table at the second debate in the series that OCW is organising together with the Rathenau Instituut and the WRR.

Elseviergate: Checking whether paid OpenAccess is behind paywalls? Elsevier says it’s more efficient than libraries

The recent (wonderful) collection of Wellcome-sponsored articles (thanks Robert Kiley) has highlighted the huge percentage of “hybrid” articles – where both the author and the subscribing library pay the publishers. Publishers claim they give the money back to libraries.

Do you trust major publishers to get it right?

Michelle Brook has made a magnificent effort to collate all this information. In her blog post ”the-sheer-scale-of-hybrid-journal-publishing” she gives tables:

Top 5 publishers by total cost to Wellcome Trust

Publisher                                       No. of articles   Maximum cost   Average cost   Total cost (nearest £1,000)
Elsevier (inc. Cell Press)                      418               £5,760         £2,448.158     £1,036,000
Wiley-Blackwell                                 271               £3,078.92      £2,009.632     £545,000
PLOS                                            307               £3,600         £1,139.286     £350,000
Oxford University Press                         167               £3,177.60      £1,850.099     £300,000
Nature Publishing Group (not inc. Frontiers)    80                £3,780         £2,696.396     £216,000
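One quick check anyone can run on these figures: multiplying the article count by the average cost should land near the reported total (which is rounded to the nearest £1,000). The figures below are transcribed from the table; the arithmetic and variable names are mine:

```python
# Sanity-check the table: articles x average cost vs the reported total.
# Large gaps flag transcription or aggregation issues worth investigating.
publishers = {
    "Elsevier (inc. Cell Press)": (418, 2448.158, 1_036_000),
    "Wiley-Blackwell": (271, 2009.632, 545_000),
    "PLOS": (307, 1139.286, 350_000),
    "Oxford University Press": (167, 1850.099, 300_000),
    "Nature Publishing Group": (80, 2696.396, 216_000),
}

for name, (articles, avg_cost, reported_total) in publishers.items():
    implied = articles * avg_cost
    gap = implied - reported_total
    print(f"{name}: implied £{implied:,.0f}, "
          f"reported £{reported_total:,} (gap £{gap:,.0f})")
```

Run against this table, some rows agree to within the rounding while others do not, which is exactly the kind of discrepancy crowdsourced curation is good at chasing down.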


I have been concerned that the quality of Open Access provided by publishers is often unacceptable. I started at the top of the list – Elsevier – and found 4 articles behind paywalls. (There may be more – I haven’t done all 418 – volunteers would be welcome). That’s totally unacceptable to me and most people.

It’s not totally unacceptable to Elsevier. To them it’s a “bumpy road on the shared journey”. I call this “mumble”. Elsevier’s Directorate of Access and Policy (formerly Universal Access) produces a great deal of mumble.

I doubt Elsevier has apologised to any authors.

I doubt Elsevier has refunded them any money.

I doubt Elsevier has communicated with the funder (Wellcome Trust).

The more I ask, the more mumble I get. I have lost all trust in Elsevier to produce accurate OpenAccess or to give clear, accurate information.

So we have to resort to other methods:

  • write to your parliamentary representative (I have)
  • Blog and tweet problems (I have)
  • Inform funders (I have communicated with Robert Kiley of Wellcome Trust)
  • Report Elsevier to trading standards (I can’t unless I have an author who has paid money)

and

  • Ask universities to provide information on exactly what they paid Elsevier in APCs, and for which articles.

So I tweeted this idea. It’s something that university libraries could and should do. I could (and maybe will) find out through FOI, though I’d rather they did it voluntarily. One or two universities seemed to catch on, so I tweeted:

[screenshot: elsevier20]

This statement staggered me.

If I were a librarian I would be outraged.

Elsevier says it is better than them at knowing what APCs have been paid and whether the article is paywalled. My simple research over the last week has shown vast errors in Elsevier’s system and arrogant complacency.

But I try to be a fair person and I try to avoid mumble, so here is a simple, clear question to the DoAP.

Please give me a machine-readable list of all articles Elsevier published in 2012–2013 for which there was an APC.

Elsevier should have done this publicly already.

Only a machine-readable list (like the one that the Wellcome Trust has provided) will do. The following are NOT acceptable:

  • “Search for ‘open access’ in our ScienceDirect API.” (PMR: I don’t trust Elsevier’s system to be 100% correct.)
  • “Wait until we have fixed it in ‘summer 2014’.” They’ve taken a MILLION POUNDS. They should have a record. Maybe the UK tax office would like to know their income?
  • Mumble.

But Elsevier says it’s more efficient than Libraries.

Libraries: can you counter this by providing lists of the APCs you paid to Elsevier? Then we’ll see if any are behind paywalls.
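A hedged sketch of that audit: given DOIs for APC-paid articles, resolve each one and apply a crude textual heuristic. The phrase list and input filename are hypothetical, and "looks paywalled" here is a heuristic, not a legal determination:

```python
import urllib.request

# Phrases that suggest a paywalled landing page. Purely illustrative;
# a real audit should also check licence metadata on the article page.
PAYWALL_HINTS = (b"purchase", b"get access", b"sign in to view")

def page_looks_paywalled(page_bytes):
    """Apply the textual heuristic to a fetched page body."""
    return any(hint in page_bytes.lower() for hint in PAYWALL_HINTS)

def check_doi(doi):
    """Resolve a DOI and apply the heuristic (network access required)."""
    req = urllib.request.Request("https://doi.org/" + doi,
                                 headers={"User-Agent": "apc-audit/0.1"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return page_looks_paywalled(resp.read())

# Hypothetical usage: one DOI per line in a list a library supplies.
# for doi in open("apc_paid_dois.txt"):
#     print(doi.strip(), check_doi(doi.strip()))
```

Even a heuristic this crude, run over a library's APC list, would surface candidates for the kind of manual checking described above far faster than checking 418 articles by hand.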