We’re coming to the Hackathon

The JISC/SWAT4LS/OKF Hackathon starts on Tuesday in London. http://www.swat4ls.org/workshops/london2011/ and see Jenny Molloy’s blog post: http://science.okfn.org/2011/11/09/open-research-reports-trailer/

Here are some of the stars:

On the left is KosherFrog (alias @gfry, alias Gilles Frydman). (We’ll create some better glasses for him.) Here’s his twitter avatar:

In the middle is a patient – or rather the mother of a patient. She is carrying Roo’s salbutamol inhaler. She needs access to medical information.

And on the right is McDawg. He’s Graham Steel. Here’s HIS twitter avatar.

We’ll be creating semantic resources for disease.

Be there.

Cambridge Crystallographic Data Centre disputes non-re-usability of primary data (Am. Chem. Soc charges > 100 USD to view this discussion)

I have been alerted to a discussion in the letter pages of J. Chem. Inf. Modeling (an ACS Journal). I normally read the literature through a paywall window (my home machine has no privileges, so I get a “citizen-enhanced” view of the primary literature – the enhancement is of course massively negative: I can’t read most of it). For most things, if I can’t read them they don’t exist – an increasingly common approach. Occasionally I switch on access to the University VPN, which allows me to read the fulltext – thereby requiring the University to continue its subscription (in dollars) to this journal. Unless they use the paywall filter, academics in rich universities (which are the only real market for scholarly journals) have no idea how impoverished the world is. But many of my readers will appreciate it – they are the Scholarly Poor. And what follows can be understood by anyone – you don’t have to be a chemist. Note that many research institutions do not subscribe to JCIM, so I expect most readers will have a “scholarly poor lens” on what follows.

  • Earlier this year a paper was published http://pubs.acs.org/doi/abs/10.1021/ci100223t

    Data-Driven High-Throughput Prediction of the 3-D Structure of Small Molecules: Review and Progress

    Alessio Andronico, Arlo Randall, Ryan W. Benz, and Pierre Baldi*

    School of Information and Computer Sciences, Institute for Genomics and Bioinformatics and Department of Biological Chemistry, University of California, Irvine, Irvine, California 92697-3435, United States

    J. Chem. Inf. Model., 2011, 51 (4), pp 760–776 DOI: 10.1021/ci100223t Publication Date (Web): March 18, 2011 Copyright © 2011 American Chemical Society

I can’t reproduce the abstract because although it was written by the authors they have signed over its ownership/copyright to ACS. (ACS in their generosity allow you to read this at the end of the link above). Note that the system is mounted at http://cosmos.igb.uci.edu/ . It contains the rubric:

Note: In as much as this Service uses data from the CSD [Cambridge Structural Database] , it has been given express permission from the CCDC [Cambridge Crystallographic Data Centre] . At the request of the CCDC, no more than 100 molecules can be uploaded to the Service at a time, and the Service ought to be used for scientific purposes only, and not for commercial benefit or gain.

Well – that was a pretty challenging paper, wasn’t it? (Sorry scholarly poor, I can’t tell you what it said – but trust me – or pay 35 USD).

This elicited a response from the director of the CCDC. If you read the abstract you will see their involvement. (BTW I have no relation to them except geographical proximity, and the University has declared that they don’t belong to the University (for FOI purposes) although they are listed as a department.) Here is his 1-page response:

  • http://pubs.acs.org/doi/pdfplus/10.1021/ci2002523 Data-Driven High-Throughput Prediction of the 3-D Structure of Small Molecules: Review and Progress. A Response from The Cambridge Crystallographic Data Centre,

    Colin R Groom* The Cambridge Crystallographic Data Centre, 12 Union Road, Cambridge CB2 1EZ, U.K.

He clearly disagrees with their contention. (Scholarly Poor you will have to fork out another 35 USD to read this single page). [2]

And the original authors responded

  • (http://pubs.acs.org/doi/abs/10.1021/ci200460z ) Data-Driven High-Throughput Prediction of the 3-D Structure of Small Molecules: Review and Progress. A Response to the Letter by the Cambridge Crystallographic Data Center

    Pierre Baldi

    J. Chem. Inf. Model., Just Accepted Manuscript • DOI: 10.1021/ci200460z • Publication Date (Web): 22 Nov 2011

Wow! Some strong disagreement on matters of fact. (Stop whining Scholarly Poor and pay another 35 USD to read this letter – it’s nearly 2 pages!). I’ll reveal that it contains phrases like “simply false”. And you can read the abstract which contains the phrase “significant impediments to scientific research posed by the CCDC.”

So that is a pretty damning indictment. Of the CCDC? Maybe, if you can read the letters. But certainly of the ACS. An important discussion about the freedom of re-use of the scholarly literature is hidden behind a paywall. The letters have been written by scientists and presumably reproduced verbatim by the ACS. What possible justification is there for the charge of 35 USD? There is no peer review involved. But then the ACS charges 35 USD for everything, including an 8-WORD retraction notice. (It’s sort of easier just to charge vast amounts of money than to think about what you are doing to science.)

So I am in a dilemma. How do I bring this discussion to public view? Because that is what a Scholarly Society SHOULD wish. I can’t expect everyone to pay 105 USD. (The part of the first paper that is involved is only two sentences.) I have the following options:

  • Do nothing – this will perpetuate the injustices
  • Write summaries of the letters (absurd because it will distort the meaning)
  • Extract paragraphs and publish them under fair use. (There is no doctrine of fair use in the UK and I could be sued for any phrase extracted – I have already laid myself open to this with the phrase “simply false”.)
  • Urge the authors of the letters to publish them Openly. In doing so they will break the conditions of publication and lay themselves open to legal action or having subscriptions to JCIM cut off
  • Write to the editor of the Journal suggesting it would be in the public interest to publish the letters? In general editors don’t reply – but I know this one. But in any case I doubt they would do it, and it makes the situation worse
  • Or follow a reader’s suggestion I haven’t thought of

Because I am now going to continue to challenge the CCDC. I have been turned down on FOI grounds on a technicality (that the CCDC, although listed as a department of the University, isn’t part of it for FOI purposes). BTW it took the University FOI office 19.8 days to work that out.

If you read the last paper (shut up and pay!) you will see that the authors quote our work on Crystaleye and suggest that it, together with the Crystallography Open Database (COD), could and now should replace the CCDC. They say (I have removed all the letter “O”s [1] to avoid direct quoting; 35 USD will tell you where the O’s are meant to be):

As histry shws, thse wh stand in the way f demcracy and scientific prgress end up lsing ver the lng-run. The reactinary attitude f the CCDC staff has started t backfire by energizing academic labratries arund the wrld t find alternative slutins arund the CCDC.

I agree with the sentiments expressed. The only problem is that the authors chose to do it behind a paywall.

I shall continue my campaign to liberate “our” data from the CCDC+Wiley/Elsevier/Springer monopoly. Sancho Panza (http://en.wikipedia.org/wiki/Sancho_Panza ) is welcome to join me.

[1] http://en.wikipedia.org/wiki/The_Wonderful_O James Thurber.

[2] UPDATE: I managed to get it for free but maybe I have a cached copy?

UPDATE: It now seems that most people can get the first letter (“Editorial”) for free but I still have to pay for the UCI response

Scientists should NEVER use CC-NC. This explains why.

There is a really important article at http://www.pensoft.net/journals/zookeys/article/2189/creative-commons-licenses-and-the-non-commercial-condition-implications-for-the-re-use-of-biodiversity-information. (Hagedorn G et al)

[NOTE the OKF has a clear indication of the problems of CC-NC. They should add a link to Hagedorn. See my earlier blog post http://blogs.ch.cam.ac.uk/pmr/2010/12/17/why-i-and-you-should-avoid-nc-licences/ ].

So, you aren’t interested in Biodiversity Journals? Never read Zookeys? (I didn’t know it existed.) But in one day about 1200 people have accessed this article. Yet another proof that WHAT you publish matters, not WHERE. And hopefully this blog will send a few more readers that way.

I can’t summarise all of it. The authors give a very detailed and, I assume, competent analysis of Copyright applied to scientific content (data, articles, software) and its licensability under Creative Commons. Note that “This work is published under a Creative Commons Licence” – which so many people glibly write – is almost useless. It really means “This work is copyrighted [unless it’s CC0] and to find out whether you have any rights you will have to look at the licence”. So please, always, specify WHICH CC licence you use.

The one you choose matters, because it applies the rule of LAW to your documents. If someone does something with them that is incompatible with the licence they have broken copyright law. For example combining a CC-NC-SA licence with CC-BY-SA licence is impossible without breaking the law.
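The ShareAlike clash can be made concrete. Below is a toy sketch (purely illustrative, not legal advice; the function and licence strings are my own, not from any Creative Commons tooling) of why two different ShareAlike licences cannot be combined in one derivative work:

```python
# Toy illustration, not legal advice. A ShareAlike (SA) condition requires
# that any derivative work carry the same licence. If two source works
# carry *different* SA licences, no single licence for the derivative can
# satisfy both conditions at once.
def sa_compatible(licence_a: str, licence_b: str) -> bool:
    """Can works under these two licences be combined in one derivative,
    considering only the ShareAlike condition?"""
    both_sa = licence_a.endswith("SA") and licence_b.endswith("SA")
    if both_sa:
        return licence_a == licence_b  # only identical SA licences combine
    return True  # at most one SA condition: no direct SA conflict

print(sa_compatible("CC-BY-SA", "CC-BY-NC-SA"))  # False: the clash in the text
print(sa_compatible("CC-BY-SA", "CC-BY-SA"))     # True
```

Only the SA condition is modelled here; NC itself raises further conflicts, as the rest of the post explains.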

There are so many misconceptions about NC. Many people think it’s about showing that you want people to share your motivation. Motivation is irrelevant. The only thing that matters is whether a court would judge the licensee’s use to break the formal non-commercial condition. There’s little case law, but the Hagedorn paper argues that being a non-profit doesn’t make a use non-commercial. Recovering costs can be seen as commercial. And so on.

We came across this when we wished to distribute a corpus of 42 papers used in training OSCAR3. The corpus was made available by the Royal Society of Chemistry. It was used (with contributions from elsewhere) to tune the performance of OSCAR3 on chemistry journals. Because training with a corpus is a key part of computational linguistics we wished to distribute the corpus (it’s probably less than 0.1% of the RSC’s published material – it would hardly affect their sales). After several years they agreed, on the basis that the corpus would be licenced as CC-NC. I pointed out very clearly that CC-NC would mean we couldn’t redistribute the corpus as a training resource (and that this was essential, since others would wish to recalibrate OSCAR). Yes, they understood the implications. No, they wouldn’t change. They realised the problems it would cause downstream. So we cannot redistribute the corpus with OSCAR3. The science of textmining suffers again.

Why? If I understood correctly (and they can correct me if I have got it wrong) it was to prevent their competitors using the corpus. (The competitors include other learned societies.)

I thought that learned societies existed to promote their discipline. To work to increase quality. To help generate communal resources for the better understanding and practice of the science. And chemistry really badly needs communal resources – it’s fifteen years behind bioscience because of its restrictive practices. But I’m wrong. Competition against other learned societies is more important than promoting the quality of science.

Meanwhile Creative Commons is rethinking NC. They realise that it causes major problems. There are several plans (see Hagedorn paper):

Creative Commons is aware of the problems with NC licenses. Within the context of the upcoming version 4.0 of Creative Commons licenses (Peters 2011), it considers various options of reform (Linksvayer 2011b; Dobusch 2011):

• hiding the NC option from the license chooser in the future, thus formally retiring the NC condition

• dropping the BY-NC-SA and BY-NC-ND variants, leaving BY-NC as the only non-commercial option

• rebranding NC licenses as something other than CC; perhaps moving to a “non-creativecommons.org” domain as a bold statement

• clarifying the definition of NC

I’d support some of these (in combination) but not the last. Because while it is still available many people will use it on the basis that it’s the honourable thing to do (I made this mistake on this blog). And others will use it deliberately to stop the full dissemination of content.

Harvard Open Access Policy Benchmark Needed

It is important to calculate what percentage of the total annual refereed journal article output of Harvard (participating Faculties) is represented by the c. 6457 deposits to date in Harvard’s DASH Repository since the adoption of Harvard’s OA Policy.

That is the objective measure of the success of an OA policy, and hence of whether it provides a model ready for other universities to emulate — or whether it still needs some tweaks (e.g., to make it more like the U. Liege ID/OA policy, which (1) requires immediate deposit with no waiver, (2) only requests (but does not require) that the deposit be made immediately OA, (3) designates repository deposit as the sole means of submitting journal articles for research performance review, and has generated 67,631 deposits to date).

The global baseline rate of making articles OA (without any OA policy) is about 20% (varying by discipline). The target is of course 100%. And about 60% is a benchmark, because that is the percentage of journals that already endorse immediate OA deposit (hence do not require Harvard-style rights retention in order to make deposits OA immediately).

It is extremely important to get a clear idea of exactly how well Harvard’s policy is doing after nearly 4 years: If the deposit rate is near 100%, it is doing as well as or better than all other kinds of OA mandates. If it is close to 60%, that’s still good, but it’s not clear whether its rights-retention clause is the cause, or its deposit clause.

And if it’s closer to 20%, then Harvard’s deposit clause is not working and needs upgrading to ID/OA.
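The benchmark comparison called for above is simple arithmetic once the denominator is known. A sketch follows; the annual-output figure is hypothetical, since obtaining that denominator is exactly what the post asks for, while the deposit count, duration and benchmark percentages come from the text:

```python
# Deposit-rate benchmark sketch. deposits, years and the benchmark
# percentages come from the post; annual_output is HYPOTHETICAL.
deposits = 6457        # DASH deposits since the OA policy was adopted
years = 4              # policy in force for nearly 4 years
annual_output = 10000  # hypothetical annual refereed-article output

rate = 100 * deposits / (years * annual_output)
print(f"deposit rate: {rate:.1f}%")

for label, benchmark in [("baseline, no policy", 20),
                         ("journals endorsing immediate OA deposit", 60),
                         ("target", 100)]:
    verdict = "at or above" if rate >= benchmark else "below"
    print(f"{verdict} the {benchmark}% benchmark ({label})")
```

With the hypothetical denominator the rate falls below the 20% baseline, which under the argument above would indicate the deposit clause needs upgrading to ID/OA; the real answer depends entirely on Harvard’s actual output figure.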

This is all the more important since it is the Harvard model that other universities are likely to follow, come what may.

Stevan Harnad
EnablingOpenScholarship (EOS)

Holiday Service Update

With the end-of-year holiday season upon us, we wanted to let our authors know in advance that they may experience a slight delay in the peer review process of their manuscript if they submit anytime between now and the end of the year. This is because many of our academic editors and external referees will be out of the office at some point during the holiday season. We will endeavor to ensure that all manuscripts submitted to PLoS ONE are evaluated as quickly as possible, but please accept our advance apologies for any delays you experience.

Despite many people being on vacation, the work of the journal continues and so we will continue to receive a large number of emails from authors, academic editors, reviewers and readers throughout this period. Between our offices in the UK and the US, we will have some level of staff coverage every day except for Christmas Day (December 25), but with some team members being out of the office, we may not be able to respond to emails sent to the PLoS ONE inbox (plosone@plos.org) as quickly as usual. We will respond to your message as soon as we can, but in the meantime, you may wish to visit some of the pages on our websites, which may help to answer your question.

Call to action: 2011 White House RFI on public access (deadline Jan. 2)

The opportunity

As part of the process of fulfilling Section 103 of the 2010 America COMPETES Act, the White House Office of Science and Technology Policy (OSTP) has issued a Request for Information (RFI), asking individuals and organizations to provide recommendations on approaches for broad public access to, and long-term stewardship of, peer-reviewed scholarly publications that result from federally funded scientific research. The RFI poses eight multi-part questions, which can be found at the link below.

The Right to Research Coalition strongly encourages student organizations, student governments, and individual students to submit responses supporting public access – your comments will be crucial in both showing the need for public access and ensuring the policy is maximally beneficial for students. This is a real opportunity to greatly expand students’ access to academic research, so please take a few minutes to submit a comment. Each response will be important in demonstrating students’ need for access to federally funded research.

The full text of the RFI may be found at: http://www.gpo.gov/fdsys/pkg/FR-2011-11-04/html/2011-28623.htm  

Who should respond?

It is urgent that as many individuals and organizations as possible respond. We strongly encourage you to write in both individually and on behalf of any student organizations that you are a member of. You’re also encouraged to share this call to action with any friends, colleagues, professors, or others in your network who would be willing to submit a carefully thought-out response.

For reference, the RFI specifically calls for comments from “non-Federal stakeholders, including the public, universities, nonprofit and for-profit publishers, libraries, federally funded and non-federally funded research scientists, and other organizations and institutions with a stake in long-term preservation and access to the results of federally funded research.”

If you can’t answer all of the questions, answer as many as possible – and respond to questions as directly as possible.  Responses that reference the questions directly will have more impact than those that are supportive of public access more generally.

How the results will be used

The input provided through this RFI will inform the National Science and Technology Council’s Task Force on Public Access to Scholarly Publications, convened by OSTP.

OSTP will issue a report to Congress describing: 

1. Priorities for the development of agency policies for ensuring broad public access to the results of federally funded, unclassified research;
2. The status of agency policies for public access to publications resulting from federally funded research; 
3. Public input collected.

Taxpayers paid for the research.
We deserve to be able to access the results.

The main point to emphasize is that taxpayers are entitled to access the results of the research our tax dollars fund, especially given how crucial this research is for a complete, up-to-date education. Taxpayers should be allowed to immediately access and fully reuse the results of publicly funded research. 

To discuss talking points in further detail, don’t hesitate to contact us. 

How to respond

The deadline for submissions is January 2, 2012. Submissions should be sent via email to publicaccess@ostp.gov. Please note: OSTP will publicly post all submissions after the deadline (along with names of submitters and their institutions) so please make sure not to include any confidential or proprietary information in your submission. Attachments may be included. 

As ever, thanks for your commitment to public access and the advancement of these crucial policies.

If you have any questions or comments, don’t hesitate to contact:

Nick Shockey
Director, Right to Research Coalition
nick [at] arl [dot] org

What is the basis of the NaCTeM-Elsevier agreement? FOI should give the answer

In the previous posts (http://blogs.ch.cam.ac.uk/pmr/2011/11/25/textmining-nactem-and-elsevier-team-up-i-am-worried/ and http://blogs.ch.cam.ac.uk/pmr/2011/11/27/textmining-my-years-negotiating-with-elsevier/ ) I highlight concerns (not just mine) about the publicly announced collaboration between NaCTeM (The National Centre for Textmining at the University of Manchester) and Elsevier (henceforth N+E). I am now going to find out precisely the details of this collaboration and, when I have the answers, will be in a position to answer the following questions:

  • What is NaCTeM’s mission for the nation? (NaCTeM formally has a responsibility to the Nation.)
  • What public finance has NaCTeM had, and what is planned for the future?
  • What public money has gone into N+E?
  • What are the planned benefits to Elsevier?
  • What are the planned benefits of N+E to NaCTeM?
  • Are there plans to pass any of these benefits to the wider national community?

In particular my concerns are:

  • Will the benefits of this work be available only through Elsevier’s Sciverse platform?
  • Are we getting value for money?

It may seem strange – and potentially confrontational – to use FOI to get this information rather than simply asking the University or NaCTeM. But the power of FOI is that the University has specialist staff to give clear unemotional answers. And in particular it will highlight precisely whether there are hidden confidential aspects. If so it will be especially important to assess whether this is in the Nation’s interest. And, with the possibility that this will reveal material that is useful to the Hargreaves process and UK government (through my MP) it is important that my facts are correct.

For those who aren’t familiar with the FOI process: each public institution has a nominated officer who must, within 20 working days, answer all questions (or show why s/he should not). I shall use http://whatdotheyknow.com – a superb site set up for this purpose, which means that everyone can follow the process and read the answers. FOI officers are required to respond promptly, and I hope that Manchester will do so – and be quicker than Oxbridge, who ritually take 19.8 days to respond. Note that I am not expected to give my motivation. I shall request information from existing documents or known facts – this is not a place for future hypotheticals or good intentions.


Dear FOI University of Manchester,

I am requesting information under FOI about the National Centre for Text Mining (NaCTeM) and the University’s recently announced collaboration of NaCTeM with Elsevier (http://www.manchester.ac.uk/aboutus/news/display/?id=7627 ). The information should be supported by existing documents (minutes, policy statements, etc.). I shall be concerned with the availability of resource material to the UK in general (i.e. beyond papers and articles). I use the word “Open” (capitalised) to mean information or services which are available for free use, re-use and redistribution without further permission (see http://opendefinition.org/ ). In general this means OSI-compliant Open Source for code and CC-BY or CC0 for content (CC-NC and “for academics only” are not Open).


  • What is the current mission statement of NaCTeM?
  • Does NaCTeM have governing or advisory bodies or processes? If so please list membership, dates of previous meetings and provide minutes and support papers.
  • List the current public funding (amounts and funders) for NaCTeM over the last three years and the expected public funding in the foreseeable future.
  • What current products, content and services are provided to the UK community (academic and non-academic) other than to NaCTeM?
  • What proportion of papers published by NaCTeM are fully Open?
  • What proportion and amount of software, content (such as corpora) and services provided by NaCTeM is fully Open?

Elsevier collaboration

  • Has the contract with Elsevier been formally discussed with (a) funders (b) bodies of the University of Manchester (e.g. senates, councils)? Please provide documentation.
  • Is there an advisory board for the collaboration?
  • Has any third party outside NaCTeM formally discussed the advantages and disadvantages of the Elsevier collaboration?
  • Please provide a copy of the contract between the University and Elsevier. Please also include relevant planning documents, MoIs, etc.
  • Please highlight the duration and the financial resource provided by (a) the University (b) Elsevier. Please indicate what percentage of Full Economic Costs (FEC) will be recovered from Elsevier. (I shall assume that a figure of less than 100% indicates that the University is “subsidising Elsevier” and one greater than 100% means the University gains.)
  • Please indicate what contributions in kind (software, content, services, etc.) are made by either party and what they are valued at.
  • Please outline the expected deliverables. Please indicate whether any of the deliverables are made exclusively available to either or both parties and over what planned timescale.
  • Are any of the deliverables Open?
  • What is the IP for the deliverables in the collaboration?
  • Are any of the deliverables planned to be resold as software, services or content beyond the parties?
  • Has NaCTeM or the University or any involved third party raised the concern that contributing to Sciverse may be detrimental to the UK community?
  • Please indicate clearly what the planned benefit of the collaboration is to the UK.


I shall post this tomorrow so please comment now if you wish to.


Textmining: My years negotiating with Elsevier

This post – which is long, but necessary – recounts my attempts to obtain permission to text-mine content published in Elsevier’s journals. (If you are willing to trust my account, the simple answer is: I have got nowhere, and I am increasingly worried about Elsevier’s Sciverse as a monopolistic walled garden. If you don’t trust this judgement, read the details.) What matters is that the publishers are presenting themselves as “extremely helpful and responsive to requests for textmining” – my experience is the opposite, and I have said so to Eefke Smit of the STM publishers’ association. In particular I believe that Elsevier made me and the chemical community a public promise 2 years ago and they have failed to honour it.

Although it is about chemistry, it is immediately understandable by non-scientists. It is immediately relevant to my concerns about the collaboration between the University of Manchester and Elsevier, but has much wider implications for scientific text-mining in general. New readers should read recent blog posts here, including http://blogs.ch.cam.ac.uk/pmr/2011/11/25/the-scandal-of-publisher-forbidden-textmining-the-vision-denied/ which explains what scientific textmining can cover, and should also read forthcoming posts and comments.

I shall frequently use “we” to mean the group I created in Cambridge and its extended virtual coworkers. I am not normally a self-promoter, but it is important to realise that in the following history “we” are the leading group in chemical textmining, objectively confirmed by the ACS Skolnik award. “We” deserve a modicum of respect in this.


I start from common practice, logic, and legal facts. My basic premises are:

  • I have the fundamental and absolute right to extract factual data from the literature and republish it as Open content. “facts cannot be copyrighted” (though collections can). It has been common practice over two or more centuries for scientists to abstract factual data from the literature to which they have access (either by subscription or through public libraries). There are huge compilations of facts. A typical example is the NIST webbook; please look at http://webbook.nist.gov/cgi/cbook.cgi?ID=C64175&Units=SI&Mask=1#Thermo-Gas. This is a typical page (of probably >> 100,000) carefully abstracted from the literature by humans. It is legal, it is valuable and it is essential.
  • We have developed technology to automate this process. I argue logically that what a human can do, so can a machine. But logic has no force in business or in court, and I am forbidden to deploy my technology by restrictive publisher contracts (see previous posts). So what is a perfectly natural extension of human practice to machines is forbidden for no reason other than the protection of business interests. It has no logical basis.
  • I wish to mine factual data from Elsevier journals, specifically “Tetrahedron” and “Tetrahedron Letters”. I shall refer to these jointly as “Tetrahedron”. The factual content in these journals is created by academics and effectively 100% of this factual content is published verbatim without editorial correction. Authors are required to sign over their rights to Elsevier (and even if there may be exceptions they are tortuous in the extreme and most authors simply sign). Elsevier staff refer to this as “Elsevier content”. I shall always quote this phrase as otherwise it implies legitimacy which I dispute – I do not believe it is legally possible to sign over factual data to a monopolist third party. But it has never been challenged in court.
  • Everything I do is Open. I have no hidden secrets in my emails and anyone is welcome to write to the University of Cambridge under FOI and request any or all of my emails with Elsevier. I personally cannot publish many of them because they contain the phrase: “The information is intended to be for the exclusive use of the intended addressee(s). If you are not an intended recipient, be aware that any disclosure, copying, distribution, or use of the contents of this message is strictly prohibited.” However I suspect an FOI request would overrule this.
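To make the human-versus-machine point concrete, here is a minimal sketch of the kind of factual extraction at issue. The sentence, regex and output structure are my own illustrative inventions; real systems such as OSCAR3 are far more sophisticated than a single pattern:

```python
import re

# Minimal, illustrative fact extraction: pull a melting-point range out of
# a typical experimental sentence. A human abstractor does exactly this by
# eye; the machine version is the same act, automated.
sentence = ("The product was obtained as white crystals, "
            "m.p. 132-134 C; yield 85%.")

# Pattern for "m.p. <low>-<high> C" (illustrative; real text is messier)
mp_pattern = re.compile(r"m\.p\.\s*(\d+)\s*-\s*(\d+)\s*C")

match = mp_pattern.search(sentence)
if match:
    fact = {
        "property": "melting point",
        "low": int(match.group(1)),
        "high": int(match.group(2)),
        "units": "celsius",
    }
    print(fact)
```

The extracted dictionary is a plain fact of the kind the NIST webbook compiles by hand; nothing creative from the source sentence survives in it.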


I have corresponded verbally and by email with several employees of Elsevier. I have done this through my natural contacts as Elsevier provide no central place for me to discuss the questions. I shall anonymise some of the Elsevier employees. If they feel their position has been misrepresented they are welcome to post a comment here and it will be reported in full. If they send an email I reserve the right to publish it Openly.

The simple facts (which can partly be substantiated by FOI on my emails, but are stated here without them) are:

  • About 5 years ago I wrote to all five editors of Tetrahedron and also to the Elsevier office about the possibility of enhancing Tetrahedron content through text-mining. I did not receive a single reply.
  • Two years ago there was a textmining meeting at Manchester, organized by NaCTeM and UKOLN (http://www.nactem.ac.uk/tm-ukoln.php). At that meeting Rafael Sidi, Vice-President Product Management, Elsevier presented “Open Up” (30 mins). [He is the named Elsevier contact in the NaCTeM / Elsevier contract]. He gave no abstract and I do not have his slides. From a contemporaneous blog (http://namesproject.wordpress.com/2009/10/ ) “Rafael Sidi of Elsevier (who got through an eye-boggling 180 slides in 30 minutes!) emphasised the importance of openness in encouraging innovation”. With no other record I paraphrase the subsequent discussion between him and me (and I would be grateful for any eyewitness accounts or recordings). If Rafael Sidi wishes to give his account, he is welcome to use this blog.


    Essentially Rafael Sidi enthusiastically stated that we should adopt open principles for scientific content and mashup everything with everything. I then asked him if I could textmine Tetrahedron and mashup the content Openly. He said I could. I then publicly said I would follow this up. I have taken this as a public commitment by Sidi (who was representing Elsevier very clearly) that factual content in Tetrahedron could be mined without further permission.


  • I then followed it up with mails and phone calls to Sidi. Suffice it to report that all the drive came from me and that after six months I had made no progress. I then tried another tack with a different Elsevier contact. After another 6 months, no progress. I then raised this in 2010-10 with a member of Elsevier staff involved with the Beyond the PDF initiative http://sites.google.com/site/beyondthepdf/ . Although not directly concerned with chemistry she took up the case (and I personally thank her for her efforts) and thought she had made progress (a) by getting Elsevier to draw up a contract allowing me to textmine Tetrahedron and (b) by relaying this to David Tempest (Deputy Director “Universal Access”, Elsevier), who was “currently reviewing policies” and wrote “we have finalised our policy and guidelines I would be happy to discuss this further with you.” [That was 9 months ago and I have heard nothing.]

The contract is public, apparently available to anyone to negotiate (though there are no rights – all decisions are made by Elsevier). I was told:

You can mine 5 years of Tetrahedron, and will be helped to do so by Frankfurt. You can talk to them about formats.  There are two conditions:

1) You agree with the SciVerse developer’s agreement – on http://developer.sciverse.com/start this is http://developer.sciverse.com/developeragreement – this also means you are not allowed to provide access to the Tetrahedron content (no surprise)

2) You can send us a description of the project you are working on, specifically describing the entities you are interested in mining, and the way in which you will use them.

To summarise:

  • Elsevier decide whether I can mine “their” content. I have no right. I can only beg.
  • All my results belong to Elsevier and I cannot publish them. Specifically:


    3.1.3 the Developer has not used robots, spiders or any other device which could retrieve or index portions of the Elsevier website, the Elsevier content or the APIs for any unauthorized purpose, and Developer conforms to all ethical use guidelines as published on the Elsevier
So I cannot search their site except as they permit

3.1.4 the Developer acknowledges that all right, title and interest in and to the Elsevier content, and any derivative works based upon the Elsevier content, remain with Elsevier and its suppliers, except as expressly set forth in this Agreement, and that the unauthorized redistribution of the Elsevier content is not permitted;

“And any derivative works” means that everything I do – chemical structures, spectral data – everything BELONGS TO ELSEVIER. Note the phrase “Elsevier content”. The whole agreement is based on the concept that Sciverse (their platform for publishing “Elsevier content”) is being developed as a walled garden where no-one has rights other than Elsevier.

Well, it has taken me only 18 months to get to that position. I might be able to negotiate something slightly better if I take another two or three years.

And, in any case, I am not begging for permission to do a project. I am asking for my right – both implied by current practice and also stated by Rafael Sidi.

[Incidentally, it will be interesting to see if the University of Manchester has signed up to


And that’s where the matter rests. No progress…



But then I received a request from Elsevier asking if they could use my software. (Why? Because our group is a/the leading one in chemical information extraction). I can’t reproduce it as it’s confidential and I have therefore omitted names, but here is my reply (copied to all the people in Elsevier including Rafael Sidi):

Dear Mr. Murray-Rust,

With great interest I have read your description of the OSCAR 4 chemical entity recognizer. We (redacted) would like to evaluate OSCAR for use in our entity recognizer system and compare it to other analysers.

PMR: Because OSCAR is Open Source you may do this without permission.

A few months ago, I have done some comparisons with other annotators and can only say that OSCAR compares quite favourably and is easily deployed – that is to say, if it runs as a Java server.

PMR: I assume these comparisons are confidential to Elsevier.

This type of functionality is included in the OSCAR 3 implementation and is really easy to access because no coding layers are required to go between our code and yours – just an http webrequest.

We are using .Net for all our development so a web interface would be real nice. I gather from the article posted (OSCAR4: a flexible architecture for chemical text-mining) that there are several wrappers around by several users – is there any chance that there is a .Net or HTTP wrapper that we might use? A short-cut in Java to build one ourselves?

PMR: I understand this to be a request for free consultancy. Unfortunately we have run out of free consultancy at present.

Do you have any advice here?

Normally I would reply in a positive light to anyone asking polite questions, but I have had two years of unfulfilled promises from Elsevier, so I will engage on one condition – that Elsevier honour the public promise that Rafael Sidi made two years ago.

Mr Sidi stated in public that I could have permission to use OSCAR on chemical reactions published in Elsevier journals (Tetrahedron, Tett Letters, etc.) and to make the results publicly Open. Over the last two years I have tried to get action on this (see copied people). The closest I got was an agreement which I would have to sign saying that all my work would belong exclusively to Elsevier and that I would not be able to publish any of it. (The current agreement that my library has signed for subscriptions to Elsevier is that all text-mining is explicitly and strictly forbidden). Not surprisingly I did not sign this.

By Elsevier making a public promise I assumed I would be able to do research in this field and publish all the results. In fact Elsevier has effectively held back my work for this period and looks to continue to do it. I regard Elsevier as the biggest obstacle to the academic deployment of textmining at present.

The work that you are asking me to help you with will be an Elsevier monopoly with restrictive redistribution conditions and I am not keen on supporting monopolies. If you can arrange for Elsevier to honour their promise I will be prepared to explore a business arrangement though I am making no promises at present.

Thank you very much,

I am sorry this mail is written in a less than friendly tone but I cannot at present donate time to an organisation which works against the direction of my research and academia in general. If Elsevier agrees that scientific content can be textmined without permission and redistributed (as it should be if it is to be useful) then you will have helped to make progress.

I have copied in your colleagues who have been involved in the correspondence over the last two years.

[Name redacted]

I am currently treating your request as confidential as it says so, but I do not necessarily regard my reply as such. You will understand that I need a reply.

Needless to say I have received no reply. You may regard my reply as rude, but it is the product of broken promises from Elsevier, delays, etc. So, Rafael Sidi, if you are reading this blog I would appreciate a reply and the uncontrolled permission to mine and publish data from Tetrahedron.

Because I shall forward your response (or the lack of one) to the UK government who will use your reply as an example of whether the publishers are helpful to those wanting to textmine the literature.





Stepping down as Moderator of American Scientist Open Access Forum

In September 2011 the AmSci Open Access Forum went into its 14th year. I think I have been moderating the Forum long enough, and so I’m stepping down as moderator, effective the end of December.

Subscribers will vote on whether to continue the AmSci Forum or whether the other two OA Forums (SOAF and BOAI) are now sufficient to air views on OA.

I will of course remain active in OA and will be posting to the existing Forums (and AmSci, if it continues) and/or the OA Archivangelism blog whenever the spirit moves or the occasion calls!

Stevan Harnad

Textmining: NaCTeM and Elsevier team up; I am worried

A bit over two weeks ago the following appeared on DCC-associates: http://www.mail-archive.com/dcc-associates@lists.ed.ac.uk/msg00618.html

Mon, 07 Nov 2011 09:16:34 -0800

This press release may be of interest to list members. 


University enters collaboration to develop text mining applications

07 Nov 2011


The University of Manchester has joined forces with Elsevier, a leading provider of scientific, technical and medical information products and services, to develop new applications for text mining, a crucial research tool.

The primary goal of text mining is to extract new information such as named entities, relations hidden in text and to enable scientists to systematically and efficiently discover, collect, interpret and curate knowledge required for […]

The collaborative team will develop applications for SciVerse Applications, which provides opportunities for researchers to collaborate with developers in creating and promoting new applications that improve research workflows.

The University's National Centre for Text Mining (NaCTeM), the first publicly-funded text mining centre in the world, will work with Elsevier's Application Marketplace and Developer Network team on the project.

Text mining extracts semantic metadata such as terms, relationships and events, which enable more pertinent search. NaCTeM provides a number of text mining services, tools and resources for leading corporations and government agencies that enhance search and discovery.

Sophia Ananiadou, Professor in the University's School of Computer Science and Director of the National Centre for Text Mining, said: "Text mining supports new knowledge discovery and hypothesis generation.

"Elsevier's SciVerse platform will enable access to sophisticated text mining techniques and content that can deliver more pertinent, focused search results."

"NaCTeM has developed a number of innovative, semantic-based and time-saving text mining tools for various organizations," said Rafael Sidi, Vice President Product Management, Applications Marketplace and Developer Network, Elsevier.

"We are excited to work with the NaCTeM team to bring this expertise to the research community."

Now I have worked with NaCTeM, and actually held a JISC grant (ChETA) in which NaCTeM were collaborators and which resulted in useful work, published articles and Open Source software. The immediate response to the news was from Simon Fenton-Jones:

Let me see if I got this right.

"Elsevier, a leading provider of scientific, technical and medical information products and services", at a cost which increases much faster than inflation, to libraries who can't organize their researchers to back up a copy of their journal articles so they can be aggregated, is to have their platform, Sciverse, made more attractive, by the public purse by a simple text mining tool which they could build on a shoestring.

Sciverse Applications, in return, will take advantage of this public largesse to charge more for the journals which should/could have been compiled by public digital curators in the first instance.

Hmmm. So this is progress.

Hey. It's not my money!


[PMR: I think it’s “not his money” because he writes from Australia, but he will still suffer]

PMR: I agree with this analysis. I posted an initial response (http://www.mail-archive.com/dcc-associates@lists.ed.ac.uk/msg00621.html )


No – it’s worse. I have been expressly and consistently asking Elsevier for permission to text-mine factual data from their (sorry OUR) papers. They have prevaricated and fudged and the current situation is:

“you can sign a text-mining licence which forbids you to publish any results and hands over all results to Elsevier”

I shall not let this drop – I am very happy to collect allies. Basically I am forbidden to deploy my text-mining tools on Elsevier content.




I shall elaborate on this. I was about to write more, because I completely agree about the use of public money and the lack of benefit to the community. However I have been making enquiries and it appears that public funding for NaCTeM is being run down – effectively they are becoming a “normal” department of the university – with less (or no) “national” role.

However the implications of this deal are deeply worrying – because it further impoverishes our rights in the public arena and I will explain further later. I’d like to know exactly what NaCTeM and the University of Manchester are giving to Elsevier and what they are getting out of it.

This post will give them a public chance – in the comments section, please – to make their position clear.


The scandal of publisher-forbidden textmining: The vision denied

This is the first post of probably several in my concern about textmining. You do NOT have to be a scientist to understand the point with total clarity. This topic is one of the most important I have written about this year. We are at a critical point where unless we take action our scholarly rights will be further eroded. What I write here is designed to be submitted to the UK government as evidence if required. I am going to argue that the science and technology of textmining is systematically restricted by scholarly publishers to the serious detriment of the utilisation of publicly funded research.

What is textmining?

The natural process of reporting science often involves text as well as tables. Here is an example from chemistry (please do not switch off – you do not need to know any chemistry.) I’ll refer to it as a “preparation” as it recounts how the scientist(s) made a chemical compound.

To a solution of 3-bromobenzophenone (1.00 g, 4 mmol) in MeOH (15 mL) was added sodium borohydride (0.3 mL, 8 mmol) portionwise at rt and the suspension was stirred at rt for 1-24 h. The reaction was diluted slowly with water and extracted with CH2Cl2. The organic layer was washed successively with water, brine, dried over Na2SO4, and concentrated to give the title compound as oil (0.8 g, 79%), which was used in the next reaction without further purification. MS (ESI, pos. ion) m/z: 247.1 (M-OH).

The point is that this is a purely factual report of an experiment. No opinion, no subjectivity. A simple, necessary account of the work done. Indeed if this were not included it would be difficult to work out what had been done and whether it had been done correctly. A student who got this wrong in their thesis would be asked to redo the experiment.

This is tedious for a human to read. However during the C20 there have been large industries based on humans reading this and reporting the results. Two of the best known abstracters are the ACS’s Chemical Abstracts and Beilstein’s database (now owned by Elsevier). These abstracting services have been essential for chemistry – to know what has been done and how to repeat it (much chemistry involves repeating previous experiments to make material for further synthesis, testing etc.).

Over the years our group has developed technology to read and “understand” language like this. Credit to Joe Townsend, Fraser Norton, Chris Waudby, Sam Adams, Peter Corbett, Lezan Hawizy, Nico Adams, David Jessop, Daniel Lowe. Their work has resulted in an Open Source toolkit (OSCAR4, OPSIN, ChemicalTagger) which is widely used in academia and industry (including publishers). So we can run ChemicalTagger over this text and get:

EVERY word in this has been interpreted. The colours show the “meaning” of the various phrases. But there is more. Daniel Lowe has developed OPSIN which works out (from a 500-page rulebook from IUPAC) what the compounds are. So he has been able to construct a complete semantic reaction:

If you are a chemist I hope you are amazed. This is a complete balanced chemical reaction with every detail accurately extracted. The fate of every atom in the reaction has been worked out. If you are not a chemist, try to be amazed by the technology which can read “English prose” and turn it into diagrams. This is the power of textmining.
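To give non-chemists a feel for what “machine reading” means, here is a deliberately tiny Python sketch of my own. It is NOT OSCAR or ChemicalTagger (those use trained taggers and formal grammars); it merely pulls the parenthesised quantities out of the preparation text with a regular expression, and the function name `quantities` is hypothetical:

```python
import re

# Toy illustration of mining a chemical "preparation" paragraph.
# The real OSCAR/ChemicalTagger toolchain does far more (named entities,
# roles, full reactions); this sketch only recovers amounts and units.
text = ("To a solution of 3-bromobenzophenone (1.00 g, 4 mmol) in MeOH (15 mL) "
        "was added sodium borohydride (0.3 mL, 8 mmol) portionwise at rt")

# A number followed by a common unit of mass or volume or amount.
QUANTITY = re.compile(r"([\d.]+)\s*(g|mg|mL|mmol|mol)\b")

def quantities(s):
    """Return (value, unit) pairs for every amount found in the text."""
    return [(float(v), u) for v, u in QUANTITY.findall(s)]

print(quantities(text))
# → [(1.0, 'g'), (4.0, 'mmol'), (15.0, 'mL'), (0.3, 'mL'), (8.0, 'mmol')]
```

A regex like this breaks on anything beyond the simplest phrasing, which is exactly why purpose-built tools with chemical dictionaries and parsers are needed at scale.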

There are probably about 10 million such preparations reported in the scholarly literature. There is an overwhelming value in using textmining to extract the reactions. In Richard Whitby’s Dial-a-molecule project (EPSRC) the UK chemistry community identified the critical need to text-mine the literature.

So why don’t we?

Is it too costly to deploy?


Will it cause undue load on publisher servers?

No, if we behave in a responsible manner.

Does it break confidentiality?

No – all the material is “in the public domain” (i.e. there are no secrets)

Is it irresponsible to let “ordinary people” do this?


Then let’s start!
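(An aside for the technically minded: “behaving in a responsible manner” is trivial to implement. The sketch below is my own illustration – `PoliteFetcher` is a hypothetical name, not any real crawler – and simply enforces a minimum gap between successive requests:)

```python
import time

class PoliteFetcher:
    """Toy rate-limiter: guarantees at least `delay` seconds between
    successive requests, so a miner never hammers a publisher's servers."""
    def __init__(self, delay=1.0):
        self.delay = delay
        self._last = None  # time of the previous request, if any

    def wait(self):
        """Block until at least `delay` seconds have passed since last call."""
        if self._last is not None:
            sleep_for = self._last + self.delay - time.monotonic()
            if sleep_for > 0:
                time.sleep(sleep_for)
        self._last = time.monotonic()

fetcher = PoliteFetcher(delay=0.1)
start = time.monotonic()
for _ in range(3):
    fetcher.wait()          # in real use: fetch one article here
elapsed = time.monotonic() - start
# elapsed is now at least 0.2 s: two enforced gaps between three requests
```

Every serious crawler does something like this (plus honouring robots.txt); the “denial-of-service” spectre raised by publishers is a solved problem.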



But Universities pay about 5-10 Billion USD per year as subscriptions for journals. Surely this gives us the right to textmine the content we subscribe to.


Here is part of the contract that Universities sign with Elsevier (I think CDL is California Digital Library but Cambridge’s is similar) see http://lists.okfn.org/pipermail/open-science/2011-April/000724.html for more resources

The CDL/Elsevier contract includes this clause from Schedule 1.2(a):


"Subscriber shall not use spider or web-crawling or other software programs, routines, robots or other mechanized devices to continuously and automatically search and index any content accessed online under this Agreement. "


What does that mean?


Whyever did the library sign this?

I have NO IDEA. It’s one of the worst abrogations of our rights I have seen.

Did the libraries not flag this up as a serious problem?

If they did I can find no record.

So the only thing they negotiated on was price? Right?

Appears so. After all 10 Billion USD is pretty cheap to read the literature that we scientists have written. [sarcasm].

So YOU are forbidden to deploy your state-of-the art technology?

PMR: That’s right. Basically the publishers have destroyed the value of my research. (I exclude CC-BY publishers but not the usual major lot).

What would happen if you actually did try to textmine it?

They would cut the whole University off within a second.

Come on, you’re exaggerating.

Nope – it’s happened twice. And I wasn’t breaking the contract – they just thought I was “stealing content”.

Don’t they ask you to find out if there is a problem?

No. Suspicion of theft. Readers are Guilty until proven innocent. That’s publisher morality. And remember that we have GIVEN them this content. If I wished to datamine my own chemistry papers I wouldn’t be allowed to.

But surely the publishers are responsive to reasonable requests?

That’s the line they are pushing. I will give my own experience in the next post.

So they weren’t helpful?

You will have to find out.

Meanwhile you are going to send this to the government, right?

Right. The UK has commissioned a report on this. Prof Hargreaves. http://www.ipo.gov.uk/ipreview-finalreport.pdf

And it thinks we should have unrestricted textmining?

Certainly for science, technical and medical.

So what do the publishers say?

They think it’s over the top. After all they have always been incredibly helpful and responsive to academics. So there isn’t a real problem. See http://www.techdirt.com/articles/20111115/02315716776/uk-publishers-moan-about-content-minings-possible-problems-dismiss-other-countries-actual-experience.shtml

Nonetheless, the UK Publishers Association, which describes its “core service” as “representation and lobbying, around copyright, rights and other matters relevant to our members, who represent roughly 80 per cent of the industry by turnover”, is unhappy. Here’s Richard Mollet, the Association’s CEO, explaining why it is against the idea of such a text-mining exception:

If publishers lost the ability to manage access to allow content mining, three things would happen. First, the platforms would collapse under the technological weight of crawler-bots. Some technical specialists liken the effects to a denial-of-service attack; others say it would be analogous to a broadband connection being diminished by competing use. Those who are already working in partnership on data mining routinely ask searchers to “throttle back” at certain times to prevent such overloads from occurring. Such requests would be impossible to make if no-one had to ask permission in the first place.

They’ve got a point, haven’t they?

PMR: This is appalling disinformation. This is ONLY the content that is behind the publishers’ paywalls. If there were any technical problems they would know where they came from and could arrange a solution.

Then there is the commercial risk. It is all very well allowing a researcher to access and copy content to mine if they are, indeed, a researcher. But what if they are not? What if their intention is to copy the work for a directly competing-use; what if they have the intention of copying the work and then infringing the copyright in it? Sure they will still be breaking the law, but how do you chase after someone if you don’t know who, or where, they are? The current system of managed access allows the bona fides of miners to be checked out. An exception would make such checks impossible.

[“managed access” == total ban]

If you don’t immediately see that this is a spurious argument, then read the techdirt article. The ideal situation for publishers is if no-one reads the literature. Then it’s easy to control. This is, after all, PUBLISHING (although Orwell would have loved the idea of modern publishing being to destroy communication).

Which leads to the third risk. Britain would be placing itself at a competitive disadvantage in the European & global marketplace if it were the only country to provide such an exception (oh, except the Japanese and some Nordic countries). Why run the risk of publishing in the UK, which opens its data up to any Tom, Dick & Harry, not to mention the attendant technical and commercial risks, if there are other countries which take a more responsible attitude.

So PMR doing cutting-edge research puts Britain at a competitive disadvantage. I’d better pack up.

But not before I have given my own account of what we are missing and the collaboration that the publishers have shown me.

And I’ll return to my views about the deal between University of Manchester and Elsevier.

60% of Journals Allow Immediate Archiving of Peer-Reviewed Articles – but it gets much much better…

The database improvements we made to SHERPA/RoMEO in August 2011 have enabled us to generate new statistics on the number of journals that permit self-archiving. We presented a provisional pie chart of journals broken down by RoMEO Colour at Open Repositories 2011. This is updated in the following chart, which uses a snapshot of the RoMEO Journals database taken on the 15th November 2011.

RoMEO Journals by RoMEO Colour 2011-11-15

An alternative way of viewing this data is to look at how many of the versions of articles that academics prefer most can be archived, as in the following chart:

RoMEO Journals by Version - Immediate Archiving Permitted - 2011-11-15

Like RoMEO Colours, this chart is based on strong open access, where there are no embargoes or restrictions that prevent immediate self-archiving. As with the colour chart, this shows that 60% of journals allow the final peer-reviewed version of an article to be archived immediately, with a further 27% permitting the submitted version (pre-print) to be archived immediately.

Only 13% of journals do not allow immediate archiving, but moving away from the ideal of immediate open access, the situation changes once any embargo periods have expired. This is shown in the following chart:

RoMEO Journals by Version - Post-Embargo - 2011-11-15

This chart takes account of embargoes of any length. The most common embargo period is 12 months, followed by 6 months, and then 24 months. A few embargoes are longer, the maximum recorded in RoMEO now being 5 years.

Embargo (months) Percent Relative Frequency
3 1% |
6 17% |||||||||||||||||
12 47% |||||||||||||||||||||||||||||||||||||||||||||||
18 4% ||||
24 28% ||||||||||||||||||||||||||||
36 1% |
60 1% |

Expiring embargos clearly improve the situation regarding archiving, but additional restrictions may still remain. For instance, it may be necessary to obtain permission to archive from the publisher, a fee might have to be paid, or archiving may only be available to authors whose work is paid for by certain specific funders. These restrictions may therefore make archiving impractical. However, if these restrictions can be complied with, the archiving situation improves still further, as shown in our final chart:

RoMEO Journals by Version - Post Compliance - 2011-11-15

This chart shows that a remarkable 94% of journals allow archiving of peer-reviewed articles after any embargo period has expired and any additional restrictions have been complied with. Indeed, for nearly a quarter of journals, the publisher’s version/PDF itself can be archived. Just 1% of journals only permit the pre-peer-review submitted version to be archived. This leaves only 5% of journals that do not permit self-archiving of some form or another.

On the date the data for these charts was compiled (15th Nov. 2011), the RoMEO Journals database held about 19,000 titles. Unfortunately, assigning journals to policies is not an exact process, due to the vagueness of some publishers’ policies and the fact that some publishing houses do work for societies and other third parties whose own open access policies may take precedence. It is therefore difficult to gauge the precision of these figures, but we guesstimate that they are accurate to within 2%. The charts do not take into account journals that are not covered by RoMEO’s own database, but we expect that the relative proportions would be similar.

Peter Millington

A very PLoS ONE Thanksgiving

In honor of Thanksgiving, I thought I’d share a veritable cornucopia of PLoS ONE holiday-related papers old and new.

Tryptophan is the chemical traditionally credited with common post-gorging sleepiness, but it does a lot more than that. As one of the twenty amino acid building blocks for proteins, it serves all sorts of crucial biological functions, and it’s also involved in treatments for depression, HIV-related immune responses, and behavior regulation in 10-year-olds.

Then there are the cranberries, boiled into sauce or grated into relish, which are known to have health benefits due to their antioxidant and nutrient content, including a family of compounds called flavonoids. This paper published in October reported that obese mice treated with cranberry-derived flavonoids showed improvements in their weight-related symptoms, and identified the particular molecular pathway responsible for this effect – though I’m not going to go so far as to suggest that heaping an extra serving of cranberry sauce on your turkey will keep you from needing to undo your belt buckle this holiday.

And if you’re having pumpkin pie for dessert, you might want to consider garnishing it with some of the leaves. According to this paper, pumpkin leaves are a good source of plant-based protein, although the best balance of amino acids would come from combining it with seaweed and spirulina (a common dietary supplement that is made primarily from cyanobacteria).

On second thought, maybe you should just leave the pie as is.

Instead, if apple pie is more up your alley, it may also be good to know that apple orchards can be protected from caterpillar damage by offering nest boxes for bird species like the great tit. A recent study also showed similar results for nest boxes in vineyards – in case you’re planning to have a glass of wine with your meal.

Regardless of what you’re eating or drinking for the holiday, happy Thanksgiving from PLoS ONE!

Tell the White House taxpayers should have access to the results of the research we fund – Act by Jan. 2

The opportunity

As part of the process of fulfilling Section 103 of the 2010 America COMPETES Act, the White House Office of Science and Technology Policy (OSTP) has issued a Request for Information (RFI), asking individuals and organizations to provide recommendations on approaches for broad public access to, and long-term stewardship of, peer-reviewed scholarly publications that result from federally funded scientific research. The RFI poses eight multi-part questions.


The full text of the RFI may be found at: http://www.gpo.gov/fdsys/pkg/FR-2011-11-04/html/2011-28623.htm

NOTE: A second RFI has also been issued on the topic of public access to digital data. SPARC/ATA will coordinate with allied organizations including ARL and CNI to formulate a response.


Who should respond?

It is urgent that as many individuals and organizations as possible – at all levels – respond.

For reference, the RFI specifically calls for comments from “non-Federal stakeholders, including the public, universities, nonprofit and for-profit publishers, libraries, federally funded and non-federally funded research scientists, and other organizations and institutions with a stake in long-term preservation and access to the results of federally funded research.”

If you can’t answer all of the questions, answer as many as possible – and respond to questions as directly as possible.

Organizations beyond the U.S. with experience with open access policies are also invited to contribute.


How the results will be used

The input provided through this RFI will inform the National Science and Technology Council’s Task Force on Public Access to Scholarly Publications, convened by OSTP.

OSTP will issue a report to Congress describing:

  1. Priorities for the development of agency policies for ensuring broad public access to the results of federally funded, unclassified research;
  2. The status of agency policies for public access to publications resulting from federally funded research;
  3. Public input collected.


Taxpayers paid for the research. We deserve to be able to access the results.

The main point to emphasize is that taxpayers are entitled to access the results of the research our tax dollars fund. Taxpayers should be allowed to immediately access and fully reuse the results of publicly funded research.

To discuss talking points in further detail, don’t hesitate to contact us.


How to respond

The deadline for submissions is January 2, 2012. Submissions should be sent via email to publicaccess@ostp.gov. Please note: OSTP will publicly post all submissions after the deadline (along with names of submitters and their institutions) so please make sure not to include any confidential or proprietary information in your submission. Attachments may be included.


As ever, thanks for your commitment to public access and the advancement of these crucial policies.

If you have any questions or comments, don’t hesitate to contact:


Heather Joseph
Executive Director, SPARC and spokesperson for the Alliance for Taxpayer Access
heather [at] arl [dot] org


Jennifer McLennan
Director of Programs and Operations, SPARC & the Alliance for Taxpayer Access
jennifer [at] arl [dot] org