Monthly Archives: June 2010
Factors influencing the adoption of open access scholarly communication in Tanzanian public universities
On the wide disparity in publisher cost-efficiency
Libraries and librarians are continuing to cope with the impact of the global financial crisis, and I understand that some of us are beginning to wonder why our libraries are facing deep budget cuts and staff furloughs while a few of the largest commercial publishers are boasting record profits.
Understandably, some are beginning to look for a better deal, and personally I think this is a very good thing. However, this might be a good time to highlight that there is a very wide discrepancy in the cost-effectiveness of different types of publishers. It is not fair, IMHO, to treat the mission-oriented publisher that has never charged more than it needed to survive as if it were the same as the highly for-profit publishers. By all means, let's look for deals, but let's not forget that a 3% increase on a $100 subscription is only $3, while a 3% increase on a $1 million subscription is $30,000. The $3 increase might mean the difference to survival for the efficient publisher, while the for-profit could give up far more than $30,000 and still face a revenue cut of a much smaller order of magnitude than the cuts many a library is facing.
One extreme example from our own profession:
As of today, the subscription list price for the for-profit Library Management (Emerald) is EUR 11,819 (Ulrich's). That's $14,600 US (Bank of Canada currency conversion service, June 18, 2010). Compare this with the MAXIMUM institutional cost for ACRL's College and Research Libraries at $80 US for non-member institutions outside of the US and Canada. This is a difference of well over a hundredfold in subscription cost between these two journals, and I would argue that of the two, it is ACRL's College and Research Libraries that is the more prestigious. I do acknowledge that single subscription costs are of limited applicability in a world of bundled and largely consortial pricing, and that this is an extreme example. Cost differentials in the range of four- to ten-fold appear to be much more common (that is, on a per-article basis one journal can cost four to ten times as much as another journal of similar or even higher quality). Imperfect though this example is, it illustrates the wide difference in costs between publishers, a difference that does not necessarily correlate positively with quality.
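To make the arithmetic concrete, here is a minimal Python sketch using the figures above; the $100 and $1 million subscriptions are round hypothetical amounts, not any publisher's actual prices, and only the two quoted list prices come from the example itself.

```python
# The $100 and $1,000,000 subscriptions are round hypothetical amounts;
# the journal prices below are the quoted 2010 list prices from the example.
increase_rate = 0.03

for price in (100, 1_000_000):
    print(f"3% increase on a ${price:,} subscription: ${price * increase_rate:,.0f}")
# 3% increase on a $100 subscription: $3
# 3% increase on a $1,000,000 subscription: $30,000

library_management_usd = 14_600  # Library Management (Emerald), converted list price
c_and_rl_usd = 80                # College and Research Libraries (ACRL), maximum institutional rate
print(f"list-price ratio: roughly {library_management_usd / c_and_rl_usd:.0f} to 1")
# list-price ratio: roughly 182 to 1
```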
The original analysis for this example is from my book, Scholarly Communication for Librarians, Chandos Publishing: Oxford, 2009. The cost differential has increased since the time the book was written; the ACRL cost is stable, while the Emerald cost has risen considerably.
[Disclosure: I am co-coordinator of the ARL and ACRL Scholarly Communication Institute webinar series, Building Strength through Collaboration; however, this has nothing to do with ACRL publications or membership.]
Heather Morrison, MLIS
The Imaginary Journal of Poetic Economics
http://poeticeconomics.blogspot.com
This post was distributed to a number of listservs on Thursday, June 17, 2010
Update June 17, 2010 – this post by Bernd-Christoph Kaemper to Scholcomm and Liblicense-l is worth repeating in full:
Oh, well, Emerald again …
Emerald (formerly MCB) has received its fair share of criticism in the past (including some from me, back in the days of the Newsletter on Serials Pricing Issues), but I've learned since then that Emerald is an extreme example of pricing policies that appear to address the bold and adventurous among us, those who have no fear of applying all the techniques learned at the bazaar, rather than the timid, who ask for the list price and turn pale in horror.
To put it simply and bluntly, if you actually have a few Emerald titles in your collection that you pay for at list price instead of as part of a bundle, you or your predecessors in your job probably made a terrible mistake, or your institution has/had too much money to spend. Just cancel and start anew.
For some historical reason that eludes my comprehension (although I'd love to hear explanations), single-title prices with Emerald are fantasy prices. If you actually start talking to Emerald (and they really have friendly and competent staff), you'll be surprised to learn that it can be possible to get a package of titles for less than the list price of the most expensive title(s) in it, and that you actually might be able to get a rather decent offer for your campus or consortium. (Disclosure: my perspective and experience is that of a large technical university with comparatively small business and social sciences, but I have also talked to colleagues from other universities.)
Essentially, you buy Emerald as a full-text database (various sizes and collections available) and have to judge its efficiency on that basis (i.e. on an aggregated cost-per-use basis), not on individual titles à la carte, which would be terribly expensive.
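As a rough illustration of what judging a package on an aggregated cost-per-use basis looks like in practice, here is a minimal Python sketch; the prices and download counts are invented placeholders, not Emerald's actual figures.

```python
# Invented figures for illustration only; not actual prices or usage statistics.
def cost_per_use(total_cost, total_downloads):
    """Aggregate cost per use for a set of titles or a full-text package."""
    return total_cost / total_downloads

# A-la-carte scenario: a handful of titles at list price, modest use.
a_la_carte = cost_per_use(total_cost=30_000, total_downloads=1_200)

# Bundled scenario: a negotiated package price, much broader use.
bundle = cost_per_use(total_cost=25_000, total_downloads=20_000)

print(f"a la carte: ${a_la_carte:.2f} per download")  # $25.00 per download
print(f"bundle:     ${bundle:.2f} per download")      # $1.25 per download
```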
With respect to transparency and investigations of publisher cost efficiency and value for money on the basis of various possible metrics: I guess this is a prime example of why Ted Bergstrom, Paul Courant and Preston McAfee were right to ask libraries and consortia to disclose the actual terms of their bundle deals (under a public-records request) rather than to rely on list prices; cf. e.g. the ARL press release Elsevier Motion to Block License Release Denied in Open-Records Decision, http://www.arl.org/news/pr/elsevier-wsu-23jun09.shtml
and Ted Bergstrom's Journal Pricing page,
http://www.econ.ucsb.edu/~tedb/Journals/jpricing.html
I'll leave it to others who are in a better position to do so to comment on the actual cost efficiency of Emerald compared to other publishers in economics on an aggregate basis, but my understanding is that this was not the focus or intention of your message (to look at any particular publisher). I only wanted to give a little warning that individual title list prices have very little meaning here. (And please also take the bazaar analogy above with a grain of salt. The positive message here is: the publisher might be more responsive to your needs and constraints than you would have assumed.)
I'd also question your broad-brush and simplistic contrasting of the (efficient) "mission-oriented publisher that has never charged more than they needed to survive" with the (inefficient) "highly for-profit publishers".
That's a poetic notion, not one that adequately matches today's complex business realities for scholarly publishers, both for-profit and not-for-profit.
The question of cost-efficiency is a complex one; cf. for a start the JISC report
Economic Implications of Alternative Scholarly Publishing Models: Exploring the costs and benefits (Jan 2009)
and the feedback on it provided by STM and ALPSP, as discussed at
http://www.jisc.ac.uk/publications/reports/2009/economicpublishingmodelsfinalreport.aspx
Best regards,
Bernd-Christoph Kaemper, Stuttgart University Library
Comment
This is very helpful information, thank you Bernd-Christoph. I should clarify that I do not mean to paint Emerald as a "bad guy" at all; in fact, Emerald has a very enlightened approach to green OA self-archiving policies. Nevertheless, I think that this fantasy-price model, which I suspect is not unique to Emerald, bears further investigation, and I plan to mull it over a little. ACRL's College and Research Libraries, on the other hand, which produces top-quality scholarly publishing at rock-bottom prices, makes good efforts towards open access by supporting author self-archiving, and makes many articles fully open access, is in my mind a clear-cut good guy.
This post is part of the Transitioning to Open Access series. From my perspective, a healthy open access future for scholarly communication should be built on sustainable business models.
Update July 4: for commentary on the fantasy pricing model, see Barbara Fister at Library Journal.
Setting the record straight in the UC/NPG pricing kerfuffle
The University of California (UC) dispute with Nature Publishing Group (NPG) is about journal pricing — an important topic, but one on which I have no expertise, hence take no position. It needs to be pointed out, however, that there are two points in UC’s latest response to NPG’s response that are incorrect:
(1) It is incorrect that “NPG has been a leader in adopting the ‘green’ publishing policies.”
A green publishing policy on open access (OA) means explicitly endorsing authors providing OA to the peer-reviewed final drafts of their papers ("postprints") immediately upon acceptance for publication (as 63% of journals do, including the counterpart of NPG's Nature, AAAS's Science). NPG was once, in 2003, a leader in green OA, but it backslid in January 2005, declaring that its authors should wait six months after publication before making their postprints OA.
(2) It is incorrect that “UC… libraries… pay… fees to get access to their own work.”
UC libraries (like all other libraries) pay fees to access the work of other universities. If UC is concerned about providing access to its own work, it should mandate Green OA. When other universities do likewise, UC will gain access to their work too (though for the first six months, that access to Nature articles in particular may have to be “Almost OA” rather than OA, owing to Nature’s regressive embargo…)
Stevan Harnad
American Scientist Open Access Forum
Open Access submission to Canada’s Digital Economy Consultation
On behalf of a group of Canadian and international open access advocates, I have just contributed a submission to Canada’s Digital Economy Consultation. It may be a few days before this appears on the government web site. The text of the submission can be viewed here. Update June 19: the submission is now available on the Industry Canada site.
The brief Executive Summary as posted to the government website is as follows:
We recommend that Canada develop a policy requiring open access to federally funded Canadian scholarship, i.e. research funded by the research granting councils CIHR, SSHRC, NSERC, and NRC. This policy would ensure taxpayer access to taxpayer-funded research, maximize the impact of that research, bring Canadian policy into line with international policy developments, and secure Canada's place as a leader in this important area of innovation.
The policy needed is for researchers funded by federal granting agencies to be required to deposit, in their institution's open access repository, a copy of the author's final manuscript of all published peer-reviewed articles arising from federally funded research, immediately on acceptance for publication, with access to the deposit set as open access immediately, or after a minimum delay to accommodate publishers. The locus of deposit should be an appropriate open access repository; by default, the author's institutional repository. Cross-deposits from institutional to central repositories (such as PubMed Central Canada, as mandated by CIHR) can then be done automatically by software.
In 2008, CIHR adopted a Policy on Access to Research Outputs, which is in many ways an exemplary policy, although there is a loophole to be addressed, as it allows for opt-out based on publisher copyright policies. This is neither justified nor necessary. While the contributions of professional publishers are very valuable, the published article reflects the contributions of many parties, including the Canadian taxpayer, the authors, their institutions, the voluntary peer reviewers, and often human subjects as well. No one contributor to the process should have exclusive rights to the final report; an open access requirement is reasonable and fair to all.
In addition, researchers should be encouraged to do the same with their research data (while ensuring that confidentiality / anonymity of research subjects is maintained), as well as with other works, such as monographs and creative works, wherever possible.
The simple no-cost step of requiring open access deposit would have tremendous impact in advancing the effectiveness and dissemination of Canadian research.
Update June 16: if you would like to add your (or your organization’s) name, please let me know at hgmorris at sfu dot ca, and I will submit a revised version on July 9.
Uzbekistan Conference October 2010
Peer Review vs. Peer Ranking: Dynamic vs Passive Filtration
Chen & Konstan's (C & K) paper, "Conference Paper Selectivity and Impact," is interesting, though somewhat limited because it is based only on computer science and has fuller data on conference papers than on journal papers.
The finding is that papers from highly selective conferences are cited as much as (or even more than) papers from certain journals. (Journals of course also differ among themselves in quality and acceptance rates.)
Comparing the ceiling for citation counts of high- and low-selectivity conferences by analyzing only the equated top slice of the low-selectivity conferences, C & K found that the high-selectivity conferences still did better, suggesting that the difference was not just selectivity (i.e., filtration) but also "reputation." (The effect broke down a bit in comparing the very highest and next-to-highest selectivity conferences, with the next-to-highest doing slightly better than the very highest; plenty of post hoc speculations were ready to account for this effect too: excess narrowness, distaste for competition, etc., at the very highest level, but not the next-highest?)
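For readers unfamiliar with the method, here is a minimal Python sketch of the kind of "equated top slice" comparison described above; the citation counts and acceptance rates are made up for illustration and are not taken from C & K's data.

```python
# Made-up citation counts and acceptance rates; for illustration only.
high_selectivity = [40, 35, 31, 28, 25]  # accepted papers of a 15%-acceptance conference
low_selectivity = [30, 22, 18, 15, 12, 9, 7, 6, 5, 4, 3, 2, 2, 1, 1, 0]  # 50%-acceptance

high_rate, low_rate = 0.15, 0.50

# Keep only the top slice of the low-selectivity venue, sized as if it had
# applied the high-selectivity conference's acceptance rate to its submissions.
equated_size = round(len(low_selectivity) * high_rate / low_rate)
top_slice = sorted(low_selectivity, reverse=True)[:equated_size]

mean = lambda xs: sum(xs) / len(xs)
print(f"high-selectivity venue, mean citations: {mean(high_selectivity):.1f}")  # 31.8
print(f"equated top slice of low-selectivity venue: {mean(top_slice):.1f}")     # 19.4
```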
Some of this smacks of over-interpretation of sparse data, but I’d like to point out two rather more general considerations that seem to have been overlooked or under-rated:
(1) Conference selectivity is not the same as journal selectivity: A conference accepts the top fraction of its submissions (whatever it sets the cut-off point to be), with no rounds of revision, resubmission and re-refereeing (or at most one cursory final round, when the conference is hardly in the position to change most of its decisions, since the conference looms and the program cannot be made much more sparse than planned). This is passive filtration. Journals do not work this way; they have periodic issues, which they must fill, but they can have a longstanding backlog of papers undergoing revision that are not published until and unless they have succeeded in meeting the referee reports’ and editor’s requirements. The result is that the accepted journal papers have been systematically improved (“dynamic filtration”) through peer review (sometimes several rounds), whereas the conference papers have simply been statically ranked much as they were submitted. This is peer ranking, not peer review.
(2) “Reputation” really just means track record: How useful have papers in this venue been in the past? Reputation clearly depends on the reliability and validity of the selective process. But reliability and validity depend on more than the volume and cut-off point of raw submission rankings (passive filtration).
I normally only comment on open-access-related matters, so let me close by pointing out a connection:
There are three kinds of selectivity: journal selectivity, author selectivity and user selectivity. Journals (and conferences) select which papers to accept for publication; authors select which papers to submit, and which publication venue to submit them to; and users select which papers to read, use and cite. Hence citations are an outcome of a complex interaction of all three factors. The relevant entity for the user, however, is the paper, not the venue. Yes, the journal’s reputation will play a role in the user’s decision about whether to read a paper, just as the author’s reputation will; and of course so will the title and topic. But the main determinant is the paper itself. And in order to read, use and cite a paper, you have to be able to access it. Accessibility trumps all the other factors: it is not a sufficient condition for citation, but it is certainly a necessary one.
Stevan Harnad
American Scientist Open Access Forum
A2K June 2010 Newsletter
A2K November 2009 Newsletter (Arabic)
A2K April 2010 Newsletter
A2K February 2010 Newsletter (Arabic)
BioMed Central named Chinese Academy of Sciences Open Access Institute of the Year
Renewal agreement signed for Britannica Online
Libre accès à la communication scientifique
A new thematic, bilingual site on open access.
Presenting, selecting, and organizing information related to open access: these are the objectives of this site.
Open access is an international movement whose importance grows from year to year. It has become very difficult to follow its developments without getting lost in the mass of information available on the subject. We wanted to provide a synthesis of what already exists (in France as well as abroad), and this site lists only what seems to us truly noteworthy.
Hans DILLAERTS and Hélène BOSC
Highly Misleading Press Release by Oxford University Press Journals
Oxford University Press Journals has issued a highly misleading press release, "Open Access Uptake: Five Years On," which does not make clear that it is not Open Access (OA) uptake that is declining, but merely the uptake of OUP's pricey "Oxford Open" (OO) paid hybrid Gold OA option.
OUP offers its authors the option of paying (a sizeable sum) to have an article that has been published in OUP’s subscription journals made OA (freely accessible online). Each OUP journal continues to collect subscription income, and the rest of its articles continue to be non-OA, but the paid-up OO articles are made OA by OUP — along with a promise to lower OUP journal subscription costs proportionately, as hybrid Gold uptake increases. So this OUP press release is really just telling us that the uptake for the OO option is not increasing, but decreasing.
What is stated, however, is that it is OA uptake itself that is decreasing, which is the very opposite of the truth.
Globally, across all journals, “Green OA” self-archiving, by authors, of their own articles in OA repositories — already 2-3 times the uptake of OUP’s paid hybrid Gold OA option — is increasing, not decreasing, in no small part because Green OA self-archiving mandates by authors’ institutions and funders, requiring them to deposit their articles in OA repositories, are increasing.
The existence of the Green OA option is also the obvious explanation of why OUP’s OO hybrid Gold uptake is low: Why should authors pay for Gold OA when they can provide Green OA for free (especially while subscriptions are still paying the costs of publication — as well as tying up the potential funds to pay for Gold OA)?
But OUP does not mention Green OA. Nor does it mention that OUP is among the minority of major publishers that have not yet given their green light to their authors to provide Green OA immediately upon acceptance for publication, instead attempting to impose an embargo of 12 to 24 months on Green OA (perhaps in the hope of forcing their authors to resort to paying for the OO option instead).
OUP is definitely not giving a good account of itself as the history of OA is writing itself today. Cambridge University Press (CUP), for example, among university publishers; the American Physical Society (APS) and the American Association for the Advancement of Science (Science magazine), among learned-society publishers; and even Elsevier and Springer, among commercial publishers, are among the majority behaving far more responsibly and progressively than OUP, being on the "side of the angels" insofar as endorsing the immediate Green OA option for their authors is concerned.
Stevan Harnad
American Scientist Open Access Forum