Priorities

It is too hasty to dismiss my relentless insistence on the priority of Green OA self-archiving by authors as monomaniacal!

The reasoning is this (and it’s partly practical, partly ethical):

An AAA publications manager would be perfectly entitled and justified to say:

If authors who purport to care so much about OA to their work do not even bother to provide OA by self-archiving it — despite the fact that AAA has given them the green light to do so — and their institutions and funders don’t even bother to mandate it, then why on earth is the finger being pointed at AAA at all (and why should we regard their cares as credible)? Is AAA supposed to be the one to sacrifice its revenues to provide something that its authors don’t even care about enough to sacrifice a few keystrokes to provide for themselves?

As long as we keep focussing on where the key to providing OA isn’t (i.e., the publisher-lamp-post) our research will remain in the dark.

We have to get the priorities straight. It is not enough to be ideologically “for” Green OA self-archiving and Green OA self-archiving mandates. It is not even enough to do the keystrokes to self-archive one’s own work (though that’s a good start, and I wonder how many OA advocates are actually doing it; the global rate hovers at about 5-20%). One has to make sure that one’s own institution adopts a Green OA mandate. Then, and only then, can one go on to the next step, which is to try to persuade one’s publisher to go Gold (though persuading one’s funder to mandate Green would probably help more; Gold OA will come of its own accord, once we have universal Green OA).

But perhaps the most egregious misconstrual of OA priorities is not authors impugning their publisher for not going Gold before they and their institutions (and funders) go Green. That dubious distinction is reserved for institutions (and funders) who commit pre-emptively to funding Gold without first mandating Green!

PS No need for yet another central repository either! Institutional repositories are enough. Fill them. Mandate filling them. And central collections can then be harvested from them to your hearts’ content. Fussing about central collections, like fussing about publishers going Gold, about finding funds to pay for Gold, or, for that matter, about copyright reform, peer review reform, publishing reform or preservation, is an idle waste of time, energy and attention while institutional repositories are still gapingly empty and authors’ fingers are still idle.

Canada’s Digital Economy Consultation: help vote up Open Access to Canadian Research

Update May 18: OA to Canadian Research is now at the top of the Canada’s Digital Content section – just 13 votes from the top overall! Please register and vote today – and consider asking your organization to make a formal submission.

Please register for Canada’s Digital Economy Consultation and vote for Open Access to Canadian Research under Canada’s Digital Content. Currently OA is at 8 votes, just 2 behind the lead suggestion in this section of the Ideas Forum.

Article "Open access: implications for scholarly publishing and medical libraries"

Purpose: The paper reviews and analyzes the evolution of the open access (OA) publishing movement and its impact on the traditional scholarly publishing model.
Procedures: A literature survey and analysis of definitions of OA, problems with the current publishing model, historical developments, funding agency responses, stakeholder viewpoints, and implications for scientific libraries and publishing are performed.
Findings: The Internet’s transformation of information access has fueled interest in reshaping what many see as a dysfunctional, high-cost system of scholarly publishing. For years, librarians alone advocated for change, until relatively recently when interest in OA and related initiatives spread to the scientific community, governmental groups, funding agencies, publishers, and the general public.
Conclusions: Most stakeholders acknowledge that change in the publishing landscape is inevitable, but heated debate continues over what form this transformation will take. The most frequently discussed remedies for the troubled current system are the “green” road (self-archiving articles published in non-OA journals) and the “gold” road (publishing in OA journals). Both movements will likely intensify, with a multiplicity of models and initiatives coexisting for some time.
Highlights
  • This paper reviews the factors and events leading up to the open access (OA) movement in scholarly publishing, including the evolution and current status of the National Institutes of Health public access policy.
  • Differing points of view of major stakeholders, such as publishers, librarians, scientists, funding agencies, and consumers are summarized.
  • Open access has impacted, and will continue to impact, traditional scholarly publishing, serials pricing, and medical libraries in general.

Implications for practice
  • Open access issues may impact decision making in serials acquisition and management.
  • Librarians should take a lead in communicating important OA-related developments to user groups and administration.
  • Librarians can play major roles in connection with this new movement.

Albert KM. J Med Libr Assoc. 2006 Jul;94(3):253–262. Full text

On Science Commons’ Moving West…

I’ve kept this blog quiet lately – for a wide range of reasons – but a few questions that have come in have prompted me to start up a new series of posts.

The main reason for the lack of posts around here is that I’ve been very busy, and for the most part, I’ve used this blog for a lot of lengthy posts on weighty topics. At least, weighty to me. If you want a more informal channel, you can follow me on twitter, as I prefer tweeting links and midstream thoughts to rapid-fire short blog entries. The joy of a blog like this for me is the chance to explore subjects in greater depth. But it also means that during times of extreme hecticness, I won’t publish here as much.

Anyhow. I’ve been busy with a pretty big task, which is getting me, my family, and the Science Commons operation moved from Boston to San Francisco. We’re moving from our longtime headquarters at MIT into the main Creative Commons offices, and it’s a pretty complex set of logistics on both personal and professional levels.

As an aside, I’m now very close to some downright amazing chicken and waffles, and that’s exciting.

Now, I would have thought this would have been interpreted by the world in the clear manner that I see it: us Science Commons folks are, and have always been, part and parcel of the Creative Commons team, so this didn’t strike me as super-important if you’re not one of the people who has to move. If you email us, our addresses end with @creativecommons.org. That’s where our paychecks come from. So having us integrate into the headquarters offices doesn’t seem such a big deal. But I keep getting rumbles that people think we’re somehow “going away” or “disappearing” – that’s why there’s going to be a series of posts on the move and its implications.

So let me be as blunt as possible: Science at Creative Commons, and the work we do at the Science Commons project, isn’t going anywhere. We are only going to be intensifying our work, actually. You can expect some major announcements in the fall about some major new projects, and you’ll learn a lot about the strategic direction we plan to take then. I can’t talk about it all yet, because not all the moving pieces are settled, but suffice to say the plans are both Big and Exciting. We’ve already added a staff member – Lisa Green – who is both a Real Scientist and experienced in Bay Area science business development, to help us realize those plans.

Our commitments and work over the past six years of operations aren’t going anywhere either. We will continue to be active, vocal, and visible proponents of open access and open data. We will continue to work on making biological materials transfer, and technology transfer, a sane and transparent process. And our commitment to the semantic web – both in terms of its underlying standards and in terms of keeping the Neurocommons up and running – is a permanent one.

You can catch up with our achievements in later posts, or follow our quarterly dispatches. We get a lot of stuff done for a group of six people, and that’s not going to change either.

Some things *are* likely to change. For example, I don’t like the Neurocommons name for that project much any more – it’s far more than neuroscience in terms of the RDF we distribute, and the RDFHerd software will wire together any kind of database that’s formatted correctly. But those changes are changes of branding, not of substance in terms of the work.

It is, however, now time to get our work and the powerful engine that is the Creative Commons headquarters together. I’m tired of seeing the fantastic folks that I work with twice a year. We’re missing a ton of opportunities to bring together knowledge in the HQ – especially around RDFa and metadata for things like scholarly norms – by being physically separated. Not to mention that the San Francisco Bay Area is perhaps the greatest place on earth to meet the people who change the world, every day, through technology.

I’m also tired of living on the road. I’m nowhere near Larry Lessig and Joi Ito in terms of my travel, but I’m closing in on ten years of at least 150,000 miles a year in airplanes. It gets old. Most of our key projects at this point are on the west coast, like Sage Bionetworks and the Creative Commons patent licenses, and we’re developing a major new project in energy data that is going to be centered in the Bay Area as well. The move gives me the advantage of being able to support those projects, which are much more vital to the long term growth of open science than conference engagements, without 12 hours of roundtrip plane flights.

I’ll be looking back at the past years of work in Boston over the coming weeks here. I’m in a reflective mood and it’s a story that needs to be told. We’ve learned a lot, and we’ve had some real successes. And we’re not abandoning a single inch of the ground that we’ve gained in those years. So if you hear tell that we’re disappearing or going away, kindly point them here and let them know they will have us around for quite some time into the future…

The access problem — small, medium, or large?


On Mon, 10 May 2010 Joseph Esposito wrote in l-liblicense:

“Harnad is hoping to replace the small problem of access with the large problem of fiscal recklessness.”

On May 14, 2010 Jim Stemper replied:

‘The Research Information Network’s 2009 study “Overcoming Barriers: Access to Research Information Content” goes to some lengths to show that the access problem is not “small.” Some excerpts:’

“Of the 800 respondents, over 40% said that they were unable readily to access licensed content at least weekly; and two-thirds at least monthly. The key reasons for failing to secure access were perceived to be […] that the library had not purchased a licence for the content, because of budgetary constraints (56%). Around 59 per cent of respondents thought that non-availability of content does have some impact on their research, while 18 per cent say the impact is ‘significant’ either in terms of timing and/or comprehensiveness and/or other quality impact.”



And let’s not forget the Open Access Impact Advantage: If journal affordability constraints are a direct indicator that the access problem is not small but large, the fact that in every field OA enhances both citation and download impact is an indirect indicator of that same fact (apart from being a benefit in its own right):

Hitchcock, S. “The effect of open access and downloads (‘hits’) on citation impact: a bibliography of studies.”

To see efforts to give research access priority over publisher revenue as “fiscal recklessness” is (yet again) a symptom of the entrenched but fallacious Gutenberg-era assumption that the (publishing) tail somehow has the natural right to keep wagging the (research) dog.

Stevan Harnad
American Scientist Open Access Forum

Best Draft Model for US University Green OA Self-Archiving Mandate So Far: U North Texas

The University of North Texas is hosting an Open Access Symposium on Tuesday, May 18, 2010. The event features both nationally and internationally recognized leaders in the open access initiative. The symposium is intended as a catalyst to move UNT and other academic institutions in Texas forward in their consideration of institutional open access policies.

The UNT Open Access Policy Committee has just completed a first complete draft of a policy for open access to scholarly works at the University of North Texas: the Policy on Open Access to Scholarly Works. The Committee sees this as an initial step in broadening discussion of open access and the policy by the UNT campus community. The draft was distributed to the UNT Faculty Senate at its May 12, 2010 meeting:

POLICY STATEMENT (excerpts; full text here)

In support of long-term stewardship and preservation of UNT faculty members’ scholarly works in digital form, the UNT community members agree to the following:

• Each UNT community member deposits a final version of the scholarly works to which he or she made substantial intellectual contributions into the UNT Libraries Scholarly Works repository…

In support of greater access to scholarly works, the UNT community members agree to the following for peer-reviewed, accepted-for-publication, journal articles:

• Immediate Deposit: Each UNT community member deposits an electronic copy of his/her final edited version after peer review and acceptance of each article, no later than the date of its publication. Deposit is made into the UNT Libraries Scholarly Works repository. The author is encouraged to make the deposit available to the public by setting access to the deposit as Open Access Immediately Upon Deposit.

• Optional Delayed Open Access: Upon express direction by a UNT community member for an individual article, the Provost or Provost’s designate will adjust the Open Access Immediately Upon Deposit requirement to align with publishers’ policies regarding open access of self-archived works or the wishes of the community member.

• Licensing: Where not prohibited by a publisher, each UNT community member grants to UNT permission to make scholarly peer-reviewed journal articles to which he or she made substantial intellectual contributions publicly available in the UNT Libraries Scholarly Works repository for the purpose of open dissemination. Each UNT community member grants to UNT a nonexclusive, irrevocable, worldwide license to exercise any and all rights under copyright relating to his or her scholarly articles, in any medium, and to authorize others to do so, provided that the articles are not sold. The Provost or Provost’s designate will waive application of the license for a particular article upon express direction by a community member.

• Who Deposits: In the case of multiple authors from multiple institutions, where a UNT community member has made substantial intellectual contributions to the article, the UNT community member will deposit a copy of the article. In the case of multiple UNT authors, and where the lead author is from UNT, the lead author (or designate) will deposit a copy of the article.

Private sector and long term responsibility for scholarly work? Nonsense!

In a recent letter opposing the U.S. Federal Research Public Access Act (FRPAA), Martin Frank writes: “Copyright is essential to protecting these works and to preserving incentives for the private sector to continue to invest in peer review, editing, publishing, and maintaining the electronic record of vetted scientific journal articles”.

One issue with this sentence that I would like to highlight, for now: it is nonsense to suggest that the private sector has a meaningful role in long-term maintenance of scholarly articles.

A private sector publisher is completely within its rights to cease to exist, or change business operations, at any time. The public has no right to ask a private sector entity to undertake a responsibility with an infinite time span. Has anyone asked publishers to undertake this role? If so, what were they thinking? This is not the traditional role of publishers, but rather the traditional role of libraries as memory institutions.

One of the benefits of the National Institutes of Health’s Public Access Policy is that it moves the traditional role of the U.S. National Library of Medicine in preserving the medical research literature into the internet age, as well as sharing the burden globally through the developing PubMedCentral International network (with the UK and Canada up and running already). Academic libraries everywhere are busy ensuring the preservation of electronic collections, just as they have preserved print collections in the past (and present, of course).

On behalf of many: division within traditional publishers re anti-FRPAA lobbying

Correction May 20, 2010: I am delighted to report that Springer, owner of BioMedCentral, did NOT sign the anti-FRPAA letter as I originally reported. Rather, the letter was signed by Springer Publishing Company, a medical publishing company that has nothing to do with Springer/BMC.

Thanks to Wim van der Stelt, Springer, EVP Business Development, for this most helpful clarification:

“Your blogpost dated 05/12 about publishers anti frpaa letter contains a mistake that I’d really like to be corrected.

Springer is no signatory of the letter, we currently are even not a member of AAP/PSP. The Springer mentioned in the list of signatories is “Springer publishing company”, a medical publisher that is in no way related to Springer, let alone to BioMed Central.

I’d like to stress that Springer’s policy is to cooperate with customers and other stakeholders to further develop scholarly communication and that we are willing to experiment and develop new business models in case there is a need for that. That is the reason for our ongoing OA development activities, including the acquisition of BioMed Central”.

Heather again:

My profuse apologies to Springer / BioMedCentral, and thanks very much to Wim van der Stelt and Springer for this most welcome feedback – and enlightened viewpoint.

This correction supports the major point of my blogpost, that there is division within traditional publishers regarding anti-FRPAA lobbying.

Original post, omitting the error:

A recent letter lobbying against FRPAA starts with: on behalf of many publisher members.

Interesting word, many. I have no doubt that the writer would have preferred to use words like “all”, “almost all”, or “most”.

My take is that this is an indication of struggle within the anti-FRPAA lobbying group, which makes one wonder: just how strong is the opposition? For example, the letter refers to university presses, but only 3 presses are signatories, and only one of these is based in the U.S. (University of Chicago Press). According to the American Association of University Presses membership page, AAUP has over 130 members worldwide. If only 1 American university press has signed this letter, that is less than 1% of this group’s membership.

It is curious that two UK university presses (Oxford and Cambridge) have signed, given that FRPAA is predated by OA policies at all of the UK Research Councils.

Even looking at the signatories, it is clear that there is internal struggle at these organizations as well. For example, Oxford University Press is a signatory, even though OUP has some innovative OA experiments in progress.

The letter can be downloaded from here.

PostGutenberg Peer Review

Joseph Esposito [JE] asks, in liblicense-l:

JE: “What happens when the number of author-pays open access sites grows and these various services have to compete with one another to get the finest articles deposited in their repositories?”

Green OA mandates require deposit in each author’s own institutional repository. The hypothesis of Post-Green-OA subscription cancellations (which is only a hypothesis, though I think it will eventually prove to be right) is that the Green OA version will prove to be enough for users, leaving peer review as the only remaining essential publishing service a journal will need to perform.

Whether on the non-OA subscription model or on the Gold-OA author-pays model, the only way scholarly/scientific journals compete for content is through their peer-review standards: The higher-quality journals are the ones with more rigorous and selective criteria for acceptability. This is reflected in their track records for quality, including correlates of quality and impact such as citations, downloads and the many rich new metrics that the online and OA era will be generating.

JE: “What will the cost of marketing to attract the best authors be?”

It is not “marketing” but the journal’s track record for quality standards and impact that attract authors and content in peer-reviewed research publication. Marketing is for subscribers (institutional and individual); for authors and their institutions it is standards and metrics that matter.

And, before someone raises the question: Yes, metrics can be manipulated and abused, in the short term, but cheating can also be detected, especially as deviations within a rich context of multiple metrics. Manipulating a single metric (e.g., robotically inflating download counts) is easy, but manipulating a battery of systematically intercorrelated metrics is not; and abusers can and will be named and shamed. In research and academia, this risk to track record and career is likely to counterbalance the temptation to cheat. (Let’s not forget that metrics, like the content they are derived from, will be OA too…)
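The intuition that a single manipulated metric stands out against its correlated companions can be sketched in a few lines. Everything below is invented for illustration (the function name, the data, and the five-times-median threshold); a real battery of metrics would involve many more signals and more careful robust statistics:

```python
# Illustrative sketch only: no real detection system or data is implied.
# If downloads and citations normally track each other, a robotically
# inflated download count shows up as a gross deviation from the
# typical downloads-per-citation ratio.
from statistics import median

def flag_inflated(downloads, citations, factor=5.0):
    """Flag indices whose downloads-per-citation ratio exceeds
    `factor` times the median ratio across all items."""
    ratios = [d / (c + 1) for d, c in zip(downloads, citations)]  # +1 avoids /0
    typical = median(ratios)
    return [i for i, r in enumerate(ratios) if r > factor * typical]

# Nine invented articles follow the usual pattern; the tenth has
# downloads wildly out of line with its citations.
cites     = [2, 5, 8, 11, 14, 17, 20, 23, 26, 4]
downloads = [40, 95, 160, 210, 280, 330, 410, 450, 520, 5000]
print(flag_inflated(downloads, cites))  # → [9]
```

Using the median rather than the mean keeps the cheater’s own inflated value from hiding itself by dragging the baseline upward.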

JE: “I am not myself aware of any financial modeling that attempts to grapple with an environment where there are not a handful of such services but 200, 400, more.”

There are already at least 25,000 such services (journals) now! There will be about the same number post-Green-OA.

The only thing that would change (on the hypothesis that universal Green OA will eventually make subscriptions unsustainable) is that the 25,000 post-Green-OA journals would only provide peer review: no more print edition, online edition, distribution, archiving, or marketing (other than each journal’s quality track record itself, and its metrics). Gone too would be the costs of these obsolete products and services, and their marketing.

(Probably gone too will be the big-bucks era of journal-fleet publishing. Unlike with books, it has always been the individual journal’s name and track record that has mattered to authors and their institutions and funders, not their fleet-publisher’s imprimatur. Software for implementing peer review online will provide the requisite economy of scale at the individual journal level: no need to co-bundle a fleet of independent journals and fields under the same operational blanket.)

JE: “As these services develop and authors seek the best one, what new investments will be necessary in such areas as information technology?”

The best peer review is provided by the best peers (for free), applying the highest quality standards. OA metrics will grow and develop (independent of publishers), but peer review software is pretty trivial and probably already quite well developed (hence hopes of “patenting” new peer review “systems” are probably pipe-dreams).

JE: “Will the fixed costs of managing such a service rise along with greater demands by the most significant authors?”

The journal quality hierarchy will remain pretty much as it is now, with the highest-quality (hence most selective) journals the fewest, at the top, grading down to the many average-level journals, and then the near-vanity press at the bottom (since just about everything eventually gets published somewhere, especially in the online era).

(I also think that “no-fault peer review” will evolve as a natural matter of course — i.e., authors will pay a standard fee per round of peer review, independent of outcome: acceptance, revision/re-refereeing or rejection. So being rejected by a higher-level journal will not be a dead loss, if the author is ready to revise for a lower-level journal in response to the higher-level journal’s review. Nor will rejected papers be an unfair burden, bundled into the fee of the authors of accepted papers.)

JE: “As more services proliferate, what will the cost of submitting material on an author-pays basis be?”

There will be no more new publishing services, apart from peer review (and possibly some copy-editing), and no more new journals either; 25,000 is probably enough already! And the cost per round of refereeing should not prove more than about $200.

JE: “Will the need to attract the best authors drive prices down?”

There will be no “need to attract the best authors,” but the best journals will get them by maintaining the highest standards.

Since the peers review for free, the cost per round of refereeing is small and pretty fixed.

JE: “If prices are driven down, is there any way for such a service to operate profitably as the costs of marketing and technology grow without attempting to increase in volume what is lost in margin?”

Peer-reviewed journal publishing will no longer be big business; just a modest scholarly service, covering its costs.

JE: “If such services must increase their volume, will there be inexorable pressure to lower some of the review standards in order to solicit more papers?”

There will be no pressure to increase volume (why should there be?). Scholars try to meet the highest quality standards they can meet. Journals will try to maintain the highest quality standards they can maintain.

JE: “What is the proper balance between the right fee for authors, the level of editorial scrutiny, and the overall scope of the service, as measured by the number of articles developed?”

Much ado about little, here.

The one thing to remember is that there is a trade-off between quality-standards and volume: The more selective a journal, the smaller is the percentage of all articles in a field that will meet its quality standards. The “price” of higher selectivity is lower volume, but that is also the prize of peer-reviewed publishing: Journals aspire to high quality and authors aspire to be published in journals of high quality.

No-fault refereeing fees will help distribute the refereeing load (and cost) better than (as now) inflating the fees of accepted papers to cover the costs of rejected papers (rather like a shop-lifting surcharge!). Journals lower in the quality hierarchy will (as always) be more numerous, and will accept more papers, but authors are likely to continue to try a top-down strategy (as now), trying their luck with a higher-quality journal first.
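The arithmetic behind this trade-off is easy to make concrete. Taking the roughly $200-per-round figure mentioned above, and assuming (purely for illustration) a 25% acceptance rate:

```python
# Hypothetical figures: ~$200 per refereeing round is the estimate given
# in the text; the 25% acceptance rate is invented for illustration.
COST_PER_ROUND = 200   # refereeing cost per submission round, in dollars
ACCEPT_RATE = 0.25     # fraction of refereed submissions accepted

# Conventional model: only accepted authors pay, so each accepted paper
# carries the refereeing cost of itself plus the rejected papers.
bundled_fee = COST_PER_ROUND / ACCEPT_RATE

# No-fault model: every submission pays for its own round, accepted or not.
no_fault_fee = COST_PER_ROUND

print(bundled_fee)   # 800.0  (the "shop-lifting surcharge" effect)
print(no_fault_fee)  # 200
```

The more selective the journal, the lower the acceptance rate and the steeper the bundled surcharge, which is exactly why the no-fault model distributes the load more fairly.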

There will no doubt be unrealistic submissions that can (as now) be summarily rejected without formal refereeing (or fee). The authors of papers that do merit full refereeing may elect to pay for refereeing by a higher-level journal, at the risk of rejection, but they can then use their referee reports to submit a more roadworthy version to a lower-level journal. With no-fault refereeing fees, both journals are paid for their costs, regardless of how many articles they actually accept for publication. (PostGutenberg publication means, I hasten to add, that accepted papers are certified with the name and track-record of the accepting journal, but those names just serve as the metadata for the Green OA version self-archived in the author’s institutional repository.)

Harnad, S. (2009) The PostGutenberg Open Access Journal. In: Cope, B. & Phillips, A (Eds.) The Future of the Academic Journal. Chandos.

Harnad, S. (2010) No-Fault Refereeing Fees: The Price of Selectivity Need Not Be Access Denied or Delayed. (Draft under refereeing).

And let’s not forget what peer-reviewed research publishing is about, and for: It is not about provisioning a publishing industry but about providing a service to research, researchers, their institutions and their funders. Gutenberg-era publication costs meant that the Gutenberg publisher evolved, through no fault of its own, into the tail that wagged the paper-trained research pooch; in the PostGutenberg era, things will at last rescale into more proper and productive anatomic proportions…

Stevan Harnad
American Scientist Open Access Forum