“IU’s [Indiana University’s] Open Access policy was also discussed at the meeting. The Library Committee of the Bloomington Faculty Council [BFC] researched open access to make a recommendation to the BFC as to whether IU should adopt an active open access policy. The committee did not recommend an active policy, said Jason Jackson, Library Committee chair. IU currently has a passive policy in which professors can publish their articles open access if they prefer. “Our concern, instead, was open access that’s achieved through the deposit of scholarly articles and manuscripts into a repository, such as IU Scholar Works,” Jackson said….”
The pygmy blue whale, cousin to the better-known Antarctic blue whale, has an enigmatic history. Pygmy blue whales dwell in vast expanses of the Indian and southern Pacific oceans and are a highly mobile species. The species was first identified in 1966—before then it was likely confused with its cousin, the “true” blue whale—so it’s only in recent years that we’ve been able to catch glimpses of these elusive cetaceans during their migrations to and from breeding and feeding grounds. The researchers of a recent PLOS ONE paper tested a new method of tracking these whales: satellite telemetry (described below). Using this method, the researchers mapped the migration of pygmy blue whales as they moved from the coast of Australia to the waters of Indonesia. We caught up with author Virginia Andrews-Goff to get some additional details on what it’s like to track these tiny giants.
How did you become interested in pygmy blue whales, and how did you get involved in mapping their migratory movements?
This research was carried out by the Australian Marine Mammal Centre, a national research centre focused on understanding, protecting and conserving whales, dolphins, seals, and dugongs in the Australian region. The work we carry out aims to provide the scientific research and advice that underpins Australia’s marine mammal conservation and policy initiatives. We therefore have a keen interest in all whales that migrate through Australian waters, including pygmy blue, right and humpback whales.
Pygmy blue whales are of particular interest, however, as so little is known about their movements and population status. Large-scale movements of whales are particularly hard to study, and what we do know about pygmy blue whales has mainly been learnt from examining whaling records. Fortunately, pygmy blue whales were targeted by the whaling industry for only a very short period, in the late 1950s and early 1960s, just before the IWC banned the hunting of all blue whales in 1966.
What are the challenges of better understanding whale migration in general?
Large-scale, long-term whale movements are challenging to study as it is impractical to do so by direct observation. Therefore, we need to use devices, such as satellite tags, that can be attached to the whale to provide real-time location information.
What is satellite telemetry and how did it enable your findings?
In this case, satellite telemetry refers to the use of a satellite-linked tag attached to the whale. This tag communicates with the Argos satellite system when the antenna breaks the surface of the water. A location can then be determined when multiple Argos satellites receive the tag’s transmissions. We then receive this location data in almost real time via the Argos website, which allows us to track the movement of the tagged whale.
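To give a concrete sense of what happens with the location data downstream, here is a minimal sketch (not the authors’ actual pipeline) of how a time-ordered sequence of Argos fixes can be strung into a migration track using the haversine great-circle formula. The coordinates below are hypothetical waypoints from the Perth region toward Indonesian waters, chosen purely for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical sequence of Argos fixes (lat, lon) for one tagged whale,
# ordered in time: off Western Australia heading north toward Indonesia.
fixes = [(-32.0, 115.0), (-28.5, 113.8), (-20.0, 114.5), (-12.0, 118.0), (-8.5, 120.0)]

# Sum the leg lengths between successive fixes to get a minimum track length.
track_km = sum(
    haversine_km(lat1, lon1, lat2, lon2)
    for (lat1, lon1), (lat2, lon2) in zip(fixes, fixes[1:])
)
print(round(track_km))
```

Real Argos fixes come with varying location accuracy (Argos “location classes”), so actual analyses filter or smooth the raw positions before computing distances like this.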
Based on your tracking, you found that the pygmy blue whales traveled from the west coast of Australia north to breeding grounds in Indonesia. Can you give readers a sense of why they travel this route?
Generally, whales migrate from productive feeding grounds (at high latitudes) in the summer to warmer breeding grounds (at low latitudes) during the winter. The exact reason for this general pattern is unclear, though quite a few theories exist, including avoiding predators, assisting the thermoregulatory ability of the calf, and giving birth in relatively calm waters. Because of the timing of this migration, we believe these animals travel to Indonesian waters to calve. It is usually assumed that whales fast outside of the summer, when no longer located in the productive feeding grounds. Interestingly, these pygmy blue whales travel from productive feeding grounds off Western Australia to productive breeding grounds in Indonesia and therefore probably have the opportunity to feed (rather than fast) on the breeding grounds.
You’ve mentioned that pygmy blue whale migratory routes correspond with shipping routes. How does this interaction impact the whales?
Baleen whales (whales that use filters to feed instead of teeth) use sound for communication and to gain information about the environment they occupy. When pygmy blue whale movements correspond to shipping routes, there is potential for the noise generated by the ships to play some role in altering calling rates associated with social encounters and feeding.
Why is it important for us to better understand pygmy blue whale migration, and how does mapping their migratory movements help conservation efforts for this endangered animal?
Our coauthor, Trevor Branch, hypothesised in 2007 that pygmy blue whales occupying Australian waters traveled into Indonesian waters. Prior to this study, however, we didn’t actually know whether this was the case. Now that we do, it is clear that conservation efforts relevant to the pygmy blue whales that use Australian waters are required outside of Australian waters too. We can now also gain some understanding of risks within the pygmy blue whale migratory range, such as increased ambient noise from development, shipping, and fishing, and therefore assist in mitigating these risks.
What’s next for you and your research team?
A question mark still remains over the movements of the pygmy blue whales that utilise the Bonney Upwelling feeding grounds off southern Australia. Genetic evidence indicates mixing between the animals in the feeding areas of the Perth Canyon (the animals that were tagged in this study) and the Bonney Upwelling. This indicates the potential for individuals from the Bonney Upwelling to follow a similar migration route to those animals feeding in the Perth Canyon. However, it is also thought that Bonney Upwelling animals may utilise the subtropical convergence region south of Australia. We plan to collaborate on a research project that aims to tag the pygmy blue whales of the Bonney Upwelling and ascertain whether these animals move through the same areas and are therefore exposed to the same risks as the Perth Canyon animals.
Citation: Double MC, Andrews-Goff V, Jenner KCS, Jenner M-N, Laverick SM, et al. (2014) Migratory Movements of Pygmy Blue Whales (Balaenoptera musculus brevicauda) between Australia and Indonesia as Revealed by Satellite Telemetry. PLoS ONE 9(4): e93578. doi:10.1371/journal.pone.0093578
Image 1: IA19847 Blue pygmy whale
Photograph © Mike Double/Australian Antarctic Division
Image 2: IA19850 Blue pygmy whale
Photograph © Mike Double/Australian Antarctic Division
Image 4: IA19851 Blue pygmy whale off Western Australian coast near Perth, Western Australia, Australia Photograph © Mike Double/Australian Antarctic Division
The post Satellite Telemetry Uncovers the Tracks of Tiny Ocean Giants appeared first on EveryONE.
Dutch legal scholars recently sent an open letter to Kluwer, the most important publisher of legal books and journals. Sixty-two professors urgently request a reduction of the costs this publisher charges universities. They calculated that Dutch universities pay Kluwer five million euros annually for access to, and reuse of, legal publications they wrote themselves. Kluwer does not permit the publications to be made available open access. The signatories consider this undesirable.
The process leading up to publication is paid for with public money, yet commercial publishers force universities to sign expensive contracts for access to those very publications.
TheContentMine is a project to extract all facts from the scientific literature. It has now been going for about 6 weeks – this is a soft-launch. We continue to develop it and record our progress publicly. It’s a community project and we are starting to get offers of help right now. We welcome these but we shan’t be able to get everything going immediately.
We want people to know what they are committing to and what they can expect in return. So yesterday I drafted an initial Philosophy – we welcome comments.
Our philosophy is to create an Open resource for everyone created by everyone. Ownership and control of knowledge by unaccountable organisations is a major current threat; our strategy is to liberate and protect content.
We are a meritocracy. We are inspired by Open communities such as the Open Knowledge Foundation, Mozilla, Wikipedia and OpenStreetMap, each of which has built a huge community and a trustworthy governance model.
We are going ahead on several fronts – “breadth-first” – although some areas have considerable depth. Just like Wikipedia or OSM, you’ll come across stubs and broken links – it’s the sign of a growing Open organisation.
There’s so much to do, so we are meeting today to draft maps, guidelines, architecture. We’re gathering the community tools – wikis, mail lists, blogs, Github, etc. As the community grows we can scale in several directions:
- primary source. Contributors can choose particular journals or institutions/theses to mine from.
- subject/discipline. You may be interested in Chemistry or Phylogenetic Trees, Sequences or Species.
- technology. Concentrate on OCR, Natural Language Processing, Crawling, Syntax or develop your own extraction techniques
- advocacy and publicity. A major aim is to influence scientists and policy makers to make content Open
- community – its growth and practice.
We are developing a number of subprojects which will demonstrate our technology and how the site will work. We hope to report more tomorrow.
I am gutted that I missed the Q+A session with Professor Sir Leszek Borysiewicz, the Vice-Chancellor of Cambridge University. It doesn’t seem to have been advertised widely – only 17 people attended – and it deserves to be repeated.
The indefatigable Richard Taylor – who reports everything in Cambridge – has reported it in detail. It was a really important meeting. I’ll highlight one statement, which chills me to the bone (note that this is RT’s transcript):
“the publishers are faster off the mark than governments are. Elsevier is already looking at ways in which it can control open data as a private company rather than the public bodies concerned.”
Now I know this already – I’ve spent 4 years finding out in detail about Elsevier’s publishing practices. It’s good that the VC realises it as well. Open Access is a mess – the Universities have given part of their priceless wealth to the publishers and are desperately scrabbling to get some of it back. The very lack of will and success makes me despondent – LB says:
“And I know disadvantaging the individual academic by not having publication in what is deemed to be the top publications available? So it’s a balance in the argument that we have.”
In other words, we have to concede control to the publishers to get the “value” of academics publishing where they want.
Scholarly publishing costs about 15,000,000,000 USD per year. Scholarly knowledge/data is worth at least ten times that (> 100,000,000,000 USD/year). [I’ll justify the figure later]. And we are likely to hand it all over to Elsevier (or Macmillan Digital Science).
I’ve done what I can to highlight the concern. This was the reason for my promoting the phrase “Open Data” in 2006 – and in helping create the Panton Principles for Open Data in Science in 2008. The idea is to make everyone aware that Open Data is valuable and needs protecting.
Because if we don’t, Elsevier and Figshare and the others will possess and control all our data. And then they will control us.
Isn’t this overly dramatic?
No. Elsevier has bought Mendeley – a social network for managing academic bibliography. Scientists put their current reading into Mendeley and use it to look up others. Mendeley is a social network which knows who you are, and who you are working with.
Do you trust Mendeley? Do you trust Elsevier? Do you trust any large organisation without independent control (GCHQ, NSA, Google, Facebook)? If you do, stop reading and don’t worry.
In Mendeley, Elsevier has a window onto nearly everything that a scientist is interested in. Every time you read a new paper, Mendeley knows what you are interested in. Mendeley knows your working habits – how much time are you spending on your research?
And this isn’t just passive information. Elsevier has Scopus – a database of citations. How does a paper get into it? Scopus decides, not the scientific world. Scopus can decide what to highlight and what to hold back. Do you know how Journal Impact Factors are calculated? I don’t, because it’s a trade secret. Does Scopus’ Advisory Board guarantee transparency of practice? Not that I can see. Since JIFs now control much academic thinking and planning, those who control them are in a position to influence academic practice.
Does Mendeley have an advisory board? I couldn’t find one. And when I say “advisory board”, I mean a board which can uncover unacceptable practices. I have no evidence that anything wrong is being done, but I have no evidence that there are any checks against it. Elsevier has already created fake journals for Merck, so how can I be sure it will resist the pressure to use Mendeley for inappropriate purposes? Is Mendeley any different from Facebook as far as transparency is concerned? Is there any guarantee that it is not snooping on academics and manipulating and selling opinion? “Dear VC – this is the latest Hot Topics from Mendeley; make your next round of hirings in these fields”.
I’m also concerned that Figshare will go the same way. I have huge respect for Mark Hahnel, who founded it. But Figshare also doesn’t appear to have an advisory board. Do I trust Macmillan? “we may anonymize your Personal Information so that you cannot be individually identified, and provide that information to our partners, investors, Content providers or other third parties.” Since information can be anonymised or useful, but not both, are you happy with that?
There aren’t any easy solutions. But if we do nothing, we are trusting our academic future to commercial publishers who control the flow of information and knowledge. We have to take back our own property – the knowledge that *we* produce. Publishers should be the servants of knowledge – at present they are becoming its tyrants.
The Open-Source Retreat being sponsored by Stripe looks quite intriguing. Stripe relies on a lot of open source software, and they’ve announced a program to give grants to a small number of developers to come to San Francisco and work full-time on an open-source project for a period of 3 months. The awardees will have space in Stripe’s SF office, and will be asked to give a couple of internal tech talks over the course of the program, but otherwise it’ll be no-strings-attached.
This is a clever model for supporting open source development, and I hope this idea catches on with other companies that benefit from open source. I can think of a number of academic developers who would love the idea of a sabbatical to work on an open source code project, to meet new people who might use their code, and to get a fresh perspective in new surroundings – an open source sabbatical. This could be a great way for companies that benefit from open source scientific software to help encourage and influence the development of the tools they use.
The deadline for applying to the Stripe program is May 31st, and the program will run from September 1st through December 1st.
“When The Chronicle of Higher Education published its “Cautionary Tale” about a dissertation discovered, by its author, to be available for sale on Amazon.com without his knowledge, it was bound to stir up another round of anxiety over how dissertations are distributed in a digital world.
Like PLOS ONE, the English language is rapidly taking over the world (we kid). In 2010, English clocked in at over 360 million native speakers, and it is the third-most-commonly used native language, right behind Mandarin Chinese and Spanish. While these languages spread, however, other indigenous languages decline at an accelerated pace. A fraction of these enigmatic languages belong to uncontacted indigenous groups of the Amazonian rainforest, groups of people in South America who have little to no interaction with societies beyond their own. Many of these groups choose to remain uncontacted by the rest of the world. Because of their isolation, not much is known about these languages beyond their existence.
The researchers of a recent PLOS ONE paper investigated one such language, that of the Carabayo people who live in the Colombian Amazon rainforest. Working with the relatively scarce historical data that exists for the Carabayo language—only 50 words have been recorded over time—the authors identified similarities between Carabayo and Yurí and Tikuna, two known languages of South America that constitute the current language family, Ticuna-Yurí. Based on the correspondences, the authors posit a possible genealogical connection between these languages.
Few resources were available to the authors in this endeavor. They analyzed historical wordlists collected during the last encounter with the Carabayo people in 1969—the only linguistic data available from this group—against wordlists for the Yurí language. In addition, they sought the expertise of a native speaker of Tikuna, a linguist trained in Tikuna’s many dialects. Using these resources, the authors broke down the Carabayo words into their foundational forms, starting with consonants and vowels. They then compared them to similarly deconstructed words in Yurí and Tikuna.
The examination involved the evaluation of similarities in the basic building blocks of these words: the number of times a specific sound (or phoneme) appeared; the composition and patterns of the smallest grammatical units of a word (a morpheme); and the meanings attached to these words. When patterns appeared between Carabayo and either Yurí or Tikuna, the authors considered whether or not the languages’ similarities constituted stronger correspondences. They also paid attention to the ways in which these words would have been used by the Carabayo when the lists were originally made many years ago.
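As a toy illustration of this kind of word-pair comparison (not the authors’ actual method, which rests on systematic sound correspondences rather than raw string similarity), candidate pairs can be scored with a simple similarity ratio. The candidate spellings below are illustrative stand-ins, not the paper’s transcriptions:

```python
from difflib import SequenceMatcher

# Hypothetical comparison: Carabayo forms mentioned in the text against
# made-up candidate forms showing a g/k correspondence (illustrative only).
pairs = [
    ("gudda", "kudda"),  # 'wait' — differs only in the g/k correspondence
    ("gu", "ku"),        # 'yes'
    ("ao", "ao"),        # 'father' — identical form
]

for carabayo, candidate in pairs:
    # ratio() = 2 * (matching characters) / (total characters in both strings)
    score = SequenceMatcher(None, carabayo, candidate).ratio()
    print(f"{carabayo!r} ~ {candidate!r}: {score:.2f}")
```

A real analysis weights recurrent sound correspondences across many words far more heavily than any single pair’s surface similarity, which is why the authors’ tables track phoneme-by-phoneme patterns rather than whole-word scores.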
The Yurí language was first recorded in the 19th century, but it is thought to have since become extinct. From these lists, five words stood out: in Carabayo, ao ‘father’, hono ‘boy’, hako ‘well!’, and a complex form containing both the Yurí word for ‘warm’, noré, and the Yurí word t?au, which corresponds in English to ‘I’ or ‘my’. Given the evidence, the authors contend that the strongest link between Carabayo and Yurí is found in the correspondence of t?au. The study of other languages has indicated that first person pronouns are particularly resistant to “borrowing”, the absorption of one language’s vocabulary into another. Therefore, the authors surmise it is unlikely that either language absorbed t?au from the other, and that they instead share a genealogical link.
Similarly, the comparison of Carabayo words to words of the living language of Tikuna produced a high number of matches, including Carabayo gudda ‘wait’ and gu ‘yes’. The matches especially exhibit sound correspondences of Carabayo g (or k) and the loss of the letter n in certain circumstances. Table 7 from the article shows the full results.
Although it is possible that the Carabayo language is simply one that had never been documented before 1969, the results of the researchers’ evaluation led them to conclude that Carabayo more likely belongs to the Ticuna-Yurí language family. The relationship of Carabayo to Yurí and Tikuna changes the structure of the Ticuna-Yurí family by placing Carabayo on the map as a member. The Tikuna language, once considered the sole surviving member of the Ticuna-Yurí family, may now have a sibling, and the identity of a barely known language has become that much more defined.
For the authors, this research is a complicated endeavor. The desire to advance our knowledge and understanding of these precious languages must be balanced with the desires of the uncontacted indigenous groups, some of whom voluntarily choose to remain in isolation. As the authors themselves express, the continued study of these uncontacted languages seeks to engender an awareness in the larger community of the people who speak these languages, and to reiterate their right to be left to live their lives as they wish—in isolation.
Citation: Seifart F, Echeverri JA (2014) Evidence for the Identification of Carabayo, the Language of an Uncontacted People of the Colombian Amazon, as Belonging to the Tikuna-Yurí Linguistic Family. PLoS ONE 9(4): e94814. doi:10.1371/journal.pone.0094814
The post Linking Isolated Languages: Linguistic Relationships of the Carabayo appeared first on EveryONE.
Fred Friend died two days ago. He had been a dedicated, tireless and inspired advocate for OA ever since the idea was first baptized with a name (Budapest 2001, where he was one of the original co-drafters and signatories of the BOAI).
Fred’s commitment to OA did not, I believe, originate only ex officio, as Director of Scholarly Communication at UCL, in the serials crisis with which he and all other library directors have had to struggle for decades. Fred also had a profound sense of justice (one that extended beyond local happenings sub specie aeternitatis). He simply felt that OA was right. And what he did on its behalf he did out of character and conviction. (He was also extremely forgiving, as I can humbly attest.)
Fred was, in his own words, a Friend of Open Access. It is undeniable that OA has now lost a precious ally. But I think it is equally undeniable (and I am sure Fred knew it too) that OA is unstoppable now. That is in no small part true thanks to the efforts of this modest and faithful Friend.
Heartfelt sympathy to Fred’s family; I hope that in their pain they will also find room for some pride.
Yeast—including more than 1500 species that make up 1% of all known fungi—plays an important role in the existence of many of our favorite foods. With a job in everything from cheese making to alcohol production to cocoa preparation, humans could not produce such diverse food products without this microscopic, unicellular sous-chef. While we have long been aware of our dependence on yeast, new research in PLOS ONE suggests that some strains of yeast would not be the same without us, either.
Studies have previously shown how our historical use of yeast has affected the evolution of one of the most commonly used species, Saccharomyces cerevisiae, creating different strains that are used for different purposes (bread, wine, and so on). To further investigate our influence on yeast, researchers from the University of Bordeaux, France, took a look at a different yeast species of recent commercial interest, Torulaspora delbrueckii. In mapping the T. delbrueckii family tree, the authors show not only that human intervention played a major role in the shaping of this species, but they provide us with valuable information for further improving this yeast as a tool for food production.
The authors collected 110 strains of T. delbrueckii from global sources of wine grapes, baked goods, dairy products, and fermented beverages. Possible microsatellites, or repeating sequences of base pairs (like A-T and G-C), were found in one strain’s DNA and used to create tools that would identify similar sequences in the other strains. They used the results to pinpoint eight different microsatellite markers (base pair sequences) that were shared by some strains but not others to measure genetic variation in the T. delbrueckii family. The composition of each strain was measured using microchip electrophoresis, a process in which DNA fragments migrate through a gel containing an electric field, which helps researchers separate the fragments according to size. As each strain’s microsatellite markers were identified, the information was added to a dendrogram (a funny-looking graph, shown below) to illustrate the level of similarity between strains. The researchers also estimated the time it took different strains to evolve by comparing the average rate of mutation and reproduction time for T. delbrueckii to the level of genetic difference between each strain.
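The clustering step behind such a dendrogram can be sketched as follows. The allele-size profiles below are entirely hypothetical (six strains at four loci, arranged as three look-alike pairs); the authors’ actual marker data, distance measure, and software differ:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical microsatellite allele sizes (base pairs) at 4 loci for 6 strains:
# rows 0-1, 2-3, and 4-5 are deliberately similar pairs, mimicking three groups.
profiles = np.array([
    [180, 210,  95, 140],
    [182, 210,  95, 142],
    [200, 230, 110, 160],
    [198, 232, 110, 158],
    [150, 190,  80, 120],
    [152, 188,  80, 122],
])

# UPGMA ("average") linkage on pairwise Euclidean distances is a common way
# to build the tree that a dendrogram then draws.
Z = linkage(pdist(profiles), method="average")

# Cutting the tree into three clusters should recover the three pairs.
clusters = fcluster(Z, t=3, criterion="maxclust")
print(clusters)
```

Cutting the real tree at different heights is what yields the four clusters (nature Americas, nature Old World, grape/wine, bioprocess) described below.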
The dendrogram shows four clear clusters of yeast strains heavily linked to each sample’s origin. Two groups contain most of the strains isolated from Nature, but can be distinguished from each other by those collected on the American continents (nature Americas group) and those collected in Europe, Asia, and Africa (nature Old World group). The other two clusters include strains collected from food and drink samples, but cannot be discriminated by geographic location. The grape/wine group contains 27 strains isolated from grape habitats in the major wine-producing regions of the world: Europe, California, Australia, New Zealand, and South America. The bioprocess group contains geographically diverse strains collected from other areas of food processing—such as bread products, spoiled food, and fermented beverages—and includes a subgroup of strains used specifically for dairy products. Further analysis of the variation between strains confirmed that, while the clusters don’t perfectly segregate the strains according to human usage, and geographic origin of the sample played some role in diversity, a large part of the population’s structure is explained by the material source of the strain.
Divergence times calculated for the different groups further emphasize the connection between human adoption of T. delbrueckii yeast and the continued evolution of this species. The grape/wine cluster of strains diverged from the Old World group approximately 1900 years ago, aligning with the expansion of the Roman Empire, and the spread of Vitis vinifera, or the common grape, alongside. The bioprocesses group diverged much earlier, an estimated four millennia ago (around the Neolithic era), showing that yeast was used for food production long before it was domesticated for wine making.
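The logic of such a molecular-clock estimate can be sketched with entirely hypothetical numbers (the paper’s actual mutation rates, generation times, and distances differ):

```python
# Back-of-envelope divergence-time sketch. All three inputs are assumed
# values for illustration only, not the paper's estimates.
mutation_rate = 1e-6         # mutations per locus per generation (assumed)
generations_per_year = 1000  # yeast cell divisions per year (assumed)
mean_distance = 3.8          # mean pairwise genetic distance between groups (assumed)

# Differences accumulate along both diverging lineages, hence the factor of 2.
generations = mean_distance / (2 * mutation_rate)
years = generations / generations_per_year
print(round(years))
```

With these made-up inputs the arithmetic lands on an estimate of roughly 1,900 years, which is the same order as the grape/wine divergence described above; the point is only that divergence time scales as genetic distance divided by twice the mutation rate.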
While T. delbrueckii has often been overlooked by winemakers in favor of the more common S. cerevisiae, it has recently been gaining traction for its ability to reduce levels of volatile compounds that negatively affect wine’s flavor and scent. It has also been shown to have a high freezing tolerance when used as a leavening agent, making it of great interest to companies attempting to successfully freeze and transport dough. Though attempts to develop improved strains of this yeast for commercial use have already begun, we previously lacked an understanding of its life-cycle and reproductive habits. In creating this T. delbrueckii family tree, the authors also gained a deeper understanding of the species’ existence, which may help with further development for technological use.
Yeast has weaseled its way into our hearts via our stomachs, and it seems that, in return, we have fully worked our way into its identity. With a bit of teamwork, and perhaps a splash of genetic tweaking, we can continue this fruitful relationship and pursue new opportunities in Epicureanism. I think we would all drink to that!
Reference: Albertin W, Chasseriaud L, Comte G, Panfili A, Delcamp A, et al. (2014) Winemaking and Bioprocesses Strongly Shaped the Genetic Diversity of the Ubiquitous Yeast Torulaspora delbrueckii. PLoS ONE 9(4): e94246. doi:10.1371/journal.pone.0094246
Image 1: Figure 1 from article
Image 2: Figure 3 from article
The post For Yeast’s Sake: The Benefits of Eating Cheese, Chocolate, and Wine appeared first on EveryONE.
The only effective way to make inflated subscriptions unsustainable is for funders and institutions to mandate Green OA self-archiving.
Tim Gowers is quite right that “the pace of change is slow, and the alternative system that is most strongly promoted – open access articles paid for by article processing charges [‘Gold OA’] – is one that mathematicians tend to find unpalatable. (And not only mathematicians: they are extremely unpopular in the humanities.) … there is no sign that they will help to bring down costs any time soon and no convincing market mechanism by which one might expect them to.”
This is all true as long as the other form of OA (“Green OA” self-archiving by authors of published articles in OA repositories, mandated by funders and institutions) has not prevailed. Pre-Green Gold is “Fool’s-Gold.” Only Post-Green Gold is Fair-Gold.
The current Finch/RCUK policy, preferring Gold OA, has had its predictable perverse effects:
1. sustaining arbitrary, bloated Gold OA fees
2. wasting scarce research funds
3. double-paying publishers [subscriptions plus Gold]
4. handing subscription publishers a hybrid-gold-mine
5. enabling hybrid publishers to double-dip
6. abrogating authors’ freedom of journal-choice [based on cost-recovery model, embargo or licence instead of on quality]
7. imposing re-mix licenses that many authors don’t want and most users and fields don’t need
8. inspiring subscription publishers to adopt and lengthen Green OA embargoes [to maximize hybrid-gold revenues]
9. handicapping Green OA mandates worldwide [by incentivizing embargoes]
10. allowing journal-fleet publishers to confuse and exploit institutions and authors even more
But the solution is also at hand, as already adopted by the University of Liège and the FRS-FNRS (the Belgian Francophone research funding council), proposed for EC Horizon2020, and now adopted by HEFCE for REF2020:
a. funders and institutions mandate immediate-deposit
b. of the peer-reviewed final draft
c. in the author’s institutional repository
d. immediately upon acceptance for publication
e. whether journal is subscription or Gold
f. whether access to the deposit is immediate-OA or embargoed
g. whether license is transferred, retained or CC-BY;
h. institutions implement repository’s facilitated email eprint request Button;
i. institutions designate immediate-deposit the mechanism for submitting publications for research performance assessment;
j. institutions monitor and ensure immediate-deposit mandate compliance
This policy restores author choice, moots publisher embargoes, makes Gold and CC-BY completely optional, provides the incentive for author compliance and the natural institutional mechanism for verifying it, consolidates funder and institutional mandates; hastens the natural death of OA embargoes, the onset of universal Green OA, and the resultant institutional subscription cancellations, journal downsizing and transition to Fair-Gold OA at an affordable, sustainable price, paid out of institutional subscription cancellation savings instead of over-priced, double-paid, double-dipped Fool’s-Gold. And of course Fair-Gold OA will license all the re-use rights users need and authors want to allow.
In summary, plans by universities and research funders to pay the costs of Gold OA today are premature. Funds are short; 80% of journals (including virtually all the top journals) are still subscription-based, tying up the potential funds to pay for Gold OA; the asking price for Gold OA is still high; and there is concern that paying to publish may inflate acceptance rates and lower quality standards. What is needed now is for universities and funders to mandate Green OA self-archiving (of authors’ final peer-reviewed drafts, immediately upon acceptance for publication). That will provide immediate OA; and if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions) that will in turn induce journals to cut costs (print edition, online edition, access-provision, archiving), downsize to just providing the service of peer review, and convert to the Gold OA cost-recovery model; meanwhile, the subscription cancellations will have released the funds to pay these residual service costs. The natural way to charge for the service of peer review then will be on a “no-fault basis,” with the author’s institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.
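The “no-fault” charging model above can be sketched in a few lines: the author’s institution or funder pays a flat fee per round of refereeing, whatever the outcome, so a journal earns nothing extra by accepting weak papers. The per-round fee below is a hypothetical placeholder, not a figure from the text.

```python
# Sketch of "no-fault" peer-review charging: a flat fee per round of
# refereeing, paid regardless of outcome (accept, revise, or reject).

PER_ROUND_FEE = 200  # hypothetical cost of one round of refereeing, in pounds


def no_fault_charge(rounds: int) -> int:
    """Total charged for a submission that went through `rounds` rounds
    of refereeing, whatever the final decision."""
    return rounds * PER_ROUND_FEE


# A paper rejected after one round still pays for that round, so the
# journal has no financial incentive to inflate acceptance rates.
print(no_fault_charge(1))  # rejected after one round -> 200
print(no_fault_charge(3))  # accepted after two revisions -> 600
```

Because rejections are paid for too, the accepted papers no longer cross-subsidise the refereeing of the rejected ones, which is what keeps the per-paper price minimal.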
Harnad, S. (2007) The Green Road to Open Access: A Leveraged Transition. In: Anna Gacs, The Culture of Periodicals from the Perspective of the Electronic Age. L'Harmattan. 99-106.
______ (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8).
______ (2013) Comments on HEFCE/REF Open Access Mandate Proposal. Open access and submissions to the REF post-2014.
______ (2013) Finch Group reviews progress in implementing open access transition amid ongoing criticisms. LSE Impact of Social Sciences Blog, November 18th 2013.
______ (2013) “Nudging” researchers toward Gold Open Access will delay the shift to wider access of research. LSE Impact of Social Sciences Blog, December 5th 2013.
I’m now finishing the second month of my Shuttleworth Fellowship – the most important thing in my whole career. My project The Content Mine aims to liberate all the facts in the scientific literature.
That’s incredibly ambitious and I don’t know in detail how it’s going to happen – but I am confident it will.
This week we posted our website – and showed how we create content. What’s modern is that this is a community website – we’re inspired by Wikipedia and OpenStreetMap, where volunteers can find their own area of interest and contribute. Since there is no other Open resource for content-mining, we shall provide one – we have 100 pages and intend to go beyond 1000. Obviously you can help with that. And of course Wikipedia’s information is invaluable.
We have an incredible team:
- Michelle Brook. Michelle is our Manager and is making a massive impression with her work on Open Access.
- Jenny Molloy. Jenny has co-authored the foundations of Open Content Mining and ran the first workshop last year.
- Ross Mounce. Ross has championed Open Content Mining in Brussels and is developing software for mining phylogenetics.
- Mark MacGillivray. Co-authored Open Bibliography and founded CottageLabs who are supporting our web presence and IT infrastructure.
- Richard Smith-Unna. Founder of solvers.io, the volunteer scientist-developer community, to which he is pitching ContentMine to support crawling.
But we also have masses of informal links and collaborations. Because we are Open, people want to find out what we are doing and offer help. It’s possible that much of what we need for crawling will be provided by the community – that has been happening over the last week. We’ve had an important contribution to our approach to Optical Character Recognition, and today I was Skyped with suggestions about chemistry in the ContentMine.
This all happens because of the Digital Enlightenment. People round the world are seeing the possibilities of zero-cost software, efficient voluntary Open communities and the value of liberated Knowledge. There are many projects wanting to liberate bibliography, reform authoring, re-use bioscience, and more. Occasionally we wake up and think “wow! problem solved!”. If you think “we”, not “me”, the world changes.
The Fellows and Foundation are fantastic. I have an hour Skype every week with Karien, and another hour with the whole Fellowship. These are incredibly valuable. With such a huge ambition we need focus.
There’s huge synergy with several formal and many informal projects. Once you decide that your software and output is Open, you can move several times faster: no tedious agreements to sign, and no worries about secrecy, so no delays in making knowledge open. Of the formal projects:
- Andy Howlett is doing the 3rd year of his PhD in the Unilever Centre here on metabolism. He can use the 10 years’ worth of Open Source we have developed and because his contributions are also Open we’ll benefit in return.
- Mark Williamson is using our software in similar fashion.
- Ross Mounce and Matt Wills at Bath are running the PLUTo project. Because it’s completely Open they can use our software and we can re-use their results.
- we are starting work with Chris Steinbeck at EBI on automated extraction of metabolites and phytochemistry from the literature.
Informally we are working with Volker Sorge (Birmingham) and Noureddin Sadawi (Brunel) on scientific computer vision and re-use of information for Blind and Visually Impaired people. With Egon Willighagen and John May on the (Open) Chemistry Development Kit. With the Crystallography Open Database…
How can it possibly work?
In the same way that Steve Coast “single-handedly” and with zero cash built up OpenStreetMap:
- by promoting the concept. We are already well known in the community, and people are watching and starting to participate.
- by building horizontal scalability. By dividing the problem into separate journals, we can build per-journal solutions; by identifying independent disciplines (chemistry, species, phylogenetics…) we can develop them independently.
- by building an Open, modular software and information architecture. We build libraries and tools, not applications, so it’s easy to reconfigure. If people want a commandline approach we can offer that.
- by re-using what’s already Open. Need a chemical database? Don’t build it ourselves – work with EBI and PubChem. An Open bibliography? Work with Europe PubMedCentral.
- by attracting and honouring volunteers. RichardSU has discovered that the key point is to offer evening-sized problems. Developers don’t want to tackle a complex infrastructure – they want something where the task is clear and which they can complete before they go to bed. And we have to make sure they are promoted as first-class citizens.
Much of what we do will depend on what happens every week. A month ago I hadn’t planned for solvers.io; or Longan Java OCR; or Peer Library; or JournalToCs; or BoofCV; or …
PS: You might wonder what a 72-year-old is doing running a complex knowledge project. RichardSU asked that on Hacker News and I’m pleased that others value my response. If Neelie Kroes can change the world at 72, so can I – and so can YOU.
If you are retired you’re exactly the sort of person who can make massive contributions to the Content Mine. And it’s fun.
An amazing post came out yesterday from an amazing person. Tim Gowers is a Fields Medallist (the mathematics equivalent of the Nobel Prize). But Tim is also a star in the world of Open. Five years ago he launched the Polymath project – a completely Open, meritocratic, collaborative project in citizen mathematics. They solved a complex and difficult mathematical problem in an astonishingly short time. It’s rightly regarded as an exemplar of what the future can be in the century of the Digital Enlightenment.
But Tim has also fought the political battle for Openness and freedom of access to scholarship. Two years ago Tim was incensed by the outrageous cost of Elsevier journals and called for a boycott (“The Cost of Knowledge”). This was instantly successful, gathering thousands of signatures within weeks. (I have signed it.)
Now he’s taken this further in a large project and a huge blog post. The prices of scholarly journals are closely guarded secrets: universities use public money to buy subscriptions, and Elsevier requires the prices to be confidential. They even require the confidentiality clause itself to be confidential. (PMR: why do universities meekly sign this?) But there is a way forward. Universities are public institutions and as such are bound by the Freedom of Information Acts.
So Tim has made requests to all Russell Group universities asking for details of their contracts and prices with Elsevier.
I know how much effort this is because I’ve done a similar thing (asking for restrictive clauses in publisher contracts). Some universities give positive, helpful replies (Cambridge was one – https://www.whatdotheyknow.com/request/licences_with_subscription_publi#outgoing-341924 – this shows the process). But some universities try to avoid giving useful answers. In that case we may have to go back and re-ask the question differently, or even write to the Information Commissioner. It’s a LOT of work.
So Tim has published a huge amount of information and comment.
- read it
- read Michelle Brook’s great summary (first)
- give it to your students to read. Students: give it to your lecturers and professors to read
- write to your MP (I have)
Here’s part of Michelle’s summary:
Cambridge spent £1,161,571 in 2012.
Scale that up and you find that the UK is paying over 150 MILLION pounds to Elsevier every year.
And, although they are smaller, there are hundreds of other publishers out there.
The world pays 15 BILLION dollars to scholarly publishers each year. And a significant amount of that is used to stop YOU reading it (http://scholarlykitchen.sspnet.org/2014/04/24/rearguard-and-vanguard/).
Timothy Gowers’ excellent post details his research looking for Elsevier pricing mechanisms and actual prices paid by university libraries through FOI requests. Lots of useful data noted here for future number-crunching.