A huge thank you to Jeffrey Beall

Jeffrey Beall is the author of the important and useful list of predatory open access publishers, available on his Scholarly Open Access blog. Jeffrey reports that there is a concerted effort to discredit him and his work.

To me this just highlights the importance of his work and the complete lack of ethics of those behind this effort. I applaud Jeffrey’s service to the open access community, both through his list and through sharing this experience. Thanks, Jeffrey – please keep up the good work!

Houghton Report on OA Cost/Benefits in Germany

General cost analysis for scholarly communication in Germany: results of the ‘Houghton Report’ for Germany by John W. Houghton, Berndt Dugall, Steffen Bernius, Julia Krönung, Wolfgang König

Management Summary: Conducted within the project "Economic Implications of New Models for Information Supply for Science and Research in Germany", the Houghton Report for Germany provides a general cost and benefit analysis for scientific communication in Germany, comparing different scenarios according to their specific costs and explicitly including the German National License Program (NLP).
Based on the scholarly lifecycle process model outlined by Björk (2007), the study compared the following scenarios according to their accounted costs:
– Traditional subscription publishing,
– Open access publishing (Gold Open Access; refers primarily to journal publishing where access is free of charge to readers, while the authors or funding organisations pay for publication)
– Open Access self-archiving (authors deposit their work in online open access institutional or subject-based repositories, making it freely available to anyone with Internet access; further divided into (i) "Green Open Access" self-archiving operating in parallel with subscription publishing; and (ii) the "overlay services" model in which self-archiving provides the foundation for overlay services (e.g. peer review, branding and quality control services))
– the NLP.
Within all scenarios, five core activity elements (fund research and research communication; perform research and communicate the results; publish scientific and scholarly works; facilitate dissemination, retrieval and preservation; study publications and apply the knowledge) were modeled and priced together with all their constituent activities.
Modelling the impacts of an increase in accessibility and efficiency resulting from more open access on returns to R&D over a 20 year period and then comparing costs and benefits, we find that the benefits of open access publishing models are likely to substantially outweigh the costs and, while smaller, the benefits of the German NLP also exceed the costs.
This analysis of the potential benefits of more open access to research findings suggests that different publishing models can make a material difference to the benefits realised, as well as the costs faced. It seems likely that more Open Access would have substantial net benefits in the longer term and, while net benefits may be lower during a transitional period, they are likely to be positive for both "author-pays" Open Access publishing and the "overlay journals" alternatives ("Gold Open Access"), and for parallel subscription publishing and self-archiving ("Green Open Access"). The NLP returns substantial benefits and savings at a modest cost, returning one of the highest benefit/cost ratios available from unilateral national policies during a transitional period (second to that of "Green Open Access" self-archiving). Whether "Green Open Access" self-archiving in parallel with subscriptions is a sustainable model over the longer term is debatable, and what impact the NLP may have on the take up of Open Access alternatives is also an important consideration. So too is the potential for developments in Open Access or other scholarly publishing business models to significantly change the relative cost-benefit of the NLP over time.
The results are comparable to those of previous studies from the UK and the Netherlands. Green Open Access in parallel with the traditional model yields the best benefits/cost ratio. Beside its benefits/cost ratio, the meaningfulness of the NLP is given by its enforceability. The true costs of toll access publishing (beside the "buyback" of information) is the prohibition of access to research and knowledge for society.

Some Comments:

Like previous Houghton Reports, this one has carefully compared unilateral and global cost/benefits for Gold Open Access publishing and Green Open Access self-archiving. In this case, the options also included the German National License Program (NLP), a negotiated national site license providing German researchers with access to most of the journals they need.

As it found in other countries, the Report finds that Green OA self-archiving provides the best benefit/cost ratio in Germany too.

It needs to be noted, however, that among the scenarios compared, only subscription publishing (including licensed subscriptions) and Gold OA publishing are publishing models. Green OA self-archiving is not a substitute publishing model but a system of providing OA under the subscription/licensing model — by supplementing it with author self-archiving (and with self-archiving mandates adopted by authors’ institutions and funders).

"Open Access self-archiving [is] further divided into (i) 'Green Open Access' self-archiving operating in parallel with subscription publishing; and (ii) the 'overlay services' model in which self-archiving provides the foundation for overlay services (e.g. peer review, branding and quality control services))"

Strictly speaking, the "overlay services model" is just another hypothetical Gold OA publishing model, but one in which the Gold OA fee pays only for the service of peer review, branding and quality control, rather than for all the rest of the products and services (and their costs) that are currently still co-bundled in journal subscriptions (print edition, online edition, access-provision, hosting, archiving).

This hypothetical Gold OA model is predicated, however, on the assumption that there is universal Green OA self-archiving too, in order to perform the access-provision, hosting and archiving functions of what was formerly co-bundled under the subscription model.

Hence for existing journals the “overlay” Gold OA model is really just the second stage of a 2-stage transition that begins with the Green OA self-archiving access-provision system. In such a transition scenario, although Green OA would begin as a supplement to the subscription model, it would become an essential contributor to the sustainability of the overlay Gold OA model.

"comparing costs and benefits… [of] open access on returns to R&D over a 20 year period… we find that the benefits of open access publishing models are likely to substantially outweigh the costs and, while smaller, the benefits of the German NLP also exceed the costs."

Again, it needs to be kept in mind that what are being compared are not just independent alternative publishing models, but also supplementary means of providing OA; so in some cases there are some very specific sequential contingencies and interdependencies among these models and scenarios.

"The NLP returns substantial benefits and savings at a modest cost, returning one of the highest benefit/cost ratios available from unilateral national policies during a transitional period (second to that of 'Green Open Access' self-archiving)."

I presume that in considering the costs and benefits of German national licensing the Houghton Report considered both the unilateral German national licensing scenario and the scenario in which it is reciprocated globally. In this regard, it should be noted that OA has both user-end benefits [maximized access] and author-end benefits [maximized impact]: unilateral national licenses provide only the former, not the latter. Both unilateral Green and unilateral Gold, in contrast, provide only the latter but not the former. So what needs to be taken into account is global scalability and sustainability: how likely are other nations (and institutions) to wish, and be able to afford, to reciprocate under the various scenarios?

"Whether 'Green Open Access' self-archiving in parallel with subscriptions is a sustainable model over the longer term is debatable"

First of all, if subscription publishing itself is not a sustainable model, then of course Green OA self-archiving is not a sustainable supplement either.

But in the hypothetical “overlay” Gold OA model it is being assumed that Green OA self-archiving is indeed sustainable — as a practice, not as a substitute form of publishing. (It is naive to think of spawning 28,000 brand-new Gold OA peer-reviewed journals in place of the circa 28,000 journals that exist today: A conversion scenario is much more realistic.)

And probably the most relevant sustainability question is not about the sustainability of the practice of Green OA self-archiving (keystrokes and institutional repositories), nor the sustainability of subscription publishing, but the sustainability of subscription publishing in parallel with universal Green OA self-archiving. One natural possibility is that globally mandated Green OA self-archiving will make journal subscriptions unsustainable, inducing a transition in publishing models, with journals, under cancelation pressure, cutting inessential products and services and their costs, and downsizing to what is being here called the “overlay” Gold OA model (though that’s probably not the aptest term to describe the outcome), while at the same time releasing the subscription cancelation funds to pay the much lower peer review service fees it entails.

“The results are comparable to those of previous studies from the UK and Netherlands. Green Open Access in parallel with the traditional model yields the best benefits/cost ratio.”

And what also need to be taken into account are sequential contingencies and priorities: Green OA self-archiving is not only the cheapest, fastest and surest way to provide OA, but it is also the natural way to induce a subsequent transition to affordable, sustainable Gold OA. But in order to be able to do that, it has to come first.

"Beside its benefits/cost ratio, the meaningfulness of the NLP is given by its enforceability."

Green OA self-archiving mandates are enforceable too. And global scalability and sustainability have to be taken into account too, not just local access-provision.

"The true cost of toll access publishing (beside[s] the [cost of the] 'buyback' of information) is the prohibition of access to research and knowledge for society."

But when toll access publishing is globally supplemented by mandatory Green OA self-archiving, the “prohibition” is pre-empted, at next to no extra cost.

AMI2 Content mining using PDF and SVG: progress

I’m now returning to the UK for a few weeks before coming back to AU to continue. This is a longish post, but important for anyone wanting to know the details of how we build an intelligent PDF reader and what it will be able to do. Although the examples are chemistry-flavoured, the approach applies to a wide range of science.

To recall…

AMI2 is a project to build an intelligent reader of the STM literature. The base is PDF documents (though Word, HTML and LaTeX will also be possible and much easier and of higher quality). There are three phases at present (though this and the names may change):

  • PDF2SVG. This converts good PDF losslessly into SVG characters, paths and images. It works well for (say) student theses and ArXiv submissions but fails for most STM publisher PDFs because the quality of the "typesetting" is non-conformant and we have to use clunky, fragile heuristics. More in later blogs and below.
  • SVGPLUS. This turns low-level SVG primitives (characters and paths) into higher-level, a-scientific objects such as paragraphs, sections, words, subscripts, rectangles, polylines, circles, etc. In addition it analyses components that are found universally in science (figures, tables, maths equations) and scientific document structure. It also identifies graphs, plots, etc. (but not chemistry, sequences, trees…)
  • SVG2XML. This interprets SVGPLUS output as science. At present we have prototyped chemistry, phylogenetics, spectroscopy and have a plugin architecture that others can build on. The use of SVG primitives makes this architecture much simpler.

We’ve written a report and here are salient bits. It’s longish so mainly for those interested in the details. But it has a few pictures…

PDFs and their interpretation by PDF2SVG


Science is universally published as PDF documents, usually created by machine and human transformation of Word or LaTeX documents. Almost all major publishers regard “the PDF” as the primary product (version of record) and most scientists read and copy PDFs directly from the publishers’ web sites; the technology is independent of whether this is Open or closed access. Most scientists read, print and store large numbers of PDFs locally to support their research.

PDF was designed for humans to read and print, not for semantic use. It is primarily "electronic paper" – all that can be guaranteed is coloured marks on "e-paper". It was originally proprietary and has only fairly recently become an ISO standard. Much of the existing technology is proprietary and undocumented. By default, therefore, a PDF only conveys information to a sighted human who understands the human semantics of the marks-on-paper.

Over 2 million scholarly publications are published each year, most only easily available in PDF. The scientific information in them is largely lost without an expert human reader, who often has to transcribe the information manually (taking huge time and effort). Some examples:

In a PDF these are essentially black dots on paper. We must develop methods to:

  • PDF2SVG: Identify the primitives (in this case characters, and symbols). This should be fairly easy but because the technical standard of STM publishing is universally very non-conformant to standards (i.e. “poor”) we have had to create a large number of arbitrary rules. This non-conformity is a major technical problem and would be largely removed by the use of UTF-8 and Unicode standards.
  • SVGPLUS (and below): Understand the words (e.g. that "F"-"I"-"g" and "E"-"x"-"c"-"e"-"s"-"s" are words). PDF has no concept of "word", "sentence", "paragraph", etc.
  • Detect that this is a Figure (e.g. by interpreting “Fig. “)
  • Separate the caption from the plot
  • Determine the axial information (labels, numbers and ticks) and interpret (or here guess) the units
  • Extract the coordinates of the points (black circles)
  • Extract the coordinates of the line

If the PDF is standards-compliant it is straightforward to create the SVG. We use the Open Source PDFBox from Apache to “draw” to a virtual graphics device. We intercept these graphics calls and extract information on:

  • Position and orientation. PDF objects have x,y coordinates and can be indefinitely grouped (including scaling). PDF resolves all of this into a document on a virtual A4 page (or whatever else is used). The objects also have style attributes (stroke and fill colours, stroke-widths, etc.). Most scientific authors use simple colours and clean lines, which makes the analysis easier.
  • Text (characters). Almost all text is individual characters, which can be in any order ("The" might be rendered in the order "e"-"h"-"T"). Words are created knowing the screen positions of their characters. In principle all scientific text (mathematical equations, chemical symbols, etc.) can be provided in the Unicode toolset: e.g. a reversible chemical reaction symbol

    is the Unicode point U+21CC (HTML entity &#x21cc;) and will render as such in all modern browsers.

  • Images. These are bitmaps (normally rectangular arrays of pixels) and can be transported as PNG, GIF, JPEG, TIFF, etc. There are cases (e.g. photographs of people or scientific objects) where bitmaps are unavoidable. However some publishers and authors encode semantic information as bitmaps, thereby destroying it. Here is an example:

    Notice how the lines are fuzzy (although the author drew them cleanly). It is MUCH harder to interpret such a diagram than if it had been encoded as characters and lines. Interpretation of bitmaps is highly domain-dependent and usually very difficult or impossible. Here is another (JPEG)

    Note the fuzziness, which is solely created by the JPEG (lossy) compression. Many OCR tools will fail on such poor quality material.

  • Path (graphics primitives). These are used for objects such as
    • graphical plots (x-y, scatterplots, bar charts)
    • chemical structures

      This scheme, if drawn with clean lines, is completely interpretable by our software as chemical objects

    • diagrams of apparatus
    • flowcharts and other diagrams expressing relationships

    Paths define only Move, Line, Curve. To detect a rectangle SVGPLUS has to interpret these commands (e.g. MLLLL).
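As an illustration of that interpretation step, a toy check (hypothetical code, not SVGPLUS's actual implementation) can decide whether the five points of an MLLLL sequence close into an axis-aligned rectangle:

```python
# Sketch: classify a path as an axis-aligned rectangle from its points.
# An MLLLL rectangle yields 5 points, with the last repeating the first.
def is_axis_aligned_rect(points):
    """points: list of (x, y) from one Move and four Line commands."""
    if len(points) != 5 or points[0] != points[-1]:
        return False                               # must be closed
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 != x1 and y0 != y1:                  # each edge horizontal or vertical
            return False
    return True

# A 10x5 rectangle drawn as M(0,0) L(10,0) L(10,5) L(0,5) L(0,0)
rect = [(0, 0), (10, 0), (10, 5), (0, 5), (0, 0)]
print(is_axis_aligned_rect(rect))        # True
print(is_axis_aligned_rect(rect[:-1]))   # False: path not closed
```

The real problem is harder: rectangles may be rotated, drawn as four separate paths, or closed with a Z command, so each variant needs its own rule.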

There are, unfortunately, a large number of errors and uncertainties. The most common is the use of non-standard, undocumented encodings for characters. These come from proprietary tools (such as font providers for TeX, etc.) and from contracted typesetters. In these cases we have to cascade down:

  • Guess the encoding (often Unicode-like)
  • Create a per-font mapping of names to Unicode. Thus "MathematicalPi-One" is a commonly used tool for math symbols: its "H11001" is drawn as a PLUS and we translate it to Unicode U+002B, but there is no public (or private) translation table (we've asked widely). So we have to do this manually by comparing glyphs (the printed symbols) to tables of Unicode glyphs. There are about 20 different "de facto" fonts and symbol sets in wide scientific use and we have to map them manually (maybe while watching boring cricket on TV). We have probably done about 60% of what is required.
  • Deconstruct the glyphs. Ultimately the PDF provides the graphical representation of a glyph on the screen, either as vectors or as a bitmap. We recently discovered a service (shapecatcher) which interprets up to 11,000 Unicode glyphs and is a great help. Murray Jensen has also written a glyph browser which cuts down the human time very considerably.
  • Apply heuristics. Sometimes authors or typesetters use the wrong glyph or kludge it visually. Here’s an example:

    Most readers would read this as "ten-to-the-minus-seven", but the characters are actually "1", "0", EM-DASH, "7". EM-DASH – which is used to separate clauses like this – is not a mathematical sign, so it's seriously WRONG to use it. We have to add heuristics (à la UNIX lint) to detect and possibly correct it. Here's worse. There's a perfectly good Unicode symbol for NOT-EQUALS (U+2260)

    Unfortunately some typesetters will superimpose an EQUALS SIGN (=) with a SLASH (/). This is barbaric, and hard and tedious to detect and resolve. The continued development of PDF2SVG and SVGPLUS will probably be largely hacks of this sort.
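A lint-style heuristic of the kind described might look like the following sketch (a hypothetical rule, not AMI2's actual code): rewrite an EM-DASH sandwiched between digits as a true MINUS SIGN (U+2212).

```python
import re

# Sketch: an EM-DASH (U+2014) directly between digits is almost certainly
# a typeset minus sign, as in the "ten-to-the-minus-seven" example above.
def fix_emdash_minus(text):
    return re.sub(r'(?<=\d)\u2014(?=\d)', '\u2212', text)

print(fix_emdash_minus("10\u20147"))   # "10−7", now with MINUS SIGN U+2212
print(fix_emdash_minus("word \u2014 word"))  # prose em-dash left untouched
```

A real implementation would need more context (superscript position, surrounding maths) before rewriting, since prose em-dashes adjacent to digits do occur.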

SVG and reconstruction to semantic documents SVGPLUS


SVGPLUS assumes a correct SVG input of Unicode characters, SVG paths, and SVG images (the latter it renders faithfully and leaves alone). The task is driven by a control file in a declarative command language expressed in XML. We have found this to be the best method of representing the control, while preserving flexibility. It has the advantage of being easily customisable by users and, because it is semantic, it can be searched or manipulated. A simple example:

<semanticDocument xmlns="http://www.xml-cml.org/schema/ami2">
  <documentIterator filename="org/xmlcml/svgplus/action/">
    <variable name="p.root" value="${d.outputDir}/whitespace_${p.page}" type="file"/>
    <whitespaceChunker depth="3"/>
    <boxDrawer xpath="//svg:g[@LEAF='3']" stroke="red" strokeWidth="1" fill="#yellow" opacity="0.2"/>
    <pageWriter filename="${p.root}_end.svg"/>
  </documentIterator>
</semanticDocument>
This document identifies the directory to use for the PDFs (“action”), iterates over each PDF it finds, creates (SVG) pages for each, processes each of those with a whitespaceChunker (v.i.) and draws boxes round the result and writes each page to file. (There are many more components in SVGPLUS for analysing figures, etc). A typical example is:


SVGPLUS has detected the whitespace-separated chunks and drawn boxes round the “chunks”. This is the start of the semantic document analysis. This follows a scheme:

  • Detect text chunks and detect the font sizes.
  • Sort into lines by Y coordinate and sort within lines by X coordinate. The following has 5 / 6 lines:



    Normal, superscript, normal, subscript (subscript), normal

  • Find the spaces (PDF often has no explicit space characters – the spaces have to be calculated by intercharacter distance). This is not standard and is affected by justification and kerning.
  • Interpret variable font-size as sub- and super-scripts.
  • Manage super-characters such as the SIGMA.
  • Join lines. In general one line can be joined to the next by adding a space. Hyphens are left as their interpretation depends on humans and culture. The output would thus be something like:

    the synthesis of blocks, stars, or other polymers of com-plex architecture. New materials that have the potential of revolutionizing a large part …

    This is the first place at which words appear.

  • Create paragraphs. This is through indentation heuristics and trailing characters (e.g. FULL STOP).
  • Create sections and subsections. This is normally through bold headings and additional whitespace. Example:

    Here the semantics are a section (History of RAFT) containing two paragraphs
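The line- and word-building steps in the scheme above can be sketched as follows (toy thresholds and a hypothetical helper, not the real SVGPLUS heuristics): bucket characters into lines by Y, sort within each line by X, and insert a space wherever the intercharacter gap exceeds a threshold.

```python
# Sketch: rebuild lines and words from individually positioned characters.
# PDF has no word/space concept, so spaces are inferred from X gaps.
def build_text(chars, space_gap=10.0, line_tol=1.0):
    """chars: list of (x, y, glyph) tuples. Returns a list of line strings."""
    lines = {}
    for x, y, g in chars:
        key = round(y / line_tol) * line_tol      # bucket by Y into lines
        lines.setdefault(key, []).append((x, g))
    out = []
    for y in sorted(lines):
        row = sorted(lines[y])                    # sort within the line by X
        text = row[0][1]
        for (x0, _), (x1, g) in zip(row, row[1:]):
            if x1 - x0 > space_gap:               # big gap => insert a space
                text += ' '
            text += g
        out.append(text)
    return out

# "The" rendered out of order (e-h-T), plus a second word further right
chars = [(12, 10, 'e'), (6, 10, 'h'), (0, 10, 'T'),
         (30, 10, 'c'), (36, 10, 'a'), (42, 10, 't')]
print(build_text(chars))   # ['The cat']
```

Real documents need per-font advance widths rather than a fixed `space_gap`, and kerning and justification shift the threshold, which is exactly why this step is heuristic.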


The PATH interpretation is equally complex and heuristic. In the example below:

The reversible reaction is made up of two ML paths (“lines”) and two filled curves (“arrowheads”). All this has to be heuristically determined. The arcs are simple CURVE-paths. (Note the blank squares are non-Unicode points)


In the axes of the plot

All the tick-marks are independent paths – SVGPLUS has to infer heuristically that it is an axis.
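One way such an inference might be sketched (hypothetical code, not the actual SVGPLUS detector): treat a run of short, parallel, roughly evenly spaced segments as a candidate tick row for an X axis.

```python
# Sketch: infer that a set of short vertical strokes forms an x-axis tick
# row - each stroke is vertical and the strokes are evenly spaced in X.
def looks_like_x_axis_ticks(segments, tol=0.5):
    """segments: list of ((x0, y0), (x1, y1)) short strokes."""
    if len(segments) < 3:
        return False
    if any(abs(s[0][0] - s[1][0]) > tol for s in segments):
        return False                              # every stroke must be vertical
    xs = sorted(s[0][0] for s in segments)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return max(gaps) - min(gaps) <= tol           # evenly spaced in X

ticks = [((x, 0.0), (x, -2.0)) for x in (0, 10, 20, 30, 40)]
print(looks_like_x_axis_ticks(ticks))   # True
```

A fuller detector would also require the strokes to share a baseline Y and look for a long horizontal path (the axis line) and nearby number labels before committing.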

In some diagrams there is significant text:

Here text and graphical primitives are mixed and have to be separated and analysed.


In summary, SVGPLUS consists of a large number of heuristics which will reconstruct a large proportion (but not all) of scientific articles into semantic documents. The semantics do and will include:

  • Overall sectioning (bibliographic metadata, introduction, discussion, experimental, references/citations)
  • Identification and extraction of discrete Tables, Figures, Schemes
  • Inline bibliographic references (e.g. superscripted)
  • Reconstruction of tables into column-based objects (where possible)
  • Reconstruction of figures into caption and graphics
  • Possible interpretation of certain common abstract scientific graphical objects (graphs, bar charts)
  • Identification of chemical formulae and equations
  • Identification of mathematical equations

There will be no scientific interpretation of these objects.


Domain specific scientific interpretation of semantic documents


This is being developed as a plugin-architecture for SVGPLUS. The intention is that a community develops pragmatics and heuristics for interpreting specific chunks of the document in a domain specific manner. We and our collaborators will develop plugins for translating documents into CML/RDF:

  • Chemical formulae and reaction schemes
  • Chemical synthetic procedures
  • Spectra (especially NMR and IR)
  • Crystallography
  • Graphical plots of properties (e.g. variation with temperature, pressure, field, molar mass, etc.)

More generally we expect our collaborators (e.g. Ross Mounce, Panton Fellow, paleophylogenetics at University of Bath UK) to develop:

  • Mathematical equations (into MathML).
  • Phylogenetic trees (into NEXML)
  • NA and protein sequences into standard formats
  • Dose-response curves
  • Box-plots


Fidelity of SVG rendering in PDF2SVG. This includes one of the very rare bugs we cannot solve:



Note that the equations are identical apart from the braces, which are mispositioned and too small. There is no indication in TextPosition as to where this scaling comes from.

In PDFReader the equation is correctly displayed (the text is very small so the screenshot is blurry. Nonetheless it’s possible to see that the brackets are correct)



The Carrot and the Stick?

Unraveling Motivation and Attention

by Randolph S. Marshall, Career Corner Editor

A commentary on the recent Brain and Behavior article, “Effects of Motivation on Reward and Attentional Networks: an fMRI Study”, by Ivanov et al.

How does the anticipation of a reward interact with cognitive demand? This is the basic question that was asked by K-23 awardee Iliyan Ivanov. In his article just published in Brain and Behavior, Ivanov and colleagues used BOLD fMRI to examine regional brain activation in a 3-pronged experiment that pitted the motivational system against the attentional system. Both the motivation of an anticipated reward and higher levels of attention are known to speed up cognitive reaction times behaviorally, but what is the influence of the motivational system on cognitive control as a task requires more cognitive muscle? Does reward anticipation enhance performance or interfere with it? What if there is not only promise of reward, but risk of monetary loss? These questions are important both for our understanding of systems biology, and for the implications for treatment of individuals with attention deficit/hyperactivity disorder, obsessive-compulsive disorder, and drug addiction, where attention and motivation may be altered.

In this study of 16 healthy adults, behavioral results were as anticipated: shorter reaction times were seen with reward anticipation, particularly with the easier, "congruent" task trials. The imaging results confirmed that attentional network regions (right ACC, right primary motor cortex, supplemental motor and somatosensory association cortices bilaterally, right middle frontal gyrus and right thalamus) activated more during the higher cognitive demands of the non-congruent trials, whereas key components of the motivational network (bilateral insula and ventral striatum) engaged with the unique "surprising non-reward" component of the task. Furthermore, the interaction effects showed that cognitive conflict elicited greater activation, but only in the absence of reward incentives – as if subjects worked harder to avoid possible loss. Conversely, reward anticipation decreased activity in the attentional networks, possibly due to improved information processing.

Surprisingly, the more difficult task components decreased activity in the striatum and the orbito-frontal cortex, suggesting that harder trials may have been experienced as less rewarding. These results were interpreted as showing that in the context of a difficult task one can maximize performance through both increasing efforts to obtain rewards on easier trials and committing more attentional effort to avoid punishment and losses during more difficult trials. The authors conclude that there is not a direct correlation between motivational incentives and improvement of performance, but that their interplay will highly depend on the context.

I interviewed Dr. Ivanov about his experiment, and asked him to talk about the process of beginning his career in clinical neuroscience. Dr. Ivanov is currently Assistant Professor in Child Psychiatry at Mt. Sinai Medical Center in New York. He completed a K-23/R02 grant, sponsored by NIDA/AACAP, in 2010, and is now completing his work on an R03 to study the effects of motivation and attention in more depth.

Marshall:  What was the most interesting finding for you in this study?

Ivanov: The interaction effect, which suggested that incentives may boost information processing but can also be a distractor and possibly hamper performance on cognitive tasks. This is interesting because new studies suggest that if you have strong stimuli (e.g. a drug like methylphenidate) this interaction effect may be reversed as we hope to show in a follow up study.

Marshall: Was clinical relevance an important motivator for you in pursuing this project, or were you more interested in the systems biology aspect?

Ivanov: I would say both. As a clinician I was interested in the main idea which was whether we could tap into risk factors that would help us understand the motivational and attentional systems. I wanted to know if there is a biological signature or hallmark for what treatment might be helpful in children at risk for later substance abuse.

Marshall: How important was mentorship in the design and implementation of this work?

Ivanov: Crucial, especially with neuroimaging.  The amount of time and the amount of knowledge needed was very high.  I had both inside and outside mentors. I studied with the Director of Child Psychiatry at Mt. Sinai, Jeffrey Newcorn, and with outside mentors also, which turned out to be a very good thing. I worked with Tom Crowley, an adult psychiatrist at Denver, and Edith London from UCLA, who was a mentor for my K-23.  I also went to the Wellcome Trust Centre for Neuroimaging in London a couple of times to work with Karl Friston. Through this process what you find is that you accumulate a group of people around the country or the world who you can then count on later for advice and support.

Marshall: What was the hardest part about getting this project done?

Ivanov: I didn’t know much about neuroimaging when I started. I was naïve about the time needed to complete a neuroimaging study in young children. It’s not like clinical work, in which we get used to working quickly. Getting used to working in that scientific environment is different. Moving into human research is also very demanding, particularly with youths. You have to work with kids and family through the whole process. Children have their natural curiosity, but entering the fMRI scanner is not an everyday experience and they can be fearful – having a skilled research team is crucial.

Marshall: What is the next hypothesis to test? Is it a direct follow up of this project or will you work on a parallel project?

Ivanov: We may be able to set up a treatment trial. We want to ask: do we see clinical subgroups with particular biological signatures that might optimize our treatments for high-risk groups?

Marshall: What advice would you give a young investigator looking to get a first K-award or similar grant funded?

Ivanov: Get a good mentor. A good mentor will help flesh out your ideas.  Also, you have to find an area you are really interested in and feel really passionate about.  And when you start thinking about the process, don’t have the goal right away of producing the paper that will turn science around.  Concentrate on learning, increasing your background knowledge, and developing your network. The best outcome for the K is to develop the confidence and skills that will let you succeed in the future.

Going to the ASCB Annual Meeting? PLOS would like to meet you!

MA104 cells labelled with actin (green) and DNA (blue). Image credit: PLoS ONE 7(10): e47612. doi:10.1371/journal.pone.0047612

Are you attending the upcoming Annual meeting of the American Society for Cell Biology?  Then we want to meet you in person!  PLOS ONE has published thousands of papers in the field of Cell Biology, so we know there must be a lot of PLOS ONE authors out there.  Whether you are an editor, reviewer, author or prospective author, we hope to see you! For more information about where we’ll be and when, please read on.


An evening with the PLOS Editorial Boards:

PLOS is hosting a reception for all Editorial Board members for an evening of food, drink and discussion.  It will be a great opportunity to connect with your fellow Editors, and a few staff Editors will also be on hand.  The highlight of the evening will be speakers Emma Ganley and Jason Swedlow, focusing on the challenges and importance of sharing data in the world of cell biology.

  • Emma Ganley is a Senior Editor on PLOS Biology, with experience in data availability and navigation in online publication. 
  • Jason Swedlow is co-founder of Open Microscopy Environment (OME), and directs his own research group at the University of Dundee.

When: 6 to 8 pm, Tuesday, December 18, 2012

Where: The Box – 1069 Howard Street (between 6th & 7th), San Francisco, CA 94103

Be sure to RSVP, because space is limited: http://scibar.eventbrite.com

Get in touch if you would like further information or have any questions!


Calling all PLOS ONE authors to the PLOS booth in the Exhibition Hall!

Have you published with PLOS ONE?  Come by booth #1322!  We would love to show you your article-level metrics in exchange for a t-shirt!  Find out who has cited your work, how many people are using it in their Mendeley libraries, and the number of times the PDF has been downloaded (among many other things).  PLOS ONE staff will be on hand to discuss the benefits of publishing with PLOS, and to answer all of your questions, both specific and general.

We look forward to meeting you!



The Evolution of Author Guidelines

Congratulations are due to PeerJ for succeeding in bringing into focus an essential publisher service that has been little publicised in the past.

The journal opened for submissions on December 3rd, and many tweets and blogs have been spawned by the following passage in the Instructions for Authors:

We want authors spending their time doing science, not formatting.

We include reference formatting as a guide to make it easier for editors, reviewers, and PrePrint readers, but will not strictly enforce the specific formatting rules as long as the full citation is clear.

Styles will be normalized by us if your manuscript is accepted.

Of course, it would be ridiculous to assert that every manuscript ever submitted up to this point had perfectly formatted references in journal style; in fact it is relatively rare to make no edits at all on a reference list. Journal Production Editors have been converting reference formats since journal publishing began; laboriously at first, but the digital revolution has certainly helped in recent years, with more automated processes and specialist typesetters taking on much of the tedium.

As the PeerJ guidelines correctly state, a requirement for a particular style can help the editorial and review process, and I would go further: it can impose some rigour on the creation of the reference list, helping to ensure that all critical elements are present. However, publishers have for some time barely batted an eye if an article arrives in the incorrect format, as long as all of the important content is present.

At Wiley, we took this a stage further on the launch of our Wiley Open Access program back in May 2011. We made a point of paring the formatting requirements down to a bare minimum for the entire article. The Author Guidelines state:

We place very few restrictions on the way in which you prepare your article, and it is not necessary to try to replicate the layout of the journal in your submission. We ask only that you consider your reviewers by supplying your manuscript in a clear, generic and readable layout, and ensure that all relevant sections are included. Our production process will take care of all aspects of formatting and style.

And with respect to the references:

As with the main body of text, the completeness and content of your reference list is more important than the format chosen. A clear and consistent, generic style will assist the accuracy of our production processes and produce the highest quality published work, but it is not necessary to try to replicate the journal’s own style, which is applied during the production process. If you use bibliographic software to generate your reference list, select a standard output style, and check that it produces full and comprehensive reference listings…The final journal output will use the ‘Harvard’ style of reference citation. If your manuscript has already been prepared using the ‘Vancouver’ system, we are quite happy to receive it in this form. We will perform the conversion from one system to the other during the production process.

There is no doubt that this service, which has been quietly in operation in most journals for some time, has now been thrown much more into the limelight, and this can only be positive because it showcases one of the valuable services that professional publishing can provide.

Reading through the blogs, I see that the more overt adoption of this service as a point of policy is already spreading to more journals, as it has to eLife, and Elsevier’s Free Radical Biology & Medicine.

This can only be a good thing.

Will Wilcox, Journals Content Management Director for Life Sciences

Upgrade to SHERPA/JULIET Released

The Centre for Research Communications is pleased to announce the release of an upgrade to its SHERPA/JULIET service, the go-to database of research funders’ open access policies – http://www.sherpa.ac.uk/juliet/.

SHERPA/JULIET has now grown to cover 110 funders.

Growth of the SHERPA/JULIET database to 2012-12-12

The increase in size has necessitated an upgrade to the JULIET website and the introduction of several new features, including:

  • Redesign of the look and feel of the website to match JULIET’s partner service RoMEO – the database of publishers’ copyright and open access policies.
  • The introduction of a search interface, in addition to the existing “browse” list. This allows you to search by funder’s name and country. In “advanced mode”, you can also search according to the funders’ policy requirements for open access publications, and the archiving of publications and data.
  • New statistical charts. While the current focus of JULIET is on the United Kingdom, we are extending coverage to the rest of the world.
  • A prototype Application Programmers’ Interface (API).
  • Lists of new additions and news stories.
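Since the API is only a prototype, its request and response formats are not documented here. Purely as a hypothetical sketch of how a client might consume such a service, the snippet below parses an invented XML record for a funder's open access policy; every element name and value is an assumption for illustration, not the real JULIET schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical payload -- the element names and values below are invented
# for illustration; the real prototype API's schema may differ entirely.
sample = """<juliet>
  <funder>
    <name>Wellcome Trust</name>
    <country>United Kingdom</country>
    <policy>
      <publication>required</publication>
      <archiving>required</archiving>
      <data-archiving>encouraged</data-archiving>
    </policy>
  </funder>
</juliet>"""

def parse_funders(xml_text):
    """Return a list of dicts, one per <funder> record."""
    root = ET.fromstring(xml_text)
    funders = []
    for f in root.findall("funder"):
        policy = f.find("policy")
        funders.append({
            "name": f.findtext("name"),
            "country": f.findtext("country"),
            "requirements": {el.tag: el.text for el in policy},
        })
    return funders

for funder in parse_funders(sample):
    print(funder["name"], "-", funder["requirements"]["publication"])
```

A real client would fetch the XML over HTTP from the service and could then filter funders by the same policy fields the advanced search exposes.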

JULIET is currently funded by JISC via UK RepositoryNet+ (www.repositorynet.ac.uk/).


Temporary Farewell to AU and thanks to some of its mammals

I go back to the UK today and am finishing up in the Prahran (Melbourne) apartment where I’ve been for 2.5 months. Prahran is a great place to be – an easy tram ride to the CBD (centre) of Melbourne and less than an hour’s commute to CSIRO (including walking) once I had figured out the 5 and 64 trams and the (somewhat random) semi-express nature of the Cranbourne trains. The trams are generally great, apart from the one that broke down in Swanston Street (which gums up everything) and the rather unpredictable late trains. There is an excellent shuttle from Huntingdale station to Monash University.

Too many human mammals to thank but they include:

  • Nico Adams – unlimited praise and appreciation for fixing this up. We are planning for me to be back next year, probably late January.
  • Murray Jensen (CSIRO) for his collaboration on AMI2 – Murray has a huge range of expertise and his knowledge of fonts was both unexpected and absolutely critical.
  • Everyone involved at CSIRO.
  • Dave Flanders and the Flanders-irregulars – a mixture of incipient OKF members and hackers, meeting in Melbourne cafés where the wifi and coffee are great. (This is a fantastic aspect of the Melbourne scene: you can get café and free wifi at the State Library of Victoria, the National Gallery, Fed Square, and RMIT in Swanston Street, where Nico and I worked on reports, and will again next year.)
  • Connie and colleagues for the great time in Perth.
  • Mat Todd and colleagues for Sydney
  • The Astor cinema in Prahran/Chapel Street. It’s a 1930s art deco cinema showing a mixture of classic films (Bogart, Bergman, Crawford, Stewart…) and new releases. TWO films per sitting, and ice creams out of this world.
  • Prahran and its cafés. I am off to have brunch shortly. (Wifi and a great atmosphere.)
  • The people we met on our travels down the Great Ocean Road and elsewhere – Wombat Cottage (with Wombats), Birdwatchers, Reserves (e.g. Tower Hill)…
  • And others that I’ve failed to add – sorry.

Lots of animals and birds – we’ve probably ticked 50+ AU birds. We’re told Werribee sewage works is the place we must visit next time. The most interesting mammal was Thylarctos plummetus. This can be dangerous to humans (what isn’t dangerous in AU?) but there are no recorded fatalities. Here’s the best picture we could get:

Our guide wouldn’t let us get any closer because of the potential danger. It’s clearly not a Koala and it looks ready to fall out of the tree.

The animals are sad and excited to be going to UK. Here’s AMI and AMI with some classic Australian tucker which I’ve had to leave behind:

We didn’t manage to make any #animalgarden photocomics – too much to do hacking grotty PDFs ☹

See you soon…



Freedom for scholarship in the internet age (doctoral dissertation)

On November 21, 2012, I successfully defended my doctoral thesis, Freedom for scholarship in the internet age. The post-defence draft is now (temporarily) available in the SFU Library’s thesis intake system, at https://theses.lib.sfu.ca/thesis/etd7530 
(Note: the thesis is not yet appearing online; I am looking into this.)


Freedom for Scholarship in the Internet Age examines distortion in the current scholarly communication system and alternatives, focusing on the potential of open access. High profits for a select few scholarly journal publishers in the area of science, technology, and medicine contrast with other portions of the scholarly publishing system such as university presses that are struggling to survive. Two major societal trends, commercialization and irrational rationalization, are explored as factors in the development of distortion in the system, as are potential alternatives, including the commons, state subsidy, DIY publishing, and publishing cooperatives. Original research presented or summarized includes the quarterly series The Dramatic Growth of Open Access, an empirical study of economic possibilities for transition to open access, interviews with scholarly monograph publishers, and an investigation into the potential for transition to open access in the field of communication. The similarities and differences between open access and various Creative Commons licenses are mapped and analyzed. The conclusion features a set of recommendations for open access. Carefully transitioning the primary economic support for scholarly publishing (academic library budgets) from subscriptions to open access is seen as central to a successful transition. Open access changes the form of the commodity with respect to commercial publication, from the scholarly work per se to the publishing service; a major improvement that overcomes the trend towards enclosure of information, but not necessarily the dominance of the commercial sector. A multi-faceted approach is recommended as optimal to overcome potential vulnerabilities of any single approach to open access. 
The open access movement is advised to be aware of the less understood societal trend of irrational (or instrumental) rationality, a trend that open access initiatives are just as vulnerable to as subscriptions or purchase-based systems. The remedy for irrational rationality recommended is a systemic or holistic approach. It is recommended that open access be considered part of a potential for broader societal transformation, based on the Internet’s capacity to function as an enabler of many to many communication that could form the basis of either a strong democracy or a decentralized socialism.

After the library audit, the thesis will be moved into the SFU Library Institutional Repository, SUMMIT, sometime in 2013.

Beall’s List of Predatory Publishers, or Beall’s Predatory Business at the Expense of Publishers

We have received this email:

To whom it may concern

I was surprised when one of our editors told me that the name of Ashdin Publishing is found in the list of “Beall’s List: Potential, possible, or probable predatory scholarly open-access publishers” (http://scholarlyoa.com/publishers/) and I was surprised because of the following reasons:

  1. The author did not just mention the criteria for determining predatory open-access publishers, but he insisted on mentioning the full names and details of the publishers as well.
  2. Some of these criteria, for determining predatory open-access publishers, can be applied on a huge number of publishers (include some of the large and famous ones), but he did not mention any of them.
  3. Some of the publishers names are removed from this list without saying the reasons for this removal.
After I received the e-mail below, I am not any more surprised. Now, I am sure that the author, irrespective the good reasons he may has for preparing this list, wants to blackmail small publishers to pay him. 
I invite all of you to read what people say commenting on his article (http://www.nature.com/news/predatory-publishers-are-corrupting-open-access-1.11385):

Dr Gillian Dooley (Special Collections Librarian at Flinders University):

Jeffrey Beall’s list is not accurate to believe. There are a lot of personal biases of Jeffrey Beall. Hindawi still uses heavy spam emailing. Versita Open still uses heavy spam emailing. But these two publishers have been removed in Jeffrey Beall’s list recently. There is no reason given by Jeffrey Beall why they were removed. Jeffrey Beall is naive in his analysis. I think some other reliable blog should be created to discuss more fruitfully these issues. His blog has become useless.

Mark Robinson (Acting Editor, Stanford Magazine):

It is a real shame that Jeffrey Beall using Nature.com’s blog to promote his predatory work. Jeffrey Beall just simply confusing us to promote his academic terrorism. His list is fully questionable. His surveying method is not scientific. If he is a real scientist then he must do everything in standard way without any dispute. He wanted to be famous but he does not have the right to destroy any company name or brand without proper allegation. If we support Jeffrey Beall’s work then we are also a part of his criminal activity. Please avoid Jeffrey Beall’s fraudulent and criminal activity.

Now a days anyone can open a blog and start doing things like Jeffrey Beall which is harmful for science and open access journals. Nature should also be very alert from Jeffrey Beall who is now using Nature’s reputation to broadcast his bribery and unethical business model.

Now, I invite all of you in order to take all precautions and not being misled by this blackmailer.

Ashry A. Aly
Ashdin Publishing

——– Original Message ——–

Subject: Open Access Publishing
Date: Mon, 03 Dec 2012 17:39:18 +0000
From: Jeffrey Beall 
To: info@ashdin.com
I maintain list of predatory open access publishers in my blog

Your publisher name is also included in 2012 edition of my predatory open
access publishers list. My recent article in Nature journal can be read


I can consider re-evaluating your journals for 2013 edition of my list. It
takes a lot my time and resources. The fee for re-evaluation of your
publisher is USD 5000. If your publisher name is not in my list, it will
increase trustworthiness to your journals and it will draw more article
submissions. In case you like re-evaluation for your journals, you can
contact me.

Jeffrey Beall

PLOS ONE Launches a New Peer Review Form

Today PLOS ONE launches a new peer review form. While this might not sound like much of an announcement, the fact that our reviewer board currently contains over 400,000 scientists, and grows by the hour, means that an awful lot of people will see this form over the coming months!

The purpose of the form is to better direct and streamline the review process by focusing on our specific publication criteria. The job of the PLOS ONE reviewer is not to decide whether the study represents a significant advance to the field, or whether additional experiments need to be performed to increase the impact, or whether it is suitable for a broad interest journal. The reviewer must simply ascertain whether the study has been performed correctly, and whether the data support the conclusions. So that’s what we ask reviewers in the form. The form also addresses some of our other criteria, like whether the manuscript adheres to data sharing standards and whether the manuscript is written in intelligible standard English. By limiting the focus of the reviewers in this way, we hope to reduce the burden that many reviewers feel, and (hopefully) speed up the time it takes to review.

We know that academics spend an enormous amount of time reviewing papers. But while it increases the workload of already busy people, the majority would agree that it is a vital part of the scientific process, and a necessary part of the job. The hardest part of a traditional review is making the recommendation on whether the study represents a significant enough advance to meet the journal’s criteria for acceptance, and this is the thing that most holds up the evaluation of manuscripts. Remove that part, and review should be quicker, less cumbersome and easier – but, and here’s the kicker, will have no discernible effect on the literature as a whole. Papers that are ‘right’ will always be published somewhere, but it may take a year to find that place due to the endless rejection cycle of most journals. So the innovation of PLOS ONE was to remove this step, and it was immensely successful. Now all we need to do is remind people of this fact when they submit their review. The form aims to do just that, and we believe it takes us a step closer to the ideal of publishing ‘right’ studies with minimal fuss and maximal efficiency.

We haven’t created too many check boxes, drop-down menus or word limits. There are just four required questions about whether the submission meets our criteria, and plenty of flexibility to let reviewers include specific comments as needed. You can read more about the specifics of the form here, and please contact plosone@plos.org with any questions or feedback.

Ecology and Evolution at the British Ecological Society Annual Meeting

To celebrate the new partnership with the British Ecological Society, Ecology and Evolution will be sponsoring the Welcome Mixer quiz at this year’s British Ecological Society Annual Meeting.

This year’s BES Annual Meeting is taking place between 17th and 20th December and with more abstracts submitted and delegates registered than any year over the past decade, this meeting will be one of the biggest and best for some time! There will be a number of networking opportunities throughout the Meeting, starting with the Welcome Mixer on Monday 17th, 19:30-21:30, at the spectacular Birmingham Museum and Art Gallery – photos to follow!

BES President Professor Georgina Mace will be introducing the Annual Meeting at this Welcome Mixer, and announcing that the BES journals have joined other high-impact titles in offering authors a rapid manuscript transferral system which maintains the highest standards of peer review while increasing the efficiency of the process. Both Ecology and Evolution Editors-in-Chief, Andrew Beckerman and Allen Moore, and a number of other BES Journal Editors will be attending both the Welcome Mixer, and the Annual Meeting.

Physiological Societies Partner with Wiley on New Open Access Journal

The American Physiological Society and The Physiological Society Partner with Wiley on New Open Access Journal. Susan Wray, Liverpool, UK, Named Editor-in-Chief and Thomas Kleyman, Pittsburgh, USA, Named Deputy Editor-in-Chief.

The Physiological Society (TPS) and The American Physiological Society (APS) announced today their partnership to publish the new open access peer-reviewed journal, Physiological Reports, which will launch early next year.

Physiological Reports will offer peer-reviewed research across all areas of basic, translational and clinical physiology and allied disciplines for physiologists, neuroscientists, biophysicists and clinicians.  The journal will serve as the first fully open access online-only journal for The American Physiological Society and The Physiological Society, and joins their combined prestigious portfolio of peer-reviewed print and online subscription-based scientific journals, such as the American Journal of Physiology, The Journal of Physiology, and Experimental Physiology.

Susan Wray Named Editor-in-Chief and Thomas Kleyman Named Deputy Editor-in-Chief 

Susan Wray, Ph.D., is a Professor and former Head of the Department of Physiology at the University of Liverpool, UK, and a Fellow of the Academy of Medical Sciences. Her research focuses on the physiology of smooth muscle and how it contracts. She has served in various roles on the Editorial Boards of The Journal of Physiology and Experimental Physiology.

Thomas Kleyman, M.D., is Professor and Chief of the Renal-Electrolyte Division at the University of Pittsburgh, USA. His research has been devoted to renal physiology. Dr. Kleyman has served as Editor-in-Chief of the American Journal of Physiology – Renal Physiology for the last five years.

Philip Wright, Chief Executive of The Physiological Society said, “This represents a landmark for The Physiological Society launching its first new journal in over 100 years and we are delighted to be doing this in partnership with The American Physiological Society. Most importantly this is an OA journal that will be run by two of the world’s leading physiological societies for its members and for physiologists around the world.”

Martin Frank, Executive Director of The American Physiological Society, said, “The time was right for both The American Physiological Society and The Physiological Society to create an open access journal to meet the evolving needs of our joint constituencies. It is very gratifying to know that both Societies are able to join together to create this journal, which will serve the needs of the international community of physiologists.”

Jackie Jones, Publisher for Physiological Reports, Wiley, added, “We are very enthusiastic about this new venture with The Physiological Society and The American Physiological Society. We are looking forward to extending our existing relationships with both societies by developing Physiological Reports into a successful, high quality journal.”

The journal will publish articles under the Creative Commons Attribution (CC-BY) License enabling authors to be fully compliant with open access requirements of funding organizations. All articles will be published as fully open access on Wiley Online Library and deposited in PubMed Central immediately upon publication. 

A publication fee will be payable by authors on acceptance of their articles. The first 100 papers accepted for publication will be published free of charge. Authors affiliated with, or funded by, an organization that has a Wiley Open Access Account can publish without directly paying any publication charges.

My/Our talk to CSIRO Publishing. How should we communicate science? “This article uses recyclable scientific information”.

I’ve been working with Nico Adams at CSIRO (Melbourne/Clayton, AU) for nearly 3 months, supported by a Fellowship. CSIRO (http://www.csiro.au/) is a government institution similar in many ways to a National Laboratory. It does research (public and private) and publishes it. But it is also a publisher in its own right – everything from chemistry, to gliding mammals, to how to build your dream home. Nico and I have struck up a rapport with people in CSIRO Publishing (http://www.publish.csiro.au/) and today – my last full day in AU – we are going to visit, present some of what we have done, and more generally have a discussion where we learn about what CSIRO Publishing does.

CSIRO publishes a range of journals and we’ll be concentrating on that, though we’ll also be interested in reports, books, etc. We’ve had the opportunity to work with public and non-public content and to use that as a guide to our technology development (all the software I write is, of course, Open Source). Among the questions I’ll want to raise (not specifically CSIROPub) are:

  • Is the conventional journal type-setting process still needed? I will argue NO – it costs money and makes information worse. arXiv has totally acceptable typography in Word or LaTeX, and this is better than most journals for content-mining, etc.
  • How should data be published? I shall take small-molecule crystal structures as an example. At present CSIRO sends crystal structures to the CCDC, where they are not openly accessible. I’ll argue they should be part of the primary scientific record.

Nico will be talking about semantics – what it is and how it can be used. I think he’ll hope to show the machine extraction of content from Aust. J. Chem.

I’ll probably play down the political aspect in my formal presentation. The main issue now is how we recreate a market where scientific communication (currently broken) can be separated from the awarding of scientific glory (reputation). I’ll concentrate on the communication.

I have a simple, practical, understandable, IMMEDIATE proposal addressing the document side of STM (this doesn’t, of course, address the issues of data semantics or whatever).

  • The current primary documentary version of the scientific record should not be PDF but a Word, LaTeX, HTML, or XML (e.g. NLM-DTD) document.
  • All documents should use UTF-8 and Unicode.

There are zillions of Open tools that adhere to UTF-8 and Unicode.
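As a small illustration of why Unicode compliance matters: text extracted from typeset PDFs often contains presentation forms such as the “fi” ligature (U+FB01) instead of the letters “f” + “i”, which silently breaks search and content-mining. This is a minimal sketch (not part of AMI2) showing how a Unicode-aware tool can fold such forms back to ordinary letters:

```python
import unicodedata

# Text as it often comes out of a PDF: single ligature glyphs U+FB01 ("fi")
# and U+FB02 ("fl") rather than pairs of ordinary letters.
extracted = "scienti\ufb01c work\ufb02ow"

# NFKC compatibility normalization maps presentation forms back to plain
# letters, so downstream tools (search, mining) see "scientific workflow".
clean = unicodedata.normalize("NFKC", extracted)

print(clean)                                             # scientific workflow
print("scientific" in extracted, "scientific" in clean)  # False True
```

The same one-line normalization catches many other compatibility characters (Roman-numeral glyphs, fullwidth forms), which is one concrete payoff of insisting on UTF-8 and Unicode throughout the toolchain.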

Where PDFs are used, they should adhere to current information standards.

A graduate thesis is a BETTER document than the output of almost any publisher I have surveyed. STM publishing destroys information quality. All documents I have looked at on arXiv are BETTER than the output of STM publishing.

So I shall make the following proposals:

  • CSIRO publishing should publish in a standards-compliant manner.
  • CSIRO should make supplemental data Openly available (we’ll take crystallography as the touchstone).

The average cost to the public for the publication of a scientific paper is around 3000 USD. The information quality is a disgrace. Some of that money can be saved by doing it better. It’s similar to recycling. It makes sense to re-use your plastic bags, toilet paper, etc. (Yes, Healesville animal sanctuary promotes green bum-wiping to save the environment (technically recycled paper)).

Let’s have a sticker:

“This journal promotes recycled scientific information”

I’ll be presenting the work that Murray Jensen and I have been doing on AMI2. MANY thanks to Murray – he has been given an “AMI” in small acknowledgement.

Murray’s AMI in typical Melbourne bush.

AMI2 progresses steadily. It’s taken much longer than I thought, primarily because STM publication quality is AWFUL. It’s now at a stage where we can almost certainly make an STM publication considerably better. However, Murray and I have hacked the worst of it. AMI2-PDF2SVG turns PDF, and AWFUL-PDF, into good Unicode-compliant SVG. I’m concentrating on AMI2-SVGPLUS, which turns SVG into meaningful documents. Nearly there. Again, the absurd process of creating double-column justified PDF (which no scientist would willingly pay for) seriously destroys information, and SVGPLUS has to recover it. Then the final, exciting part will create science from the document.
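The flavour of the SVGPLUS step can be sketched in a greatly simplified, hypothetical form (the real AMI2 code is far more sophisticated, and the SVG content and coordinates below are invented): given SVG text runs positioned by x/y, recover reading-order lines by grouping runs with the same y coordinate and sorting each line by x.

```python
import xml.etree.ElementTree as ET
from itertools import groupby

SVG_NS = "{http://www.w3.org/2000/svg}"

# Toy SVG of the kind a PDF-to-SVG converter might emit: one <text>
# element per character run, positioned absolutely. Invented for illustration.
svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <text x="72" y="100">The quick</text>
  <text x="130" y="100">brown fox</text>
  <text x="72" y="120">jumps over</text>
</svg>"""

def recover_lines(svg_text):
    """Group character runs by y coordinate, sort by x, and join the text."""
    root = ET.fromstring(svg_text)
    runs = [(float(t.get("y")), float(t.get("x")), t.text)
            for t in root.iter(SVG_NS + "text")]
    runs.sort()  # sorts by y first, then x
    return [" ".join(text for _, _, text in group)
            for _, group in groupby(runs, key=lambda r: r[0])]

print(recover_lines(svg))  # ['The quick brown fox', 'jumps over']
```

A real implementation has to tolerate small y jitter, detect word breaks from inter-run gaps, and untangle double-column layouts, which is exactly the information the typesetting process throws away.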

I’ll hope to present some today.



MicrobiologyOpen – Issue 1.4 is now live!

The latest issue of MicrobiologyOpen is now live. All 14 excellent articles are fully open access: free to read, download and share.

Below are two articles highlighted by the Editor-in-Chief, Pierre Cornelis:

Assessment of the relevance of the antibiotic 2-amino-3-(oxirane-2,3-dicarboxamido)-propanoyl-valine from Pantoea agglomerans biological control strains against bacterial plant pathogens by Ulrike F. Sammer, Katharina Reiher, Dieter Spiteller, Annette Wensing and Beate Völksch.
Summary: The epiphyte Pantoea agglomerans 48b/90 (Pa48b) is a promising biocontrol strain against economically important bacterial pathogens such as Erwinia amylovora. Strain Pa48b produces the broad-spectrum antibiotic 2-amino-3-(oxirane-2,3-dicarboxamido)-propanoyl-valine (APV) in a temperature-dependent manner. An APV-negative mutant still suppressed the E. amylovora population and fire blight disease symptoms in apple blossom experiments under greenhouse conditions, but was inferior to the Pa48b wild-type, indicating the influence of APV in the antagonism. In plant experiments with the soybean pathogen Pseudomonas syringae pv. glycinea, both Pa48b and the APV-negative mutant successfully suppressed the pathogen. Our results demonstrate that the P. agglomerans strain Pa48b is an efficient biocontrol organism against plant pathogens, and we prove its ability for fast colonization of plant surfaces over a wide temperature range.

A novel regulator RcdA of the csgD gene encoding the master regulator of biofilm formation in Escherichia coli by Tomohiro Shimada, Yasunori Katayama, Shuichi Kawakita, Hiroshi Ogasawara, Masahiro Nakano, Kaneyoshi Yamamoto and Akira Ishihama.
Summary: The FixJ/LuxR family transcription factor CsgD is a master regulator of biofilm formation in Escherichia coli. Previously, we identified more than 10 transcription factors that participate in regulation of the csgD promoter. After genomic SELEX screening of regulation targets, an uncharacterized TetR-type transcription factor YbjK was found to be involved in regulation of the csgD promoter. In addition, a number of stress-response genes were found to be under the direct control of YbjK. Taken together, we propose to rename it to RcdA (regulator of csgD). One unique feature of RcdA is its mode of DNA binding. Gel shift, DNase-I footprinting, and atomic force microscopic (AFM) analyses indicated that RcdA is a DNA-binding protein with a high level of cooperativity, with which it covers the entire surface of probe DNA through protein–protein interaction and moreover it induces the formation of aggregates of DNA–RcdA complexes.

Read the other articles in this issue >

Submit your paper to MicrobiologyOpen here >

To find out when other issues publish sign up for e-toc alerts here >