New Open Access Fund at SFU Library

Excerpt:

At its January 2010 meeting, the Senate Library Committee adopted sweeping recommendations that will make SFU one of only three Canadian universities to embrace Open Access (OA) publishing. “We’re going to put our money where our mouth is,” says Gwen Bird. OA journals are scholarly peer-reviewed journals freely available on the web without subscription fees, but they are often supported through Article Processing Charges (APCs) levied on authors. Fees range from a few hundred to several thousand dollars per accepted paper. Prominent examples are BioMed Central, Public Library of Science, and Hindawi.

(Thanks to Gwen Bird).

Note also that there is a link to the full SFU OA Strategy document at the bottom of this page.

Kudos to Gwen Bird & SFU Library!

Global Knowledge Exchange Network launched

The Global Knowledge Exchange Network has just launched – here is a description, from the About page:

The Global Knowledge Exchange Network (GKEN) is a community for scholars and practitioners to share and explore new ideas and emerging trends related to scholarly research or everyday practice. More specifically, the community is devoted to understanding the changing role of information – its creation, management, dissemination and use – in scholarly research, higher education and business practice.

The project is sponsored jointly by the Harvard Business School Knowledge and Library Services and the Copenhagen Business School Library. The GKEN Founding Team includes Mary Lee Kennedy and Gosia Stergios (Harvard Business School), and René Steffensen and Leif Hansen (Copenhagen Business School).

Tech4Society, Day 3

I’ll start my final post on the Tech4Society conference by giving thanks to the Ashoka folks for getting me here to be a part of this conference. Most of the time, even in the developing world, I’m surrounded by digital natives, or people who have immigrated to the digital nation. It’s an enveloping culture, one that can skew your perception of the world into one where everyone worries about things like copyrights and licenses, and whether data should be licensed or in the public domain.

There’s a big world of entrepreneurs out there just hacking in the real world. First life, if you will. High touch, not high tech.

Being enveloped in their world for a few days gave me a lot of new perspectives on the open access and open educational resources movements. As always with this blog, my intention to write may exceed my delivery of text, but I’m going to try to chew through the perspectives. Getting off the road in a few weeks is going to help.

But I now get at a deep level the way that obsessive cultures of information control in the scholarly and educational literature represent a high tax, inbound and outbound, on the entrepreneur, whether social or regular. If you don’t know the canon, you’re doomed to repeat it. And we don’t have the time, the money, or the carbon to repeat experiments we know won’t work. We can’t afford to let good ideas go un-amplified, because we need tens of thousands of good ideas.

At my panel today on scale, we focused mainly on why scale is hard – the problems of scale. The CC experience – going from 2 people in a basement at Stanford to 50 countries in 6 years – is an example of what I called “catastrophic success”. It’s a nice way to think of what I also like to call the Jaws moment, after the scene in the 1975 film where, having hoped to find a shark to catch, they find one muuuuuch bigger than they expected. The relevant quote is “we’re gonna need a bigger boat” – and that is what happens sometimes at internet scale. Entrepreneurs, especially social entrepreneurs, need to know why they want to scale, what scale means to them, and how to measure success. Because if cash isn’t the only metric, the metrics you choose will wind up defining your success at scale.

There was a great question about scaling passion. I am going to try and address that in another post. I’m not quite in a mental state to get that post out yet, though.

It wasn’t just the social entrepreneurs I learned from, but also the CC community. Gautam John challenged me, eloquently and at length, about the way that Creative Commons engages with its community. I went into the argument convinced of my position, and left much less so. That’s as good as arguments get for me.

The Ashoka and Lemelson foundations are doing great work supporting inventors around the world (though I would have liked to see some Eastern bloc inventors; there was a curious lack of Slavic accents, and I wonder why). It was an honor to crash their party.


Tech4Society, Day 2

Getting ready to head up to Tech4Society’s final day. I’m on a panel called “The Tipping Point,” about how to scale social entrepreneurial success beyond a local region or state. My instinct is to say “pack your suitcase and start traveling,” but that’s not very helpful, even if it’s how I have been approaching the problem.

Yesterday I wasn’t on a panel, so it was a good moment to do some listening. I sat in on a few panels, but was most moved by the trends-in-Africa session. In the other trends panels, the trends were things like “open source” – positive trends. In Africa it was all about how difficult the governance problems are, and how an innovator or social entrepreneur is looked on with at best skepticism and at worst outright hostility, by both local society and the government.

It was still amazing to hear the breadth of ingenuity at work. I heard about training rats to sniff out landmines, clay refrigerators that allow girls to go to school rather than hawking the harvest before it spoils…and in the same breath, about how it takes five hours to get one hour of work done, because of the difficulty of keeping a steady power supply.

At lunch I crashed the Indonesian table, where I was asked if I was part of the youth venture group. Nicest age-related compliment I’ve gotten in a while (the youth venture folks are like 16 years old). But it does strip away any pretense of gravitas I thought I might have had.

I also got to spend some quality time with Richard Jefferson of CAMBIA. Richard is a seasoned social entrepreneur who has been hacking away at the patent problem in “open” biotech for about 20 years now. I always learn a lot from him.

At the end of the day the heat and the jetlag caught me, and I fell asleep before the dinner, which is a bummer.

I’m looking forward to having some time off the road in a few weeks to try and integrate this experience with the other travel of the past four months. It’s a long way from the World Economic Forum at Davos to this. The entrepreneurs here are doing what they do against such long odds that it can make the whole “cult of the successful entrepreneur” in the US look kind of lame.

It doesn’t take a hero to make a social networking site; it just takes some Ruby code. We have layer upon layer upon layer of infrastructure that makes it easy to innovate in the US. We have stable power grids, for the most part, and communications lines. You can buy a computer for under $500, slap Linux on it, and you’re ready to start a software company. You don’t have to pay a registration fee in a process that takes six months, or worry that the government is going to crack down on you (despite what some crackpots may think) if you protest or run a business that disagrees with the ruling elites.

That level of social, political, and technical infrastructure lifts up all of us who benefit from it. It’s invisible to most of us most of the time, and it’s a good thing to be reminded that it’s not something to be taken for granted.

Off to day 3.


The scholar’s copy

There has been much useful discussion on this list about scholars as authors, and rightly so. Today, I would like to introduce a view of what we scholars need nowadays as readers.

Increasingly, my reading is onscreen. The copy of an article or book that works best for me is the one that I can download to my desktop and mark up as I please with highlighting and commentary. I want to be able to re-copy it to multiple folders if this suits how I work. If I am using the same article for two different projects, for example, I may want two copies with different highlighting, reflecting the points most salient to each particular project. My ideal is a copy that I can search, along with everything else on my computer, either for keywords or key phrases in the text, or for my own notes. I can share a copy freely with colleagues or students, with or without my notes, either privately or openly on the web. I may want to create a new version before sending, with customized notes to fit the needs of my fellow researcher or student.

My access to my ideal scholar’s copy is not dependent on whether or not my library can afford a subscription, or on whether I continue at the institution with the subscription. If I submit an article for publication, I can keep copies of the works that I referenced.

This is true of journal articles, reports of all kinds, and e-books, too.

This is one of the reasons why we need libre open access. So far, only a small percentage of OA is clearly libre OA. However, once scholars like me begin to experience the difference, my prediction is that demand for libre OA will grow, while demand for digital rights management (DRM)-ridden works will decrease.

It would be most useful if search services permitted limiting results to libre OA (e.g. CC-licensed works).

Heather Morrison, MLIS
PhD Student
Simon Fraser University School of Communication
hgmorris at sfu dot ca

This message was first posted to the Liblicense discussion list.

On the Nature of Ideas

I did an interview recently where the author, clearly having done some homework, called out an old quote of mine arguing that ideas aren’t like widgets or screws – that they’re not industrial objects.

I’d said that a long time ago, inspired by John Perry Barlow’s Declaration of Independence of Cyberspace. Here’s the money quote: “Your increasingly obsolete information industries would perpetuate themselves by proposing laws, in America and elsewhere, that claim to own speech itself throughout the world. These laws would declare ideas to be another industrial product, no more noble than pig iron. In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish.”

The world that John Perry was talking about has not completely come to pass. Governments have certainly moved to impose more and greater controls. But as Lessig noted just a few years later in Code and Other Laws of Cyberspace, the aspects of cyberspace that promised liberation, a nation of the Mind…those aspects were the output of human-controlled systems, and humans could and would change the rules if they didn’t like the outcomes.

I was there for parts of these conversations. I gave JPB a ride around town, harassing him about the declaration and about Cassidy. I put together Lessig’s book party for Code when it came out. But the thing about ideas stuck with me more than the rest.

I’d studied epistemology, the theory of knowledge. You get a lot of examples of attempts to codify ideas (and brains, the storage tanks of ideas) into the machinery of the time (see Geoffrey Bowker’s masterful book “Memory Practices in the Sciences” for more). But in the end, ideas resist complete capture.

They’re ethereal. We’ve spent thousands of years trying to codify them: into Plato’s forms, into machines, now into code. The dominant industrial paradigm tends to be the stuff we use to try and understand them and their human substrates – pumps and machines to explain the brain in the industrial age, circuits and pathways in the digital. This ethereal nature makes it hard to get ideas into the powerful information systems of the day, which are based on bits and bytes. It’s one of the reasons that the most powerful idea transmission systems we have are humanist – text, sound, video. It’s why something as lousy as PowerPoint can take over: it’s a way for people to talk to people.

It’s hard to make ideas into widgets or screws because of this. It’s also hard because we all see the world differently, even those of us who agree. We use common words as proxies to help convey that this red ball is an apple and this green ball is also an apple. Making the word apple into an abstracted computational tool is hard, because you have to decide what it means, and convince others to use your meaning rather than their own. Cyc’s been pushing on this for 25 years, and we still don’t have the Star Trek computer recognizing our voices.

But we’re starting to have to try to make ideas at least representable as widgets. The problem is that the information space is overwhelming us as people. Using robots in the lab, sensor networks in the ocean, miniature microphones in public spaces, genotype smears on red light signals, we can generate data at such a rate that we simply cannot use our own brains to process the data into an information state that lets us extract, test, and generate ideas.

There are two things we can do, one easy and one hard. First, we can make the existing technologies for idea transmission (writing ideas down on paper and publishing them) more democratic and network friendly. That starts with good formats: putting ideas into PDFs is a terrible idea. The format blocks the ability to take the text out, remix it, translate it, reformat it, text mine it, have it read to a blind person via text-to-speech, and on and on. It continues with open access (so we don’t create a digital divide first, and so we enable the entrepreneurs of the world, wherever they are).

I’m at a conference in Hyderabad called Tech4Society that is packed to the gills with inventors and social entrepreneurs, most of whom have no access to the scientific and technical literature. It’s all in English, which many, but not all, speak here. It’s very expensive: nuclear physics journals can cost more per year than a new car. And this is a tax on the entrepreneurs of the world.

Inventors have to invent. It’s in their blood. And they have the capacity to rapidly combine information from multiple sources to assemble new projects. I heard today of systems that leverage sugar palms in Indonesia to power villages, of local decentralized power panels for wind and solar to give each house its own power, and more and more and more. But this is being done without the newest knowledge – knowledge that is on the web somewhere…but locked up behind paywalls.

We as Americans send a lot of money. We’d be a damned sight better off if we also sent a lot of knowledge.

The Open Access movement is being driven mainly from inside the developed world. US and EU librarians feel the pinch of the serials pricing crisis, and funders like the US National Institutes of Health and the Wellcome Trust take policy directions that lead towards the availability of biomedical research. And it’s wonderful that the solutions to these problems lift the developing world along the way. It seems that the scholarly literature will, in fits and starts, and faster in some disciplines than in others, find its proper place on the net, free of commercial restrictions, one of these days.

But it’s not just ideas, it’s what to do with the ideas. Richard Jefferson today made the lovely point that the patent literature is a giant database of recipes to make inventions. And that if you can find the inventions that were patented in the US, but not in India, you’ve got a lot of good stuff to work on in India. This is true. And deeply important.

But I got a little melancholy thinking of the stuff that comes before an inventor becomes a social entrepreneur, ready to apply for funding or speak in front of 200 people at a conference. Maybe they can’t read the patents and understand the information. Maybe they just need to build some furniture for their house, or fix the stove. I had a sense-memory of the long shelves of books at Home Depot: the how-to guides, the recipes for doing simple stuff, unpatented stuff, but essential stuff. I look at the amazing user-driven innovative spirit that rules the day in India, and I want to cry at the amount of knowledge people here are deprived of. Give these folks the books and get out of the way!

I wish we could come together as a culture and create an open source set of how-to books to parallel the scholarly literature. Those books are how I learned to rewire sockets and fix plumbing, where I learned what was dangerous and what was safe. They’re a place where the ideas laid out in the papers that are becoming free became methods that I could use – where the ideas became actionable for me. Imagine if those books could move from my server, where I wrote them, to a server in Africa, where they could be translated into Kiswahili or Chichewa. If they could be formatted to be read on the mobile phones that are ubiquitous across the world. If they could lead to one more hour of light per night through the creation of lightweight photovoltaics.

Has anyone out there done this yet? Anyone interested in doing it? Anyone immediately get a rash and freak out? All of those reactions are interesting to me.

The second part of why ideas are hard will have to wait for the next post. Suffice to say the word “semantic” will feature prominently.

I’ll post more from day 2 tomorrow. Jetlag over and out.


Gwyddion – Open Source SPM analysis

We just discovered a very cool open source program for analyzing scanning probe microscopy (SPM) data files. There are a number of incompatible, proprietary file formats for surface microscopies (AFM, MFM, STM, SNOM/NSOM), and getting data out of a microscope for further processing (including baseline leveling, profile analysis, and statistical analysis) can be a difficult task. Gwyddion is a Gtk+ based package that runs on Linux, Mac OS X (with MacPorts) and Windows, and appears to do nearly everything that some expensive commercial packages (and some free closed-source packages) can do. Some of our colleagues were very happy to discover this piece of wizardry!
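
As an aside, for readers wondering what “baseline leveling” actually does, here is a minimal numpy sketch of the simplest case: fitting and subtracting a plane from a height map. This illustrates the operation only; it is not Gwyddion’s implementation (which offers much more, such as polynomial leveling and per-line corrections), and all names here are ours.

```python
import numpy as np

def level_plane(height):
    """Subtract a least-squares-fitted plane from a 2-D height map.

    The simplest form of baseline leveling: removes overall sample
    tilt so that real surface features stand out.
    """
    rows, cols = height.shape
    y, x = np.mgrid[0:rows, 0:cols]
    # Design matrix for the model z = a*x + b*y + c
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(height.size)])
    coeffs, *_ = np.linalg.lstsq(A, height.ravel(), rcond=None)
    return height - (A @ coeffs).reshape(height.shape)

# Toy example: a tilted plane plus a bump; leveling removes the tilt.
yy, xx = np.mgrid[0:128, 0:128]
surface = 0.02 * xx + 0.01 * yy + np.exp(-((xx - 64)**2 + (yy - 64)**2) / 50.0)
flat = level_plane(surface)
```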

Open Access: Self-Selected, Mandated & Random; Answers & Questions

What follows is what we hope will be found a conscientious and attentive series of responses to questions raised by Phil Davis about our paper (Gargouri et al, currently under refereeing). For these responses we did further analyses of our data, not included in the draft under refereeing.

Gargouri, Y., Hajjem, C., Larivière, V., Gingras, Y., Brody, T., Carr, L. and Harnad, S. (2010) Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research. (Submitted)

We are happy to have performed these further analyses, and we are very much in favor of this sort of open discussion and feedback on pre-refereeing preprints of papers that have been submitted and are undergoing peer review. They can only improve the quality of the eventual published version of articles.

However, having carefully responded to Phil’s welcome questions, below, we will, at the end of this posting, ask Phil to respond in kind to a question that we have repeatedly raised about his own paper (Davis et al 2008), published a year and a half ago…

RESPONSES TO DAVIS’S QUESTIONS ABOUT OUR PAPER:

PD:
“Stevan, Granted, you may be more interested in what the referees of the paper have to say than my comments; I’m interested in whether this paper is good science, whether the methodology is sound and whether you interpret your results properly.”

We are very appreciative of your concern and hope you will agree that we have not been interested only in what the referees might have to say. (We also hope you will now in turn be equally responsive to a longstanding question we have raised about your own paper on this same topic.)

PD:
“For instance, it is not clear whether your Odds Ratios are interpreted correctly. Based on Figure 4, OA article are MORE LIKELY to receive zero citations than 1-5 citations (or conversely, LESS LIKELY to receive 1-5 citations than zero citations). You write: “For example, we can say for the first model that for a one unit increase in OA, the odds of receiving 1-5 citations (versus zero citations) increased by a factor of 0.957 [re: Figure 4 (p.9)]”… I find your odds ratio methodology unnecessarily complex and unintuitive…”

Our article supports its conclusions with several different, convergent analyses. The logistical analysis with the odds ratio is one of them, and its results are fully corroborated by the other, simpler analyses we also reported, as well as the supplementary analyses we append here now.

[Yassine has since added that your confusion was our fault, because by way of illustration we had used the first model (0 citations vs. 1-5 citations), with its odds ratio of 0.957 (“For example, we can say for the first model that for a one unit increase in OA, the odds of receiving 1-5 citations (versus zero citations) increased by a factor of 0.957”). In the first model the value 0.957 is below, and too close to, 1 to serve as a good illustration of the meaning of the odds ratio. We should have chosen a better example, one in which Exp(ß) is clearly greater than 1. We should have said: “For example, we can say for the second model that for a one unit increase in OA, the odds of receiving 5-10 citations (versus 1-5 citations) increased by a factor of 1.323.” This clearer example will be used in the revised text of the paper. (See Figure 4S, with the axis translated to display the deviations relative to an odds ratio of one rather than zero; although Excel here insists on labelling the baseline “0” instead of “1”! This too will be fixed in the revised text.)]
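
For readers less familiar with the notation: in a logistic regression of the form below, the odds ratio is the exponentiated coefficient. This is a generic textbook illustration, not the paper’s actual fitted model:

$$\mathrm{logit}(p) \;=\; \ln\frac{p}{1-p} \;=\; \beta_0 + \beta_{\mathrm{OA}}\,\mathrm{OA}, \qquad \mathrm{OR} \;=\; e^{\beta_{\mathrm{OA}}}$$

So a fitted coefficient of, say, β_OA = 0.280 gives OR = e^0.280 ≈ 1.323: for each one-unit increase in OA, the odds of landing in the higher of the two citation brackets are about 32% greater, other predictors held constant.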

PD:
“Similarly in Figure 4 (if I understand the axes correctly), CERN articles are more than twice as likely to be in the 20+ citation category than in the 1-5 citation category, a fact that may distort further interpretation of your data as it may be that institutional effects may explain your Mandated OA effect. See comments by Patrick Gaule and Ludo Waltman on the review”

Here is the analysis underlying Figure 4, redone without CERN, and then redone again without either CERN or Southampton. As will be seen, the outcome pattern, as well as its statistical significance, is the same whether or not we exclude these institutions. (Moreover, I remind you that these are multiple regression analyses in which the Beta values reflect the independent contributions of each of the variables: that means the significant OA advantage, whether or not we exclude CERN, is the contribution of OA independent of the contribution of each institution.)

SUPPLEMENTARY FIGURE S1

PD:
“Changing how you report your citation ratios, from the ratio of log citations to the log of citation ratios is a very substantial change to your paper and I am surprised that you point out this reporting error at this point.”

As noted in Yassine’s reply to Phil, that formula was incorrectly stated in our text, once; in all the actual computations, results, figures and tables, however, the correct formula was used.

PD:
“While it normalizes the distribution of the ratios, it is not without problems, such as: 1. Small citation differences have very large leverage in your calculations. Example, A=2 and B=1, log (A/B)=0.3”

The log of the citation ratio was used only in displaying the means (Figure 2), presented for visual inspection. The paired-sample t-tests of significance (Table 2) were based on the raw citation counts, not on log ratios, hence had no leverage in our calculations or their interpretations. (The paired-sample t-tests were also based only on 2004-2006, because for 2002-2003 not all the institutional mandates were yet in effect.)

Moreover, both the paired-sample t-test results (2004-2006) and the pattern of means (2002-2006) converged with the results of the (more complicated) logistical regression analyses and subdivisions into citation ranges.
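
For concreteness, the paired design can be sketched in a few lines; the arrays below are hypothetical stand-ins for matched OA and same-journal, same-year control citation counts, not the paper’s data:

```python
import numpy as np
from scipy import stats

# Hypothetical matched pairs: each OA article's citation count paired
# with the mean citation count of its same-journal, same-year controls.
oa_citations      = np.array([12, 0, 7, 3, 25, 1, 9, 4, 16, 2])
control_citations = np.array([8.2, 1.0, 5.1, 3.3, 14.6, 2.0, 6.4, 3.1, 11.0, 1.2])

# Paired-sample t-test on the raw counts; no log transform is involved,
# so zero-citation articles are retained.
t_stat, p_value = stats.ttest_rel(oa_citations, control_citations)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```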

PD:
“2. Similarly, any ratio with zero in the denominator must be thrown out of your dataset. The paper does not inform the reader on how much data was ignored in your ratio analysis and we have no information on the potential bias this may have on your results.”

As noted, the log ratios were only used in presenting the means, not in the significance testing, nor in the logistic regressions.

However, we are happy to provide the additional information Phil requests, in order to help readers eyeball the means. Here are the means from Figure 2, recalculated by adding 1 to all citation counts. This restores all log ratios with zeroes in the numerator (sic); the probability of a zero in the denominator is vanishingly small, as it would require that all 10 same-issue control articles have no citations!

The pattern is again much the same. (And, as noted, the significance tests are based on the raw citation counts, which were not affected by the log transformations that exclude numerator citation counts of zero.)

SUPPLEMENTARY FIGURE S2
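
The recalculation behind this figure is simple enough to sketch (variable names ours, data hypothetical):

```python
import numpy as np

oa = np.array([12, 0, 7, 3, 25])            # OA citation counts
ctl = np.array([8.2, 1.1, 5.0, 3.3, 14.6])  # matched control means

# Adding 1 to every count keeps zero-citation OA articles in the
# log-ratio display; log(0) is undefined and would drop them.
log_ratio = np.log((oa + 1) / (ctl + 1))
```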

This exercise suggested a further heuristic analysis that we had not thought of doing in the paper, even though the results had clearly suggested that the OA advantage is not evenly distributed across the full range of article quality and citeability: The higher quality, more citeable articles gain more of the citation advantage from OA.

In the following supplementary figure (S3), for exploratory and illustrative purposes only, we re-calculate the means in the paper’s Figure 2 separately for OA articles in the citation range 0-4 and for OA articles in the citation range 5+.

SUPPLEMENTARY FIGURE S3:

The overall OA advantage is clearly concentrated on articles in the higher citation range. There is even what looks like an OA DISadvantage for articles in the lower citation range. This may be mostly an artifact (from restricting the OA articles to 0-4 citations and not restricting the non-OA articles), although it may also be partly due to the fact that when unciteable articles are made OA, only one direction of outcome is possible, in the comparison with citation means for non-OA articles in the same journal and year: OA/non-OA citation ratios will always be unflattering for zero-citation OA articles. (This can be statistically controlled for, if we go on to investigate the distribution of the OA effect across citation brackets directly.)

PD:
“Have you attempted to analyze your citation data as continuous variables rather than ratios or categories?”

We will be doing this in our next study, which extends the time base to 2002-2008. Meanwhile, a preview is possible from plotting the mean number of OA and non-OA articles for each citation count. Note that zero citations is the biggest category for both OA and non-OA articles, and that the proportion of articles at each citation level decreases faster for non-OA articles than for OA articles; this is another way of visualizing the OA advantage. At citation counts of 30 or more, the difference is quite striking, although of course there are few articles with so many citations:

SUPPLEMENTARY FIGURE S4
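
A plot of this kind is easy to sketch. The data-generating step below is a toy stand-in for the real citation counts, chosen only so the script runs end to end:

```python
import numpy as np
import matplotlib.pyplot as plt

def proportions(citations, max_count=40):
    """Proportion of all articles at each citation count 0..max_count."""
    counts = np.bincount(citations, minlength=max_count + 1)
    return counts[: max_count + 1] / len(citations)

rng = np.random.default_rng(0)
citations_oa = rng.negative_binomial(1, 0.12, 5000)      # toy data
citations_non_oa = rng.negative_binomial(1, 0.16, 5000)  # toy data

x = np.arange(41)
plt.plot(x, proportions(citations_oa), label="OA")
plt.plot(x, proportions(citations_non_oa), label="non-OA")
plt.xlabel("Citation count")
plt.ylabel("Proportion of articles")
plt.legend()
plt.show()
```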


REQUEST FOR RESPONSE TO QUESTION ABOUT DAVIS ET AL’S (2008) PAPER:

Davis, P.M., Lewenstein, B.V., Simon, D.H., Booth, J.G., & Connolly, M.J.L. (2008) Open access publishing, article downloads, and citations: randomised controlled trial. British Medical Journal 337: a568.

Critique of Davis et al’s paper: “Davis et al’s 1-year Study of Self-Selection Bias: No Self-Archiving Control, No OA Effect, No Conclusion” and the BMJ Responses.

Davis et al had taken a 1-year sample of biological journal articles and randomly made a subset of them OA, to control for author self-selection. (This is comparable to our mandated control for author self-selection.) They reported that after a year they found no significant citation OA Advantage for the randomized-OA articles (although they did find a download OA Advantage), and concluded that this showed the OA citation Advantage is just an artifact of author self-selection, now eliminated by the randomization.

What Davis et al failed to do, however, was to demonstrate that — in the same sample and time-span — author self-selection does generate the OA citation Advantage. Without showing that, all they have shown is that in their sample and time-span, they found no significant OA citation Advantage. This is no great surprise, because their sample was small and their time-span was short, whereas many of the other studies that have reported finding an OA Advantage were based on much larger samples and much longer time spans.

The question raised was about controlling for self-selected OA. If one tests for the OA Advantage, whether self-selected or randomized, there is a great deal of variability, across articles and disciplines, especially for the first year or so after publication. In order to have a statistically reliable measure of OA effects, the sample has to be big enough, both in number of articles and in the time allowed for any citation advantage to build up to become detectable and statistically reliable.
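
As a rough, hedged illustration of the sample-size point (the effect size here is an assumption chosen for illustration, not an estimate from either study):

```python
from statsmodels.stats.power import TTestIndPower

# Articles needed per group to detect a "small" standardized effect
# (Cohen's d = 0.2) at alpha = 0.05 with 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(round(n_per_group))  # roughly 390-400 per group
```

With samples much smaller than this, a null result is uninformative about effects of modest size.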

Davis et al need to do with their randomization methodology what we have done with our mandating methodology, namely, to demonstrate the presence of a self-selected OA Advantage in the same journals and years. Then they can compare that with randomized OA in those same journals and years, and if there is a significant OA Advantage for self-selected OA and no OA Advantage for randomized OA then they will have evidence that — contrary to our findings — some or all of the OA Advantage is indeed just a side-effect of self-selection. Otherwise, all they have shown is that with their journals, sample size and time-span, there is no detectable OA Advantage at all.

What Davis et al replied in their BMJ Authors’ Response was instead this:

PD:
“Professor Harnad comments that we should have implemented a self-selection control in our study. Although this is an excellent idea, it was not possible for us to do so because, at the time of our randomization, the publisher did not permit author-sponsored open access publishing in our experimental journals. Nonetheless, self-archiving, the type of open access Prof. Harnad often refers to, is accounted for in our regression model (see Tables 2 and 3)… Table 2 Linear regression output reporting independent variable effects on PDF downloads for six months after publication Self-archived: 6% of variance p = .361 (i.e., not statistically significant)… Table 3 Negative binomial regression output reporting independent variable effects on citations to articles aged 9 to 12 months Self-archived: Incidence Rate 0.9 p = .716 (i.e., not statistically significant)…”

This is not an adequate response. If a control condition was needed in order to make an outcome meaningful, it is not sufficient to reply that “the publisher and sample allowed us to do the experimental condition but not the control condition.”

Nor is it an adequate response to reiterate that there was no significant self-selected self-archiving effect in the sample (as the regression analysis showed). That is in fact bad news for the hypothesis being tested.

Nor is it an adequate response to say, as Phil did in a later posting, that even after another half year or more had gone by, there was still no significant OA Advantage. (That is just the sound of one hand clapping again, this time louder.)

The only way to draw meaningful conclusions from Davis et al’s methodology is to demonstrate the self-selected self-archiving citation advantage, for the same journals and time-span, and then to show that randomization wipes it out (or substantially reduces it).

Until then, our own results, which do demonstrate the self-selected self-archiving citation advantage for the same journals and time-span (and on a much bigger and more diverse sample and a much longer time scale), show that mandating the self-archiving does not wipe out the citation advantage (nor does it substantially reduce it).

Meanwhile, Davis et al’s finding that their randomized OA did not generate a citation increase but did generate a download increase suggests that with a larger sample and time-span there may well be scope for a citation advantage as well: our own prior work, and that of others, has shown that higher early download counts tend to lead to higher citation counts later.

Bollen, J., Van de Sompel, H., Hagberg, A. and Chute, R. (2009) A principal component analysis of 39 scientific impact measures. PLoS ONE 4(6): e6022.

Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST) 57(8): 1060-1072.

Lokker, C., McKibbon, K.A., McKinlay, R.J., Wilczynski, N.L. and Haynes, R.B. (2008) Prediction of citation counts for clinical articles at two years using data available within three weeks of publication: retrospective cohort study. BMJ 336: 655-657.

Moed, H.F. (2005) Statistical Relationships Between Downloads and Citations at the Level of Individual Documents Within a Single Journal. Journal of the American Society for Information Science and Technology 56(10): 1088-1097.

O’Leary, D.E. (2008) The relationship between citations and number of downloads. Decision Support Systems 45(4): 972-980.

Watson, A.B. (2009) Comparing citations and downloads for individual articles. Journal of Vision 9(4): 1-4.

Who pays for Open Access?

Raym Crow tries to answer this critical question, which comes up in many different contexts. SPARC, which has now been supporting the development of scholarly communication in the era of new media for 10 years, recently published his guide “Income models for Open Access: An overview of current practice.”
In his report Crow presents the […]

The internet in the historian’s workshop

Marcin Wilkowski, creator of the Historia i media portal, and Maciej Rynarzewski are organizing an entirely online course for history students devoted to the use of modern technologies in scholarship: Historia i internet.
The topics covered in the course include, among others, openness in science, the digital humanities, the digitization and online sharing of sources, and modern tools that make research queries easier.
The idea of organizing […]