SUMMARY: The MELIBEA evaluator of Open Access policies could prove useful in shaping OA mandates — but it still needs a good deal of work. Currently it conflates institutional and funder policies and criteria, mixes green and gold OA criteria, color-codes in an arbitrary and confusing way, and needs to validate its weights (e.g., against policy success criteria such as the percentage and growth rate of annual output deposited since the policy was adopted).
The MELIBEA Open Access policy validator is timely and promising. It has the potential to become very useful and even influential in shaping OA mandates — but that makes it all the more important to get it right, rather than releasing MELIBEA prematurely, when it still risks increasing confusion rather than providing clarity and direction in OA policy-making.
Remedios Melero is right to point out that — unlike the CSIC Cybermetrics Lab's University Rankings and Repository Rankings — the MELIBEA policy validator is not really meant to be a ranking. Yet MELIBEA has set up its composite algorithm and its graphics to make it a ranking just the same.
It is further pointed out, correctly, that MELIBEA’s policy criteria for institutions and funders are not (and should not be) the same. Yet, with the coding as well as the algorithm, they are treated the same way (and funder policy is taken to be the generic template, institutional policy merely an ill-fitting special case).
It is also pointed out, rightly, that a gold OA publishing policy is not central to institutional OA policy making — yet there it is, contributing sizeable components to the MELIBEA algorithm.
It is also pointed out that MELIBEA’s green color code has nothing to do with the “green OA” coding — yet there it is — red, green, yellow — competing with the widespread use of green to designate OA self-archiving, and thereby inducing confusion, both overt and covert.
MELIBEA could be a useful and natural complement to the ROARMAP registry of OA policies. I (and no doubt other OA advocates) would be more than happy to give MELIBEA feedback on every aspect of its design and rationale.
But as it is designed now, I can only agree with Steve Hitchcock’s points and conclude that consulting MELIBEA today would be likely to create and compound confusion rather than helping to bring the all-important focus and direction to OA policy-making that I am sure CSIC, too, seeks, and seeks to help realize.
Here are just a few prima facie points:
(1) Since MELIBEA is not, and should not be construed as a ranking of OA policies — especially because it includes both institutional and funder policies — it is important not to plug it into an algorithm until and unless the algorithm has first been carefully tested, with consultation, to make sure it weights policy criteria in a way that optimizes OA progress and guides policy-makers in the right direction.
(2) For this reason, it is more important to allow users to generate separate flat lists of institutions or funders on the various policy criteria, considered and compared independently, rather than on the basis of a prematurely and arbitrarily weighted joint algorithm.
(3) This is all the more important since the data are based on fewer than 200 institutions, whereas the CSIC University Rankings are based on thousands. Since the population is still so small, MELIBEA risks having a disproportionate effect on initial conditions and hence on direction-setting; all the more reason not to amplify noise and misdirection by assigning untested initial weights without carefully thinking through and weighing the consequences.
(4) A potential internal cross-validator of some of the criteria would be a reliable measure of outcome — but that requires much more attention to estimating the annual size and growth-rate of each repository (in terms of OA’s target contents, which are full-text articles), normalized for institution size, annual total target output (an especially tricky denominator problem in the case of multi-institutional funder repositories) and the age of the policy. Policy criteria (such as request/require or immediate/delayed) should be cross-validated against these outcome measures (such as percentage and growth rate of annual target output) in determining the weights in the algorithm.
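To make the cross-validation idea concrete, here is a minimal illustrative sketch (the repository names, figures, and the simple require/request split are all hypothetical, not MELIBEA data): it computes a normalized outcome measure — the fraction of annual target output actually deposited — and compares it across a single policy criterion.

```python
# Illustrative sketch with hypothetical data: cross-validate one policy
# criterion (require vs. request) against a normalized outcome measure.

def deposit_rate(deposits_per_year, target_output_per_year):
    """Fraction of the institution's annual target output
    (full-text articles) actually deposited each year."""
    return deposits_per_year / target_output_per_year

# Hypothetical repositories: (name, requires_deposit,
# annual deposits, annual target output).
repositories = [
    ("U-A", True,  900, 1000),
    ("U-B", True,  700, 1000),
    ("U-C", False, 250, 1000),
    ("U-D", False, 150, 1000),
]

def mean_rate(group):
    rates = [deposit_rate(d, t) for (_, _, d, t) in group]
    return sum(rates) / len(rates)

require = [r for r in repositories if r[1]]
request = [r for r in repositories if not r[1]]

# If "require" policies consistently show higher normalized deposit
# rates, that measured difference -- not an a-priori guess -- is what
# should determine the weight the criterion gets in any composite score.
print(f"require: {mean_rate(require):.0%}")  # prints "require: 80%"
print(f"request: {mean_rate(request):.0%}")  # prints "request: 20%"
```

The same comparison would of course have to control for institution size, total annual target output, and policy age, as noted above, before any such difference could justify a weight.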
(5) The MELIBEA color coding needs to be revised — and revised quickly, if there is to be an algorithm at all. All those arbitrary colors in the display of single repositories as ranked by the algorithm are both unnecessary and confusing, and the validator is not comprehensibly labelled. The objective should be a display that orders and focuses clearly and intuitively. Whatever is correlated with more green OA output (such as a higher level or faster growth rate in OA’s target content, normalized) should be coded in progressively darker shades of green. The same should be true for the policy criteria, separately and jointly: in each case — request/require, delayed/immediate, etc. — the greenward polarity is obvious and intuitive. This should be reflected in the graphics as well as in any comparative rankings.
(6) If it includes repositories with no OA policy at all (i.e., just a repository and an open invitation to deposit) then all MELIBEA is doing is duplicating ROAR and ROARMAP, whereas its purpose, presumably, is to highlight, weigh and compare specific policy differences among (the very few) repositories that DO have policies.
(7) The sign-up data are also rather confusing; the criteria are not always consistent, relevant or applicable. The sign-up seems to be designed to make a funder mandate the generic option, whereas this is quite the opposite of reality: there are far more institutions, institutional repositories and institutional policies than there are funders; many of the funder criteria do not apply to institutions, and many of the institutional criteria make no sense for funders. There should be separate lists of criteria for institutional policies and for funder policies; they are not the same sort of thing. There is also far too much focus and weight on gold OA policy and payment. If gold OA criteria are included at all, they should appear only at the end, as an addendum — not up front, on a par with green OA policy.
(8) There is also potential confusion on the matter of “waivers” or “opt-outs”: There are two aspects of a mandate. One concerns whether or not deposit is required (and if so, whether that requirement can be waived) and the other concerns whether or not rights-reservation is required (and if so, whether that requirement can be waived). These two distinct and independent requirements/waivers are completely conflated in the current version of MELIBEA.
I hope there will be substantive consultation and conscientious redesign of these and other aspects of MELIBEA before it can be recommended for serious consideration and use.