NIGERIA’S LOW CONTRIBUTION TO RECOGNIZED WORLD RESEARCH LITERATURE: CAUSES AND REMEDIES | Accountability in Research

Abstract:  We present a first-of-its-kind study identifying the causes of, and remedies for, Nigeria’s low contribution to research literature. A mixed-methods approach involving 300 academic staff from several areas of specialization in southern Nigeria was adopted, using a structured questionnaire and a semi-structured interview schedule. Data obtained were analysed using descriptive statistics and thematic analysis. Of the respondents, 43.7% were from the university system, 28.6% from the polytechnic system, and 27.7% from the college of education system. While 78.4% of the respondents agreed that the high cost of open access publication in top journals influenced Nigeria’s low contribution to research literature, over 75% reported that the low contribution was due to the high cost of attending international conferences. Other factors identified were stringent conditions for paper acceptance (89.7%), scarcity of relevant information about Africa (85.4%), and paucity of high-impact journals in the libraries of Nigerian tertiary institutions (6.7%). Others were poor funding, non-usage of research findings by policymakers, lack of adequate facilities, and a high penchant for publication in predatory journals, encouraged by promotion criteria that do not reward quality. Participants advocated increased funding, reduced conference fees, and the entrenchment of collaboration between reputable publishers abroad and African publishers.

 

OASPA Webinar: Gaining Insights into Global OA eBook Usage – Questions answered

 

Following on from the recent webinar entitled Analyzing Open – Gaining Insights into Global OA eBook Usage, we asked our speakers to respond to the unanswered questions posed by attendees via the webinar chat. You can find those questions and answers below; they may be useful for those who missed the webinar or who wish to share the answers with colleagues.

Celebrating 20 years of open access publishing at BMC Musculoskeletal Disorders | BMC Musculoskeletal Disorders

“Twenty years ago, on October 23, the first article published by BMC Musculoskeletal Disorders appeared free online. Over 5700 publications later, we celebrate our anniversary as the largest Open Access journal in the ‘Orthopaedics and Sports Medicine’ and ‘Rheumatology’ fields. Our ‘open, inclusive, and trusted’ ethos, along with our efficient and robust peer review services, is recognized by the musculoskeletal field.

The early pioneers of BMC Musculoskeletal Disorders pushed the Open Access publishing model in order to better support the needs of both the clinical and research communities. We pride ourselves on the continual innovation of author services, data transparency, and peer review models. These advances would not have been possible without your efforts – so a massive thank you to all the authors, editorial teams, and reviewers who have contributed to our success. Excellent reviewers are the nucleus of any thriving journal, and we have been lucky to collaborate with so many talents….”

 

Open Context: Web-based research data publishing

“Open Context reviews, edits, annotates, publishes and archives research data and digital documentation. We publish your data and preserve it with leading digital libraries. We take steps beyond archiving to richly annotate and integrate your analyses, maps and media. This links your data to the wider world and broadens the impact of your ideas….”

Home – The Alexandria Archive Institute

“The Alexandria Archive Institute is a non-profit technology company that preserves and shares world heritage on the Web, free of charge. Through advocacy, education, research, and technology programs like Open Context, we pioneer ways to open up archaeology and related fields for all….”

Dear Scholars, Delete Your Account At Academia.Edu

“As privatized platforms like Academia.edu look to monetize scholarly writing even further, researchers, scientists and academics across the globe must now consider alternatives to proprietary companies that aim to profit from our writing and offer little transparency as to how our work will be used in the future.

In other words: It is time to delete your Academia.edu account….”

User Behavior Access Controls at a Library Proxy Server are Okay | Disruptive Library Technology Jester

“The webinar where Cory presented was the first mention I’d seen of a new group called the Scholarly Networks Security Initiative (SNSI). SNSI is the latest in a series of publisher-driven initiatives to reduce the paywall’s friction for paying users or library patrons coming from licensing institutions, following GetFTR (my thoughts) and Seamless Access (my thoughts). (Disclosure: I’m serving on two working groups for Seamless Access that are focused on making it possible for libraries to sensibly and sanely integrate the goals of Seamless Access into campus technology and licensing contracts.)…”

WHOIS behind SNSI & GetFTR? | Motley Marginalia

“I question whether such rich personally identifiable information (PII) is required to prevent illicit account access. If it is collected at all, there are more than enough data points here (obviously excluding username and account information) to deanonymize individuals and reveal exactly what they looked at and when, so it should not be kept on hand too long for later analysis.

Another related, though separate, endeavor is GetFTR, which aims to bypass proxies (and thereby potential library oversight of use) entirely. There is so much that could be written about both of these efforts, and this post only scratches the surface of the complex issues and relationships affected by them.

The first thing I was curious about was: who is bankrolling these efforts? They list the backers on their websites, but I always find it interesting to see who is willing to fund the coders and infrastructure. I looked up both GetFTR and SNSI in the IRS Tax Exempt database as well as the EU Find a Company portal and did not find any results. So I decided to do a little more digging by matching WHOIS data in the hopes that something might pop out; nothing interesting came of this, so I put it at the very bottom….

It should come as no surprise that Elsevier, Springer Nature, ACS, and Wiley – which previous research has shown are the publishers producing the most research downloaded in the USA from Sci-Hub – are supporting both efforts. Taylor & Francis presumably feels sufficiently threatened that they are along for the ride….”
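For readers curious to try the same kind of digging, here is a minimal sketch of pulling and cross-referencing WHOIS records for a set of domains. It assumes a Unix-like system with the `whois` command-line tool installed; the domain names are illustrative guesses, not the exact domains the post checked.

```python
# Minimal sketch of WHOIS digging in the spirit described above.
# Assumes the `whois` CLI is installed; domains are illustrative guesses.
import subprocess

DOMAINS = ["getfulltextresearch.com", "snsi.info"]  # illustrative only
FIELDS = ("Registrant Organization", "Registrar:", "Creation Date")

def whois_fields(domain: str) -> list[str]:
    """Run `whois` for a domain and keep only the registration fields of interest."""
    proc = subprocess.run(["whois", domain],
                          capture_output=True, text=True, check=False)
    return [line.strip() for line in proc.stdout.splitlines()
            if line.strip().startswith(FIELDS)]

for domain in DOMAINS:
    print(f"--- {domain}")
    for line in whois_fields(domain):
        print(f"    {line}")
```

Comparing the surviving fields (registrant organization, registrar, creation date) across domains is the sort of matching the post describes; as the author notes, it may well turn up nothing interesting.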

Academics band together with publishers because access to research is a cybercrime | chorasimilarity

“This is the world we live in. That is what I understand from reading about the Scholarly Networks Security Initiative and its now-famous webinar, via Bjorn Brembs’ October post.

I just found this, after the post I wrote yesterday. I had no idea about this collaboration between publishers and academics to put spyware on academic networks for the benefit of publishers.

What I find worrying is not that publishers, like Elsevier, Springer Nature or Cambridge University Press, want to protect their business against the Sci-Hub threat. This is natural behaviour from a commercial point of view. These businesses (not sure about CUP) see their activity attacked, so they fight back to keep their profits up.

The problem is with the academics. Why do they help the publishers? For whose benefit?…”

Why I care about replication studies

In 2009 I attended a European Social Cognition Network meeting in Poland. I only remember one talk from that meeting: a short presentation in a nearly empty room. The presenter was a young PhD student – Stephane Doyen. He discussed two studies where he tried to replicate a well-known finding in social cognition research related to elderly priming, which had shown that people walked more slowly after being subliminally primed with elderly-related words, compared to a control condition.

His presentation blew my mind. But it wasn’t because the studies failed to replicate – it was widely known in 2009 that these studies couldn’t be replicated. Indeed, around 2007, I had overheard two professors in a corridor discussing the problem that there were studies in the literature everyone knew would not replicate. And they used this exact study on elderly priming as one example. The best solution the two professors came up with to correct the scientific record was to establish an independent committee of experts that would have the explicit task of replicating studies and sharing their conclusions with the rest of the world. To me, this sounded like a great idea.

And yet, in this small conference room in Poland, there was this young PhD student, acting as if we didn’t need specially convened institutions of experts to inform the scientific community that a study could not be replicated. He just got up, told us about how he wasn’t able to replicate this study, and sat down.

It was heroic.

If you’re struggling to understand why on earth I thought this was heroic, then this post is for you. You might have entered science in a different time. The results of replication studies are no longer communicated only face to face when running into a colleague in the corridor, or at a conference. But I was impressed in 2009. I had never seen anyone give a talk in which the only message was that an original effect didn’t stand up to scrutiny. People sometimes presented successful replications. They presented null effects in lines of research where the absence of an effect was predicted in some (but not all) tests. But I’d never seen a talk where the main conclusion was just: “This doesn’t seem to be a thing”.

On 12 September 2011 I sent Stephane Doyen an email. “Did you ever manage to publish some of that work? I wondered what has happened to it.” Honestly, I didn’t really expect that he would manage to publish these studies. After all, I couldn’t remember ever having seen a paper in the literature that was just a replication. So I asked, even though I did not expect he would have been able to publish his findings.

Surprisingly enough, he responded that the study would soon appear in press. I wasn’t fully aware of new developments in the publication landscape, where Open Access journals such as PlosOne published articles as long as the work was methodologically solid, and the conclusions followed from the data. I shared this news with colleagues, and many people couldn’t wait to read the paper: An article, in print, reporting the failed replication of a study many people knew to be not replicable. The excitement was not about learning something new. The excitement was about seeing replication studies with a null effect appear in print.

Regrettably, not everyone was equally excited. The publication also led to extremely harsh online comments from the original researcher about the expertise of the authors (e.g., suggesting that findings can fail to replicate due to “Incompetent or ill-informed researchers”), and the quality of PlosOne (“which quite obviously does not receive the usual high scientific journal standards of peer-review scrutiny”). This type of response happened again, and again, and again. Another failed replication led to a letter by the original authors that circulated over email among eminent researchers in the area, was addressed to the replicating authors, and ended with “do yourself, your junior co-authors, and the rest of the scientific community a favor. Retract your paper.”

Some of the historical record of the discussions between researchers from around 2012 to 2015 survives online, in Twitter and Facebook discussions, and blogs. But recently, I started to realize that most early career researchers don’t read about the replication crisis through these original materials, but through summaries, which don’t give the same impression as having lived through these times. It was weird to see established researchers argue that people performing replications lacked expertise. That null results were never informative. That thanks to dozens of conceptual replications, the original theoretical point would still hold up even if direct replications failed. As time went by, it became even weirder to see that none of the researchers whose work was not corroborated in replication studies ever published a preregistered replication study to silence the critics. And why were there even two sides to this debate? Although most people agreed there was room for improvement and that replications should play some role in improving psychological science, there was no agreement on how this should work. I remember being surprised that a field was only now thinking about how to perform and interpret replication studies, when we had been doing psychological research for more than a century.
 

I wanted to share this autobiographical memory, not just because I am getting old and nostalgic, but also because young researchers are most likely to learn about the replication crisis through summaries and high-level overviews. Summaries of history aren’t very good at communicating how confusing this time was when we lived through it. There was a lot of uncertainty, diversity in opinions, and lack of knowledge. And there were a lot of feelings involved. Most of those things don’t make it into written histories. This can make historical developments look cleaner and simpler than they actually were.

It might be difficult to understand why people got so upset about replication studies. After all, we live in a time when it is possible to publish a null result (e.g., in journals that evaluate only methodological rigor, not novelty, in journals that explicitly invite replication studies, and in Registered Reports). Don’t get me wrong: We still have a long way to go when it comes to funding, performing, and publishing replication studies, given their important role in establishing regularities, especially in fields that desire a reliable knowledge base. But perceptions about replication studies have changed in the last decade. Today, it is difficult to feel how unimaginable it used to be that researchers in psychology would share their results at a conference or in a scientific journal when they were not able to replicate the work by another researcher. I am sure it sometimes happened. But there was clearly a reason those professors I overheard in 2007 suggested establishing an independent committee to perform and publish studies of effects that were widely known to be not replicable.

As people started to talk about their experiences trying to replicate the work of others, the floodgates opened, and the scales fell from people’s eyes. Let me tell you that, from my personal experience, we didn’t call it a replication crisis for nothing. All of a sudden, many researchers who thought it was their own fault when they couldn’t replicate a finding started to realize this problem was systemic. It didn’t help that in those days it was difficult to communicate with people you didn’t already know. Twitter (which is most likely the medium through which you learned about this blog post) launched in 2006, but up to 2010 hardly any academics used the platform. Back then, it wasn’t easy to get information outside of the published literature. It’s difficult to express how it feels when you realize ‘it’s not me – it’s all of us’. Our environment influences which phenotypic traits express themselves. These experiences made me care about replication studies.

If you started in science when replications were at least somewhat more rewarded, it might be difficult to understand what people were making a fuss about in the past. It’s difficult to go back in time, but you can listen to the stories by people who lived through those times. Some highly relevant stories were shared after the recent multi-lab failed replication of ego-depletion (see tweets by Tom Carpenter and Dan Quintana). You can ask any older researcher at your department for similar stories, but do remember that it will be a lot more difficult to hear the stories of the people who left academia because most of their PhD consisted of failures to build on existing work.

If you want to try to feel what living through those times must have been like, consider this thought experiment. You attend a conference organized by a scientific society where all society members get to vote on who will be a board member next year. Before the votes are cast, the president of the society informs you that one of the candidates has been disqualified. The reason is that it has come to the society’s attention that this candidate selectively reported results from their research lines: The candidate submitted only those studies for publication that confirmed their predictions, and did not share studies with null results, even though those null results came from well-designed studies that tested sensible predictions. Most people in the audience, including yourself, were already aware of the fact that this person selectively reported their results. You knew publication bias was problematic from the moment you started to work in science, and the field had known it was problematic for centuries. Yet here you are, in a room at a conference, where this status quo is not accepted. All of a sudden, it feels like it is possible to actually do something about a problem that has made you feel uneasy ever since you started to work in academia.
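The mechanics behind that thought experiment are easy to demonstrate. The following is a minimal simulation (an illustration added here, not taken from the post): many small studies of a modest true effect are run, and only the statistically significant ones are “published”, leaving a record that is both sparse and inflated.

```python
# Minimal simulation of selective reporting (illustrative only):
# run many small studies of a modest true effect, "publish" only
# those with p < .05, and compare the published record to the truth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2012)
true_effect, n_per_study, n_studies = 0.2, 20, 1000

published_effects = []
for _ in range(n_studies):
    sample = rng.normal(loc=true_effect, scale=1.0, size=n_per_study)
    result = stats.ttest_1samp(sample, popmean=0.0)
    if result.pvalue < 0.05:          # the file drawer swallows the rest
        published_effects.append(sample.mean())

print(f"significant (published): {len(published_effects)} of {n_studies}")
print(f"true effect: {true_effect:.2f}")
print(f"mean published effect:  {np.mean(published_effects):.2f}")
```

With these settings only a small fraction of the studies reach significance, and the published effects average well over twice the true effect: exactly the distortion that selective reporting bakes into a literature.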

You might live through a time when publication bias is no longer silently accepted as an unavoidable aspect of how scientists work, and if that happens, the field will likely have a discussion very similar to the one it had when it started to publish failed replication studies. And ten years later, a new generation will have been raised under different scientific norms and practices, where extreme publication bias is a thing of the past. It will be difficult to explain to them why this topic was a big deal a decade ago. But since you’re getting old and nostalgic yourself, you think that it’s useful to remind them, and you just might try to explain it to them in a 2-minute TikTok video.

History merely repeats itself. It has all been done before. Nothing under the sun is truly new.
Ecclesiastes 1:9

Thanks to Farid Anvari, Ruben Arslan, Noah van Dongen, Patrick Forscher, Peder Isager, Andrea Kis, Max Maier, Anne Scheel, Leonid Tiokhin, and Duygu Uygun for discussing this blog post with me (and in general for providing such a stimulating social and academic environment in times of a pandemic).

Want to Make College More Affordable for Your Students? | News Center | University of Nevada, Las Vegas

“UNLV Libraries and other departments across campus have recently formed the Open Educational Resources Task Force, led by Amy Tureen, head of the Library Liaison Program, to address these issues. The task force’s first webinar, titled “Why Didn’t My Students Buy the Textbook? How Some UNLV Faculty Members are Using Open Educational Resources,” recently tackled the relationship between students, textbooks, and learning. Melissa Bowles-Terry of the Faculty Center led the event, which was sponsored by OIT, the Office of Online Education, the Faculty Center, University Libraries, and the UNLV Bookstore.

Open Educational Resources (OERs) include learning materials, such as textbooks, syllabi, lectures, assignments, projects, and papers, that are available to all for higher education. OERs are easily accessible and serve as a collection of knowledge across multiple sources.

The webinar explored how to make college more accessible to students through OERs….”

Webinar: Publishing partnerships to support open access | Hindawi

“The scholarly journals market has undergone huge transformations in recent years, the biggest being the move of open access from a small radical movement to a core part of scholarly publishers’ journal strategy.

In collaboration with Maverick Publishing Specialists and GeoScienceWorld, panelists on the webinar will discuss: the benefits derived from publishing partnerships when transitioning to an open access model, the services currently offered in the market, and the different publishing partnership models currently available. …”

LCVP, The Leipzig catalogue of vascular plants, a new taxonomic reference list for all known vascular plants | Scientific Data

Abstract:  The lack of comprehensive and standardized taxonomic reference information is an impediment to robust plant research, e.g. in systematics, biogeography or macroecology. Here we provide an updated and much improved reference list of 1,315,562 scientific names for all described vascular plant species globally. The Leipzig Catalogue of Vascular Plants (LCVP; version 1.0.3) contains 351,180 accepted species names (plus 6,160 natural hybrids), within 13,460 genera, 564 families and 84 orders. The LCVP a) contains more information on the taxonomic status of global plant names than any other similar resource, and b) significantly improves the reliability of our knowledge, e.g. by resolving the taxonomic status of ~181,000 names relative to The Plant List, to date the most commonly used plant name resource. We used ~4,500 publications, existing relevant databases and available studies on molecular phylogenetics to construct a robust reference backbone. For easy access and integration into automated data processing pipelines, we provide an ‘R’ package (lcvplants) with the LCVP.
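The authors’ access tooling is an R package, but a catalogue of names is at heart a flat table, so it can also be consumed from other pipeline languages. Below is a minimal Python sketch of such use; the file name and the column names (“Input.Taxon”, “Status”, “Output.Taxon”) are assumptions about an exported table layout, not the documented lcvplants schema.

```python
# Minimal sketch of querying the LCVP from a Python pipeline.
# Assumes the catalogue has been exported to a tab-separated file;
# file name and column names are assumptions, not the lcvplants schema.
import pandas as pd

lcvp = pd.read_csv("LCVP_1.0.3.tab", sep="\t", dtype=str)

def resolve_name(name: str) -> pd.DataFrame:
    """Return the taxonomic status and accepted name for a queried scientific name."""
    hits = lcvp[lcvp["Input.Taxon"].str.startswith(name, na=False)]
    return hits[["Input.Taxon", "Status", "Output.Taxon"]]

# e.g. map a possibly outdated name to its currently accepted one
print(resolve_name("Hibiscus vitifolius"))
```

The same lookup is what the lcvplants package wraps for R users; the point of a standardized backbone like this is that synonym resolution becomes a single, scriptable join rather than a manual literature search.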

 

E.P.A.’s Final Deregulatory Rush Runs Into Open Staff Resistance

“As President Trump’s Environmental Protection Agency rushes to complete its regulatory rollbacks, agency staff, emboldened by the Biden victory, move to stand in the way….

Thomas Sinks directed the E.P.A.’s science advisory office and later managed the agency’s rules and data around research that involved people. Before his retirement in September, he decided to issue a blistering official opinion that the pending rule — which would require the agency to ignore or downgrade any medical research that does not expose its raw data — will compromise American public health….”

[Free Webinar] Increasing transparency and trust in preprints: Steps journals can take

“What steps can academic journals take to help scholars and the general public (especially the mainstream media) more easily determine preprint quality and distinguish peer-reviewed preprints from unvetted ones?

We explored this question during Scholastica’s recent panel-style webinar, “Increasing transparency and trust in preprints: Steps journals can take,” as part of Peer Review Week 2020, themed “Trust in Peer Review.” …”