The number of publishers that offer academics, researchers, and postgraduate students the opportunity to publish articles and book chapters quickly and easily has been growing steadily in recent years. This can be ascribed to a variety of factors, e.g., increasing Internet use, the Open Access movement, academic pressure to publish, and the emergence of publishers with questionable interests that cast doubt on the reliability and the scientific rigor of the articles they publish.
All this has transformed the scholarly and scientific publishing scene and has opened the door to the appearance of journals whose editorial procedures differ from those of legitimate journals. These publishers are called predatory, because their manuscript publishing process deviates from the norm (very short publication times, non-existent or low-quality peer-review, surprisingly low rejection rates, etc.).
The object of this article is to spell out the editorial practices of these journals to make them easier to spot and thus to alert researchers who are unfamiliar with them. It therefore reviews and highlights the work of other authors who have for years been calling attention to how these journals operate, to their unique features and behaviors, and to the consequences of publishing in them.
The most relevant conclusions reached include the scant awareness of the existence of such journals (especially among less experienced researchers), the enormous harm they cause to authors’ reputations, the harm they cause researchers taking part in promotion or professional accreditation procedures, and the feelings of chagrin and helplessness that come from seeing one’s work printed in low-quality journals. Comprehensive future research is also needed on why authors decide to submit valuable articles to these journals.
This paper therefore discusses the size of this phenomenon and how to distinguish those journals from ethical journals.
“But Dr. Brunham had a plan. He had long been a proponent of a model of scientific discovery called “open science,” which eschewed patents – and profits – and sought to develop cures that could help the most people possible, rather than only those who could pay for them. The SARS Accelerated Vaccine Initiative, a network of publicly funded Canadian laboratories dedicated to testing SARS vaccine candidates, was his moonshot. Not only would it buck the for-profit system by making all of its findings completely open to all, but he was convinced this decentralized approach could bring a SARS vaccine to market in less than two years….”
Since 2011, it has been an open secret that many published results in psychology journals do not replicate. The replicability of published results is particularly low in social psychology (Open Science Collaboration, 2015).
A key reason for low replicability is that researchers are rewarded for publishing as many articles as possible without concerns about the replicability of the published findings. This incentive structure is maintained by journal editors, review panels of granting agencies, and hiring and promotion committees at universities.
To change the incentive structure, I developed the Replicability Index, a blog that critically examines the replicability, credibility, and integrity of psychological science. In 2016, I created the first replicability rankings of psychology departments (Schimmack, 2016). Based on scientific criticisms of these methods, I have improved the selection process for the articles used in departmental reviews.
1. I am using Web of Science to obtain lists of published articles from individual authors (Schimmack, 2022). This method minimizes the chance that articles that do not belong to an author are included in a replicability analysis. It also allows me to classify researchers into areas based on the frequency of publications in specialized journals. Currently, I cannot evaluate neuroscience research. So, the rankings are limited to cognitive, social, developmental, clinical, and applied psychologists.
2. I am using departments’ websites to identify researchers who belong to the psychology department. This eliminates articles by researchers in other departments.
3. I am only using tenured, active professors. This eliminates emeritus professors from the evaluation of departments. I am not including assistant professors because the published results might negatively impact their chances of getting tenure. Another reason is that they often do not have enough publications at their current university to produce meaningful results.
Like all empirical research, the present results rely on a number of assumptions and have some limitations. The main limitations are that (a) only results found in an automated search are included; (b) only results published in 120 journals are included (see list of journals); (c) published significant results (p < .05) may not be a representative sample of all significant results; and (d) point estimates are imprecise and can vary based on sampling error alone.
These limitations do not invalidate the results. Large differences in replicability estimates are likely to predict real differences in success rates of actual replication studies (Schimmack, 2022).
Stanford University
I used the department website to find core members of the psychology department. I counted 19 professors and 6 associate professors. Not all researchers conduct quantitative research and report test statistics in their result sections. I limited the analysis to 13 professors and 3 associate professors who had at least 100 significant test statistics.
Figure 1 shows the z-curve for all 13,147 test statistics in articles published by these 16 faculty members. I use the figure to explain how a z-curve analysis provides information about replicability and other useful meta-statistics.
1. All test-statistics are converted into absolute z-scores as a common metric of the strength of evidence (effect size over sampling error) against the null-hypothesis (typically H0 = no effect). A z-curve plot is a histogram of absolute z-scores in the range from 0 to 6. The 1,344 z-scores greater than 6 are not shown because z-scores of this magnitude are extremely unlikely to occur when the null-hypothesis is true (particle physics uses z > 5 for significance). Although they are not shown, they are included in the meta-statistics.
2. Visual inspection of the histogram shows a steep drop in frequencies at z = 1.96 (solid red line) that corresponds to the standard criterion for statistical significance, p = .05 (two-tailed). This shows that published results are selected for significance. The dashed red line shows significance for p < .10, which is often used for marginal significance. Thus, there are more results that are presented as significant than the .05 criterion suggests.
3. To quantify the amount of selection bias, z-curve fits a statistical model to the distribution of statistically significant results (z > 1.96). The grey curve shows the predicted values for the observed significant results and the unobserved non-significant results. The statistically significant results (including z > 6) make up 22% of the total area under the grey curve. This is called the expected discovery rate because it estimates the percentage of significant results that researchers actually obtain in their statistical analyses. In comparison, significant results (including z > 6) make up 67% of the published results. This percentage is called the observed discovery rate, which is the rate of significant results in published journal articles. The difference between a 67% ODR and a 22% EDR provides an estimate of the extent of selection for significance. The difference of ~45 percentage points is fairly large. The upper limit of the 95% confidence interval for the EDR is 30%. Thus, the discrepancy is not just random. To put this result in context, it is possible to compare it to the average for 120 psychology journals in 2010 (Schimmack, 2022). The ODR (67% vs. 72%) and the EDR (22% vs. 28%) are somewhat lower, suggesting that statistical power is lower in studies from Stanford.
4. The z-curve model also estimates the average power of the subset of studies with significant results (p < .05, two-tailed). This estimate is called the expected replication rate (ERR) because it predicts the percentage of significant results that are expected if the same analyses were repeated in exact replication studies with the same sample sizes. The ERR of 60% suggests a fairly high replication rate. The problem is that actual replication rates are lower than the ERR predictions (about 40%; Open Science Collaboration, 2015). The main reason is that it is impossible to conduct exact replication studies and that selection for significance will lead to regression to the mean when replication studies are not exact. Thus, the ERR represents an unrealistic best-case scenario. In contrast, the EDR represents the worst-case scenario in which selection for significance does not select more powerful studies and the success rate of replication studies is no different from the success rate of original studies. The EDR of 22% is lower than the actual replication success rate of 40%. To predict the success rate of actual replication studies, I use the average of the EDR and ERR, which is called the actual replication prediction (ARP). For Stanford, the ARP is (60 + 22)/2 = 41%. This is close to the currently best estimate of the success rate for actual replication studies based on the Open Science Collaboration project (~40%). Thus, Stanford results are expected to replicate at the average rate of psychological research.
5. The EDR can be used to estimate the risk that published results are false positives (i.e., a statistically significant result when H0 is true), using Soric’s (1989) formula for the maximum false discovery rate (a short worked sketch of this arithmetic follows this list). An EDR of 22% implies that no more than 18% of the significant results are false positives, but the lower limit of the 95% CI of the EDR, 18%, allows for 31% false positive results. Most readers are likely to agree that this is too high. One solution to this problem is to lower the conventional criterion for statistical significance (Benjamin et al., 2017). Figure 2 shows that alpha = .005 reduces the point estimate of the FDR to 4% with an upper limit of the 95% confidence interval of 8%. Thus, without any further information, readers could use this criterion to interpret results published in articles by researchers in the psychology department of Stanford University.
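The arithmetic behind the last two meta-statistics is simple enough to check by hand. The following is a minimal sketch, not the z-curve estimation itself (which fits a mixture model to the full distribution of z-scores); the EDR and ERR values are simply the point estimates quoted above, and the maximum false discovery rate follows Soric’s (1989) formula. Small differences from the numbers in the text reflect rounding of the EDR.

```python
# Minimal sketch of two meta-statistics used above.
# The EDR/ERR values are the point estimates quoted in the text; this snippet
# does not estimate them, it only applies the closed-form formulas.

ALPHA = 0.05  # conventional two-tailed significance criterion


def soric_max_fdr(edr, alpha=ALPHA):
    """Soric's (1989) upper bound on the false discovery rate for a given
    expected discovery rate (mean power before selection for significance)."""
    return (1.0 / edr - 1.0) * alpha / (1.0 - alpha)


def actual_replication_prediction(edr, err):
    """ARP: average of the worst-case (EDR) and best-case (ERR) scenarios."""
    return (edr + err) / 2.0


edr, err = 0.22, 0.60  # Stanford point estimates quoted above
print(f"maximum FDR ~ {soric_max_fdr(edr):.1%}")                       # ~18.7%
print(f"ARP         ~ {actual_replication_prediction(edr, err):.0%}")  # 41%
```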
Comparisons of research areas typically show lower replicability for social psychology (OSC, 2015), and Stanford has a large group of social psychologists (k = 10). However, the results for social psychologists at Stanford are comparable to the results for the entire faculty. Thus, the relatively low replicability of research from Stanford compared to other departments cannot be attributed to the large contingent of social psychologists.
Some researchers have changed their research practices in response to the replication crisis. It is therefore interesting to examine whether the replicability of newer research has improved. To examine this question, I performed a z-curve analysis for articles published in the past five years. The results show a marked improvement. The expected discovery rate more than doubled from 22% to 50%, and this increase is statistically significant. (So far, I have analyzed only 7 departments, and this is the only one with a significant increase.) The high EDR reduces the false positive risk to a point estimate of 5% and an upper limit of the 95% confidence interval of 9%. Thus, for newer research, most of the results that are statistically significant with the conventional significance criterion of .05 are likely to be true effects. However, effect sizes are still going to be inflated because selection for significance with modest power results in regression to the mean. Nevertheless, these results provide first evidence of positive change at the level of departments. It would be interesting to examine whether these changes are due to individual efforts of researchers or reflect systemic changes that have been instituted at Stanford.
There is considerable variability across individual researchers, although confidence intervals are often wide due to the smaller number of test statistics. The table below shows the meta-statistics of all 16 faculty members who provided results for the departmental z-curve. You can see the z-curve for individual faculty members by clicking on their names.
“The study, published Feb. 23, 2022, in the Proceedings of the National Academy of Sciences (PNAS), analyzed the reasons for 1.6 million downloads of National Academies of Sciences, Engineering, and Medicine (NASEM) consensus reports, considered among the highest credibility science-based literature.
The resulting analysis, which included U.S. downloads only, is the first to look at who is using such information and why. Professor Diana Hicks, Assistant Professor Omar I. Asensio, and Ph.D. students Matteo Zullo and Ameet Doshi, all of Georgia Tech’s School of Public Policy, co-authored the study.
They found that while nearly half of the reports were downloaded for academic purposes, even more were accessed by people outside strictly educational settings, such as veterans, chaplains, and writers. The word “edification” appeared 3,700 times in the data set, signaling a strong desire for lifelong learning among users….”
“In 2017, publishers Elsevier and American Chemical Society filed a copyright lawsuit against research sharing platform ResearchGate, claiming that 50 of their articles were made available without permission. A court in Germany has now prohibited ResearchGate from making those titles available but refused to award damages due to the plaintiffs’ failure to demonstrate acquisition rights.”
Abstract: Preregistration can force researchers to front-load a lot of decision-making to an early stage of a project. Choosing which preregistration platform to use must therefore be one of those early decisions, and because a preregistration cannot be moved, that choice is permanent. This article aims to help researchers who are already interested in preregistration choose a platform by clarifying differences between them. Preregistration criteria and features are explained and analyzed for sites that cater to a broad range of research fields, including GitHub, AsPredicted, Zenodo, the Open Science Framework (OSF), and an “open-ended” variant of OSF. While a private prespecification document can help mitigate self-deception, this guide considers publicly shared preregistrations that aim to improve credibility. It therefore defines three of the criteria (a timestamp, a registry, and persistence) as a bare minimum for a valid and reliable preregistration. GitHub and AsPredicted fail to meet all three. Zenodo and OSF meet the basic criteria and vary in which additional features they offer.
Since 2011, it has been an open secret that many published results in psychology journals do not replicate. The replicability of published results is particularly low in social psychology (Open Science Collaboration, 2015).
A key reason for low replicability is that researchers are rewarded for publishing as many articles as possible without concerns about the replicability of the published findings. This incentive structure is maintained by journal editors, review panels of granting agencies, and hiring and promotion committees at universities.
To change the incentive structure, I developed the Replicability Index, a blog that critically examines the replicability, credibility, and integrity of psychological science. In 2016, I created the first replicability rankings of psychology departments (Schimmack, 2016). Based on scientific criticisms of these methods, I have improved the selection process for the articles used in departmental reviews.
1. I am using Web of Science to obtain lists of published articles from individual authors (Schimmack, 2022). This method minimizes the chance that articles that do not belong to an author are included in a replicability analysis. It also allows me to classify researchers into areas based on the frequency of publications in specialized journals. Currently, I cannot evaluate neuroscience research. So, the rankings are limited to cognitive, social, developmental, clinical, and applied psychologists.
2. I am using departments’ websites to identify researchers who belong to the psychology department. This eliminates articles by researchers in other departments.
3. I am only using tenured, active professors. This eliminates emeritus professors from the evaluation of departments. I am not including assistant professors because the published results might negatively impact their chances of getting tenure. Another reason is that they often do not have enough publications at their current university to produce meaningful results.
Like all empirical research, the present results rely on a number of assumptions and have some limitations. The main limitations are that (a) only results found in an automated search are included; (b) only results published in 120 journals are included (see list of journals); (c) published significant results (p < .05) may not be a representative sample of all significant results; and (d) point estimates are imprecise and can vary based on sampling error alone.
These limitations do not invalidate the results. Large differences in replicability estimates are likely to predict real differences in success rates of actual replication studies (Schimmack, 2022).
University of British Columbia (Vancouver)
I used the department website to find core members of the psychology department. I counted 34 professors and 7 associate professors. Not all researchers conduct quantitative research and report test statistics in their result sections. I limited the analysis to 22 professors and 4 associate professors who had at least 100 significant test statistics.
Figure 1 shows the z-curve for all 13,147 test statistics in articles published by these 26 faculty members. I use the figure to explain how a z-curve analysis provides information about replicability and other useful meta-statistics.
1. All test-statistics are converted into absolute z-scores as a common metric of the strength of evidence (effect size over sampling error) against the null-hypothesis (typically H0 = no effect); a short sketch of one common conversion follows this list. A z-curve plot is a histogram of absolute z-scores in the range from 0 to 6. The 1,531 z-scores greater than 6 are not shown because z-scores of this magnitude are extremely unlikely to occur when the null-hypothesis is true (particle physics uses z > 5 for significance). Although they are not shown, they are included in the meta-statistics.
2. Visual inspection of the histogram shows a steep drop in frequencies at z = 1.96 (solid red line) that corresponds to the standard criterion for statistical significance, p = .05 (two-tailed). This shows that published results are selected for significance. The dashed red line shows significance for p < .10, which is often used for marginal significance. Thus, there are more results that are presented as significant than the .05 criterion suggests.
3. To quantify the amount of selection bias, z-curve fits a statistical model to the distribution of statistically significant results (z > 1.96). The grey curve shows the predicted values for the observed significant results and the unobserved non-significant results. The statistically significant results (including z > 6) make up 21% of the total area under the grey curve. This is called the expected discovery rate because it estimates the percentage of significant results that researchers actually obtain in their statistical analyses. In comparison, significant results (including z > 6) make up 68% of the published results. This percentage is called the observed discovery rate, which is the rate of significant results in published journal articles. The difference between a 68% ODR and a 21% EDR provides an estimate of the extent of selection for significance. The difference of ~50 percentage points is large. The upper limit of the 95% confidence interval for the EDR is 29%. Thus, the discrepancy is not just random. To put this result in context, it is possible to compare it to the average for 120 psychology journals in 2010 (Schimmack, 2022). The ODR is similar (68% vs. 72%), but the EDR is lower (21% vs. 31%). This suggests that the research produced by UBC faculty members is somewhat less replicable than research in general.
4. The z-curve model also estimates the average power of the subset of studies with significant results (p < .05, two-tailed). This estimate is called the expected replication rate (ERR) because it predicts the percentage of significant results that are expected if the same analyses were repeated in exact replication studies with the same sample sizes. The ERR of 67% suggests a fairly high replication rate. The problem is that actual replication rates are lower (about 40%; Open Science Collaboration, 2015). The main reason is that it is impossible to conduct exact replication studies and that selection for significance will lead to regression to the mean when replication studies are not exact. Thus, the ERR represents an unrealistic best-case scenario. In contrast, the EDR represents the worst-case scenario in which selection for significance does not select more powerful studies and the success rate of replication studies is no different from the success rate of original studies. The EDR of 21% is lower than the actual replication success rate of 40%. To predict the success rate of actual replication studies, I use the average of the EDR and ERR, which is called the actual replication prediction (ARP). For UBC research, the ARP is 44%. This is close to the currently best estimate of the success rate for actual replication studies based on the Open Science Collaboration project (~40%). Thus, UBC results are expected to replicate at the average rate of psychological research.
5. The EDR can be used to estimate the risk that published results are false positives (i.e., a statistically significant result when H0 is true), using Soric’s (1989) formula for the maximum false discovery rate. An EDR of 21% implies that no more than 20% of the significant results are false positives, but the lower limit of the 95% CI of the EDR, 13%, allows for 34% false positive results. Most readers are likely to agree that this is too high. One solution to this problem is to lower the conventional criterion for statistical significance (Benjamin et al., 2017). Figure 2 shows that alpha = .005 reduces the point estimate of the FDR to 3% with an upper limit of the 95% confidence interval of 7%. Thus, without any further information, readers could use this criterion to interpret results published in articles by researchers in the psychology department of UBC.
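Step 1 above, the conversion of heterogeneous test statistics into absolute z-scores, can be sketched as follows. This is a minimal illustration of one common convention (test statistic → two-sided p-value → |z|), not the exact pipeline behind these rankings, and the example values are made up.

```python
# Minimal sketch: convert common test statistics into absolute z-scores via
# their two-sided p-values (one common convention; illustrative only).
from scipy import stats


def z_from_p(p_two_sided):
    """Absolute z-score whose two-sided p-value equals p_two_sided."""
    return stats.norm.ppf(1 - p_two_sided / 2)


def z_from_t(t_value, df):
    p = 2 * stats.t.sf(abs(t_value), df)
    return z_from_p(p)


def z_from_f(f_value, df1, df2):
    p = stats.f.sf(f_value, df1, df2)
    return z_from_p(p)


# Hypothetical examples (not taken from any article analysed here):
print(z_from_t(2.5, df=40))   # t(40) = 2.50   -> |z| ~ 2.4
print(z_from_f(6.0, 1, 80))   # F(1, 80) = 6.0 -> |z| ~ 2.4
print(z_from_p(0.05))         # p = .05        -> |z| = 1.96
```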
The next analyses examine area as a potential moderator. Actual replication studies suggest that social psychology has a lower replication rate than cognitive psychology, whereas the replicability of other areas is currently unknown (OSC, 2015). UBC has a large group of social psychologists with enough data to conduct a z-curve analysis (k = 9). Figure 3 shows the z-curve for the pooled data. The results show no notable difference from the z-curve for the department as a whole.
The only other area with at least five members that provided data to the overall z-curve was developmental psychology. The results are similar, although the EDR is a bit higher.
The last analysis examined whether research practices changed in response to the credibility crisis and evidence of low replication rates (OSC, 2015). For this purpose, I limited the analysis to articles published in the past 5 years. The EDR increased, but only slightly (29% vs. 21%) and not significantly. This suggests that research practices have not changed notably.
There is considerable variability across individual researchers, although confidence intervals are often wide due to the smaller number of test statistics. The table below shows the meta-statistics of all 26 faculty members who provided results for the departmental z-curve. You can see the z-curve for individual faculty members by clicking on their names.
EDR = expected discovery rate (mean power before selection for significance)
ERR = expected replication rate (mean power after selection for significance)
FDR = false positive risk (maximum false positive rate; Soric, 1989)
ARP = actual replication prediction (mean of EDR and ERR)
In 2021, PubMed Central (PMC) continued to grow and evolve in its role as a repository for research supported by the National Institutes of Health (NIH) and other partner funding agencies. Around 1.3 million articles have been made publicly accessible in PMC under the NIH Public Access Policy, and the volume of NIH-supported articles added to PMC with associated data content continues to increase annually (59% of articles in 2020 included supplementary material and/or a data availability statement vs. 27% in 2009).
We are exploring the challenges we would like to work on, and how. In the second week of our Strategy Retreat, IOI staff, Community Oversight Council and Steering Committee members discussed work values and expectations and defined what we need from each other. We also started identifying and understanding the top challenges that we should tackle as a group. This paves the way for us to start investigating solutions. …
We are exploring relevant models for better understanding infrastructure and continuing our conversations with community members to plan out the next stages of our work developing the Catalog of Open Infrastructure Services (COIs).
We are investigating methods and tools used to finance sustainable water infrastructure. In an effort to learn from other sectors, we are investigating the efforts of the OECD and IRC to reach the sustainable development goal of ensuring availability and sustainable management of water and sanitation for all….”
PLOS ONE has an open Call for Papers on Physical Oceanography, for which selected publications will be showcased in a special collection. This call for papers aims to highlight the breadth of physical oceanography research across a wide range of regions and disciplines. We welcome submissions, including those that feature multidisciplinary research, and encourage studies that utilize Open Science resources, such as data and code repositories.
The upcoming collection will be curated by three accomplished researchers in the field, all of whom additionally serve as Editorial Board Members for PLOS: Dr. Maite deCastro (University of Vigo, Spain); Dr. Isabel Iglesias Fernandez (CIIMAR, University of Porto, Portugal); and Dr. Vanesa Magar (CICESE, Mexico).
Here, we chat with Profs. deCastro and Iglesias to learn more about their research, their thoughts on the future of Physical Oceanography and how advances in this field can provide a better understanding of future environmental change.
Tell us about your research.
MdC: My research is clearly aligned with Climate and Renewable Energy, especially on the impact of climate change on marine ecosystems and on wind and wave renewable energy resources. It is also aligned with Food, Bioeconomy, Natural Resources, and Environment, especially in the relationship between climate and species of commercial value.
II: I’d like to say that my research interests are multidisciplinary but always with something in common, and this nexus is physical oceanography. My main research topic is estuarine hydrodynamics. I work with numerical models, which are versatile tools that help to unravel the hydrodynamic patterns in these complex areas. Once these models are implemented for a specific region, they can be used for multiple purposes, such as representing sediment, contaminant, and marine litter transport patterns; forecasting the effects of extreme events, anthropic activities, or climate change conditions; or even calculating the potential for hydrokinetic energy production. This is work I have carried out in collaboration with several colleagues, within the scope of national and international research projects.
At the same time, I am also interested in coastal and oceanic dynamics. I have conducted research relating long-term variability of sea level anomalies in the North Atlantic to teleconnection patterns; I supervised a study on wave forecasting; and I am collaborating on the development of a tool to forecast the dispersion patterns of sediment plumes generated by potential deep-sea mining activities in the Atlantic region.
What new finding or growing research topic in the field of physical oceanography are you currently excited about?
MdC: I am very enthusiastic about the research I have carried out in recent years, as it has allowed us to delve into the effect of climate change on historical trends in coastal upwelling, sea surface temperature, and the mixed layer, among others, and to analyze its biological impact on species such as sea bream, tuna, and algae. We have also analyzed the future projections of these variables under different climate change scenarios and their possible impacts on bivalves such as mussels, different clam species, and cockles in the Galician estuaries. This approach will allow us both to anticipate how these ecosystems will evolve and to determine what measures will be necessary to mitigate the effect of climate change in order to make these ecosystems more resilient.
II: Ecoengineering. In recent decades, the focus of coastal and estuarine engineering research has shifted from technical approaches towards the integrated combination of technical, ecological, and nature-orientated solutions to reduce environmental impacts. Practical ecoengineering solutions for estuarine regions should be based on numerical modelling tools, which can provide the necessary knowledge of the relevant hydrodynamic processes and an understanding of natural processes, hydrodynamic–ecological interactions, and the impacts of structures on the environment.
At the same time, deep-sea mining is a hot topic. In recent years, deep-sea mining has become an attractive and economically viable solution to provide metals and minerals for worldwide industry. Although promising, a large proportion of these resources is located in the vicinity of sensitive ecosystems that are still poorly studied and understood. The sediment-laden plumes generated by extraction, and the trace elements released into the water column along with them, can change the biogeochemical equilibrium of the surrounding area. This can alter deep-sea life-support services, damaging local ecosystems with potential impacts that can persist for decades. Reliable ocean numerical models reproducing the dynamics of deep-sea areas can help to map the potential scale of deep-sea mining effects and are one of the key technological advances needed to implement risk assessment and better anticipate possible impacts.
For each of you, your research features an exploration of the effects of wind and the role it plays in ocean and climatological processes. Can you discuss the close link between atmospheric and ocean sciences?
MdC: A part of my most recent research is closely related to the development of renewable energies as an alternative to burning fossil fuels in the fight against climate change. Specifically, my research analyzes future offshore wind and wave energy resources under different climate change scenarios. This research field is an example of the close link between atmospheric and ocean sciences.
II: The atmosphere and the ocean are two different parts of the same system, together with the lithosphere, the biosphere, and the cryosphere. The atmosphere and the ocean are in contact and constantly exchange mass, momentum, and energy. The wind is one clear example of this link between the atmosphere and the ocean, generating waves and currents and affecting the sea surface temperature. But there are many others: evaporation, precipitation, heating, cooling, … And these links are the basis of short-term (meteorological) and long-term (climatological) processes such as winter rainfall, hurricanes, or ENSO events, among others.
Many physical oceanographers spend a lot of their time working at a computer – do you ever get to do field work or research cruises?
MdC: At the beginning of my research career, I carried out several oceanographic campaigns in the Galician estuaries to take field measurements that would allow us to characterize their hydrodynamics. These campaigns were carried out jointly with chemists and biologists who analyzed other aspects of the estuaries.
II: Yes! I was doing fieldwork in the middle of January, and I expect to have more campaigns in March and June of this year. Most of the time I am in front of a computer, but numerical models need real data to be calibrated and validated. For that we must go out into the field and measure the physical variables that we need. And although it is sometimes hard to start the campaigns at six in the morning, the truth is that it is a breath of fresh air.
There is an undeniable link between anthropogenic pressures on the global environment and changes that we are seeing in marine systems. Can you discuss how you have observed this in your own research and the implications your findings have for the future?
MdC: The enormous increase in global energy consumption, together with the need to avoid the burning of fossil fuels to mitigate climate change, has led the scientific community to make the development of alternative energy sources, such as renewable energies, a priority objective. This has motivated a part of my most recent research where the offshore wind and wave energy resource is analyzed both now and in the near future under different climate change scenarios that take into account different concentrations of greenhouse gas emissions, socioeconomic measures, and land uses. This renewable energy resource analysis is complemented, in some locations, with an economic viability analysis.
II: It is clear that something is happening. The effects of anthropogenic pressures on the global environment are now visible. In the Iberian Peninsula we are facing one of the most severe droughts in recent decades. But other recent signals are the floods in western Germany in July 2021, the record-breaking high temperature in Moscow during July 2021, the snowfall in Madrid in January 2021, heavy cyclones and dust storms, or a heavier-than-normal wildfire season. So it is not just something that scientists are saying. It is something that the non-scientific population can see now. And, as the United Nations Secretary General Antonio Guterres has warned, the world is reaching a “point of no return”.
Complex estuarine systems can be considered among the most sensitive areas to environmental stressors due to the strong coupling between physics, sediments, chemistry, and biology. In this sense, the effects of climate change in estuaries can be diverse, arising from changes in river flow, in the frequency of extreme events, and in water temperature and water level, affecting circulation, salinity distribution, suspended sediments, dissolved oxygen, and biogeochemistry. I used numerical models to forecast the effect of sea level rise inside estuarine regions. It was demonstrated that sea level rise can cause more severe floods in some estuaries. What should also be taken into account is that sea level rise inside the estuaries will change circulation patterns and the configuration of water masses. This will undoubtedly affect ecological and socio-economic aspects, given the great value of estuarine ecosystem services.
Historically, women have had to push for equality, respect and recognition in the field of physics. Do you think that the field is changing to become more inclusive, and what do you think research advisors, university leaders and funding agencies can do to better support women in physical oceanography?
MdC: Personally, I have always felt treated exactly the same as any other colleague throughout my scientific career, both in my closest circle and at an institutional level. I think I’ve had the same opportunities and help. I think that in this sense the field of physics, or at least this is my personal perception, is a privileged field. Despite this, I consider that there are still few women in this field compared to men and any activity aimed at making women feel more attracted to the field of physics is necessary.
II: I must say that I have never needed to fight harder than a man to achieve the same respect and recognition for my work, whether in my research group, my research institute, my country, or internationally. I have had the same opportunities as anyone, man or woman. And curiously, women are the majority in my research group, which works on topics traditionally associated with men, such as physics, engineering, mathematics, algorithms, numerical modelling, computational sciences, etc. I know that I am lucky, because other women before me pushed hard for equality and recognition, and there are other women in different areas who still need to push to gain respect and visibility.
The term Open Science has been used to highlight the fact that transparency in scientific research goes beyond just Open Access publications. In the field of physical oceanography how do you think that making code and data publicly available can benefit researchers and policy makers?
MdC: In general terms, for the sake of transparency and the progress of research, I consider it important to have all the necessary material (code, data) available so that any researcher can reproduce the results of another. We will move faster and save resources if the data generated by other entities are public and if we all have access to each other’s progress instead of repeating what other researchers have already done. All this, of course, within a framework of respect for each person’s work.
II: In my opinion, science needs to be open. We pay for science with public funds, and it is not ethical to keep our research to ourselves over the long term. Of course, there must be some nuances regarding data for articles or patents. But I think that, in the end, the research generated should be publicly available. And it is not only Open Access publications, which guarantee the transparency and replicability of the research methodology, but also the numerical codes, the tools, and the data generated within publicly funded research projects. Only in this way will we manage to advance faster in science, sharing our knowledge with other researchers and supporting policy makers with proper tools to ensure the safety of populations and the sustainability of ecosystems and services.
About the Guest Editors
Isabel Iglesias holds a PhD in Climatic Sciences: Meteorology, Physical Oceanography and Climatic Change from the University of Vigo (2010). Since 2011, Isabel has been working as an Assistant Researcher at the Interdisciplinary Centre for Marine and Environmental Research (CIIMAR) of the University of Porto, Portugal. Her main research topics are physical oceanography, atmosphere-ocean interaction, transport (sediments and marine litter), extreme events, and climate change. In particular, she has experience in analysing the hydrodynamic behaviour of water masses and in applying numerical models to oceanic (including surface and deep-sea), coastal, and estuarine regions. Other areas of expertise include the performance and analysis of physical data obtained in sampling campaigns and the evaluation and analysis of remote sensing data for numerical modelling calibration/validation.
Maite deCastro is a Professor of Applied Physics at the University of Vigo. She obtained her PhD in Physics from the University of Santiago de Compostela (1998). The main focus of her research is (a) the study of hydrodynamics, waves, and transport phenomena in shallow waters by means of in situ field data and numerical simulations; (b) the analysis of the variability (inter-annual and inter-decadal) of coastal and oceanic sea surface temperature (SST) using numerical and satellite data; (c) the analysis of the water masses around the Iberian Peninsula using salinity and temperature data obtained from the SODA base or ARGO buoys; (d) the effects of meteorological forcing on the ocean using satellite data or reanalysis, such as wind data, Ekman transport, sea level pressure (SLP), SST, and teleconnection indices (NAO, EA, EA-WR, SCA, POL…); (e) the analysis of the plume development of rivers using radiance data from the Oceancolor MODIS base; (f) the influence of climate change on oceanographic variables, both present and future; and (g) the analysis of present and future wind, solar, and wave resources for renewable energy production.
Vanesa Magar holds a BSc in Physics from UNAM, and a master’s in advanced studies in Mathematics and a PhD in Applied Mathematics from the University of Cambridge, UK. She has been working in coastal and physical oceanography since 2002, and in renewable energy research and development since 2008. She joined the Physical Oceanography Department of CICESE as a senior researcher in 2014, where she co-leads the GEMlab (Geophysical and Environmental Modelling Lab) with Dr Markus Gross. Her research interests include wind energy, marine renewable energy, coastal hydrodynamics, and sustainable development issues in relation to renewable energy project development. She is a member of the Energy Group of the Institute of Physics (IOP), UK, and a fellow and chartered mathematician of the IMA. She served on the board of directors of the Mexican Geophysical Union (as Secretary General, Vice President, and President) from 2016 to 2021. Currently, she is part of the Executive Committee of the National Strategic Programme (PRONACE) in Energy and Climate Change of CONACYT (2018- ).
Disclaimer: Views expressed by contributors are solely those of individual contributors, and not necessarily those of PLOS.
Abstract: Paying to publish is an ethical issue. During 2010–14, Indian researchers used 488 open access (OA) journals levying article processing charges (APCs), ranging from US$7.5 to US$5,000, to publish about 15,400 papers. Use of OA journals levying APCs has increased from 242 journals and 2,557 papers in 2010 to 328 journals and 3,634 papers in 2014. We estimate that India is potentially spending about US$2.4 million annually on APCs paid to OA journals, and the amount would be much more if we add APCs paid to make papers published in hybrid journals open access. It would be prudent for Indian authors to make their work freely available through interoperable repositories, a trend that is growing in Latin America and China, especially when funding is scarce. Scientists are ready to pay APCs as long as institutions pay for them, and funding agencies are not ready to insist that grants provided for research should not be used for paying APCs.
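As a rough consistency check on the figures in this abstract (assuming the US$2.4 million annual estimate refers to roughly the 3,634 papers published in APC-levying OA journals in 2014), the implied average APC is:

```latex
\[
\text{average APC} \approx \frac{\$2{,}400{,}000}{3{,}634\ \text{papers}} \approx \$660\ \text{per paper}
\]
```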
Abstract: To analyse the outcomes of the funding they provide, it is essential for funding agencies to be able to trace the publications resulting from their funding. We study the open availability of funding data in Crossref, focusing on funding data for publications that report research related to Covid-19. We also present a comparison with the funding data available in two proprietary bibliometric databases: Scopus and Web of Science. Our analysis reveals a limited coverage of funding data in Crossref. It also shows problems related to the quality of funding data, especially in Scopus. We offer recommendations for improving the open availability of funding data in Crossref.
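For readers who want to inspect funding metadata directly, Crossref exposes whatever funder information publishers have deposited through its public REST API. The sketch below is a minimal illustration, not the method used in the paper; the DOI is a placeholder, and the `funder` field is present only when a publisher deposited it, which is exactly the coverage gap the abstract describes.

```python
# Minimal sketch: look up funder metadata for one DOI via the Crossref REST API.
# Funding fields appear only when the publisher deposited them, which is the
# coverage issue the abstract describes. The DOI below is a placeholder.
import requests


def crossref_funders(doi):
    url = f"https://api.crossref.org/works/{doi}"
    record = requests.get(url, timeout=30).json()["message"]
    # Each funder entry may include a name, a Funder Registry DOI, and award numbers.
    return [
        {
            "name": funder.get("name"),
            "funder_doi": funder.get("DOI"),
            "awards": funder.get("award", []),
        }
        for funder in record.get("funder", [])
    ]


print(crossref_funders("10.1000/example-doi"))  # placeholder DOI
```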
When Allemai Dagnatchew (SFS ’22) began her final semester of college, the last thing she wanted to worry about was digital privacy. But within the first few days of spring 2022 classes, she found out that one of her professors mandated use of Perusall, a program that allows instructors to see how many students are doing their class readings.
Dagnatchew’s distrust of Perusall mirrors a larger sentiment college students have felt toward proctoring software and its use during the pandemic. The use of online test proctoring and similar programs skyrocketed over the last two years, with students reporting discomfort with proctoring programs since the beginning of virtual learning.
“[Professors] are excited about these programs, and they don’t think it’s weird at all,” she said. “And that’s what I feel like is odd—that they think this is normal.”
Perusall, for example, gives professors access to the amount of time a student spends on a reading and how many of the assigned pages they’ve viewed. Despite students feeling that this access compromises their privacy, and despite the return of most students to in-person learning, schools are still using proctoring and similar invasive technologies.
The use of virtual learning tools has been subject to the fluctuating pandemic and schools’ virtual status, with the Omicron variant causing many colleges to move online for final exams and the beginning of the spring semester. As COVID-19 continues, students have been increasingly subject to excessive monitoring technologies—whether proctoring exams or scanning files—such as Proctorio, ProctorU, and Perusall.
In the next year, researchers should expect to face a sensitive set of questions whenever they send their papers to journals, and when they review or edit manuscripts. More than 50 publishers representing over 15,000 journals globally are preparing to ask scientists about their race or ethnicity — as well as their gender — in an initiative that’s part of a growing effort to analyse researcher diversity around the world. Publishers say that this information, gathered and stored securely, will help to analyse who is represented in journals, and to identify whether there are biases in editing or review that sway which findings get published. Pilot testing suggests that many scientists support the idea, although not all.