Something’s gone wrong in science.
Across disciplines, the signs are everywhere: retracted medical studies, dangerous errors in clinical research, and a growing number of flawed findings being exposed. But what if the problem runs deeper than misconduct or publication pressure? What if the crisis lies in the way research itself is done?
Modern researchers often rely blindly on data analysis software, following statistical protocols they barely understand. The result? Methodological flaws in published studies – and worse, conclusions that affect real lives.
The Methodological Blind Spot in Modern Academia
In the public imagination, science often begins with curiosity: a spark of wonder, a bold question, a step into the unknown. This idealistic view, however, contrasts sharply with the reality found in many university programs around the world. As academic systems have become more rigid, the craft of scientific inquiry—especially the mastery of research methods—has become sidelined or treated as an afterthought.
A closer look at higher education in the United States, for example, the world's largest academic research ecosystem, reveals a telling imbalance. While social science disciplines such as psychology, political science, and sociology typically build formal research methodology courses into their undergraduate curricula (often beginning in the second year), many other disciplines fall short, especially those that are fundamentally based on empirical and statistical methods.
It’s a striking paradox: the very sciences that developed quantitative methodology and rely on it most heavily—biology, chemistry, physics, and engineering—often do not offer structured methods training within their standard coursework. Instead, these fields lean on lab courses, mentored research experiences, or optional summer fellowships (such as the REU programs funded by the National Science Foundation) to expose students to research practice. While such experiences give students hands-on exposure to experimental work, they rarely provide the theoretical and technical grounding required to critically assess data or conduct meaningful statistical analysis.
Students in these programs are frequently left to teach themselves the statistical software and quantitative techniques they will later be expected to use in graduate-level research or high-stakes professional settings. Tools like SPSS, R, or Python libraries for data science often remain opaque to them, as they follow online tutorials without ever fully understanding what is being computed, or how subtle missteps might distort results.
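A small, hypothetical sketch in Python illustrates how subtle such missteps can be (the data here are invented for the example): the standard library offers both a population and a sample standard deviation, and a tutorial-follower who picks the wrong one silently understates the uncertainty in a small sample.

```python
import statistics

# Hypothetical reaction-time measurements (seconds) from a small pilot sample
sample = [0.42, 0.51, 0.47, 0.39, 0.55]

# pstdev() is the *population* standard deviation (divides by n);
# stdev() is the *sample* standard deviation (divides by n - 1).
# For small samples, the population formula understates the spread.
pop_sd = statistics.pstdev(sample)
sample_sd = statistics.stdev(sample)

print(f"population SD: {pop_sd:.4f}")  # ~0.0581, the smaller value
print(f"sample SD:     {sample_sd:.4f}")  # ~0.0650
```

Nothing crashes and no warning is raised in either case, which is exactly why a researcher who does not know what the function computes has no way to notice the mistake.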
This lack of foundational training carries serious consequences. When researchers operate with only superficial knowledge of their methods, the entire scientific process is at risk—from hypothesis formulation and experimental design to data interpretation and public policy influence. And this isn’t just theoretical: there are well-documented cases of flawed research findings causing tangible harm.
One widely cited example is a meta-analysis published in the journal Psychological Medicine, which found that methodological quality problems in psychotherapy trials lead to overestimates of treatment effectiveness—potentially resulting in the widespread use of less effective or even harmful therapies (Cuijpers et al., 2010).
Another review, published in Perspectives on Psychological Science, highlighted how psychology’s replication crisis partly stems from poor methodological rigor, with researchers applying statistical tools without a clear grasp of their underlying assumptions (Lilienfeld, 2017).
A broader investigation by the National Academies (2018) emphasized that while undergraduate students in STEM fields increasingly engage in lab-based or mentored research experiences, the absence of structured, reflective, and theory-based methods training remains a critical gap in academia—and not only in the US.
Moreover, even when students do receive some exposure to statistical methods, it is often compartmentalized and not integrated into their subject-specific research. This separation of theory from practice can result in blind adherence to formulaic procedures—executing an analysis without being able to evaluate the appropriateness or limitations of the chosen method. A simple data entry mistake in a statistical program can produce completely misleading results. Without methodological literacy, such errors do not merely go undetected—they get published.
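To make the data-entry point concrete, here is a minimal, hypothetical sketch in Python (the scores and the "999" placeholder convention are invented for illustration): a single miscoded missing value that survives into the analysis is enough to wreck a summary statistic.

```python
import statistics

# Hypothetical test scores on a 0-100 scale; one missing value was entered
# with the common placeholder code 999 and never screened out.
raw_scores = [62, 71, 68, 74, 999, 65, 70]

mean_with_error = statistics.mean(raw_scores)

# After removing values outside the valid range, the picture changes entirely.
clean_scores = [s for s in raw_scores if 0 <= s <= 100]
mean_clean = statistics.mean(clean_scores)

print(f"with entry error: {mean_with_error:.1f}")  # ~201.3, an impossible score
print(f"after cleaning:   {mean_clean:.1f}")       # ~68.3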
The irony becomes even sharper when we consider that it is primarily the human sciences—psychology, sociology, and political science—that prioritize methods education. These fields often deal with ambiguous, qualitative data and have historically been accused of lacking rigor. Yet they have adopted and formalized methodological training more thoroughly than the 'hard sciences,' where the risks of methodological ignorance are arguably even more consequential.
This institutional mismatch suggests a deeply embedded assumption: that students in empirical sciences will simply “pick up” methods along the way. But this assumption is dangerous. Without structured and critical methods education, science risks becoming a procedural exercise—one in which researchers run data through opaque pipelines, generate publishable results, and move on, all without fully grasping the tools they wield.
The consequences are not limited to academia. Faulty methodologies produce faulty science. And faulty science informs public policy, medical treatment, technological innovation, and educational models. In a world where scientific authority shapes how societies respond to everything from pandemics to climate change, this gap in methodological training is not a minor academic issue—it is a public one.
References
American Psychological Association. (2016). APA Guidelines for the Undergraduate Psychology Major: Version 2.0. https://www.apa.org/ed/precollege/about/undergraduate-major-guidelines.pdf
Choi, C. (2018, September 20). Cornell review finds academic misconduct by food researcher. Associated Press (AP News). https://apnews.com/article/2faa3568bef443409ca65cebfd01d968
Cuijpers, P., van Straten, A., Bohlmeijer, E., Hollon, S. D., & Andersson, G. (2010). The effects of psychotherapy for adult depression are overestimated: A meta-analysis of study quality and effect size. Psychological Medicine, 40(2), 211–223. https://doi.org/10.1017/S0033291709006114
Lilienfeld, S. O. (2017). Psychology’s replication crisis and the grant culture: Righting the ship. Perspectives on Psychological Science, 12(4), 660–664. https://doi.org/10.1177/1745691616687745
National Academies of Sciences, Engineering, and Medicine. (2018). Undergraduate Research Experiences for STEM Students: Successes, Challenges, and Opportunities. Washington, DC: The National Academies Press. https://doi.org/10.17226/24622
Nuijten, M. B., Hartgerink, C. H., van Assen, M. A., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 48(4), 1205–1226. https://doi.org/10.3758/s13428-015-0664-2
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
Servick, K. (2018, September 21). Cornell nutrition scientist resigns after retractions and research misconduct finding. Science. https://www.science.org/content/article/cornell-nutrition-scientist-resigns-after-retractions-and-research-misconduct-finding
Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384. https://doi.org/10.1098/rsos.160384
Time Staff. (2018, September). A prominent researcher on eating habits has had more studies retracted. Time Magazine. https://time.com/5402927/brian-wansink-cornell-resigned
Vidgen, B., & Yasseri, T. (2016). P-values: Misunderstood and misused. Frontiers in Physics, 4, Article 6. https://doi.org/10.3389/fphy.2016.00006
Authored by Rebekka Brandt
