In an era where technology penetrates nearly every aspect of life, science increasingly entrusts its tools of discovery to machines and algorithms. Research methods are no longer driven solely by theoretical innovation but by technical progress: eye-tracking glasses, neural networks, AI-based text analysis, brain imaging. These methods suggest that scientific work is becoming more objective, precise—closer to the truth. But is that really the case?
The assumption that technology brings us closer to truth rests on a fundamental confusion: we often equate measurability with objectivity. What can be measured appears indisputable. But what if our tools not only make knowledge possible—but also limit our perception of reality? What if technological progress is leading us, with seemingly increasing precision, deeper into an illusion?
Paradigms Shape Our Reality
The idea that science progresses linearly toward truth was challenged by Thomas S. Kuhn in his seminal work The Structure of Scientific Revolutions (1962). Kuhn showed that science advances not steadily, but through paradigm shifts—radical changes in the basic assumptions and questions science is willing to ask.
A paradigm determines what is considered a legitimate problem, which methods are accepted, and what is seen as “real.” Within any given paradigm, much is visible—but much else remains hidden. Not because it doesn’t exist, but because it falls outside the accepted framework.
In early 20th-century psychology, for instance, subjective experience was dismissed in favor of observable behavior (behaviorism). Only later did the cognitive revolution reopen the door to inner processes. Today, it is often technological capability that defines what counts as "researchable."
What the machine can measure becomes relevant. What it cannot measure is ignored.
Machines as Extended Senses—with Blind Spots
Technological research tools like eye-tracking glasses seem cutting-edge, almost magical. They record where a person is looking, for how long, and how pupils react to stimuli. This yields fascinating insights into areas like reading behavior, advertising effectiveness, or social perception.
Yet these insights rely on the assumption that seeing equals meaning. That visual attention reflects cognitive importance. That what we perceive visually is central to how we think.
But nature tells a different story. Bees see ultraviolet. Bats navigate using ultrasound. Sharks detect electric fields. These animals live in perceptual worlds vastly different from ours—not because they are less evolved, but because they access other sensory realities.
If our technology only extends what we already perceive, then it amplifies the visible but remains blind to the invisible. It sharpens our focus, but narrows our horizon.
Why We Equate Technology With Scientific Progress
Technological research is often equated with progress. In the Western modern worldview, technology represents rationality, control, and advancement. Enlightenment thinking embedded the belief that everything meaningful can, eventually, be measured and explained—if we only build the right tools.
This view persists today. In many disciplines, the more high-tech the method, the more trustworthy the results seem. A brain scan is considered "harder" evidence than an in-depth interview—even though both involve layers of interpretation.
Philosopher of technology Don Ihde describes this as instrumental realism: the tendency to assign authority to instruments themselves. Yet these devices are not neutral observers. They are designed by humans, shaped by culture, and limited by construction.
The Illusion of Objectivity in High-Tech Research
"Objectivity" suggests that scientific truth stands apart from the observer. But this idea, too, has a history. In earlier centuries, "objective" referred to what was morally or socially just. Only in the 19th century did it come to describe the measurement ideal of modern science.
Scientific methods are never free of historical and conceptual baggage. Every decision—what to measure, how to define variables, which models to use—is an interpretive act. Technology hides these decisions, but it does not erase them.
Contemporary debates about artificial intelligence and machine learning highlight a similar danger: the more complex a technical system is, the more likely we are to perceive it as infallible. But algorithms are not neutral. They replicate biases, reflect social norms, and encode invisible assumptions behind layers of code.
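To make this concrete, consider a deliberately simplified sketch (not drawn from any real system): a "model" that merely learns the majority label in its training data will faithfully reproduce whatever sampling skew that data contains, no matter how much of it there is.

```python
# Toy illustration of bias inheritance: a trivial classifier that
# predicts the most common label seen during training. Any imbalance
# in the training sample becomes a blanket rule at prediction time.
from collections import Counter

def train_majority_classifier(labels):
    """Return a predictor that always outputs the most common training label."""
    majority_label, _ = Counter(labels).most_common(1)[0]
    return lambda _features: majority_label

# Suppose 90% of historical records happen to come from group A
# (a biased sample, not a fact about the world).
training_labels = ["group_A"] * 90 + ["group_B"] * 10
predict = train_majority_classifier(training_labels)

# The model now answers "group_A" for every new case, regardless of
# the case's actual features — the skew has become the rule.
print(predict({"age": 47}))  # group_A
print(predict({"age": 23}))  # group_A
```

Real machine-learning systems are vastly more sophisticated, but the underlying dynamic is the same: what the training data over-represents, the model over-predicts, hidden behind layers of apparent technical neutrality.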
How Research Reflects Human Perception and Bias
The danger, then, lies not in the technology itself—but in our relationship to it. When we treat technology not as a tool, but as a truth-bearer, we lose critical distance.
Scientific knowledge is always filtered through human perception. But perception is selective, subjective, culturally shaped. What we accept as reality is a construction—by our nervous systems and our societies.
Cognitive scientist Donald D. Hoffman goes even further, arguing that our senses do not reveal reality as it is, but as it is useful for survival. In his book The Case Against Reality, he claims we live behind a user interface, not in direct contact with the underlying "code" of the world.
AI in Medicine: When Technology Meets Human Judgment—A Practical Example
Recent experiments with ChatGPT and similar AI systems illustrate a modern parallel. Clinics and researchers are testing these tools for tasks like patient triage, summarizing medical records, generating reports, or even assisting in diagnosis. At first glance, the promise is compelling: AI can process vast amounts of information in seconds, reduce repetitive tasks, and provide seemingly objective guidance.
Yet, the limitations are profound. These systems generate outputs based on patterns in training data—they do not truly understand pathophysiology or the unique context of individual patients. Biases in the data can lead to subtle, sometimes dangerous errors, while overreliance on AI risks sidelining the clinician's trained judgment.
This is where human expertise remains irreplaceable. The experienced physician’s eye, honed through years of observation and interaction, captures nuances of patient behavior, symptoms, and social context that AI cannot. Technology can assist, but it cannot substitute for the critical, empathetic, and interpretive work that defines medicine.
Ultimately, AI in healthcare exemplifies the broader point: advanced instruments may expand our reach, but they do not guarantee objectivity or eliminate the need for reflection. Just as in research, the greatest safeguard against overconfidence in technology is human judgment combined with critical skepticism.
Conclusion: Between Progress and Philosophical Humility
There is no question that technology expands the scope and improves the precision of scientific research. It opens new questions and brings once-inaccessible phenomena into view. But it must not blind us to the fact that knowledge is always a function of framing—of tools, worldviews, and language.
True scientific inquiry requires more than data. It demands reflection, skepticism, and an awareness of its own boundaries. Especially in an era of technological acceleration, we must ask: What are we seeing—and what are we not?
Perhaps the greatest progress lies not in new devices, but in an old virtue: doubt.
References
Ihde, D. (1991). Instrumental realism: The interface between philosophy of science and philosophy of technology. Indiana University Press.
Burn-Murdoch, J. (2013, July 24). Why you should never trust a data visualisation. The Guardian. https://www.theguardian.com/news/datablog/2013/jul/24/why-you-should-never-trust-a-data-visualisation
Frontiers in Psychology. (2021). Special issue: Eye tracking in cognitive research. https://www.frontiersin.org/research-topics/17318/eye-tracking-in-cognitive-research
Hacking, I. (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge University Press.
Hirosawa, T., Harada, Y., Mizuta, K., Sakamoto, T., Tokumasu, K., & Shimizu, T. (2024). Evaluating ChatGPT-4's accuracy in identifying final diagnoses within differential diagnoses compared with those of physicians: Experimental study for diagnostic cases. JMIR Formative Research, 8(6), e59267. https://doi.org/10.2196/59267
Hoffman, D. D. (2019). The case against reality: Why evolution hid the truth from our eyes. W. W. Norton & Company.
Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
New York Times. (2020, November 25). Can eye-tracking tell us what we’re really thinking? https://www.nytimes.com/2020/11/25/science/eye-tracking-cognition.html
Sample, I. (2016, October 27). Weapons of math destruction: Cathy O’Neil adds up the damage of algorithms. The Guardian. https://www.theguardian.com/books/2016/oct/27/cathy-oneil-weapons-of-math-destruction-algorithms-big-data
Science, Technology & Human Values. (2022). Technology, perception, and epistemic bias (Vol. 47, Issue 4). https://journals.sagepub.com/toc/sthv/47/4
Social Studies of Science. (2021). Measuring the unmeasurable? Epistemology and the technological turn. https://journals.sagepub.com/doi/full/10.1177/03063127211021126
Inspired by HBS Puar
Authored by Rebekka Brandt
