When Progress Stalls: Technology Advances, Humanity Stands Still

In the last few decades, technological advancement has been nothing short of spectacular. Smartphones have turned into portable supercomputers. Artificial intelligence writes texts, paints pictures, and mimics human conversation. We can stream high-definition video across the globe in seconds. And yet, the world’s most urgent problems remain unsolved.

Hunger still kills millions each year. Wars rage on in new and old forms. Climate change accelerates. Inequality widens. In other words: while technology races forward, humanity itself seems to be standing still.

The paradox is striking. We can build algorithms that generate realistic images from a sentence—but we can’t guarantee clean drinking water to every human being. We can track a person’s gaze with eye-tracking glasses—but we cannot eradicate diseases that have been curable for decades. If this is progress, it is an oddly lopsided kind.


The Narrowing of Research

One reason for this imbalance may lie in the structure of modern research. Science has become deeply specialized. Every field splits into subfields, and those subfields split into niches. Experts spend entire careers studying phenomena so narrow that even other scientists in related areas may not understand them.

Specialization has its benefits—it produces precision and depth. But it also has a cost: the loss of the bigger picture. Large-scale, integrative thinking often gets crowded out. Global, human-centered challenges are sidelined in favor of problems that are technically solvable, methodologically fashionable, or easy to measure.


Chasing the Trend: The AI Obsession

The current obsession with artificial intelligence is a perfect example. In virtually every academic field—from computer science to history—researchers are asking the same question: How can AI be applied here?

Medical researchers investigate AI in diagnostics. Economists model AI’s market impact. Linguists study AI-assisted language learning. Even historians explore whether AI can help in teaching. This is not inherently bad—but it is telling. The conversation is not: What is the most pressing problem in our field, and how can we solve it? It is: Given that AI is popular, how can we use it?

AI does have impressive capabilities, but much of what it offers right now is incremental convenience rather than revolutionary change. In many cases, it is a “better Google”—faster search, cleaner summarization, easier drafting of text. Useful, yes. Transformative for humanity’s core problems? Not yet.


The Technology-First Mindset

The underlying problem may be a technology-first mindset in research. Tools have always shaped science, but today, the tool often becomes the goal. Entire projects are built not around a fundamental question, but around a new technology’s capabilities.

This has several consequences:

  1. Short-termism – chasing what’s immediately possible rather than what’s ultimately necessary.
  2. Problem selection bias – choosing questions that fit the available tools rather than tools that fit the most important questions.
  3. Neglect of non-technical domains – human, social, and ethical issues are pushed aside if they don’t have a straightforward technological “solution.”

As a result, we get endless refinement of methods—more resolution, more computing power, more data—but relatively few breakthroughs that tangibly improve lives on a global scale.

Photo by Alexander Sinn on Unsplash

The Disconnect Between Capability and Application

It’s not that humanity lacks the means to address its biggest problems. Food production is sufficient to feed everyone on the planet, but distribution, politics, and economics get in the way. Renewable energy technology exists, but fossil fuels remain dominant due to vested interests and infrastructure inertia. Preventable diseases still claim millions of lives despite available treatments.

The bottleneck is not technological innovation—it is political will, social organization, and human cooperation. In these areas, progress has been slow, patchy, and often regressive.


The Cost of Neglecting Human Problems

When science and research neglect the problems that most directly affect human lives, the consequences are stark:

  • Global crises become chronic – issues like poverty or displacement are treated as permanent features of the human condition rather than solvable challenges.
  • Public trust in science erodes – when everyday people see research producing gadgets and apps but not solutions to existential threats, cynicism grows.
  • Inequality deepens – technological advances often benefit the wealthy and connected first, widening existing gaps.

This is not to say that every scientist or technologist is ignoring human needs. But as a system, research is skewed toward what is measurable, fundable, and fashionable—not necessarily what is most needed.


Rethinking the Purpose of Research

If research is to fulfill its highest purpose, it must realign its priorities. This means:

  • Asking human-first questions: What would most improve quality of life for the most people?
  • Creating incentives for interdisciplinary, problem-driven research rather than purely tool-driven research.
  • Valuing social and political solutions as highly as technical ones.
  • Measuring success not only in citations and patents, but in human outcomes.

Questions to Consider

To provoke reflection, here are a few questions worth asking in any research field:

  • If technology keeps advancing at this pace, but human well-being does not, is that truly progress?
  • How can research institutions encourage work on problems that are messy, complex, and not easily solved by new tools?
  • What would a “moonshot” for global hunger, climate adaptation, or disease eradication look like—and why is it not being pursued with the same urgency as the latest AI model?
  • Are we innovating for humanity, or innovating for the sake of innovation?

Conclusion: Linking Back to the Illusion of Technological Truth

In a previous discussion on the illusion of objectivity through technology, we explored how new tools can deepen our understanding while also narrowing our vision. This essay looks at the broader consequence: a research culture that is chasing technological novelty while neglecting the human problems that matter most.

It is not enough for technology to get better. It must get better at solving the right problems. Otherwise, we risk living in a world where we can map every neuron in the brain, model every molecule in the atmosphere—and still fail to feed the hungry, heal the sick, or secure peace.


Inspired by HBS Puar 

Authored by Rebekka Brandt 
