IEEE Spectrum Top Computing Stories of 2025: AI Agents, Reversible Computing & Beyond
Table of Contents
- The Computing Landscape of 2025: AI’s Double-Edged Sword
- Programming Languages 2025: Python Reigns as AI Reshapes Coding
- Why IT Managers Keep Failing Software Projects
- Biocomputers for Sale: Human Brain Cells on a Chip
- LLM Capabilities Are Doubling Every Seven Months
- Reversible Computing Escapes the Laboratory
- Apache Airflow’s Resurrection: From Dead to 40 Million Downloads
- Electronic Health Records: A $100 Billion Cautionary Tale
- Lunar Data Centers: Innovation or Lunacy?
- What These Stories Mean for the Future of Computing
📌 Key Takeaways
- AI agent capabilities double every 7 months: But success rates on hard tasks remain around 50%, raising questions about reliability at scale.
- Reversible computing goes commercial: Vaire Computing’s prototype promises up to 4,000x energy efficiency gains — critical as AI’s power demands soar.
- Biocomputers are now purchasable: Cortical Labs sells a $35,000 chip powered by 800,000 living human neurons for drug discovery research.
- IT management failures persist: Despite 20+ years of warnings and trillions lost, the same managerial mistakes keep killing software projects — and AI won’t fix them.
- EHR adoption backfired spectacularly: $100 billion spent, 520 million records breached, and doctors now spend 4.5 hours per day on screens instead of patients.
The Computing Landscape of 2025: AI’s Double-Edged Sword
Every year, IEEE Spectrum curates the most consequential stories shaping the technology world, and 2025’s computing selection reveals an industry grappling with profound contradictions. Artificial intelligence continued to dominate headlines, but the narrative has matured well beyond uncritical hype. Researchers and engineers are now wrestling with a fundamental tension: AI systems are improving at a breathtaking exponential rate, yet their reliability on complex tasks remains stubbornly mediocre.
The eight stories that IEEE Spectrum selected for its top computing coverage in 2025 paint a vivid picture of an industry in transition. From breakthroughs in biological computation to reversible chips leaving the lab, from the perennial dominance of Python to the audacious plan of putting data centers on the moon, each story illuminates a different facet of where computing stands — and where it’s headed. What makes this year’s selection particularly compelling is the thread of unintended consequences running through nearly every story: technology promises that delivered unexpected downsides, and legacy failures that technology alone cannot solve.
In this comprehensive analysis, we break down each of IEEE Spectrum’s top computing stories, explore their implications for the broader technology ecosystem, and examine what they collectively reveal about the trajectory of computation in the years ahead. Whether you’re a software engineer, technology executive, researcher, or simply curious about the forces shaping our digital future, these stories offer essential context for understanding the computing landscape of 2025 and beyond.
Programming Languages 2025: Python Reigns as AI Reshapes Coding
Topping IEEE Spectrum’s computing stories are its annual programming language rankings — and yes, Python still sits firmly at number one. But the real story isn’t the rankings themselves; it’s the epistemological crisis unfolding beneath them. The traditional methods for measuring programming language popularity — tracking forum questions, repository activity, job postings — are becoming increasingly unreliable as AI coding assistants transform how developers work.
The shift is seismic. Developers who once posted questions on StackExchange now ask ChatGPT or GitHub Copilot instead. This migration away from public forums means the data trails that language ranking systems have relied upon for years are evaporating. IEEE Spectrum’s 2025 methodology had to adapt, incorporating new signals while acknowledging widening uncertainty margins. The question the rankings now pose is more provocative than the answers they provide: In a world where AI writes an increasing share of our code, how will programming languages themselves evolve? Will human-readable abstractions still matter, or will AI agents eventually generate optimized assembly code directly?
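To make the measurement problem concrete, here is a minimal, purely illustrative sketch of how a multi-signal popularity score might be combined; the signal values and weights below are hypothetical and are not IEEE Spectrum’s actual methodology.

```python
# Illustrative only: blending several noisy popularity signals into one ranking score.
# Signal values and weights are hypothetical, not IEEE Spectrum's methodology.
signals = {
    # language -> (forum_questions, repo_activity, job_postings), each normalized to [0, 1]
    "Python":     (0.62, 0.95, 0.90),
    "TypeScript": (0.40, 0.70, 0.55),
    "Rust":       (0.25, 0.45, 0.20),
}
weights = (0.2, 0.5, 0.3)  # forum signal down-weighted as Q&A traffic migrates to chatbots

def score(metrics):
    """Weighted sum of normalized signals."""
    return sum(w * m for w, m in zip(weights, metrics))

for lang, metrics in sorted(signals.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{lang:<11} {score(metrics):.2f}")
```

The interesting editorial question is less the formula than the weights: as public Q&A activity dries up, any ranking that leans on forum data inherits growing uncertainty.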
Python’s continued dominance reflects its deep entrenchment in AI and machine learning workflows, data science pipelines, and educational curricula. Its readability — once considered a nice-to-have — has become a strategic advantage in an era where AI-generated code needs to be human-auditable. As organizations grapple with the implications of AI foundation models, Python’s role as the lingua franca of machine learning ensures its relevance for years to come. Languages like Rust, TypeScript, and Go continue to gain ground in specific domains, but none threatens Python’s overall position.
Why IT Managers Keep Failing Software Projects
Perhaps the most sobering entry in IEEE Spectrum’s 2025 computing list is Robert Charette’s devastating analysis of IT management failures. Charette, who first wrote about this topic for IEEE Spectrum in 2005, returned twenty years later to deliver a stark verdict: nothing has changed. The same preventable managerial mistakes that doomed software projects two decades ago continue to destroy them today — except now the cumulative cost has reached into the trillions of dollars.
Charette’s over-3,500-word investigation is backed by multiple case studies and damning statistics. The failures aren’t technical; they’re organizational. Projects die because of scope creep that goes unchecked by leadership, unrealistic timelines imposed by executives who don’t understand development complexity, and a systemic reluctance to kill failing projects early. The Standish Group’s CHAOS reports have documented these patterns for decades, yet the IT industry keeps repeating them with remarkable consistency.
What makes Charette’s 2025 update particularly pointed is his argument that AI will not rescue IT management from itself. While AI tools can assist with code generation, testing, and even project estimation, they cannot fix the fundamentally human problems of poor communication, political decision-making, and organizational dysfunction that drive most project failures. The pattern is a cautionary tale for organizations rushing to adopt AI: technology is only as effective as the management structures that deploy it. Without addressing the root causes of IT failure — which are managerial, not technical — AI will simply accelerate the same dysfunctional processes.
Biocomputers for Sale: Human Brain Cells on a Chip
In one of the most science-fiction-meets-reality stories of 2025, Australian startup Cortical Labs announced it is selling a biocomputer powered by 800,000 living human neurons on a silicon chip. For $35,000, researchers can purchase what amounts to a miniature brain in a box — one that can learn, adapt, and respond to stimuli in real time. It’s a development that sounds like it belongs in a Philip K. Dick novel, but it’s very real and already generating commercial interest.
Cortical Labs proved the concept by teaching lab-grown brain cells to play Pong — and the neurons often outperformed standard AI algorithms in learning efficiency. But the real commercial application is drug discovery. Pharmaceutical researchers can use these biocomputers to test whether experimental drugs restore function to impaired neural cultures, potentially accelerating the drug development pipeline by providing a more biologically relevant testing platform than traditional cell cultures or animal models.
The implications extend far beyond drug testing. Biological computing represents a fundamentally different paradigm from silicon-based computation. Neurons are inherently parallel processors, they consume remarkably little energy compared to GPUs, and they exhibit forms of plasticity and adaptation that artificial neural networks only approximate. While biological computers won’t replace data centers anytime soon, they open research pathways that could eventually inform how we design artificial systems. The ethical considerations are equally significant: at what point does a collection of living neurons cross the threshold into something that requires moral consideration? As biological computing matures, these questions will demand answers.
LLM Capabilities Are Doubling Every Seven Months
The nonprofit research organization Model Evaluation & Threat Research (METR) proposed an intuitive but powerful metric for evaluating large language model performance: tracking the duration of tasks that AI agents can complete. By this measure, LLM capabilities are doubling every seven months — an exponential growth rate that, if sustained, would mean the most advanced models could handle month-long human tasks by 2030.
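A quick back-of-the-envelope extrapolation shows where that doubling time leads; the roughly one-hour starting task length for early 2025 is an assumption made for illustration, not a METR figure.

```python
# Extrapolating a 7-month doubling time in the length of task an agent can complete.
# The ~1-hour starting point for early 2025 is an assumed value for illustration.
doubling_months = 7
task_hours = 1.0                  # assumed task length an agent can finish today
month_of_work_hours = 30 * 8      # roughly a month of human working hours

months_elapsed = 0
while task_hours < month_of_work_hours:
    task_hours *= 2
    months_elapsed += doubling_months

print(f"~{months_elapsed} months until month-long tasks")   # ~56 months, i.e. around 2029-2030
```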
The seven-month doubling time is striking because it suggests a pace of improvement even faster than Moore’s Law in its prime. But METR’s research comes with a critical caveat: on the longest and most challenging tasks, AI agents succeed only about 50 percent of the time. This creates what might be called the “fast but unreliable worker” paradox. An AI system that can attempt tasks of increasing complexity but produces correct results only half the time presents a genuine management challenge. How do you integrate such a tool into production workflows? The answer likely involves sophisticated verification layers, human-in-the-loop checkpoints, and a fundamental rethinking of quality assurance processes.
For organizations tracking the Stanford AI Index and governance trends, METR’s findings underscore the urgency of developing robust evaluation frameworks. As AI agents take on longer and more consequential tasks, the cost of failure escalates proportionally. A coding assistant that occasionally introduces bugs is manageable; an AI agent that completes 95% of a month-long research project correctly but makes a critical error in the remaining 5% could be catastrophic. The path forward requires not just improving model capabilities but also building the infrastructure to verify, validate, and course-correct AI-generated work.
Reversible Computing Escapes the Laboratory
One of the most technically fascinating stories in IEEE Spectrum’s 2025 computing selection involves a principle that connects software to the fundamental physics of hardware: reversible computing. The concept rests on a thermodynamic truth first articulated by Rolf Landauer in 1961 — erasing a bit of information necessarily dissipates energy as heat. The only way to avoid this energy cost is to never erase information, instead performing computations that can be run backward to recover their inputs.
For three decades, reversible computing remained firmly in the academic sphere. That changed in 2025 when startup Vaire Computing unveiled a commercial prototype chip that recovers energy in an arithmetic circuit. The team claims their approach could eventually deliver a staggering 4,000x improvement in energy efficiency over conventional chips. To put that in perspective, a data center consuming 100 megawatts today could theoretically achieve the same computational output on just 25 kilowatts — roughly the combined average draw of about twenty typical American homes.
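Two quick numbers put those claims in context: Landauer’s bound on the heat released by erasing a single bit, and the simple arithmetic behind the 100-megawatt comparison. A rough sketch:

```python
import math

# Landauer's 1961 bound: erasing one bit dissipates at least k_B * T * ln(2) as heat.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # approximate room temperature, K
print(f"Landauer limit at 300 K: {k_B * T * math.log(2):.2e} J per erased bit")  # ~2.9e-21 J

# The headline comparison: a claimed 4,000x efficiency gain applied to a 100 MW data center.
data_center_watts = 100e6
print(f"Equivalent load after a 4,000x gain: {data_center_watts / 4000 / 1e3:.0f} kW")  # 25 kW
```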
The catch, of course, is significant. Reversible computing requires entirely new gate architectures, new electronic design automation tools, and the integration of MEMS resonators on chip. It’s not a drop-in replacement for existing technology; it’s a fundamental rearchitecting of how computation is performed at the physical level. But with AI’s energy demands threatening to overwhelm power grids worldwide — some projections suggest AI data centers could consume 3-4% of global electricity by 2030 — reversible computing offers a compelling long-term path to sustainable computation. Vaire’s prototype represents the critical transition from “interesting theory” to “we’re actually building this.”
Apache Airflow’s Resurrection: From Dead to 40 Million Downloads
Not every great computing story involves cutting-edge research or billion-dollar investments. Sometimes the most inspiring narratives are about community, persistence, and the power of open source. Apache Airflow — the workflow orchestration software originally built by Airbnb — was essentially dead by 2019. The codebase had stagnated, the community had dwindled, and most engineers had moved on to newer alternatives.
Then one enthusiastic open-source contributor stumbled upon Airflow while working in IoT and recognized its potential. He rallied the community, organized contributors, and pushed the project toward a major overhaul. By late 2020, the team shipped Airflow 2.0 with significant architectural improvements. The results have been extraordinary: Airflow now boasts 35 to 40 million downloads per month and over 3,000 contributors worldwide. In 2025, Airflow 3.0 launched with a modular architecture that can run anywhere, solidifying its position as the de facto standard for programmatic workflow orchestration.
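For readers who haven’t used it, the sketch below shows roughly what a pipeline definition looks like with Airflow’s TaskFlow API (introduced alongside Airflow 2.0); the DAG and task names are placeholders.

```python
# A minimal Airflow DAG sketch using the TaskFlow API; names and logic are placeholders.
# Recent Airflow 2.x accepts schedule=; older releases used schedule_interval=.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule=None, start_date=datetime(2025, 1, 1), catchup=False, tags=["example"])
def nightly_etl():
    @task
    def extract() -> dict:
        return {"rows": 42}              # pretend we pulled 42 rows from a source system

    @task
    def load(payload: dict) -> None:
        print(f"loading {payload['rows']} rows")

    load(extract())                      # Airflow infers the extract -> load dependency

nightly_etl()
```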
The Airflow story resonates because it demonstrates something that pure AI innovation stories often miss: the irreplaceable value of human community in software development. No algorithm or AI agent could have achieved what Airflow’s community did — recognizing latent value in abandoned software, organizing volunteers across time zones, maintaining backward compatibility while reimagining the architecture, and building the trust required for enterprise adoption. As organizations increasingly depend on AI tools for code generation, Airflow’s revival reminds us that the social infrastructure of open source — governance, mentorship, code review, documentation — remains fundamentally human work.
Electronic Health Records: A $100 Billion Cautionary Tale
In 2004, President George W. Bush set an ambitious goal: transition the United States to electronic health records by 2014, promising transformed healthcare and enormous cost savings. Twenty years and over $100 billion later, the US has achieved widespread EHR adoption — and created a different kind of nightmare. IEEE Spectrum’s deep dive into the EHR debacle is one of the most important technology cautionary tales of 2025.
The numbers are damning. Doctors now spend an average of 4.5 hours per day staring at screens instead of looking at patients, clicking through poorly designed software systems optimized for billing rather than care delivery. The average hospital uses 10 different EHR vendors internally, creating fragmented systems that don’t communicate with one another. Since 2009, data breaches have exposed 520 million patient records. And healthcare costs — far from bending downward as promised — have hit $4.8 trillion annually, representing 17.6 percent of GDP.
The rush to adopt EHRs before the technology was ready meant ignoring critical warnings about systems engineering, interoperability standards, and cybersecurity requirements. The result is a cautionary tale about what happens when policy mandates outpace technological readiness. The irony is particularly bitter: AI scribes are now being developed to solve the problems that the last generation of technology created, essentially building AI to help doctors escape the screen time that digitization imposed on them. For technology leaders in any industry, the EHR story is a sobering reminder that deploying technology without proper standards, interoperability requirements, and user-centered design can create problems far worse than the ones it was meant to solve.
Lunar Data Centers: Innovation or Lunacy?
The most provocative entry in IEEE Spectrum’s 2025 computing list asks a question that sounds absurd on its face: Is it lunacy to put a data center on the moon? Lonestar Data Holdings answered by actually doing it — sending a 1-kilogram, 8-terabyte mini data center to the lunar surface aboard an Intuitive Machines lander earlier in 2025.
The business case, while unconventional, has a certain logic. Lunar data centers would be immune to terrestrial disasters — undersea cable cuts, hurricanes, wars, and electromagnetic pulse events. The moon’s permanently shadowed craters sit far below -170°C, potentially simplifying cooling (though with no atmosphere, waste heat can only be shed by radiation, which brings its own engineering challenges). Nearby sunlit peaks could provide solar power. And here’s the geopolitical kicker: the moon isn’t subject to any nation’s territorial jurisdiction, creating a potential loophole in data sovereignty laws for data that organizations want to keep beyond any single country’s reach.
Governments are already interested — Florida and the Isle of Man have reportedly begun storing data on the lunar surface. But the practical challenges are enormous. A 1.4-second communication latency rules out real-time applications. Bandwidth is severely limited. And if anything breaks, fixing it requires an actual moon mission. For most computing applications, terrestrial data centers will remain overwhelmingly more practical. But as a hedge against existential risks — preserving critical data against catastrophic Earth events — lunar storage occupies a genuine, if narrow, niche. The story captures the audacious spirit that drives the technology industry at its best and most eccentric.
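The latency figure follows directly from orbital geometry; a back-of-the-envelope check using the mean Earth-Moon distance and the speed of light:

```python
# Back-of-the-envelope Earth-Moon signal delay.
mean_distance_km = 384_400    # average Earth-Moon distance
c_km_per_s = 299_792          # speed of light in vacuum

one_way_s = mean_distance_km / c_km_per_s
print(f"one-way light time:  {one_way_s:.2f} s")      # ~1.28 s; ~1.4 s with relay/processing overhead
print(f"minimum round trip:  {2 * one_way_s:.2f} s")  # ~2.56 s before any terrestrial network delay
```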
What These Stories Mean for the Future of Computing
Taken together, IEEE Spectrum’s top computing stories of 2025 reveal several converging themes that will define the next decade of technological development. First, the energy-computation nexus is becoming critical. Reversible computing, biological computing, and even lunar data centers all respond, in different ways, to the growing tension between computational demand and energy supply. As AI models grow larger and more numerous, finding sustainable ways to power computation isn’t just an engineering challenge — it’s becoming a civilizational imperative.
Second, the stories collectively illustrate that technology alone cannot solve organizational and systemic problems. IT management failures persist despite decades of better tools. EHR adoption created new problems because the implementation ignored human factors. AI agents are improving exponentially but remain unreliable enough that deploying them without human oversight is risky. The lesson is consistent: technology amplifies the quality of the systems and organizations that deploy it. Good management plus good technology produces excellent results; bad management plus good technology produces expensive failures faster.
Third, the open-source ecosystem remains a vital engine of innovation. Apache Airflow’s resurrection demonstrates that community-driven development can outperform even well-funded commercial alternatives when the community is properly organized and motivated. For organizations evaluating their technology strategies, investing in open-source ecosystems — through contributions, sponsorship, and adoption — offers returns that compound over time.
Finally, the boundary between biological and digital computing is beginning to blur. Cortical Labs’ neuron-powered chips represent just the first commercial step toward computation that leverages the extraordinary efficiency of biological systems. Combined with advances in quantum computing and reversible logic, the computing architectures of 2035 may look radically different from today’s silicon monoculture. For technology professionals and leaders, the takeaway is clear: staying informed about these converging trends isn’t optional — it’s essential for making sound strategic decisions in a rapidly evolving landscape.
Frequently Asked Questions
What are the top computing stories from IEEE Spectrum in 2025?
IEEE Spectrum’s top computing stories of 2025 cover AI agents doubling capabilities every seven months, Python remaining the top programming language, reversible computing going commercial, biocomputers powered by human neurons, IT management failures costing trillions, electronic health record challenges, Apache Airflow’s revival, and the controversial plan to put data centers on the moon.
How fast are AI agent capabilities growing in 2025?
According to research by Model Evaluation & Threat Research (METR), AI agent capabilities are doubling every seven months as measured by the length of tasks they can complete. If this trend continues, models could handle month-long human tasks by 2030, though current success rates on the hardest tasks remain around 50 percent.
What is reversible computing and why does it matter?
Reversible computing avoids erasing information during computation, which eliminates the energy lost as heat during bit erasure. Startup Vaire Computing has built the first commercial prototype, claiming potential 4,000x energy efficiency improvements over conventional chips — a critical advancement as AI energy demands surge.
Can you really buy a biocomputer with human brain cells?
Yes. Australian startup Cortical Labs sells a biocomputer powered by 800,000 living human neurons on a silicon chip for $35,000. It can learn, adapt, and respond in real time. The primary application is drug discovery, where researchers test experimental compounds on living neural cultures.
Why is Python still the number one programming language in 2025?
Python remains number one in IEEE Spectrum’s 2025 rankings due to its dominance in AI/ML workflows, data science, and web development. However, the rise of AI coding assistants is making it harder to track language usage accurately, as developers increasingly ask chatbots instead of posting on forums like StackExchange.
What went wrong with electronic health records in the US?
Despite over $100 billion invested since 2004, EHR adoption created fragmented systems that don’t interoperate — the average hospital uses 10 different vendors. Doctors spend 4.5 hours daily on screens instead of patients, 520 million records have been breached since 2009, and healthcare costs have risen to $4.8 trillion rather than decreasing as promised.