Quantum Computing Is Beginning to Take Shape — Here Are Three Recent Breakthroughs

Quantum computing, though somewhat overshadowed by AI of late, may be nearing its own day in the sun. Just a few years ago, many researchers agreed that quantum computers would not become genuinely useful for decades. That timeline is steadily shrinking, raising the possibility of real-world applications — like quantum encryption and drug discovery — in the relatively near future.
“The last couple of years have been very, very exciting,” Scott Aaronson, a computer scientist at the University of Texas at Austin, told Discover.
Between hardware improvements, efficiency gains, and demonstrations of so-called “quantum advantage” over classical computers, quantum computers are progressing rapidly. Here are three of the latest breakthroughs.
1. Quantum Computers Are Becoming More Stable
The field has been plagued from day one by the fact that quantum computers are inherently unstable. In contrast to classical computers, which process information using binary bits (that is, 1s and 0s), quantum computers rely on qubits, which leverage the bizarre principles of quantum mechanics for more powerful processing.
Qubits can exist in a state of superposition, according to the National Institute of Standards and Technology (NIST), representing both 1 and 0 simultaneously. That allows them to perform computations that exceed the capacity of classical computers. But these states are fragile — temperature swings, electromagnetic fields, and vibrations can all cause qubits to slip back into classical behavior, or decohere.
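As a loose illustration (not from the article): a single qubit's state can be modeled as a two-component vector of amplitudes, and the probability of measuring 0 or 1 is the squared magnitude of each amplitude. The plain-Python sketch below puts a simulated qubit into an equal superposition with a Hadamard gate; the function names are illustrative, not any vendor's API.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a0, a1),
    mapping |0> to (|0>+|1>)/sqrt(2) and |1> to (|0>-|1>)/sqrt(2)."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

def probabilities(state):
    """Born rule: the chance of observing 0 or 1 is the squared amplitude."""
    a0, a1 = state
    return (abs(a0) ** 2, abs(a1) ** 2)

qubit = (1.0, 0.0)        # start in the definite classical state |0>
qubit = hadamard(qubit)   # now in an equal superposition of |0> and |1>
p0, p1 = probabilities(qubit)
print(p0, p1)             # each outcome is equally likely (probability 0.5)
```

Decoherence, in this picture, is anything that nudges those amplitudes back toward a definite 0 or 1 before the computation finishes.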
Decoherence leads to computational errors, so error correction is the central challenge of quantum computing. The problem is that the process of error correction itself involves lots of qubits performing lots of operations, which introduces yet more opportunity for errors.
“As long as your error rate is too high,” Aaronson said, “all your attempts to error-correct just make things worse.”
In late 2024, however, researchers at Google reversed that trend. Their Willow chip, a 105-qubit superconducting quantum processor, demonstrated that, given the right error-correction techniques, quantum computers become more, rather than less, accurate as the number of qubits increases.
Most importantly, the system crossed a critical threshold, according to a study in Nature, correcting errors faster than new ones were introduced, paving the way for what’s known as fault-tolerant quantum computing. “At that point,” Aaronson told Discover, “you should be able to stabilize a qubit indefinitely.”
More recently, other hardware platforms have begun to show promise. Quantinuum, a Colorado-based company, has developed trapped-ion devices, which use electrically charged atoms suspended in electromagnetic fields as qubits, according to a 2025 arXiv paper. These systems are much slower than superconducting chips like Google's, but they boast far higher accuracy. Meanwhile, Aaronson added, a Boston-based company called QuEra has achieved similarly "amazing results" with its neutral-atom approach, which uses lasers to trap and manipulate arrays of atoms as qubits.
These diverse hardware strategies are all improving in tandem, increasing the odds that at least one will achieve large-scale, fault-tolerant quantum computing.
“It’s surprising to me that you still have these very, very different architectures with complementary strengths and weaknesses,” Aaronson said to Discover. “We don’t know yet which of them will be the best or the least expensive way to scale up.”
Read More: Quantum Computing Approach Generates First Ever Truly Random Number
2. Outperforming Classical Computers
The ultimate goal for quantum computing, of course, is to solve problems beyond the reach of classical computers. Google claimed to have done so for the first time in 2019, but the task had no practical application, and subsequent work showed that it could, in fact, be performed by a classical computer.
Various research teams have since staked their own claim to so-called “quantum advantage” or “quantum supremacy,” and these pronouncements are typically met with skepticism. Impressive though the calculations may be, how can we be sure someone won’t once again find a way to replicate them classically?
Nevertheless, Aaronson points to a recent demonstration of quantum advantage that, to his mind, offers real-world applications that couldn't easily be achieved without quantum computing.
“At the very least,” he added, “you have to work very hard to get comparable results classically.”
In November 2025, Quantinuum reported in an arXiv preprint that it had used its trapped-ion devices to simulate the Fermi-Hubbard model, a foundational problem in physics. The simulation involved computations that would be nearly impossible to perform classically in a reasonable timeframe, but which could help scientists develop advanced materials like room-temperature superconductors — "arguably the greatest challenge in condensed matter physics," as one group of researchers put it.
“We’re actually getting reasonable candidates for verifiable quantum supremacy that we can do on current devices,” Aaronson told Discover. “As they scale up the devices, they’re going to be able to do more and more interesting simulations.”
3. Efficient Error Correction
Current quantum computers are limited to, at most, thousands of qubits. Researchers have long estimated that fully error-corrected devices would require millions, a daunting figure that would push full-fledged quantum supremacy far into the future. But a paper published last month, which Aaronson called a "bombshell," suggests those estimates were far too high.
The new arXiv paper, led by researchers at Caltech and the California-based startup Oratomic, outlined a scheme for fault-tolerant quantum computing that could reduce the required number of qubits by as much as two orders of magnitude compared to earlier estimates, down to just 10,000. That would dramatically accelerate the timeline to commercial viability.
In other words, quantum supremacy could be closer than previously thought. But that prospect comes with potential pitfalls.
Also in recent weeks, researchers at Google described a more efficient implementation of Shor's algorithm, the famous quantum procedure for factoring large numbers and computing discrete logarithms, that would require far fewer qubits to break elliptic curve encryption, a widely used cryptographic system. To avoid giving would-be attackers an instruction manual, the team published its results in the form of a "zero-knowledge proof," proving the feasibility of the approach without revealing details.
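To give a sense of how Shor's algorithm works (a textbook sketch, not Google's construction): the algorithm reduces factoring to finding the period of modular exponentiation. Only that period-finding step needs a quantum computer; the rest is classical arithmetic. Below, a brute-force loop stands in for the quantum subroutine, which is the only reason this toy version offers no speedup.

```python
from math import gcd

def find_period(a, n):
    """Classical stand-in for Shor's quantum subroutine: the smallest
    r > 0 with a**r == 1 (mod n). The quantum step finds r exponentially
    faster; brute force here is purely for illustration."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(a, n):
    """Shor's classical post-processing: given an even period r,
    nontrivial factors of n come from gcd(a**(r//2) +/- 1, n)."""
    r = find_period(a, n)
    if r % 2:
        return None  # odd period: retry with another base a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None  # trivial case: retry with another base a
    return sorted({gcd(y - 1, n), gcd(y + 1, n)})

print(shor_classical_part(7, 15))  # classic textbook case: [3, 5]
```

Breaking elliptic curve encryption uses the discrete-logarithm variant of the same period-finding idea, which is why more efficient implementations shrink the qubit counts needed for both attacks.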
The implications are sobering for platforms that use this kind of public-key encryption, including Bitcoin, whose transaction signatures rely on elliptic curves.
“When you put together the Google thing with the Caltech thing, […] Bitcoin could be vulnerable to a quantum computer with only about 25,000 or 30,000 [qubits],” Aaronson told Discover. “A year ago, the best estimate would have been in the millions.” He added that Google’s findings provide a strong incentive to upgrade to quantum-resistant encryption.
None of these breakthroughs means that quantum computing will transform the world — for better or worse — tomorrow. Error rates remain high, processors must be scaled up, and many proposed applications are rather speculative. But taken together, they mark a shift. After several tantalizing decades, Aaronson added, quantum computers are beginning to perform “like the theory said [they] would 30 years ago.”
Read More: Computation Signals A Quantum Leap For Precision Measurement