Physicists at Silicon Quantum Computing (SQC) have built what they say is the most accurate quantum computing chip yet, thanks to a new type of architecture.
The Sydney startup says its silicon-based atomic quantum processing units (QPUs) give it an edge over competing technologies. That's because the chips are built on a new architecture dubbed “14/15,” which places phosphorus atoms in silicon (so named because silicon and phosphorus are the 14th and 15th elements in the periodic table). The team outlined its findings in a new study published Dec. 17 in the journal Nature.
SQC achieved accuracy levels of 99.5% to 99.99% in a quantum computer combining nine nuclear qubits and two atomic qubits, in what the company describes as the world's first demonstration of silicon-based atomic quantum computing across individual clusters.
Accuracy rates indicate how well error correction and mitigation methods work. The company says its custom architecture has achieved the lowest error rates yet.
It may not sound as exciting as quantum computers with thousands of qubits, but the 14/15 architecture scales beautifully, the scientists say in the study. They add that demonstrating top precision across multiple clusters is a proof of concept for what could, in theory, lead to fault-tolerant QPUs with millions of functional qubits.
The secret sauce is silicon (with a side of phosphorus)
Quantum computing rests on the same basic principle as binary computing: energy is used to perform calculations. But instead of using electricity to flip switches, as traditional binary computers do, quantum computers create and manipulate qubits, the quantum equivalent of a classical computer's bits.
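To make that distinction concrete, here is a minimal numerical sketch in Python. It is purely illustrative and not tied to any particular hardware: a classical bit holds a definite value, while a qubit is a normalized two-component state vector.

```python
import numpy as np

# A classical bit is definitely 0 or 1.
classical_bit = 1

# A qubit is a normalized 2-component complex vector: alpha|0> + beta|1>.
# Here, an equal superposition (the state a Hadamard gate produces from |0>).
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
qubit = np.array([alpha, beta], dtype=complex)

# Measurement probabilities follow the Born rule: |amplitude|^2.
probs = np.abs(qubit) ** 2
print(probs)        # [0.5, 0.5]: a measurement collapses the qubit to 0 or 1 at random
print(probs.sum())  # 1.0: the amplitudes are normalized
```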
Qubits come in different forms. Scientists at Google and IBM are building superconducting qubit systems based on gate circuits, while some labs, such as PsiQuantum, have developed photonic qubits, which are made from particles of light. Others, including IonQ, work with trapped ions, confining individual charged atoms with electromagnetic fields and manipulating them with lasers.
The general idea is to use quantum mechanics to manipulate something very small in a way that yields useful calculations based on its possible states. SQC representatives say their approach is unique because their QPUs are built on the 14/15 architecture.
They create each chip by placing phosphorus atoms on wafers of pure silicon.
“This is the smallest feature size on a silicon chip,” Michelle Simmons, CEO of SQC, told Live Science in an interview. “It's 0.13 nanometers, and it's essentially the same bond length that we have in the vertical direction. This is two orders of magnitude lower than what TSMC usually makes as a standard. That's a pretty significant increase in accuracy.”
Scaling up the qubits of tomorrow
For scientists to achieve quantum computing at scale, each platform must overcome or mitigate various obstacles.
One barrier common to all quantum computing platforms is quantum error correction (QEC). Quantum computing takes place in an extremely fragile environment in which qubits are sensitive to electromagnetic waves, temperature fluctuations and other disturbances. These disturbances can cause a qubit's superposition to “collapse,” so the quantum information it carried is lost mid-calculation.
To compensate for this, most quantum computing platforms dedicate a certain number of qubits to catching errors. These work much like check or parity bits in classical computing. But as the number of computational qubits grows, so does the number of qubits required for QEC.
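As a rough classical analogy, and not an actual quantum error-correcting code, the sketch below shows a 3-bit repetition code: redundant copies plus a majority vote cut the error rate, but every logical bit now costs three physical bits, which is the kind of overhead QEC imposes on qubits.

```python
import random

def encode(bit: int, copies: int = 3) -> list[int]:
    """Repetition code: store 'copies' redundant copies of one logical bit."""
    return [bit] * copies

def noisy_channel(bits: list[int], flip_prob: float) -> list[int]:
    """Flip each physical bit independently with probability flip_prob."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote recovers the logical bit if fewer than half of the copies flipped."""
    return int(sum(bits) > len(bits) / 2)

random.seed(0)
trials = 100_000
flip_prob = 0.05
errors = sum(decode(noisy_channel(encode(1), flip_prob)) != 1 for _ in range(trials))
# The raw error rate is 5%; the 3-bit code drops the logical error rate to roughly 0.7%,
# at the cost of tripling the number of physical bits per logical bit.
print(errors / trials)
```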
“We have long nuclear spin coherence times, and we have very little of what we call 'bit-flip errors.' So our error correction codes themselves are much more efficient. We don't have to correct both bit-flip and phase errors,” Simmons said.
In other silicon-based quantum systems, bit-flip errors are more prevalent because the qubits tend to be less stable when they are made and manipulated with coarser precision. Because SQC's chips are built with such high precision, they can sidestep certain error modes encountered on other platforms.
“We really only need to correct these phase errors,” Williams added. “So the error correction codes are much smaller, so all the overhead you do for error correction is significantly reduced.”
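A textbook way to see why correcting only phase errors is cheaper is the 3-qubit phase-flip repetition code sketched below; this is a standard illustration, not SQC's actual error-correction scheme. A code that must also catch bit flips (such as Shor's nine-qubit code) needs three times as many physical qubits per logical qubit.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
Z = np.diag([1.0, -1.0])                        # phase-flip error
I2 = np.eye(2)

def kron3(a, b, c):
    """Tensor product of three single-qubit operators or states."""
    return np.kron(np.kron(a, b), c)

# Logical |0> of the phase-flip code is |+>|+>|+>; in the Hadamard basis a
# phase flip acts like a bit flip, so three copies plus a majority vote suffice.
plus = H @ np.array([1.0, 0.0])                 # |+> = (|0> + |1>) / sqrt(2)
logical_zero = kron3(plus, plus, plus)

# Suppose a phase-flip error hits the middle qubit during the computation.
corrupted = kron3(I2, Z, I2) @ logical_zero

# Decode: rotate back with Hadamards, then read off which qubit flipped.
decoded = kron3(H, H, H) @ corrupted
probs = np.abs(decoded) ** 2
print(np.binary_repr(int(np.argmax(probs)), width=3))  # '010': error located on the middle qubit
```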
The race to defeat Grover's algorithm
One standard for testing the accuracy of quantum computing systems is a procedure called Grover's algorithm. It was developed by computer scientist Lov Grover in 1996 to show whether a quantum computer could demonstrate an “advantage” over a classical computer at a certain kind of search problem.
Today it is used as a diagnostic tool to gauge the performance of quantum systems. Essentially, if a laboratory can reach accuracy levels of roughly 99.0% or higher, it is considered to have reached the accuracy needed for fault-tolerant, error-corrected quantum computing.
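For a sense of what the benchmark actually runs, here is a minimal classical statevector simulation of the textbook algorithm; the grover_search function and its parameters are illustrative and unrelated to SQC's hardware implementation.

```python
import numpy as np

def grover_search(n_qubits: int, marked: int, iterations: int) -> np.ndarray:
    """Simulate Grover's algorithm on a statevector and return the
    probability of measuring each basis state."""
    dim = 2 ** n_qubits
    # Start in the uniform superposition over all basis states.
    state = np.full(dim, 1 / np.sqrt(dim))
    for _ in range(iterations):
        # Oracle: flip the sign of the amplitude of the marked item.
        state[marked] *= -1
        # Diffusion operator: reflect every amplitude about the mean.
        state = 2 * state.mean() - state
    return np.abs(state) ** 2

# Search 4 items (2 qubits) for item 2; one iteration suffices when N = 4.
probs = grover_search(n_qubits=2, marked=2, iterations=1)
print(probs)  # ~[0, 0, 1, 0]: the marked item is found with certainty
```

On real hardware, gate errors reduce the probability of landing on the marked item below this ideal value, which is why results are quoted as a fraction of the theoretical maximum.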
In February 2025, SQC published a study in the journal Nature in which the team demonstrated 98.9% accuracy running Grover's algorithm on its 14/15 architecture.
On this benchmark, SQC outperformed firms such as IBM and Google, although those companies have shown competitive results with tens or even hundreds of qubits, compared with SQC's four.
IBM, Google and other established players are still testing and iterating on their roadmaps. But as their qubit counts grow, they have to adapt their error correction techniques, and QEC has proved to be one of the most intractable bottlenecks.
But SQC scientists say their platform is so resistant to errors that it managed to set the Grover's algorithm record without any error correction layered on top of the qubits.
“If you look at the Grover result we got at the beginning of the year, we have the highest-fidelity Grover result, at 98.87% of the theoretical maximum, and we're not doing any error correction at all,” Simmons said.
Williams says the “clusters” of qubits featured in the new 11-qubit system can be scaled to represent millions of qubits, although infrastructure bottlenecks could still slow progress.
“Obviously, as we move to larger systems, we'll be correcting errors,” Simmons said. “Every company will have to do this. But the number of qubits we need will be much smaller. Consequently, the physical system will be smaller. The power requirements will be less.”
