If you’ve been paying attention to tech media over the past decade, you’ll probably have heard about Moore’s Law and how it’s apparently dying. Unfortunately, it’s hard to describe what Moore’s Law is and how exactly it’s dying in a standard news piece. Here’s everything you need to know about Moore’s Law, what it means for processors, why people are saying it’s dying, and how companies are finding workarounds.
A descriptive law of how the chip industry has worked for decades
Moore’s Law comes from Intel co-founder Gordon Moore, who observed in 1965 that the number of transistors (basically the smallest component in a processor) on a chip was doubling at a steady pace; in the commonly cited version of the law, that count doubles every two years. So if you’re building the biggest chip you possibly can one year, you should be able to make a chip with twice the transistors two years later. If the industry can muster a processor with one million transistors in one year, a two-million-transistor chip should be possible in two years’ time.
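As a back-of-the-envelope illustration, the doubling rule is just repeated multiplication by two. The helper below is purely hypothetical, not anything from a real chipmaker:

```python
# Toy projection of Moore's Law: transistor counts double every two years.
# The function and figures here are illustrative only.

def projected_transistors(start_count: int, start_year: int, target_year: int) -> int:
    """Project a transistor count forward, doubling once per two-year period."""
    doublings = (target_year - start_year) // 2
    return start_count * 2 ** doublings

# The example above: one million transistors now, two million two years later.
print(projected_transistors(1_000_000, 2024, 2026))  # 2000000
```

Run the projection out a decade (five doublings) and the same million-transistor chip becomes a 32-million-transistor one, which is roughly how the industry actually scaled for decades.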
This largely has to do with the way chips are manufactured through something called a process node. Each new process is supposed to be denser than the last, which is how the industry met Moore’s Law projections for decades. You might be wondering why increasing density is necessary to keep adding transistors; why not just make a bigger chip every year? Well, a single chip can only be so large. The largest chips ever made in volume top out around 800mm², which can still easily fit in the palm of your hand. So, higher density is necessary to get more transistors into a chip.
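To make the area constraint concrete, here’s a rough sketch of why a denser process, not a bigger die, is what grows the transistor budget. The density figures are made up for illustration, not real node data:

```python
# With die area effectively capped (~800mm² for the biggest volume chips),
# transistor count = density * area, so only density can keep growing.
# Densities below are hypothetical.

MAX_DIE_AREA_MM2 = 800  # practical ceiling for high-volume chips

def transistors_on_die(density_per_mm2: float, area_mm2: float = MAX_DIE_AREA_MM2) -> float:
    """Transistor budget for a die at a given density and area."""
    return density_per_mm2 * area_mm2

old_node = transistors_on_die(50e6)   # hypothetical node: 50M transistors/mm²
new_node = transistors_on_die(100e6)  # a node twice as dense, same max area
print(new_node / old_node)  # 2.0
```

With area pinned at the ceiling, the ratio between generations is entirely a density ratio, which is exactly why a stalled node stalls the whole law.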
For most of computing history, chip fabrication companies (whose plants are colloquially called fabs) were able to launch new process nodes every year or two and keep Moore’s Law chugging along. New nodes also improved frequency (sometimes simply called performance) and power efficiency, so companies usually wanted the latest or second-latest process unless they were making something basic. Moore’s Law was simply taken for granted.
How Moore’s Law is dying
The industry expected the gravy train of new nodes every year or so to continue forever, but it all came crashing down in the 21st century. One worrying sign was the end of Dennard scaling, which held that as transistors shrank, power density would stay constant, letting smaller transistors run at higher clock speeds within the same power budget. That stopped being true around the 65nm mark in the mid-2000s: at such tiny sizes, transistors began leaking current badly, breaking the old scaling assumptions.
But the end of Dennard scaling was nothing compared to the crisis nearly every fab in the world encountered around 32nm in the early 2010s. Shrinking transistors down below 32nm was extremely difficult, and for years, Intel was the only company to successfully transition to the 22nm node, the next full upgrade after 32nm. It wasn’t until the mid-2010s that Intel’s competitors were able to catch up, but by then, the industry had changed substantially.
Source: Yole Développement
The chart above illustrates how many companies were able to manufacture industry-leading nodes in each generation. That number had been declining for years but seemed to stabilize from the late 2000s to the early 2010s. Then, when companies began to realize how difficult it would be to progress beyond 32nm, they threw in the towel. Fourteen cutting-edge fabs made it to the 45nm node, but only six of them got to 16nm. Today, only three are still at the cutting edge: Intel, Samsung, and TSMC. Many, however, expect either Samsung or Intel to join the ranks of the fallen eventually.
Even the companies that can develop new nodes can’t match the generation-to-generation gains of older ones. It’s getting harder to make chips denser; TSMC’s 3nm node barely shrank cache at all, which is disastrous. And while density gains decline each generation, production keeps getting more expensive, which has caused the cost per transistor to stagnate since 32nm and made it harder to sell processors at lower prices. Performance and efficiency improvements also aren’t what they used to be.
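The cost-per-transistor stagnation is simple arithmetic. With made-up numbers (nothing here reflects real foundry pricing), if wafer cost rises as fast as density, the price of each transistor never falls:

```python
# Illustrative only: a 1.6x denser node that also costs 1.6x more per wafer
# yields zero improvement in cost per transistor.

def cost_per_transistor(wafer_cost: float, transistors_per_wafer: float) -> float:
    """Dollars per transistor for one wafer's worth of output."""
    return wafer_cost / transistors_per_wafer

old = cost_per_transistor(wafer_cost=10_000, transistors_per_wafer=1.0e12)
new = cost_per_transistor(wafer_cost=16_000, transistors_per_wafer=1.6e12)
print(round(new / old, 6))  # 1.0
```

A denser node only cheapens transistors if density grows faster than wafer cost, and that gap is exactly what has been closing since 32nm.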
All of this together is what signifies the death of Moore’s Law for people. It’s not just about failing to double transistors every two years; it’s about rising prices, hitting walls in performance, and not being able to boost efficiency as easily as before. This is an existential problem for the entire computing industry.
How companies are meeting the expectations of Moore’s Law even while it’s dying
While the death of Moore’s Law is undeniably a growing problem, every year brings innovation from key players, many of whom are finding ways to bypass the manufacturing issues that have plagued the industry for years. Moore’s Law talks about transistors, but its spirit can be kept alive simply by meeting the traditional generation-to-generation performance improvements, and the industry has lots of tools at its disposal that didn’t exist even a decade ago.
AMD and Intel’s chiplet technology (which Intel calls tiles) has proven able to meet not only the performance expectations of Moore’s Law but the transistor expectations too. While it’s true that a single chip can only be so big, you can add lots and lots of chips to a single processor. A chiplet is essentially a little chip that’s paired with other chiplets to make a complete processor. AMD’s adoption of chiplets in 2019 allowed the company to double the number of cores it offered in desktops and servers.
Additionally, chiplets can be specialized, and this is where the technology really shines in the face of a dying Moore’s Law. Since cache isn’t really shrinking on newer nodes, why not put all the cache on chiplets using older, cheaper nodes and the processor cores on chiplets with the latest node? That’s what AMD has been doing with its 3D V-Cache and its memory cache dies (or MCDs) in high-end RX 7000 GPUs like the RX 7900 XTX. Some of the best CPUs and best GPUs from AMD wouldn’t be possible without chiplets.
Nvidia, on the other hand, has proudly proclaimed the death of Moore’s Law and has staked everything on AI. By accelerating workloads through AI-capable Tensor cores, performance can easily double or more, so Nvidia hasn’t touched chiplets at all. However, AI is certainly a more software-intensive solution. DLSS, Nvidia’s AI-powered resolution upscaling technology, requires effort from both game devs and Nvidia to implement in games, and DLSS isn’t perfect either.
The only other option apart from these two is to simply improve the architecture of processors and get more performance from the same number of transistors. This path has historically been very difficult for companies to go down, and while new generations of processors bring architectural improvements, the performance uplift is typically in the single-digit percentages. Regardless, it might be necessary for chip designers to focus more on architectural upgrades from now on because this isn’t just a phase.