I just came across a slightly mind-blowing snippet in Forbes Magazine about a study conducted by MIT and the Santa Fe Institute comparing the effectiveness of different forecasting methodologies for predicting how rapidly technology will advance.
The study concluded that the two most accurate methodologies are the well-known Moore’s Law and the lesser-known Wright’s Law. As we all know, the former predicts that the number of transistors on an IC will double roughly every 18 months to two years. The latter, better known as “The Learning Curve Effect,” plots cumulative production against cost per unit, theorizing that each time cumulative production doubles, unit cost falls by a constant percentage.
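For readers who like to see the relationship concretely, Wright’s learning-curve effect is usually written as a power law: unit cost c(N) = c(1) · N^(−b), where each doubling of cumulative production multiplies cost by (1 − learning rate). Here is a minimal sketch; the function name and the 20% learning rate are illustrative assumptions, not figures from the study.

```python
import math

def wrights_law_cost(first_unit_cost, cumulative_units, learning_rate):
    """Estimate unit cost after `cumulative_units` have been produced.

    Wright's Law: each doubling of cumulative production multiplies
    unit cost by (1 - learning_rate). Power-law form: c(N) = c1 * N**(-b),
    with b = -log2(1 - learning_rate).
    """
    b = -math.log2(1.0 - learning_rate)
    return first_unit_cost * cumulative_units ** (-b)

# Hypothetical example: $100 first unit, 20% learning rate.
print(wrights_law_cost(100.0, 1, 0.20))  # ~100.0
print(wrights_law_cost(100.0, 2, 0.20))  # ~80.0  (one doubling)
print(wrights_law_cost(100.0, 4, 0.20))  # ~64.0  (two doublings)
```

Note that the driver is cumulative volume, not calendar time, which is exactly why the law ties cost reduction to production ramp rather than to a fixed schedule like Moore’s.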
Why did I find this particularly remarkable? According to Forbes contributor Jim Handy, while the Learning Curve has been accurate in predicting prices of PV cells and DRAMs, his take is that it’s “a bit of a stretch” to use Moore’s Law as an economic index, since “Moore only briefly mentions price in his first paper” and focused on transistors per chip. Moore’s Law rests purely on technological feasibility, cost be damned! (Just ask anyone working to further Moore’s Law based solely on CMOS scaling.) Until this moment, I hadn’t given much thought to the fact that cost per transistor has little to no bearing on Moore’s Law.
The findings indicate that Wright’s Law, not Moore’s Law, is the more accurate methodology for predicting technology progress. Along those lines, the best way to bring down the costs of 3D ICs is to start building them in high volume. And for those forecasters who predict that adoption will occur in high-end computing first (datacenters, etc.) and move to consumer applications as costs come down, Wright’s Law validates your theory.
What do you think? Which school of thought will bear out in the race to 3D IC adoption? Will it be a means to further Moore’s Law, or will Wright’s Law prevail? Post your comments below. ~ F.v.T.