What happens after Moore’s Law…

Now that it seems clear that Moore’s Law will reach its end, what will happen? Whether the predictions are of minor or major changes, different processing paradigms will surely emerge. Software presents problems distinct from the challenges of hardware. The key to predicting the future may lie in our recognition that economics and energy limit human innovation. However, innovation has a history as old as humankind, and the innovative possibilities for computing are limited only by our imaginations.

A friend recently pointed me towards an article on Moore’s Law in The Economist. More accurately, it was an article that was once again predicting the imminent demise of the paradigm.

One could argue that there are strong and weak formulations of Moore’s Law. In its strong form, the prediction is limited to an increase in the number of transistors on a single microchip. In its weak or broader form, the argument is economic: the cost of a given amount of computing power falls over time. The former prediction depends heavily on the type of technology being deployed. Silicon- and germanium-based transistor technologies will face severe limits in the coming years, and, therefore, this version of the law can safely be predicted to end.

Thus, we could argue that, technically, the pessimistic predictions of the end of Moore’s Law are correct. In its broader form, however, we could argue that the end is not near. It is true that the size of transistors has a limit, and it is true that we are quickly nearing that limit. Nevertheless, the reality is that we are also changing both the architecture of our machines and the way they process information.

If we were limited to analysing Moore’s Law solely on the number of transistors, taking the literal perspective, then we certainly are fast approaching the end. However, that does not mean that we will not get more transistors onto a chip; rather, it means that the architecture will adapt and the process will move away from a doubling period of 18–24 months. Under that scenario, we would move into different processing paradigms.
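To make the strong form concrete, here is a back-of-the-envelope sketch of what a fixed doubling period implies. The starting figure and cadence are illustrative assumptions, not measurements:

```python
# A minimal sketch of the "strong" form: transistor counts doubling on a
# fixed cadence. The starting point (the Intel 4004's ~2,300 transistors
# in 1971) is illustrative; real cadences have varied between 18 and 24
# months.
def transistors(initial, years, doubling_months=24):
    return initial * 2 ** (years * 12 / doubling_months)

print(f"{transistors(2_300, 50):,.0f}")
# ~77 billion after 50 years of 24-month doubling, which is roughly the
# order of magnitude of the largest chips actually shipped around 2021.
```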

COMPUTATION IS ABOUT ENERGY AND ECONOMICS

The reality of computation is that it is all about innovation. There is a limit to what a human mind can do, although, working as a collective, our human minds can do far more than any single mind. Perhaps there is a limit to human innovation, but I do not believe that we are close to that limit yet, if it even exists. I prefer to be an optimist on the matter, and I foresee no limitation to human innovation in my lifetime or in the lifetimes of my children or grandchildren. Beyond that, none of us can predict, and I am hopeful nonetheless.

Moore’s Law is an economic predictive model. We can build bigger chips now. Using software, we can merge multiple CPUs, GPUs, co-processor chips, and cards into a mesh that acts as a computational model far superior to any single machine. The limits to our doing this are not related to the technology because the technology already exists. The limits we face are more general, and they concern economics and energy.

The human mind continues to demonstrate that it has more computational power than any system we have developed. It is a living self-programming machine that runs with extreme efficiency. Despite all of the human errors in programming and all of the human failures in biology, we know that our computational limits are well beyond our present abilities.

Humans have a general problem with pessimism that has pervaded societies for as long as they have existed. Yet, innovation has led the drive towards more diverse and wealthier societies. The driving force of innovation and the creation of new methodologies are not new, and there is no end in sight. Even so, many people seem to believe that we face an end to human achievements. Malthus recanted in the end, but many of us will not. In the face of development and progress, it is easier to sell a story of gloom and doom.

Things will change. There is no doubt about it. More importantly, we cannot even begin to predict the nature of the computational power that will exist in 2050. It could be a technology derived from our existing computational systems, or it could be some new quantum system. There is no way to know the future before we get there, or very close to there. It seems likely that a new, and as yet unknown, disruptive technology will carry our path of growth forward; beyond knowing that it will allow us to continue growing, it cannot be predicted today. To some people, this leads to theories of ‘what if’: What if no one develops anything? What if innovation just stops? What if there is nothing left to discover?

There is nothing new in this attitude!

‘There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.’
~Lord Kelvin, 1900

It is an attitude that has pervaded and persisted in science for generations:

‘The more important fundamental laws and facts of physical science have all been discovered, and these are so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote. Our future discoveries must be looked for in the sixth place of decimals.’
~Albert Michelson, 1894

This is the attitude that led to long-term stagnation in the finance industry. Yet, even there, we see change, and it will be lasting change because of the influence of innovation. Innovation will continue in the form of disruptive technologies that no one expects or predicts, that seem to come out of nowhere, created by the people we least expect to change the world.

Our current limitations are not caused by a shortage of combined computational power; they actually lie in our software. The speed of a modern computer is no longer controlled by growth in its clock cycle; it derives from a combination of factors, and those hardware factors are not the primary limiters of modern computers. Hardware continues to grow according to Moore’s Law, and it is expected to do so into the foreseeable future. Software is a different matter. The progressive improvement of modern computer software is the creative product of the human mind. When many minds work together, more complexity enters the system, although there is a limit.

One key aspect of bitcoin that is not generally known or considered is its ability to allow for distributed and parallelised computations. In a traditional computer, since the 1970s or 1980s, complex algorithms and systems have been simplified through iterative processes, such as looping. Quantum computers do not work this way: the nature of a quantum computer is to explore all possible states simultaneously, or so we hope. Bitcoin script is aligned with this type of computation. Instead of traditional looping, bitcoin scripting is targeted towards massively parallel computations.
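The loop-free model is easier to see in miniature. The sketch below is plain Python and purely illustrative, with f standing in for any puzzle function; it contrasts a sequential search with the same search unrolled into independent predicates, each of which could, in principle, be evaluated by a separate party or transaction:

```python
# Illustrative only: a sequential loop versus the same search "unrolled"
# into independent, order-free checks, in the spirit of loop-free script.
def f(x):
    return x * x  # stand-in for any puzzle or predicate

TARGET = 49

# Sequential style: one machine iterates through every state in turn.
def sequential_search(n):
    for x in range(n):
        if f(x) == TARGET:
            return x
    return None

# Unrolled style: n independent predicates with no shared state; each
# check could be evaluated by a different party, in any order.
solutions = [x for x in range(100) if f(x) == TARGET]
print(sequential_search(100), solutions)  # 7 [7]
```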

One way to achieve this is to use hash puzzles and other computational puzzles, secured through Boolean statements that link the puzzle to be solved with a payment address. We can create transactions that substitute each possible value of a variable. If we wanted to trial a variable from 0 to 1 billion, instead of sequentially cycling through each state, we could run the trials simultaneously in parallel. Every transaction would be executed, with the one that correctly solves the puzzle leading to the payment. This opens the possibility of completely outsourcing computation economically: not just storage, mind you, but each sub-routine and each calculation or computation.
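As a minimal sketch of the idea (assuming a simple SHA-256 hash puzzle; the names PUZZLE_TARGET and find_in_range are mine, not any library’s), the candidate range can be split across worker processes so that every sub-range is trialled simultaneously rather than cycled through in sequence:

```python
import hashlib
from multiprocessing import Pool

# Hash puzzle: find x whose SHA-256 digest matches the target.
PUZZLE_TARGET = hashlib.sha256(b"271828").hexdigest()

def find_in_range(bounds):
    lo, hi = bounds
    for x in range(lo, hi):
        if hashlib.sha256(str(x).encode()).hexdigest() == PUZZLE_TARGET:
            return x  # in the transaction model, this worker claims payment
    return None

if __name__ == "__main__":
    n, workers = 1_000_000, 8
    step = n // workers
    ranges = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        results = [r for r in pool.map(find_in_range, ranges) if r is not None]
    print(results)  # [271828]
```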

Therefore, Moore’s Law is not about the number of transistors; it concerns the economic growth of a system. The effectiveness with which a system is utilised is a separate matter, and the creation of larger and faster computers is limited by the software they run. We are now headed towards more economical systems, in which the costs of computers will continue to decrease over time. Most importantly, the results of Moore’s Law relate to the energy economy. This does not mean that we will use less energy to run our machines; it means that the amount of energy we use to complete an individual calculation will decrease. What truly matters is not the raw number of transistors; it is the extent of our computing efficiency. Even now, the drive towards exascale technologies moves us into the future. The creation of systems that run multiple cores will certainly change the nature of computing.
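A toy calculation makes the distinction concrete. With assumed numbers only (energy per operation halving every 18 months, against demand doubling annually), per-calculation energy falls even while total consumption rises:

```python
# Assumed figures for illustration, not measured data.
energy_per_op, ops_per_year = 1.0, 1.0
for year in range(1, 11):
    energy_per_op *= 0.5 ** (12 / 18)  # halves every 18 months
    ops_per_year *= 2.0                # demand doubles every year
    print(year, round(energy_per_op, 4), round(energy_per_op * ops_per_year, 2))
# Energy per operation falls ~100x over the decade, yet total energy
# still grows, because demand outpaces the efficiency gain.
```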

THE NATURE OF THE COMPUTATION IS CHANGING

Computer science, and in particular computational theory, is a field that will be actively explored and researched in coming years. One reason that we began extensive experimentation with technologies like CUDA and the Xeon Phi architecture was our hope of scaling transaction processing on the bitcoin blockchain. Many of the papers we are now writing concern the use of highly parallelised code structures. Using a combination of puzzles and calculated ECC addresses, we can engage multiple parties or processes simultaneously to work out computational solutions with the expectation that they will be paid. This is not a standard pay-for-work situation; it is a distributed proof of work within a transaction.
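In outline, the spending condition simply ANDs the puzzle with the payment. The sketch below shows the structure only; it is not real Bitcoin script, and verify_signature is a placeholder for a genuine ECDSA check over secp256k1:

```python
import hashlib

# The puzzle: supply a pre-image of this digest.
PUZZLE_HASH = hashlib.sha256(b"42").hexdigest()

def verify_signature(sig, pubkey, message):
    # Placeholder for a real ECDSA verification over secp256k1.
    return sig == hashlib.sha256(pubkey + message).hexdigest()

def can_spend(solution: bytes, sig: str, pubkey: bytes) -> bool:
    puzzle_ok = hashlib.sha256(solution).hexdigest() == PUZZLE_HASH
    payment_ok = verify_signature(sig, pubkey, solution)
    return puzzle_ok and payment_ok  # the Boolean link: puzzle AND payee

worker_pub = b"\x02" * 33  # stand-in for a compressed public key
sig = hashlib.sha256(worker_pub + b"42").hexdigest()
print(can_spend(b"42", sig, worker_pub))  # True
```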

The changes mirror several others that I have seen before. In the move from 8-bit to 16-bit to 32-bit and now 64-bit architectures, we have had to change and adapt our software along the way. The next change in software will be one of highly parallelised systems running across ultra-wide buses. The next generation of computer architecture is already available, running on 512-bit registers. The change in code is immense, but we can hope it is sufficient to keep us going for several more years.
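As an analogy (in NumPy, purely for illustration; this is not AVX-512 code, but the shift in habit is the same one ultra-wide registers demand of software):

```python
import numpy as np

data = np.arange(1_000_000, dtype=np.float64)

# Scalar habit: one element per step, the pre-SIMD style of thinking.
def sum_of_squares_scalar(xs):
    total = 0.0
    for x in xs:
        total += x * x
    return total

# Vector habit: whole-array operations, the way a 512-bit register
# processes eight 64-bit lanes at once.
def sum_of_squares_vector(xs):
    return float(np.dot(xs, xs))

print(sum_of_squares_scalar(data[:10]), sum_of_squares_vector(data[:10]))  # 285.0 285.0
```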

The aforementioned article from The Economist argues that there is a definite limit to clock speed and thermal design. Soon, even the growth in the number of transistors will change. But this does not mean that computational power will stagnate or diminish. The article jokes, ‘Moore’s law has been predicted to end for as long as it has been in existence’. However, many facts in the article should be corrected. For example, the cost of computer chips has not been increasing; statements that it is increasing are not true in either absolute or relative terms.

The most cogent concern relates to the software we create. Software does not scale at anywhere near the rate that hardware does. For all the advances we have made, the one we have not made (to any great extent) is in software. Software is slower now and increasingly bloated; many of the skills needed to create more efficient software have been lost, because we use hardware to fix software problems.

Perhaps it is not technology so much as economics that allows the downward slide. Market forces dictate much of the effect. The uptake of newer types of architecture has been delayed, though more by consumers than by manufacturers. Similar to the pain felt in the slow change from 16-bit to 32-bit architecture, the change from our existing 64-bit registers to the newer 512-bit systems has been difficult, to say the least.

The article in The Economist did not mention many of the technological changes already available. The main point is that knowledge of these areas remains limited. This is not to say that changes cannot be implemented, or implemented quickly, but the software to make those changes must be developed. As stated above, I do not see a technological end to the increase of processing power in my lifetime. I see limits caused by problems of software, as the level of complexity grows faster and faster.

In our leap forward, we will create more specialised chips and better software. The chips are the easy part; the software will always take us longer to work out. For example, multi-core chips already exist, and the difficulty lies in figuring out how best to use them.

The comment that some mathematical tasks cannot be computed is technically incorrect. It is true that some large processing requires large chips, but there is very little that we know of that cannot be done on a 512-bit chip. How well we do it is a different matter. So, the introduction of multi-core machines must be managed in novel ways, and, in this regard, programming has become a different beast. We have moved from something that people could do with a small skillset into a highly specialised technical area. I hate to admit it, but, today, the best I can do is dabble.
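A small sketch shows why multi-core programming is a different beast: the same increment, run across threads without coordination, is a read-modify-write that can lose updates. (CPython’s global interpreter lock masks some races, which is why the lock is made explicit here.)

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # remove the lock and the count can come up short
            counter += 1  # read, add, store: not atomic on its own

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; unpredictable without it
```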

I am an adequate programmer for selected tasks. Algebraic mathematics, functional design stages, and complete back-end work are all within my technical bailiwick. Recently, and by that I mean in the last three years, my ability to code across parallel systems and with many threads has improved, although I remain extremely inefficient. For this reason, we have been hiring many talented developers who are far better coders than I could ever hope to be, which demonstrates the nature of specialisation. So, in the future, Moore’s Law will not hold us back; the boundaries of the human intellect will determine whether we stagnate or grow.

Thus, to make these new systems work effectively and efficiently, we must code and develop at a level and pace far superior to our current activity. This is the heart of my worries. The costs and prices of computers and computational power are decreasing daily. The skillset needed to successfully develop new systems is not growing as fast as it must to keep up. In the end, what we have is the hope that human ingenuity and innovation can solve the problems as they arise.


