IBM, Cray, Intel, and HP are each pouring millions of dollars into R&D of exascale computing. Unfortunately, those same companies have invested approximately zero in quantum computing. They haven’t even funded a no-brainer like an X-prize for quantum computing. The sad truth is that about 99.9% of current research into quantum computing in the USA is being funded by military and spy agencies.
It’s fine and natural for those companies to try to extend what they already know: multicores. What I find troubling is that they are not investing in quantum computing at the same time, despite the fact that commercially viable exascale computing will be extremely difficult to achieve. As difficult as quantum computing, or even more so, IMHO.
Failure to achieve commercially viable exascale computing is a very real possibility. Dreams of parallel computing have flopped monumentally before; Japan’s Fifth Generation computer project, for example. Even Linus Torvalds, whose common sense is legendary, has expressed reservations about exascale computing using multicore processors. Even more damning is the 2008 Kogge report, commissioned by DARPA and authored by a slew of supercomputing luminaries, which concluded that power consumption, especially the power used to “move data around”, will make multicore exascale computers impractical. Is “deep computing” in deep trouble?
Some people might say, Oh well, even if we fail to achieve commercially viable exascale computing, our efforts in that direction will still produce numerous spinoffs. Well, I think quantum computing has the potential to produce even more spinoffs than exascale computing. Why? Let me put it this way. Exascale computing is like reaching for some extremely high fruit in the tree of multicore processors, because all the low-hanging fruit have already been picked from that tree. Quantum computing, on the other hand, is like picking low-hanging fruit from a tree that has never been picked before.
Here we have quantum computing, most likely a greener (more energy efficient) technology than multicore processors, and one which, if successful, could produce thousands of new jobs for the USA. But are those companies funding it? Nah. Instead, those geriatric companies are trying to build a horse buggy 10 times the usual length, even though everyone tells them that maybe they should also look into internal combustion engines.
It appears that QCs can perform only certain special tasks better than classical computers. Hence, QCs will probably never totally replace classical computers, just complement them. Therefore, I believe in the need for research into exascale computing. And I sincerely hope that the HPC community will succeed in their quest for exascale computing. But let us be perfectly frank. According to many supercomputer experts, there are many, many reasons why (commercially viable) exascale computing may fail. Oh, let me count the ways.
A Toaster by any other name is still a …
Today’s fastest supercomputers have about 2e5 cores. At 10 watts/core, that means they consume 2e6 Watts, as much power as a whole town. Using the same technology with 1e8 cores would consume 1e9 Watts, the power produced by a medium-sized nuclear power plant. This is clearly indefensible. Hence, to achieve commercially viable exascale computing, the current technology must be made dramatically more energy efficient. The exascale computer advocates have set themselves a limit of 1e7 Watts per installation. I personally believe that 1e7 Watts is still too high. Even the current 2e6 Watts per installation is already too high, in my opinion.
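The arithmetic behind these figures is simple enough to sketch. Here is a minimal back-of-the-envelope calculation in Python; the flat 10 watts/core figure is the rough estimate used above, not a measured number, and real machines also burn power on interconnect, memory and cooling.

```python
# Back-of-the-envelope power budget for a multicore supercomputer,
# assuming a flat 10 W per core (an illustrative figure, not a spec).

WATTS_PER_CORE = 10

def total_power_watts(num_cores, watts_per_core=WATTS_PER_CORE):
    """Total power draw for a machine with num_cores cores."""
    return num_cores * watts_per_core

today = total_power_watts(2e5)   # roughly today's fastest machines
exa = total_power_watts(1e8)     # planned exascale core count

print(f"today: {today:.0e} W")     # 2e+06 W, as much as a whole town
print(f"exascale: {exa:.0e} W")    # 1e+09 W, a medium nuclear plant
```

Both figures blow past the advocates’ own 1e7-watt target unless the watts-per-core number drops dramatically.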
Making supercomputers more energy efficient will require revolutions in chip design, interconnections and cooling. Furthermore, “The currently available main memories (DRAM) and disk drives (HDD)… consume way too much power. New technologies are needed.”
And forget about obtaining any dramatic boosts in performance by further miniaturizing chip components. We’ve pretty much reached the end of that road, unless you want to reduce transistors to a few atoms, and then you might as well be doing quantum computing.
Current Top500 supercomputers have ~1e5 cores. The planned exascale computers will have 1e8 cores. With that many cores, one faces the prospect of cores failing constantly. The software running exascale computers will have to constantly detect and correct hardware failures, or else the computer will only run for short periods of random length. So radically new “autonomous” software will be required. (Deja vu? QCs may also require error correction! But note that with QCs, one doesn’t correct for hardware failures; one corrects instead for noise-induced errors, which is preferable if you want your hardware to last longer.)
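To see why constant failures are unavoidable, suppose (purely for illustration; the per-core figure is my assumption, not a vendor number) that each core fails independently about once every ten years. The mean time between failures for the whole machine then scales as the per-core figure divided by the number of cores:

```python
# System mean time between failures (MTBF) for N independent cores.
# The 10-year per-core MTBF is an illustrative assumption; the point
# is the scaling: system MTBF = (per-core MTBF) / N.

SECONDS_PER_YEAR = 365 * 24 * 3600

def system_mtbf_seconds(num_cores, core_mtbf_years=10):
    return core_mtbf_years * SECONDS_PER_YEAR / num_cores

print(system_mtbf_seconds(1e5))  # ~3154 s: a failure every hour or so
print(system_mtbf_seconds(1e8))  # ~3.2 s: failures essentially nonstop
```

At 1e8 cores the machine cannot even finish loading a job between failures, which is why the software must route around hardware faults continuously rather than treat them as exceptional events.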
Writing and debugging parallel programs is already notoriously difficult. Typically, one has to break the program into as many semi-autonomous tasks as possible and design by hand how all those tasks will communicate with each other. Then one has to worry about the possibility that the computer will hang due to network collisions. These problems will only be exacerbated by increasing the number of cores from 1e5 to 1e8. (Deja vu? QCs are also hard to program.)
A Prestige Race: The Price Tag Puts It Out of Reach for Almost Everyone
Just one of today’s fastest supercomputers (Jaguar, Nebulae, Roadrunner) costs more than 100 million dollars, and that doesn’t include the additional costs of infrastructure, maintenance, staff and electric bills. One can expect exaFLOPS computers to cost at least that much. Who is going to be able to afford them, except maybe China? European countries and Japan are currently instituting austerity measures. The USA is reeling after two wars, the financial disaster of 2008 with its ensuing bailouts and stimulus packages, record unemployment, 48 out of 50 states that can’t balance their budgets, and the Gulf of Mexico oil spill. Few private companies or universities can afford to spend 100 million dollars on just one computer, especially considering that most computers quickly become obsolete. Volunteer distributed computing like BOINC shields the server managers from this huge price tag, but with current technology, an exaFLOPS BOINC would expect its volunteers to collectively consume 1e9 Watts, which is kind of unconscionable. As I already pointed out here, even now, high performance computing is a race in which only rich countries can participate. India, South America, and Africa are too poor to compete, and even mighty Japan has been falling behind in this race for at least a decade.
Can Only Solve Narrow Class of Problems
Exascale computer software will have to distribute workload approximately evenly among 1e8 cores, or else the computer’s advantage will be nullified. There is only a narrow class of practical problems for which this is feasible. (Deja vu? Quantum computers also excel at solving only a narrow class of problems).
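One classical way to quantify this requirement is Amdahl’s law, which gives the speedup on N cores when a fraction p of the work parallelizes perfectly. The sketch below (the specific p values are just illustrations) shows how even a tiny serial or unbalanced fraction nullifies the advantage of 1e8 cores:

```python
# Amdahl's law: speedup on n_cores when a fraction p of the work
# parallelizes perfectly and the rest (1 - p) stays serial.

def speedup(p, n_cores):
    return 1.0 / ((1.0 - p) + p / n_cores)

print(speedup(0.99, 1e8))      # ~100: a 1% serial fraction caps you at 100x
print(speedup(0.999999, 1e8))  # ~9.9e5: still two orders short of 1e8
```

Only problems whose serial fraction is vanishingly small, a narrow class indeed, can keep 1e8 cores busy.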
Wladawsky-Berger says in Ref.1,
“One of their most compelling conclusions is that with exascale computing, we are reaching a tipping point in predictive science, an area with potentially major implications across a whole range of new, massively complex problems.”
Hey! The same is true of quantum computing. Quantum computerists also plan to use MCMC (Markov Chain Monte Carlo) to make predictions based on probabilities. Using statistical techniques when faced with large data sets is a no-brainer. But note that mathematical complexity theory tells us that QCs can do MCMC much faster (in a time-complexity sense) than classical computers (and that means faster than a zillion cores, or a biological computer, which is a classical computer too).
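For concreteness, here is what classical MCMC looks like at its simplest: a textbook Metropolis sampler drawing from a standard normal target. This is a generic illustration of my own, not code from any exascale or quantum project; the quantum speedups mentioned above concern the mixing time of chains like this one.

```python
import math
import random

# Minimal Metropolis MCMC sampler for a 1-D standard normal target.
# A classical chain explores the distribution by a random walk whose
# mixing time is what quantum walk algorithms can provably shorten.

def log_target(x):
    """Log-density of the standard normal, up to an additive constant."""
    return -0.5 * x * x

def metropolis(num_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(num_samples):
        proposal = x + rng.gauss(0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(50_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# For a standard normal target, mean should land near 0 and variance near 1.
```

On a classical machine the chain must be run long enough to forget its starting point; that burn-in cost is exactly where the quantum time-complexity advantage bites.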
1. Opinion: Challenges to exascale computing, by Irving Wladawsky-Berger (retired from IBM), iSGTW (International Science Grid This Week), April 7, 2010
2. Future exaflop supercomputers will consume too much power without new software, by Anne-Marie Corley, IEEE Spectrum, June 2010
3. Exascale supercomputers: Can’t get there from here?, by Sam Moore, Tech Talk blog, IEEE Spectrum, October 7, 2008
4. Why the computer is doomed, by Omar El Akkad, The Globe and Mail, Jan. 29, 2010
5. Exascale Computing Requires Chips, Power and Money, by Alexis Madrigal, Wired, Feb. 22, 2008