I am inevitable

WARNING! The free lunch is over.

“The Free Lunch Is Over” is a famous article by Herb Sutter reminding us that microprocessor serial-processing speed is reaching a physical limit, and that it is time to focus on multi-threading and on hardware that better supports it, such as multi-core processors.

Before parallelism and concurrency took centre stage, CPU designers concentrated on performance gains in three main areas:

  1. Clock speed
  2. Execution optimisation
  3. Cache

But the performance gains in these areas are diminishing. Extrapolating Intel’s pre-2003 CPU trends, we should have had 10 GHz chips by 2006, but a quick look around tells us that even in 2021 a 10 GHz chip is still not a reality; progress in clock speed has essentially hit a plateau since 2003. This is primarily because of the PPA (Power / Performance / Area) trade-off.

Does this mean Moore’s law is over? The answer is no, because transistor counts are still exploding. But like all exponential progressions, Moore’s law has to end eventually, and this calls for a change in how we improve CPU performance.

In recent years, CPU designers have turned to hyper-threading, multiple cores, and cache.

  1. Hyper-threading: it can produce a performance boost for multi-threaded programs, but the limiting factor is that the two hardware threads share a single set of execution resources: one integer unit, one FPU, and one cache.
  2. Multi-core: this boosts a multi-threaded program reasonably well, but not a single-threaded one. It is also a myth that 2 × 3 GHz = 6 GHz; the effective result is always less than 6 GHz (Amdahl’s law makes this precise; see the first sketch after this list).
  3. Cache: multiple levels of cache, shared smartly between cores, improve performance considerably. In the words of Herb Sutter, “cache is king” and “accessing main memory is expensive” (the second sketch after this list shows how visible this is from software). But growing the cache may not be feasible because of die area (real estate) constraints, and with memories, speed comes at a price: faster memory costs more per byte.
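
A back-of-the-envelope way to see why two 3 GHz cores never behave like one 6 GHz core is Amdahl’s law: if a fraction p of a program can run in parallel, the maximum speedup on n cores is 1 / ((1 − p) + p / n). Here is a minimal sketch in Go; the values of p are illustrative, not measurements:

```go
package main

import "fmt"

// amdahl returns the theoretical speedup on n cores for a program
// whose parallelisable fraction is p: 1 / ((1-p) + p/n).
func amdahl(p float64, n int) float64 {
	return 1.0 / ((1.0 - p) + p/float64(n))
}

func main() {
	// Even a program that is 90% parallel gets only ~1.82x on 2 cores,
	// i.e. two 3 GHz cores behave like ~5.45 GHz, not 6 GHz.
	for _, p := range []float64{0.5, 0.9, 0.99} {
		fmt.Printf("p=%.2f  2 cores: %.2fx  8 cores: %.2fx\n",
			p, amdahl(p, 2), amdahl(p, 8))
	}
}
```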
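
And a small demonstration of why “cache is king”: summing the same matrix row-by-row (cache-friendly) versus column-by-column (cache-hostile) can differ by several times on typical hardware. A minimal sketch, again in Go; the 4096 × 4096 size is arbitrary:

```go
package main

import (
	"fmt"
	"time"
)

const n = 4096

func main() {
	m := make([][]int64, n)
	for i := range m {
		m[i] = make([]int64, n)
	}

	// Row-major traversal: consecutive accesses hit the same cache lines.
	start := time.Now()
	var sum int64
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			sum += m[i][j]
		}
	}
	fmt.Println("row-major:   ", time.Since(start), sum)

	// Column-major traversal: each access touches a different cache line,
	// so most of them miss and fall through to slower levels of memory.
	start = time.Now()
	sum = 0
	for j := 0; j < n; j++ {
		for i := 0; i < n; i++ {
			sum += m[i][j]
		}
	}
	fmt.Println("column-major:", time.Since(start), sum)
}
```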

So what does this change in hardware (multi-core, hyper-threading) mean for the way we write software? The answer: well-written concurrent / parallel / multi-threaded programs. From the points above it is clear that squeezing more performance out of hardware alone is limited; the good old “higher frequency and a smaller node imply higher performance” story no longer holds. There are still accelerators and SoCs like the Apple M1, and perhaps quantum computing once device physics becomes the limiting factor, but in the big picture the time has come to put in an effort from our side as well and use our lazy brains. Hence, “the free lunch is over” and concurrent / parallel / multi-threaded programming has become INEVITABLE! A first taste of that effort follows.
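
Here is a minimal sketch in Go (one of the languages mentioned below) that spreads a large sum over one goroutine per logical core; the workload and sizes are illustrative only:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	data := make([]int64, 1<<22)
	for i := range data {
		data[i] = int64(i)
	}

	workers := runtime.NumCPU() // one worker per logical core
	partial := make([]int64, workers)
	chunk := (len(data) + workers - 1) / workers

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			var s int64
			for _, v := range data[lo:hi] {
				s += v
			}
			partial[w] = s // each worker writes its own slot: no race
		}(w, lo, hi)
	}
	wg.Wait()

	var total int64
	for _, s := range partial {
		total += s
	}
	fmt.Println("sum:", total)
}
```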

What are the consequences of parallelism?

  1. The primary consequence is that if you want to fully exploit CPU throughput, parallelism is necessary, and most applications are increasingly becoming parallel.
  2. Programming languages and systems will increasingly be forced to deal well with concurrency and parallelism. As a result, languages like Go and Rust and libraries like OpenMP are popping up.
  3. Developers have a harder time, since working with multiple threads and multiple cores is not an easy task; races, deadlocks, and non-deterministic bugs are common (see the sketch after this list).
  4. Probably a rethinking of computer architecture from the ground up.
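
To make point 3 concrete, here is the classic first mistake: two goroutines incrementing a shared counter with no synchronisation. This is a deliberately broken sketch; Go’s race detector (`go run -race`) flags it, and a `sync.Mutex` or `sync/atomic` is the standard fix:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var counter int
	var wg sync.WaitGroup

	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 100000; j++ {
				counter++ // DATA RACE: read-modify-write is not atomic
			}
		}()
	}
	wg.Wait()

	// Almost always prints less than 200000, and a different value
	// each run; `go run -race` reports the race explicitly.
	fmt.Println("counter:", counter)
}
```

The fix is a one-liner, but spotting races like this in a large codebase is exactly what makes multi-threaded programming hard.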

Is parallelism really inevitable?
