Multicore processors sound like the hit of the moment. However, the technology is actually quite old: the first multicore processors were designed around 2004, more than ten years ago. That is a long time for a "modern" technology. Because of this, most people assume their computers can run many more programs in less time, and their intuition would be right, except for the fact that they don't. Why does this happen? The answer is simple: most programs are not designed to run on more than one processor at once.
This problem has left users disappointed, because we, as users, want speed, effectiveness, reliability, and many other qualities from a computer. And these expectations grow in quantity and quality every year, thanks to marketing and the other techniques sellers use to convince us to buy a brand-new computer every year (at least).
As we can read in Sutter's article, multicore processors have taken over from the steady clock-speed increases we used to get for free under Moore's Law. Hardware engineers are no longer spending their time on raising clock speed because, no matter how simple or complex the processor architecture is, the energy the processor dissipates as heat grows with the clock speed. In other words, the faster the clock, the more heat the processor gives off. And, of course, heat hurts performance and can even destroy the processor. This is one of the main reasons clock speed is no longer the thing to improve.
Nowadays our computers come with multicore processors, and this goes hand in hand with hyperthreading and with the trend toward ever-larger caches. As Sutter says, if we increase the size of the cache, more memory accesses become "hit" events instead of "miss" events. The other technique we need in order to use everything a multicore processor offers is parallelism.
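As a quick, hedged illustration (my own sketch, not code from the article), standard C++ lets a program ask how many hardware threads the machine exposes, which on a hyperthreaded multicore chip is usually the number of cores times the threads per core:

    // Minimal sketch: ask the standard library how many hardware threads
    // (cores x hyperthreads) this machine exposes.
    #include <iostream>
    #include <thread>

    int main() {
        unsigned int n = std::thread::hardware_concurrency();
        // The standard allows this to return 0 when the value is unknown.
        std::cout << "Hardware threads available: " << n << "\n";
        return 0;
    }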
However, parallelism is not a common way to program, mainly because of its complexity. But that is no excuse to keep writing single-core code, because, when it comes to performance, the only way to speed up our programs nowadays is to learn to parallelize them and let the multicore processor use its physical advantages, where and when possible.
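To make that less abstract, here is a minimal sketch of what "parallelizing" can look like with standard C++ threads. It is my own illustration, not code from Sutter's article, and the names (sum_chunk, parallel_sum) are invented for the example: the work of summing a large array is split into chunks, one per hardware thread.

    // Sketch: splitting a summation across the available hardware threads.
    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Each thread sums its own chunk into its own result slot, so no locking is needed.
    void sum_chunk(const std::vector<int>& data,
                   std::size_t begin, std::size_t end, long long& out) {
        out = std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
    }

    long long parallel_sum(const std::vector<int>& data) {
        unsigned int n_threads = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> workers;
        std::vector<long long> results(n_threads, 0);

        std::size_t chunk = data.size() / n_threads;
        for (unsigned int i = 0; i < n_threads; ++i) {
            std::size_t begin = i * chunk;
            std::size_t end = (i == n_threads - 1) ? data.size() : begin + chunk;
            workers.emplace_back(sum_chunk, std::cref(data), begin, end,
                                 std::ref(results[i]));
        }
        for (auto& t : workers) t.join();   // wait for every worker to finish
        return std::accumulate(results.begin(), results.end(), 0LL);
    }

    int main() {
        std::vector<int> data(10000000, 1);
        std::cout << "Sum = " << parallel_sum(data) << "\n";   // expect 10000000
        return 0;
    }

Each thread works on its own slice of the data, which is exactly the kind of "splitting" that lets the extra cores do useful work.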
Programmers can keep programming for a single core. But if they want better performance from their own code, they have to learn how to split the work and exploit most of the resources these new systems provide.
And if we can manage two cores, there is a good chance we can handle more, and give our users the ability to do many more things at a time.
Notes:
A "hit" event occurs when the piece of data the processor is waiting for is already loaded in the cache.
A "miss" event occurs when that piece of data is not in the cache, so the system must look for it in main memory (or even on the hard drive) and then fetch it from there.
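As a small, hedged illustration of these two events (my own example, not from the article), the order in which a program walks through memory largely decides how many accesses hit the cache. The sketch below sums the same matrix twice: row by row, which matches how the data sits in memory and mostly hits, and column by column, which jumps around and produces many misses; the actual timings depend on the machine.

    // Sketch: the same work done in a cache-friendly and a cache-hostile order.
    #include <chrono>
    #include <iostream>
    #include <vector>

    int main() {
        const std::size_t N = 4096;
        std::vector<int> m(N * N, 1);   // N x N matrix stored row by row

        auto time_it = [&](const char* label, auto body) {
            auto start = std::chrono::steady_clock::now();
            long long sum = body();
            auto stop = std::chrono::steady_clock::now();
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
            std::cout << label << ": sum=" << sum << " in " << ms.count() << " ms\n";
        };

        // Row order: consecutive iterations touch consecutive addresses (mostly hits).
        time_it("row order", [&] {
            long long s = 0;
            for (std::size_t i = 0; i < N; ++i)
                for (std::size_t j = 0; j < N; ++j)
                    s += m[i * N + j];
            return s;
        });

        // Column order: each iteration jumps N ints ahead (many misses).
        time_it("column order", [&] {
            long long s = 0;
            for (std::size_t j = 0; j < N; ++j)
                for (std::size_t i = 0; i < N; ++i)
                    s += m[i * N + j];
            return s;
        });
        return 0;
    }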
Sutter, H. (2009, August). The Free Lunch Is Over. Retrieved August 23, 2015, from http://www.gotw.ca/publications/concurrency-ddj.htm