Processors have come a long way. The entire mathematical juggling of the information technology age rests on processors running inside boxes called computers. The processor determines the very existence of the computer. It is the brain, the single most powerful piece of the entire box. The world of processors has evolved rapidly over the years, and for those who have been keeping tabs on the trend, it is fair to say that processors and operating systems have been evolving jointly. Before I go any further, I should warn you that this article is a jungle of names. Some of them sound like they came straight out of science fiction (Ageia PhysX, Aeroflex Gaisler LEON3, Opteron, Celeron; and before you think too deeply, there is no Optimus Prime or Megatron, those are characters in the Transformers films). But suffice it to say that science fiction is all about technology, and we in the technology world are right in the middle of it.
WHAT IS SMP?
Let us get started by defining terms. What does the term SMP mean? It stands for Symmetric Multi-Processing. And what does that mean in itself? Let us settle the word symmetry first. Take a look at the diagram below for a good example of what it means.
The diagram above shows a box divided into two halves. The two halves, A and B, are identical. That is what symmetry means: identical, functioning the same way, similar, homogeneous. The tree is also an example of symmetry, divided along equal lines with no preference for either side.
The meaning of multiprocessing is easier to grasp. It means performing multiple tasks at the same time, or even different parts of the same task at the same time; the results are not necessarily delivered at the same instant, but close enough together to look as if they were. See the diagram below:
So we can now put up a diagram of what symmetric multi-processing is all about: identical, homogeneous processing units running at the same time.
How does this work? By combining processor hardware and software architecture (not mainstream desktop application software), two or more processors designed to be symmetrical (identical) are connected to a shared primary memory, with access to the input and output (I/O) devices. This is what happens in the computer systems on which a modern operating system sits, from Windows NT through Windows 8, including the open-source flavours.
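The shared-memory idea can be sketched in software. The following Python snippet (my own illustration, not anything from a particular operating system) starts several worker processes that all update one integer living in shared memory, much as SMP processors all read and write one primary memory, with a lock playing the role of the bus arbitration that keeps them from trampling each other:

```python
import multiprocessing as mp

def worker(counter, lock, n):
    # Each worker runs on whichever CPU the OS schedules it on;
    # all of them update the same shared-memory value, just as
    # SMP processors share one primary memory.
    for _ in range(n):
        with lock:
            counter.value += 1

def run_demo(workers=4, increments=1000):
    counter = mp.Value("i", 0)   # an int living in shared memory
    lock = mp.Lock()             # serialises access, like bus arbitration
    procs = [mp.Process(target=worker, args=(counter, lock, increments))
             for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return counter.value

if __name__ == "__main__":
    print(run_demo())  # 4 workers x 1000 increments = 4000
```

Without the lock, the workers would corrupt each other's updates, which is a small taste of why SMP programming is harder than single-processor programming.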
In this elaborate architectural dance between processors and motherboard, no processor is dedicated to any special function or treated with any special preference or affinity by the operating system. Each processor in this highly integrated multiprocessor system executes programs individually and works on data individually, yet shares the common resources that feed it with data, such as the I/O devices, memory and the interrupt (IRQ) system; the connecting point is the system bus on the motherboard of your computer.
Not far away from the processor is something known as the CACHE, a repository of processed results stored for speedy retrieval: if a calculation previously done is requested again, it can be served from the cache, saving traffic on the system bus. Caches come in levels known as L1, L2 and L3. The L1 cache sits inside the CPU, which manages it directly. The L2 cache is also managed by the CPU, but its memory is external to the CPU. The L3 cache is external, managed externally as well, and typically shared by multiple CPUs.
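The principle behind a cache, serving a previously computed answer instead of recomputing it, is easy to demonstrate in software. A minimal sketch using Python's standard `functools.lru_cache` (a software analogy to the hardware cache, not a model of it):

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def expensive_square(n):
    # stands in for a computation whose result is worth caching
    return n * n

expensive_square(12)   # computed and stored in the cache
expensive_square(12)   # served straight from the cache, no recomputation
print(expensive_square.cache_info().hits)  # 1
```

The second call never touches the "expensive" computation, which is exactly the traffic-saving effect the hardware cache has on the system bus.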
One of the major challenges with SMP lay in the areas of scalability, programming and performance. On the performance side, though, SMP allows any processor to work on a task irrespective of where that task sits in memory, as long as the task is not already executing on another processor. This cannot be done without programming, and the programming required is of two types: the programming for the CPU itself, and the programming for the interconnection between the CPUs.
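That "any processor takes any pending task" scheduling model can be mimicked with a worker pool. In this hedged Python sketch (an analogy at the software level, not the kernel scheduler itself), no task is bound to a particular worker; whichever worker is free picks up the next one, just as an SMP scheduler hands work to whichever processor is idle:

```python
from concurrent.futures import ThreadPoolExecutor

def task(x):
    # a stand-in for any unit of work the scheduler may hand out
    return x * x

# Any free worker picks up any pending task; none of the eight
# tasks is tied to a specific worker thread.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The results come back in order even though the tasks may have run on different workers in a different order, which mirrors SMP's promise that symmetry is invisible to the program.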
The practical purpose of SMP, which for many years was the underpinning of desktops, laptops and servers, was to run the multithreaded applications that dominated the market. The designs varied, spanning superscalar, VLIW, SIMD and multithreading techniques that allow inter-processor communication when handling tasks. Processors built for SMP included: Intel Xeon, Pentium D, Pentium Pro, Pentium 2, Pentium 3, Intel Pentium 2 Xeon, Intel Pentium 3 Xeon, Core Duo, Core 2 Duo, AMD Athlon64 X2, Quad FX, Opteron 200, Opteron 2000, Sun Microsystems UltraSPARC, Fujitsu SPARC64 III, SGI MIPS, Intel Itanium, Hewlett Packard PA-RISC, DEC Alpha, IBM POWER, PowerPC G4 and PowerPC G5.
WHAT IS A MULTI-CORE PROCESSOR?
A multi-core processor is physically a single processor, but it houses two or more independent CPUs, each now referred to as a “core”. Each core executes program instructions just as a separate physical processor would. Diagrammatically, it would be represented thus:
From the very beginning, processors were designed to house only one core, as shown in the diagram on the left. The other two diagrams show a two-core (dual-core) processor and a four-core (quad-core) processor; some designs pack in as many as 48 cores! Multiple cores come in very useful for general-purpose applications (overkill, I would say), but more so for graphics applications such as video editing, digital signal processing, networking and the like. Examples here include the AMD Phenom II X4, AMD Phenom II X6, Intel Core i5, Intel Core i7, Intel Core i7 Extreme Edition 980X, Intel Xeon E7-2820, AMD FX-8350 and Intel Xeon E7-2850. The scenario is such that a single processor package hosts anywhere from 2 to 12 cores, all multiprocessing. Depending on the design, some cores may share a single cache while others do not, and the cores can communicate with each other to coordinate what to process and when to idle. This called for architectures similar to those of single-processor SMP: VLIW (Very Long Instruction Word), vector processing, SIMD (Single Instruction Multiple Data), multithreading or superscalar execution.
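You can see how many cores your own machine exposes, and spread work across them, with a few lines of standard-library Python. This is a hedged sketch of the idea, not a benchmark; note that `os.cpu_count()` reports logical cores, which may exceed the physical core count:

```python
import os
import multiprocessing as mp

def heavy(n):
    # a CPU-bound stand-in; on a multi-core machine each call
    # can land on a different core
    return sum(i * i for i in range(n))

def parallel_sums(sizes):
    # a process pool sized to the machine's logical core count,
    # so the tasks can genuinely run on separate cores
    with mp.Pool(processes=os.cpu_count()) as pool:
        return pool.map(heavy, sizes)

if __name__ == "__main__":
    print("logical cores:", os.cpu_count())
    print(parallel_sums([10, 100, 1000]))
```

On a quad-core machine the four pool workers can execute four `heavy` calls simultaneously, which is the whole point of putting the cores in one package.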
How does this benefit anyone, you may ask? The fact that the cores sit very close to each other is a significant speed boost: when multiple physical processors were deployed, signals had to travel through the motherboard circuitry between them. This gain in effective speed is very significant. The second gain evident in this technology was power consumption; a system with multiple physical processors consumes more power, not only to run each processor but also to drive signals across the circuitry between processors, the L3 caches and so on. As the mobile age beckons and battery-powered handheld devices begin to demand high-end performance, delivering that performance with energy efficiency has become a necessity, so this is definitely added value. Another saving was the reduction in the number of caches, since cores share cache; in earlier physical-processor designs, the law of diminishing returns set in once cache numbers hit a certain figure. The dark side was that it sent software vendors back to the drawing board to design for multi-core support. A little looking around the Internet shows that everyone has adjusted accordingly, from Apple's Mac OS X Snow Leopard and iOS 4 to Microsoft and the game-engine designers; even my Android-powered tablet is now quad-core, and it is definitely groovy for me.
So what choice do I have to make? Can I just go with the trend, now that everyone is moving to multi-core systems? Or do I stay with multi-processing systems? It all boils down to the individual. Suffice it to say, though, that from a technical point of view the difference comes down to what exactly you are doing with your system. I am not arguing that SMP has to die, but progressively speaking, multi-core is the better choice for high-end users and servers.