Tuesday, September 11, 2007

SIMD
In computing, SIMD (Single Instruction, Multiple Data) is a technique used to achieve data-level parallelism, as in a vector or array processor. SIMD was first popularized in large-scale supercomputers (as opposed to MIMD parallelization), but smaller-scale SIMD operations have since become widespread in personal computer hardware, and today the term is associated almost entirely with these smaller units.

DSPs
A separate class of processors exists for this sort of task, commonly referred to as digital signal processors, or DSPs. The main difference between SIMD-capable CPUs and DSPs is that the latter are complete processors with their own (often difficult to use) instruction set, whereas SIMD extensions rely on the general-purpose portions of the CPU to handle the program details, with the SIMD instructions handling only the data manipulation. DSPs also tend to include instructions for specific types of data, such as sound or video, whereas SIMD systems are considerably more general purpose. DSPs generally operate on scratchpad RAM fed by DMA transfers initiated from the host system, and are unable to access external memory directly. Some DSPs do include SIMD instruction sets. The inclusion of SIMD units in general-purpose processors has supplanted the use of DSP chips in computer systems, though they continue to be used in embedded applications. A sliding scale exists: the Cell's SPUs and the Ageia Physics Processing Unit could be considered halfway between CPUs and DSPs, in that they are optimized for numeric tasks and operate on local store, but because they can autonomously control their own transfers they are in effect true CPUs.

Advantages
An application that may take advantage of SIMD is one where the same value is being added to (or subtracted from) a large number of data points, a common operation in many multimedia applications. One example would be changing the brightness of an image. Each pixel of an image consists of three values for the brightness of the red, green and blue portions of the color. To change the brightness, the R, G and B values are read from memory, a value is added to (or subtracted from) them, and the resulting values are written back out to memory.
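For concreteness, a minimal scalar sketch of that loop is shown below; the function name brighten_scalar, the interleaved 8-bit RGB layout and the clamp to 255 are illustrative assumptions rather than details taken from the text.

#include <stddef.h>
#include <stdint.h>

/* Scalar version: brighten an interleaved 8-bit RGB image one channel value
   at a time, clamping to 255 so the result does not wrap around. */
void brighten_scalar(uint8_t *pixels, size_t n_bytes, uint8_t amount)
{
    for (size_t i = 0; i < n_bytes; i++) {
        unsigned v = pixels[i] + amount;          /* read and add */
        pixels[i] = v > 255 ? 255 : (uint8_t)v;   /* clamp and write back */
    }
}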
With a SIMD processor there are two improvements to this process. First, the data is understood to be in blocks, and a number of values can be loaded all at once. Instead of a series of instructions saying "get this pixel, now get the next pixel", a SIMD processor has a single instruction that effectively says "get lots of pixels" (where "lots" is a number that varies from design to design). For a variety of reasons, this can take much less time than fetching each pixel individually, as in a traditional CPU design.
Another advantage is that SIMD systems typically include only those instructions that can be applied to all of the data in one operation. In other words, if the SIMD system works by loading up eight data points at once, the add operation being applied to the data will happen to all eight values at the same time. Although the same is true for any superscalar processor design, the level of parallelism in a SIMD system is typically much higher.
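The same operation written with SIMD instructions makes both points concrete. The sketch below uses Intel's SSE2 intrinsics purely as an example (any of the instruction sets discussed later would work similarly); the 16-bytes-per-register width, the unaligned loads and the scalar tail loop are assumptions of this sketch, not requirements of SIMD in general.

#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stddef.h>
#include <stdint.h>

/* SIMD version: process 16 channel values per instruction. */
void brighten_sse2(uint8_t *pixels, size_t n_bytes, uint8_t amount)
{
    __m128i add = _mm_set1_epi8((char)amount);            /* broadcast the constant */
    size_t i = 0;
    for (; i + 16 <= n_bytes; i += 16) {
        __m128i v = _mm_loadu_si128((__m128i *)(pixels + i));  /* "get lots of pixels" */
        v = _mm_adds_epu8(v, add);                         /* saturating add on all 16 lanes */
        _mm_storeu_si128((__m128i *)(pixels + i), v);      /* write them all back */
    }
    for (; i < n_bytes; i++) {                             /* scalar tail for any leftovers */
        unsigned v = pixels[i] + amount;
        pixels[i] = v > 255 ? 255 : (uint8_t)v;
    }
}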

Disadvantages

Many SIMD designers are hampered by design considerations outside their control. One of these considerations is the cost of adding registers for holding the data to be processed. Ideally, the SIMD units of a CPU would have their own registers, but for practical reasons many designs reuse existing CPU registers, typically the floating-point registers. These tend to be 64 bits in size, smaller than optimal for SIMD use, and sharing them causes problems if code attempts to use SIMD and normal floating-point instructions at the same time, at which point the two sets of instructions fight over the registers. Such a system was used in Intel's first attempt at SIMD, MMX, and the performance problems were such that the system saw very little use. However, more recent x86 processor designs from Intel and AMD (as of November 2006) have eliminated the problem of shared SIMD and floating-point registers by providing a new, separate bank of SIMD registers. Still, in most cases the programmer does not know which processor model the code will run on.
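Because of this, portable code often decides at run time which version of a routine to call. A minimal dispatch sketch, assuming GCC or Clang on x86 (the __builtin_cpu_supports builtin is specific to those compilers, and the brighten_* functions are the hypothetical routines sketched above):

#include <stddef.h>
#include <stdint.h>

void brighten_scalar(uint8_t *pixels, size_t n_bytes, uint8_t amount);
void brighten_sse2(uint8_t *pixels, size_t n_bytes, uint8_t amount);

/* Choose an implementation once the actual processor is known. */
void brighten(uint8_t *pixels, size_t n_bytes, uint8_t amount)
{
    if (__builtin_cpu_supports("sse2"))   /* run-time CPUID feature check */
        brighten_sse2(pixels, n_bytes, amount);
    else
        brighten_scalar(pixels, n_bytes, amount);
}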
Packing and unpacking data to/from SIMD registers can be time-consuming in some applications, reducing the efficiency gained. If each datum (say, an 8-bit value) needs to be gathered/dispersed separately rather than loading an entire register in one operation, it is advisable to reorganize the data if possible, or consider not using SIMD at all.
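One common reorganization is to switch from an "array of structures" to a "structure of arrays" layout, so that a whole register's worth of one field is contiguous in memory. A sketch of the two layouts in C (the particle example and the names are purely illustrative):

/* Array of structures: the x values are 12 bytes apart, so filling a SIMD
   register with several x's means gathering them one element at a time. */
struct particle { float x, y, z; };
struct particle aos[1024];

/* Structure of arrays: all x values are contiguous, so a register's worth
   of them can be loaded with a single instruction. */
struct particles {
    float x[1024];
    float y[1024];
    float z[1024];
} soa;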
Though there has recently been a flurry of research activity into techniques for efficient compilation for SIMD, much remains to be done. For that matter, the state of the art for SIMD, from a compiler perspective, is hardly comparable to that for vector processing.
Because of the way SIMD works, the data in the registers must be well-aligned. Even for simple stream processing like convolution this can be a challenging task.
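For example, SSE2's fastest 128-bit load expects an address that is a multiple of 16 bytes, so buffers intended for SIMD work are usually allocated with that alignment. A small sketch, assuming a C11 compiler on x86 (aligned_alloc and the 16-byte figure are the assumptions here):

#include <emmintrin.h>
#include <stdlib.h>

int main(void)
{
    /* aligned_alloc (C11) returns a 16-byte-aligned buffer; the size must be
       a multiple of the alignment. */
    float *data = aligned_alloc(16, 1024 * sizeof(float));
    if (!data) return 1;

    for (int i = 0; i < 1024; i++)
        data[i] = (float)i;

    /* _mm_load_ps requires 16-byte alignment; _mm_loadu_ps accepts any
       address but may be slower, especially on older processors. */
    __m128 v = _mm_load_ps(data);
    (void)v;

    free(data);
    return 0;
}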
Not all algorithms suit vectorization.

Chronology
Small-scale (64- or 128-bit) SIMD has become popular on general-purpose CPUs, starting in 1989 with the introduction of the Digital Equipment Corporation VAX Vector instructions in the Rigel [1] system and continuing through 1994 and later with HP's PA-RISC MAX instruction set. SIMD instructions can now be found, to one degree or another, on most CPUs, including IBM's AltiVec and SPE for PowerPC, HP's MVI for Alpha, Intel's MMX, iwMMXt, SSE, SSE2, SSE3 and SSSE3, AMD's 3DNow!, ARC's ARC Video subsystem, SPARC's VIS, Sun's MAJC, HP's MAX for PA-RISC, ARM's NEON technology, and MIPS' MDMX (MaDMaX) and MIPS-3D. The Cell processor's SPU instruction set is heavily SIMD-based.
The instruction sets generally include a full set of vector instructions, including multiply, invert and trace. These are particularly useful for processing 3D graphics, although modern graphics cards with embedded SIMD have largely taken over this task from the CPU. Some systems also include permute functions that re-pack elements inside vectors, making them particularly useful for data processing and compression.
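A permute (or shuffle) rearranges the elements already held in a register in a single instruction. A small sketch of the idea using SSE2's 32-bit shuffle, with an arbitrary reversal pattern chosen just for illustration:

#include <emmintrin.h>
#include <stdio.h>

int main(void)
{
    __m128i v = _mm_set_epi32(3, 2, 1, 0);   /* lanes hold 0, 1, 2, 3 */

    /* _mm_shuffle_epi32 re-packs the four 32-bit lanes according to the
       immediate; _MM_SHUFFLE(0, 1, 2, 3) reverses their order. */
    __m128i r = _mm_shuffle_epi32(v, _MM_SHUFFLE(0, 1, 2, 3));

    int out[4];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* prints 3 2 1 0 */
    return 0;
}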
Modern Graphics Processing Units are often very wide SIMD implementations, capable of branches, loads, and stores on 128 or 256 bits at a time.

Software
Though it has generally proven difficult to find sustainable commercial applications for SIMD processors, one that has had some measure of success is the GAPP, which was developed by Lockheed Martin and taken to the commercial sector by their spin-off Teranex. The GAPP's recent incarnations have become a powerful tool in real-time video processing applications such as conversion between various video standards and frame rates (NTSC to/from PAL, NTSC to/from HDTV formats, etc.), deinterlacing, noise reduction, adaptive video compression, and image enhancement.
A more ubiquitous application for SIMD is found in video games: nearly every modern video game console since 1998 has incorporated a SIMD processor somewhere in its architecture. The Sony PlayStation 2 was unusual in that its vector-float units could function as autonomous DSPs executing their own instruction streams, or as coprocessors driven by ordinary CPU instructions. 3D graphics applications tend to lend themselves well to SIMD processing as they rely heavily on operations with 4-dimensional vectors. Microsoft's Direct3D 9.0 now chooses at runtime processor-specific implementations of its own math operations, including the use of SIMD-capable instructions.
One of the more recent processors to use vector processing is the Cell Processor developed by IBM in cooperation with Toshiba and Sony. It uses a number of SIMD processors (each with independent RAM and controlled by a general purpose CPU) and is geared towards the huge datasets required by 3D and video processing applications.
A larger-scale SIMD processor comes from Stream Processors, Inc., a company headed by computer architect Bill Dally. Their Storm-1 processor (2007) contains 80 SIMD cores controlled by a MIPS CPU.
