INTRODUCTION:
The processor (CPU, for Central Processing Unit) is the computer’s brain. It processes numeric data, that is, information encoded in binary form, and executes instructions stored in memory. The first microprocessor (the Intel 4004) was introduced in 1971. It was a 4-bit device with a clock speed of 108 kHz. Since then, microprocessor power has grown exponentially.
Operation
The processor (called the CPU, for Central Processing Unit) is an electronic circuit that operates at the speed of an internal clock, driven by a quartz crystal that, when subjected to an electrical current, sends pulses called “peaks”. The clock speed (also called the clock rate) corresponds to the number of pulses per second, expressed in hertz (Hz). Thus, a 200 MHz computer has a clock that sends 200,000,000 pulses per second.
With each clock peak, the processor performs an action that corresponds to an instruction or part thereof. A measure called CPI (Cycles Per Instruction) represents the average number of clock cycles a microprocessor requires to execute one instruction. A microprocessor’s power can thus be characterized by the number of instructions per second it is capable of processing. MIPS (millions of instructions per second) is the unit used; it corresponds to the processor frequency divided by the CPI.
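As a minimal worked example of the relation just stated, in Python (the clock frequency matches the 200 MHz example above, while the CPI value is an illustrative assumption, not a figure for any particular processor):

    clock_hz = 200_000_000             # a 200 MHz processor, as in the example above
    cpi = 2.5                          # assumed average cycles per instruction

    mips = clock_hz / cpi / 1_000_000  # frequency divided by CPI, in millions
    print(f"{mips:.0f} MIPS")          # 80 MIPS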
One of the primary goals of computer architects is to design computers that are more cost-effective than their predecessors. Cost-effectiveness includes the cost of the hardware to manufacture the machine, the cost of programming, and the costs incurred in debugging both the initial hardware and the subsequent programs. If we review the history of computer families, we find that the most common architectural change is the trend toward ever more complex machines. Presumably this additional complexity has a positive trade-off with regard to the cost-effectiveness of newer models.
The Microprocessor Revolution:-
The engine of the computer revolution is the microprocessor. It has led to new inventions, such as FAX machines and personal computers, as well as adding intelligence to existing devices, such as wristwatches and automobiles. Moreover, its performance has improved by a factor of roughly 10,000 in the 25 years since its birth in 1971.
This increase coincided with the introduction of Reduced Instruction Set Computers (RISC). The instruction set is the hardware “language” in which the software tells the processor what to do. Surprisingly, reducing the size of the instruction set — eliminating certain instructions based upon a careful quantitative analysis, and requiring these seldom-used instructions to be emulated in software — can lead to higher performance, for several reasons discussed in the sections that follow.
REASONS FOR INCREASED COMPLEXITY
Speed of Memory vs. Speed of CPU:-
One of the first attempts to increase architectural complexity was motivated by the disparity between CPU and memory speeds, as in the move from the IBM 701 to the IBM 709 [Cocke80]. The 701 CPU was about ten times as fast as its core main memory; this made any primitives that were implemented as subroutines much slower than primitives that were implemented as instructions. Moving such primitives into the instruction set made the 709 more cost-effective than the 701. Since then, many “higher-level” instructions have been added to machines in an attempt to improve performance.
Microcode and LSI Technology:-
Microprogrammed control allows complex architectures to be implemented more cost-effectively than hardwired control. Advances in integrated circuit memories made in the late 1960s and early 1970s caused microprogrammed control to be the more cost-effective approach in almost every case. Once the decision is made to use microprogrammed control, the cost of expanding an instruction set is very small: only a few more words of control store.
Examples of such instructions are string editing, integer-to-floating conversion, and mathematical operations such as polynomial evaluation.
Code Density:-
With early computers, memory was very expensive. It was therefore cost effective to have very compact programs.
Attempting to obtain code density by increasing the complexity of the instruction set is often a double-edged sword, however: the cost of 10% more memory is often far cheaper than the cost of squeezing 10% out of the CPU by architectural “innovations”.
Marketing Strategy:-
Unfortunately, the primary goal of a computer company is not to design the most cost-effective computer; it is to make the most money by selling computers. In order to sell computers, manufacturers must convince customers that their design is superior to their competitors’. In order to keep their jobs, architects must keep selling new and better designs to their internal management.
Upward Compatibility:-
Coincident with marketing strategy is the perceived need for upward compatibility. Upward compatibility means that the primary way to improve a design is to add new, and usually more complex, features. Seldom are instructions or addressing modes removed from an architecture, resulting in a gradual increase in both the number and complexity of instructions over a series of computers.
Support for High Level Languages:-
As the use of high-level languages becomes increasingly popular, manufacturers have become eager to provide more powerful instructions to support them. Unfortunately, there is little evidence to suggest that any of the more complicated instruction sets have actually provided such support. The effort to support high-level languages is laudable, but we feel that the focus has often been on the wrong issues.
Use of Multiprogramming:-
The rise of timesharing required that computers be able to respond to interrupts with the ability to halt an executing process and restart it at a later time. Memory management and paging additionally required that instructions could be halted before completion and later restarted.
RISC (Reduced Instruction Set Computing)
The acronym RISC (pronounced risk), for reduced instruction set computing, represents a CPU design strategy emphasizing the insight that simplified instructions that “do less” may still provide for higher performance if this simplicity can be utilized to make instructions execute very quickly. Many proposals for a “precise” definition have been attempted, and the term is being slowly replaced by the more descriptive load-store architecture.
RISC is an old idea; some aspects attributed to the first RISC-labeled designs (around 1975) include the observations that the memory-restricted compilers of the time were often unable to take advantage of features intended to facilitate coding, and that complex addressing inherently takes many cycles to perform. It was argued that such functions would be better performed by sequences of simpler instructions, if this could yield implementations simple enough to cope with very high frequencies and small enough to leave room for many registers, factoring out slow memory accesses. A uniform, fixed instruction length, with arithmetic restricted to registers, was chosen to ease instruction pipelining in these simple designs, with special load and store instructions accessing memory.
The RISC Design Strategies:-
The basic RISC principle: “A simpler CPU is a faster CPU”.
The focus of the RISC design is reduction of the number and complexity of instructions in the ISA.
A number of the more common strategies include:
1) Fixed instruction length, generally one word.
This simplifies instruction fetch.
2) Simplified addressing modes.
3) Fewer and simpler instructions in the instruction set.
4) Only load and store instructions access memory;
no add memory to register, add memory to memory, etc.
5) Let the compiler do it. Use a good compiler to break complex high-level language statements into a number of simple assembly language statements (a sketch of this decomposition follows this list).
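As a minimal sketch of strategies 4 and 5 (a hypothetical toy machine in Python, not any real ISA), the following program allows only LOAD and STORE to touch memory, and lowers the high-level statement a = b + c into a sequence of simple register instructions:

    # Toy load-store machine (hypothetical, for illustration only).
    memory = {"a": 0, "b": 7, "c": 5}    # variables live in memory
    regs = {}                            # register file

    def run(program):
        for op, *args in program:
            if op == "LOAD":             # register <- memory (the only way in)
                reg, var = args
                regs[reg] = memory[var]
            elif op == "ADD":            # arithmetic works on registers only
                dst, src1, src2 = args
                regs[dst] = regs[src1] + regs[src2]
            elif op == "STORE":          # memory <- register (the only way out)
                var, reg = args
                memory[var] = regs[reg]

    # RISC-style lowering of  a = b + c  into four simple instructions.
    run([("LOAD", "r1", "b"),
         ("LOAD", "r2", "c"),
         ("ADD", "r3", "r1", "r2"),
         ("STORE", "a", "r3")])
    print(memory["a"])                   # 12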
Typical characteristics of RISC:-
For any given level of general performance, a RISC chip will typically have far fewer transistors dedicated to the core logic, which originally allowed designers to increase the size of the register set and the degree of internal parallelism.
Other features, which are typically found in RISC architectures, are:
• Uniform instruction format, using a single word with the opcode in the same bit positions in every instruction, demanding less decoding;
• Identical general-purpose registers, allowing any register to be used in any context, simplifying compiler design (although normally there are separate floating-point registers);
• Simple addressing modes, with complex addressing performed via sequences of arithmetic and/or load-store operations;
• Fixed-length instructions, which (a) are easier to decode than variable-length instructions, and (b) use fast, inexpensive memory to execute a larger piece of code.
• Hardwired control (as opposed to microcoded instructions). This is where RISC really shines, as hardware implementation of instructions is much faster and uses less silicon real estate than a microstore does.
• Fused or compound instructions which are heavily optimized for the most commonly used functions.
• Pipelined implementations with the goal of executing one instruction (or more) per machine cycle (a rough sketch of the ideal speedup follows this list).
• Large, uniform register set
• Minimal number of addressing modes
• No or minimal support for misaligned accesses
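As a rough sketch of why pipelining aims at one instruction per cycle, the following Python calculation compares an unpipelined machine with an ideally pipelined one; the stage count and instruction count are illustrative assumptions, and real pipelines lose some cycles to hazards and stalls:

    stages = 5                     # e.g., fetch, decode, execute, memory, writeback
    n = 1_000_000                  # instructions in the program

    unpipelined_cycles = n * stages        # each instruction runs start to finish alone
    pipelined_cycles = stages + (n - 1)    # after the pipe fills, one completes per cycle

    print(unpipelined_cycles / pipelined_cycles)   # ~5, approaching the stage count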
RISC Examples:-
• Apple iPods (custom ARM7TDMI SoC)
• Apple iPhone (Samsung ARM1176JZF)
• Palm and PocketPC PDAs and smartphones (Intel XScale family, Samsung SC32442 – ARM9)
• Nintendo Game Boy Advance (ARM7)
• Nintendo DS (ARM7, ARM9)
• Sony Network Walkman (Sony in-house ARM based chip)
Advantages of RISC
* Speed
* Simpler hardware
* Shorter design cycle
* Benefits for users (programmers)
Disadvantages Of RISC
* A more sophisticated compiler is required.
* A sequence of RISC instructions is needed to implement complex operations.
* RISC processors require very fast memory systems to feed them instructions.
* The performance of a RISC application depends critically on the quality of the code generated by the compiler.
CISC (Complex Instruction Set Computer)
A complex instruction set computer (CISC, pronounced like “sisk”) is a computer instruction set architecture (ISA) in which each instruction can execute several low-level operations, such as a load from memory, an arithmetic operation, and a memory store, all in a single instruction.
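To contrast with the load-store sketch earlier, here is the same toy machine (again hypothetical, not any real ISA) given a single CISC-style memory-to-memory instruction that folds the load, the arithmetic, and the store into one operation:

    memory = {"a": 0, "b": 7, "c": 5}

    def addm(dst, src1, src2):
        # One "instruction" performing three memory accesses plus the add.
        memory[dst] = memory[src1] + memory[src2]

    addm("a", "b", "c")          # CISC style: a = b + c in a single instruction
    print(memory["a"])           # 12, the same result as the three RISC instructions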
Performance:-
Some instructions were added that were never intended to be used in assembly language but fit well with compiled high level languages. Compilers were updated to take advantage of these instructions. The benefits of semantically rich instructions with compact encodings can be seen in modern processors as well, particularly in the high performance segment where caches are a central component (as opposed to most embedded systems). This is because these fast, but complex and expensive, memories are inherently limited in size, making compact code beneficial. Of course, the fundamental reason they are needed is that main memories (i.e. dynamic RAM today) remain slow compared to a (high performance) CPU-core.
ADVANTAGES OF CISC
* A new processor design could incorporate the instruction set of its predecessor as a subset of an ever-growing language–no need to reinvent the wheel, code-wise, with each design cycle.
* Fewer instructions were needed to implement a particular computing task, which led to lower memory use for program storage and fewer time-consuming instruction fetches from memory.
* Simpler compilers sufficed, as complex CISC instructions could be written that closely resembled the instructions of high-level languages. In effect, CISC made a computer’s assembly language more like a high-level language to begin with, leaving the compiler less to do.
DISADVANTAGES OF CISC
* The first advantage listed above could be viewed as a disadvantage. That is, the incorporation of older instruction sets into new generations of processors tended to force growing complexity.
* Many specialized CISC instructions were not used frequently enough to justify their existence. The existence of each instruction needed to be justified because each one requires the storage of more microcode in the central processing unit (the final and lowest layer of code translation), which must be built in at some cost.
* Because each CISC command must be translated by the processor into tens or even hundreds of lines of microcode, it tends to run slower than an equivalent series of simpler commands that do not require so much translation. All translation requires time.
* Because a CISC machine builds complexity into the processor, where all its various commands must be translated into microcode for actual execution, the design of CISC hardware is more difficult and the CISC design cycle correspondingly long; this means delay in getting to market with a new chip.
Comparison of RISC and CISC
This table is taken from an IEEE tutorial on RISC architecture.
                         ------- CISC Type Computers -------    ---- RISC Type ----
                         IBM 370/168   VAX-11/780   Intel 8086     RISC I    IBM 801
Developed                    1973          1978         1978         1981       1980
Instructions                  208           303          133           31        120
Instruction size (bits)     16-48        16-456         8-32           32         32
Addressing modes                4            22            6            3          3
General registers              16            16            4          138         32
Control memory size        420 Kb        480 Kb    Not given            0          0
Cache size                  64 Kb         64 Kb    Not given            0  Not given
However, nowadays, the difference between RISC and CISC chips is getting smaller and smaller. RISC and CISC architectures are becoming more and more alike. Many of today’s RISC chips support just as many instructions as yesterday’s CISC chips. The PowerPC 601, for example, supports more instructions than the Pentium. Yet the 601 is considered a RISC chip, while the Pentium is definitely CISC.
RISCs are leading in:-
* New machine designs
* Research funding
* Publications
* Reported performance
CISCs are leading in:-
* Revenue
Performance
* The CISC approach attempts to minimize the number of instructions per program, sacrificing the number of cycles per instruction.
* RISC does the opposite, reducing the cycles per instruction at the cost of the number of instructions per program.
* Hybrid solutions:
* RISC core & CISC interface
* Still requires specific performance tuning (a worked sketch of the instructions-versus-CPI trade-off follows this list)
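As a worked sketch of the trade-off described above, the following Python calculation uses the classic relation: execution time = instruction count x CPI / clock rate. All the counts and CPI values are illustrative assumptions, not measurements of any real chip:

    def exec_time(instructions, cpi, clock_hz):
        # Classic performance relation: time = instructions * CPI / clock rate.
        return instructions * cpi / clock_hz

    # Hypothetical CISC program: fewer instructions, more cycles each.
    cisc = exec_time(instructions=50_000_000, cpi=5.0, clock_hz=100e6)

    # Hypothetical RISC version: about twice the instructions, near 1 CPI.
    risc = exec_time(instructions=100_000_000, cpi=1.2, clock_hz=100e6)

    print(f"CISC: {cisc:.2f} s   RISC: {risc:.2f} s")   # CISC: 2.50 s   RISC: 1.20 s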
Future Aspects
Today’s microprocessors are roughly 10,000 times faster than their ancestors. And microprocessor-based computer systems now cost only 1/40th as much as their ancestors, when inflation is considered. The result: an overall cost-performance improvement of roughly 1,000,000, in only 25 years! This extraordinary advance is why computing plays such a large role in today’s world. Had the research at universities and industrial laboratories not occurred — had the complex interplay between government, industry, and academia not been so successful — a comparable advance would still be years away.
Microprocessor performance can continue to double every 18 months beyond the turn of the century. This rate can be sustained by continued research innovation. Significant new ideas will be needed in the next decade to continue the pace; such ideas are being developed by research groups today.
Conclusion
The research that led to the development of RISC architectures represented an important shift in computer science, with emphasis moving from hardware to software. The eventual dominance of RISC technology in high-performance workstations from the mid to late 1980s was a deserved success.
In recent years, CISC processors have been designed that successfully overcome the limitations of their instruction sets. The RISC approach remains the more elegant and power-efficient one, but compilers need to be improved and clock speeds need to increase for RISC designs to match the aggressive designs of the latest Intel processors.
REFERENCES:
Books:
1. M. Morris Mano, Computer System Architecture.
2. Jurij Silc and Borut Robic, Processor Architecture.
3. George Radin, “The 801 Minicomputer”, IBM Journal of Research and Development, Vol. 27, No. 3, 1983.
4. John Cocke and V. Markstein, “The Evolution of RISC Technology at IBM”, IBM Journal of Research and Development, Vol. 34, No. 1, 1990.
5. Dileep Bhandarkar, “RISC versus CISC: A Tale of Two Chips”, Intel Corporation, Santa Clara, California.
Encyclopedias:
1. Encarta
2. Britannica