Wednesday, June 23, 2010

Performance of a computer system


"How well is the computer doing the work it is supposed to do?"[2]
Computer performance is characterized by the amount of useful work accomplished by a computer system compared to the time and resources used.
Depending on the context, good computer performance may involve one or more of the following metrics: availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length and speedup. CPU benchmarks are available.[1]
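Two of the metrics above, latency and throughput, are directly related for a sequential workload: if one call takes L seconds, a single worker completes 1/L calls per second. A minimal sketch of measuring both (the workload and function names here are illustrative, not from the source):

```python
import time

def measure(fn, n=1000):
    """Time n sequential calls of fn; return (mean latency in s, throughput in calls/s)."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    elapsed = time.perf_counter() - start
    return elapsed / n, n / elapsed

latency, throughput = measure(lambda: sum(range(100)))
print(f"mean latency: {latency * 1e6:.1f} us, throughput: {throughput:.0f} calls/s")
```

Note that for sequential calls the two numbers are reciprocals; they diverge once requests are served concurrently.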


Computer software performance, particularly software application response time, is an aspect of software quality that is important in human–computer interactions.



The performance of any computer system can be evaluated in measurable, technical terms, using one or more of the metrics listed above. This way the performance can be compared relative to other systems, or to the same system before and after changes.

Technical performance metrics

There is a wide variety of technical performance metrics that indirectly affect overall computer performance.
Because there are too many programs to test a CPU's speed on all of them, benchmarks were developed. The most famous are the SPECint and SPECfp benchmarks developed by the Standard Performance Evaluation Corporation, and the ConsumerMark benchmark developed by the Embedded Microprocessor Benchmark Consortium (EEMBC).
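Real suites such as SPECint and SPECfp run large standardized programs; the idea can be sketched with a toy micro-benchmark that times an integer workload and a floating-point workload separately (the workloads here are illustrative only, not related to the actual SPEC programs):

```python
import timeit

# Toy stand-ins for an integer-heavy and a float-heavy workload,
# loosely analogous to the integer/floating-point split in SPECint/SPECfp.
int_score = timeit.timeit("sum(i * i for i in range(1000))", number=1000)
fp_score = timeit.timeit("sum(i * 1.5 for i in range(1000))", number=1000)
print(f"integer workload: {int_score:.3f} s, float workload: {fp_score:.3f} s")
```

A CPU can rank differently on the two scores, which is why benchmark suites report them separately.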
Some important measurements include:
  • Instructions per second – Most consumers pick a computer architecture (normally Intel IA32 architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see megahertz myth).
  • FLOPS – The number of floating-point operations per second is often important in selecting computers for scientific computations.
  • Performance per watt – System designers building parallel computers, such as Google, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself. [1][2]
  • Some system designers building parallel computers pick CPUs based on the speed per dollar.
  • System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and deterministic response (as with a DSP).
  • Computer programmers who program directly in assembly language want a CPU to support a full-featured instruction set.
  • Low power – For systems with limited power sources (e.g. solar, batteries, human power).
  • Small size or low weight – For portable embedded systems and systems for spacecraft.
  • Environmental impact – Minimizing the environmental impact of computers during manufacturing and recycling as well as during use; reducing waste and hazardous materials (see green computing).
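The performance-per-watt and speed-per-dollar criteria above are simple ratios. A sketch of comparing two CPUs on both, using made-up spec numbers (the parts, figures, and function name here are hypothetical):

```python
# Hypothetical spec numbers for two CPUs -- illustrative only, not real parts.
cpus = {
    "cpu_a": {"gflops": 120.0, "watts": 95.0, "price": 300.0},
    "cpu_b": {"gflops": 90.0, "watts": 45.0, "price": 180.0},
}

def efficiency(c):
    """Return (GFLOPS per watt, GFLOPS per dollar) for a spec dict."""
    return c["gflops"] / c["watts"], c["gflops"] / c["price"]

for name, c in cpus.items():
    per_watt, per_dollar = efficiency(c)
    print(f"{name}: {per_watt:.2f} GFLOPS/W, {per_dollar:.2f} GFLOPS/$")
```

With these numbers the slower cpu_b wins on both ratios, which is exactly the trade-off a datacenter or budget-constrained designer optimizes for.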





While clock rates are a valid way of comparing the performance of different speeds of the same model and type of processor, other factors such as pipeline depth and instruction sets can greatly affect the performance when considering different processors.
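This can be made concrete with the classic CPU performance equation, execution time = instruction count × cycles per instruction (CPI) ÷ clock rate. The numbers below are hypothetical, chosen to show a higher-clocked CPU losing:

```python
def cpu_time(instructions, cpi, clock_hz):
    """Classic performance equation: time = instruction count * CPI / clock rate."""
    return instructions * cpi / clock_hz

# Hypothetical: a 3 GHz deep-pipeline CPU averaging 1.5 cycles per instruction
# vs a 2 GHz CPU averaging 0.8 CPI, both running the same 10^9-instruction program.
t_high_clock = cpu_time(1e9, 1.5, 3e9)  # 0.5 s
t_low_clock = cpu_time(1e9, 0.8, 2e9)   # 0.4 s
print(t_high_clock, t_low_clock)
```

Pipeline depth and the instruction set show up in this equation through CPI and the instruction count, which is why clock rate alone cannot rank different processor designs.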


CPUs with many execution units often complete real-world and benchmark tasks in less time than supposedly faster CPUs with higher clock rates.


 If performance is critical, the only benchmark that matters is the target environment's application suite.


Users can have very different perceptions of performance than benchmarks may suggest. In particular, users appreciate predictability — servers that always meet or exceed service level agreements. Benchmarks tend to emphasize mean scores (IT perspective) rather than low standard deviations (user perspective).
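The mean-versus-variability point can be illustrated with two hypothetical servers whose response-time samples (in ms, invented for this sketch) have identical means but very different spreads:

```python
import statistics

# Two hypothetical servers with the same mean response time (ms);
# the steadier one is what users perceive as "fast" and predictable.
steady = [100, 102, 98, 101, 99, 100, 100, 100]
spiky = [20, 20, 20, 20, 20, 20, 20, 660]

for name, samples in [("steady", steady), ("spiky", spiky)]:
    print(name, statistics.mean(samples), statistics.stdev(samples))
```

Both servers report a mean of 100 ms, so a mean-oriented benchmark rates them equally, while the spiky server's large standard deviation reflects the occasional slow responses that users actually notice.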






source: http://en.wikipedia.org/wiki/Benchmark_(computing), http://en.wikipedia.org/wiki/Computer_performance
