Pipelining
Pipelining gets its name from the fact that several processors are connected serially, like sections of pipe, with the output of the first tied to the input of the second, whose output is tied to the input of the third, and so forth. Data that enters the pipeline is acted upon by each processor as it travels through the pipeline.
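
As a rough sketch (not taken from the article), the arrangement can be pictured in C as a chain of functions in which each "processor" consumes the output of the one before it; the stage names and operations below are arbitrary examples:

    /* Minimal sketch of the pipeline idea: each "processor" is a
       function, and the output of one feeds the input of the next. */
    #include <stdio.h>

    static int stage1(int x) { return x + 1; }   /* first section of pipe    */
    static int stage2(int x) { return x * 2; }   /* second section           */
    static int stage3(int x) { return x - 3; }   /* third section, and so on */

    int main(void)
    {
        int data[] = { 10, 20, 30 };

        /* Each datum is acted upon by every stage as it travels through. */
        for (int i = 0; i < 3; i++)
            printf("%d -> %d\n", data[i], stage3(stage2(stage1(data[i]))));
        return 0;
    }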
Analogous to pipelining is the operation of an assembly line. An assembly line is divided into several stages, at each of which a particular operation takes place. One station might install the engine into the body, for example, while the next puts on a door. As the partially assembled car travels down the line, more and more items are added until, finally, a finished product emerges at the end. Because each vehicle advances as soon as a step is completed and is replaced by a vehicle from the previous station, no assembly station is idle. It is therefore possible to turn out a steady stream of finished cars at a rate that is much faster than if each had been individually crafted.
The same strategy can be applied to many computer functions: floating-point arithmetic, for example. To see how pipelining can improve performance, let's compare the methods used by a microprocessor and a math coprocessor to multiply two numbers.
Typically, the CPU accomplishes the job (through software) by (1) splitting each number into an exponent and a mantissa, (2) adding the exponents, (3) storing the result in temporary memory, (4) multiplying the mantissas, (5) fetching the exponent, and (6) combining the two into a final answer. Only after the entire sequence has run its course can the CPU start on another calculation.
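
That six-step sequence can be sketched in C. The sketch below is only an illustration of the order of operations; it leans on the standard library's frexp() and ldexp() to stand in for the exponent/mantissa handling a real routine would do itself:

    /* Hedged sketch of the software approach described above. */
    #include <stdio.h>
    #include <math.h>

    double soft_multiply(double a, double b)
    {
        int exp_a, exp_b;

        /* (1) split each number into an exponent and a mantissa */
        double mant_a = frexp(a, &exp_a);
        double mant_b = frexp(b, &exp_b);

        /* (2) add the exponents and (3) hold the sum temporarily */
        int exp_sum = exp_a + exp_b;

        /* (4) multiply the mantissas */
        double mant_product = mant_a * mant_b;

        /* (5) fetch the stored exponent and (6) combine into the answer */
        return ldexp(mant_product, exp_sum);
    }

    int main(void)
    {
        printf("%g\n", soft_multiply(6.5, -4.0));   /* prints -26 */
        return 0;
    }

Only when soft_multiply() returns can the next calculation begin, which is exactly the bottleneck the coprocessor's pipeline avoids.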
The math coprocessor, on the other hand, multiplies the two numbers using a multi-stage pipeline, as shown in Fig. 3. The two numbers enter the pipeline's first stage, where they are identified as to exponent and mantissa. The elements are then shifted to the second stage.
Fig. 3. BLOCK DIAGRAM of a pipelined math coprocessor. (Blocks shown: input A, input B, comparator and selector, exponent difference, larger exponent, right shifter, multiplier, suppress zeros, adder, corrected exponent, left shifter, normalized fraction.)
Coprocessor Or Accelerator?

One term loosely bandied about the PC industry is the coprocessor accelerator board, a name generally given to adapter cards that contain high-speed CPU chips to increase system throughput. Where the term originated is anybody's guess, but most of those boards are not coprocessors; they're simply accelerators.

The difference is in the way an accelerator board operates. To install an accelerator card, in most cases you must remove the system's old CPU and run a jumper cable from the accelerator card to the now-empty CPU socket. Notice that the two processors don't work together simultaneously; at best you may switch between them to accommodate temperamental software applications. By contrast, to achieve true parallel processing, both CPU's must be accessible by software at all times, something the typical accelerator card simply cannot do.
In the second stage, the mantissas are weighted for processing. The third stage multiplies the mantissas, and the fourth strips the fraction of unnecessary zeros; the fifth and final stage adds the exponents and converts the result to conventional scientific notation.
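
The five-stage flow can be modeled loosely in software. Everything below (the struct layout, the stage names, the use of frexp() and ldexp()) is an assumption made for illustration; a real coprocessor builds these stages out of hardware logic, not code:

    /* Rough model of the five pipeline stages described in the text. */
    #include <stdio.h>
    #include <math.h>

    struct fp_job {
        double a, b;            /* original operands                  */
        int    exp_a, exp_b;    /* stage 1: identified exponents      */
        double mant_a, mant_b;  /* stage 1: identified mantissas      */
        double product;         /* stage 3/4: mantissa product        */
        int    shift;           /* stage 4: normalization adjustment  */
    };

    static void stage1_identify(struct fp_job *j)
    {
        j->mant_a = frexp(j->a, &j->exp_a);
        j->mant_b = frexp(j->b, &j->exp_b);
    }

    static void stage2_weight(struct fp_job *j)
    {
        (void)j;  /* the weighting/alignment step; left as a no-op here */
    }

    static void stage3_multiply(struct fp_job *j)
    {
        j->product = j->mant_a * j->mant_b;
    }

    static void stage4_suppress_zeros(struct fp_job *j)
    {
        /* renormalize the fraction and remember how far it shifted */
        j->product = frexp(j->product, &j->shift);
    }

    static void stage5_combine(const struct fp_job *j)
    {
        /* add the exponents, apply the shift, print in scientific notation */
        double result = ldexp(j->product, j->exp_a + j->exp_b + j->shift);
        printf("%g x %g = %e\n", j->a, j->b, result);
    }

    int main(void)
    {
        struct fp_job j = { .a = 6.5, .b = -4.0 };
        stage1_identify(&j);
        stage2_weight(&j);
        stage3_multiply(&j);
        stage4_suppress_zeros(&j);
        stage5_combine(&j);     /* prints 6.5 x -4 = -2.600000e+01 */
        return 0;
    }

In hardware, of course, each of those stage functions would be a separate block of logic, all of them working on different pairs of numbers at the same time.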
The coprocessor really picks up speed, because after the first pair of numbers is shifted from the first stage to the second, another pair can enter the now-empty first stage. In fact, with each successive movement of data between stages, another pair of numbers can enter the pipeline, causing it to fill. After the fifth shift, results begin pouring out of the pipeline.
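
The gain can be estimated with simple arithmetic (the cycle counts below are assumed round numbers, not figures from the article). If each of the five stages takes one time unit, an unpipelined unit needs five units per result, while a full pipeline delivers one result per unit:

    /* Back-of-the-envelope throughput comparison (illustrative timings). */
    #include <stdio.h>

    int main(void)
    {
        const int stages = 5;    /* stages in the pipeline     */
        const int jobs   = 100;  /* multiplications to perform */

        int unpipelined = jobs * stages;        /* each job runs start to finish */
        int pipelined   = stages + (jobs - 1);  /* fill once, then one per cycle */

        printf("unpipelined: %d cycles\n", unpipelined);  /* 500 */
        printf("pipelined:   %d cycles\n", pipelined);    /* 104 */
        return 0;
    }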
Of course, pipelining is not limited to Intel's math coprocessors; a pipeline can also be created using several transputers. However, pipelining is only efficient when the number of similar computations to be performed is large, as in floating-point arithmetic. Instructions are still executed one step at a time, and problems that rely on the results of one operation before invoking another are at a disadvantage. For that reason, designers developed a number of other architectures for increasing performance.
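
The dependency problem mentioned above can be made concrete with a short, made-up example. The first loop's multiplications are independent and can follow one another into a pipeline; the second loop's cannot, because each step needs the result of the one before it:

    /* Illustrative only: two loops doing the same number of multiplies. */
    #include <stdio.h>

    int main(void)
    {
        double a[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        double b[8] = { 8, 7, 6, 5, 4, 3, 2, 1 };
        double c[8];

        /* Independent multiplies: a new pair can enter the pipeline as
           soon as the previous pair has moved on to the second stage. */
        for (int i = 0; i < 8; i++)
            c[i] = a[i] * b[i];

        /* Dependent multiplies: each step waits for the previous result,
           so the pipeline never gets a chance to fill. */
        double product = 1.0;
        for (int i = 0; i < 8; i++)
            product *= a[i];

        printf("c[0] = %g, running product = %g\n", c[0], product);
        return 0;
    }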