Denote by $T_s$ the execution time of a task on a single processor, and by $T_p$ its execution time on $p$ processors. When all processors execute the subtasks simultaneously, the speedup of a parallel system can be defined as:

$$S = \frac{T_s}{T_p}$$
The communication overhead is neglected in this idealized definition.
Suppose a fraction $f$ of a given task is inherently serial, and the remaining $1-f$ can be parallelized perfectly over $p$ processors. The speedup is:

$$S = \frac{1}{f + \frac{1-f}{p}}$$

Amdahl's law bounds the speedup as $p \to \infty$:

$$S \le \frac{1}{f}$$
- When $S = p$, it is a linear speedup, which is the ideal speedup.
- When $S < p$, it is the true (typical) speedup.
- When $S > p$, it is a superlinear speedup.
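The fixed-size speedup above can be sketched numerically. The function below is a minimal illustration of Amdahl's formula, assuming a serial fraction `f` and processor count `p` (both names are illustrative):

```python
def amdahl_speedup(f: float, p: int) -> float:
    """Amdahl's law: S = 1 / (f + (1 - f) / p)."""
    return 1.0 / (f + (1.0 - f) / p)

# Speedup for a program with a 10% serial fraction on growing machines:
for p in (1, 4, 16, 64):
    print(p, amdahl_speedup(0.1, p))
```

With $f = 0$ the speedup is exactly $p$ (linear); any $f > 0$ pulls it below $p$.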
Parallel efficiency represents the average execution efficiency/utilization of each processor when a parallel computer executes a parallel algorithm: $\xi = S / p$.
- When $\xi = 1$, it is a linear speedup.
- When $\xi < 1$, it is the typical case in practice.
- When $\xi \ll 1$, the parallel algorithm is problematically inefficient.
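Since parallel efficiency is conventionally the speedup divided by the processor count, $\xi = S/p$, a minimal helper suffices (the numbers below are illustrative):

```python
def efficiency(speedup: float, p: int) -> float:
    """Parallel efficiency: xi = S / p."""
    return speedup / p

# A speedup of 12 on 16 processors means each processor is,
# on average, only 75% utilized:
print(efficiency(12.0, 16))
```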
Denote by $f$ the serial fraction of the program; by Amdahl's law, $S \le 1/f$ no matter how many processors are used. Unless the vast majority of the program is parallelized, high speedups cannot be achieved. This is in fact equivalent to the speedup expression above.
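This ceiling can be checked numerically. In the sketch below, a program that is 95% parallel ($f = 0.05$, an illustrative value) cannot exceed a speedup of $1/f = 20$ even on a thousand processors:

```python
def amdahl_speedup(f: float, p: int) -> float:
    # Amdahl's law: S = 1 / (f + (1 - f) / p)
    return 1.0 / (f + (1.0 - f) / p)

# With f = 0.05, the speedup approaches but never reaches 1/f = 20:
print(amdahl_speedup(0.05, 1000))   # just under 20
```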
Amdahl's Law does not take into account the size of the problem. But for many problems, the proportion of the serial part of the program decreases as the problem size increases.
Ignoring the communication overhead, Gustafson's law considers the scaled problem: as the problem size increases, the serial load proportion decreases and is no longer the bottleneck. With $f$ the serial fraction measured on the parallel system, the scaled speedup is:

$$S = f + (1-f)\,p$$
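Gustafson's scaled speedup grows linearly in $p$, which the following sketch illustrates (the 5% serial fraction is an illustrative value):

```python
def gustafson_speedup(f: float, p: int) -> float:
    """Gustafson's law (scaled speedup): S = f + (1 - f) * p."""
    return f + (1.0 - f) * p

# Unlike Amdahl's fixed-size bound, this keeps growing with p:
for p in (10, 100, 1000):
    print(p, gustafson_speedup(0.05, p))
```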
Ignoring the communication overhead, the Sun-Ni law (memory-bounded speedup) lets the workload scale by a factor $G(p)$ as the memory scales with $p$:

$$S = \frac{f + (1-f)\,G(p)}{f + (1-f)\,G(p)/p}$$
- When $G(p) = 1$, it reduces to Amdahl's Law.
- When $G(p) = p$, it reduces to Gustafson's Law.
- When $G(p) > p$, the Sun-Ni speedup is higher than both Amdahl's and Gustafson's, corresponding to the computational load increasing faster than the storage requirement.
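The reduction to the two earlier laws can be verified numerically. The sketch below assumes the common form of the Sun-Ni memory-bounded speedup, $S = \frac{f + (1-f)G(p)}{f + (1-f)G(p)/p}$, with $G$ passed in as a function:

```python
def sun_ni_speedup(f: float, p: int, G) -> float:
    """Sun-Ni law: S = (f + (1-f)G(p)) / (f + (1-f)G(p)/p)."""
    g = G(p)
    return (f + (1.0 - f) * g) / (f + (1.0 - f) * g / p)

# G(p) = 1 recovers Amdahl; G(p) = p recovers Gustafson:
print(sun_ni_speedup(0.1, 8, lambda p: 1))   # Amdahl's speedup
print(sun_ni_speedup(0.1, 8, lambda p: p))   # Gustafson's speedup
```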
The scalability of a computer system is its ability to scale up in performance with an increase in the number of processors in a defined application context.
- A program is said to be strongly scalable if constant efficiency can be maintained when the number of processors increases without increasing the size of the problem.
- A program is said to be weakly scalable if, as the number of processors increases, the efficiency can be kept unchanged only by increasing the size of the problem at the same rate.
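The difficulty of strong scaling follows directly from Amdahl's law: with the problem size fixed, efficiency $\xi = S/p$ decays as processors are added. A small sketch (the 5% serial fraction is illustrative):

```python
def amdahl_speedup(f: float, p: int) -> float:
    return 1.0 / (f + (1.0 - f) / p)

# Strong scaling with a fixed problem: efficiency S/p falls as p grows,
# so constant efficiency cannot be maintained for any f > 0:
for p in (2, 8, 32, 128):
    print(p, amdahl_speedup(0.05, p) / p)
```

A weakly scalable program avoids this decay by growing the problem (and hence shrinking the effective serial fraction) in step with $p$.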