Strong Scaling and Amdahl's Law

Strong scaling is a measure of how, for a fixed overall problem size, the time to solution decreases as more processors are added to a system. An application that exhibits linear strong scaling has a speedup equal to the number of processors used.

Strong scaling is usually equated with Amdahl's Law, which specifies the maximum speedup that can be expected by parallelizing portions of a serial program. Essentially, it states that the maximum speedup S of a program is:

$$S=\frac{1}{(1-P)+P/N}$$

Here P is the fraction of the total serial execution time taken by the portion of code that can be parallelized and N is the number of processors over which the parallel portion of the code runs.

In reality, most applications do not exhibit perfectly linear strong scaling, even if they do exhibit some degree of strong scaling. For most purposes, the key point is that the larger the parallelizable portion P is, the greater the potential speedup. Conversely, if P is a small number (meaning that the application is not substantially parallelizable), increasing the number of processors N does little to improve performance. Therefore, to get the largest speedup for a fixed problem size, it is worthwhile to spend effort on increasing P, maximizing the amount of code that can be parallelized.
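For instance, a small numerical sketch (the helper name is hypothetical, chosen for this illustration) makes the limit imposed by the serial fraction concrete: even with 1024 processors, P = 0.90 caps the speedup below 10, while P = 0.99 still only reaches about 91.

```python
# Illustrative sketch of Amdahl's law: S = 1 / ((1 - P) + P / N).
# The function name is hypothetical, chosen for this example.

def amdahl_speedup(p: float, n: int) -> float:
    """Maximum speedup for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    for p in (0.50, 0.90, 0.99):
        for n in (4, 64, 1024):
            print(f"P = {p:.2f}, N = {n:4d} -> S = {amdahl_speedup(p, n):7.2f}")
```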

Weak Scaling and Gustafson's Law

Weak scaling is a measure of how the time to solution changes as more processors are added to a system with a fixed problem size per processor; i.e., where the overall problem size increases as the number of processors is increased.

Weak scaling is often equated with Gustafson's Law, which states that in practice, the problem size scales with the number of processors. Because of this, the maximum speedup S of a program is:

$$S=(1-P)+NP$$

Here P is the fraction of the total serial execution time taken by the portion of code that can be parallelized and N is the number of processors over which the parallel portion of the code runs.

The scaled speedup is calculated based on the amount of work done for a scaled problem size (in contrast to Amdahl's law, which focuses on a fixed problem size). In other words, it is not the problem size that remains constant as we scale up the system, but rather the execution time.
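A similar sketch (again with hypothetical helper names, not from the original post) shows the contrast: for a fixed parallel fraction P, Gustafson's scaled speedup grows roughly linearly with N, whereas Amdahl's fixed-size speedup saturates at 1/(1-P).

```python
# Illustrative comparison of Gustafson's scaled speedup, S = (1 - P) + N*P,
# with Amdahl's fixed-size speedup, S = 1 / ((1 - P) + P / N).
# Names are hypothetical, chosen for this example.

def gustafson_speedup(p: float, n: int) -> float:
    """Scaled speedup: problem size grows with the processor count n."""
    return (1.0 - p) + n * p

def amdahl_speedup(p: float, n: int) -> float:
    """Fixed-size speedup: the same problem is split across n processors."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    p = 0.95  # parallel fraction
    for n in (16, 256, 4096):
        print(f"N = {n:5d}: Gustafson S = {gustafson_speedup(p, n):8.1f}, "
              f"Amdahl S = {amdahl_speedup(p, n):6.2f} (limit 1/(1-P) = {1/(1-p):.0f})")
```

With P = 0.95, the Amdahl speedup never exceeds 20 no matter how large N gets, while the scaled speedup keeps growing with the processor count.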

Can you provide an example of an algorithm that exhibits strong scalability but not weak scalability? Can you also provide an example of the reverse, weak scalability but not strong scalability?

hzhaoc
  • More processors typically means more available memory. Am I allowed to exploit that, especially in the strong scaling case? – Ian Bush Sep 29 '21 at 11:04
  • Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. – Community Sep 29 '21 at 12:49
  • The post (other than the title) does not actually contain a question. You should edit it so that there is an actual question at the end. – Wolfgang Bangerth Sep 29 '21 at 14:35
  • Including a question mark in your question might help clarify what you're asking. – Richard Sep 29 '21 at 18:55

0 Answers