4.2 Multicore Programming

A trend in system design is to place multiple computing cores on a single processing chip, where each core appears to the operating system as a separate CPU.

We refer to such systems as multicore, and multithreaded programming provides a mechanism for more efficient use of these multiple computing cores and improved concurrency.

 

Example) Consider an application with four threads.

On a single-core system, only one thread can run at any instant, so the threads are interleaved over time:

[Figure: Concurrent execution on a single-core system]

On a system with multiple cores, each core can be assigned its own thread, so the threads can run at the same time:

[Figure: Parallel execution on a multicore system]

It is important to understand the distinction between concurrency and parallelism.

Concurrency : supporting more than one task by allowing all the tasks to make progress.

Parallelism : the ability to perform more than one task simultaneously.
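
To make the four-thread example above concrete, here is a minimal POSIX-threads sketch (my own illustration, not from the original post). Whether the four threads merely make interleaved progress (concurrency) or actually execute at the same time (parallelism) depends on how many cores the scheduler has available.

```c
/*
 * Minimal sketch (illustrative): the four threads from the example above,
 * created with POSIX threads. On a single core the OS interleaves them
 * (concurrency); on four or more cores they may run simultaneously
 * (parallelism).
 */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

static void *worker(void *arg) {
    long id = (long)arg;
    printf("thread %ld making progress\n", id);
    return NULL;
}

int main(void) {
    pthread_t tids[NUM_THREADS];

    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&tids[i], NULL, worker, (void *)i);

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tids[i], NULL);   /* wait for all four threads to finish */

    return 0;
}
```

Build with cc -pthread; the same program exhibits concurrency on one core and parallelism on several.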

 

 

Programming Challenges

Five areas present challenges in programming for multicore systems:

 

• Identifying tasks. This involves examining applications to find areas that can be divided into separate, concurrent tasks. Ideally, tasks are independent of one another and thus can run in parallel on individual cores.

• Balance. While identifying tasks that can run in parallel, programmers must also ensure that the tasks perform equal work of equal value. In some instances, a certain task may not contribute as much value to the overall process as other tasks. Using a separate execution core to run that task may not be worth the cost.

• Data splitting. Just as applications are divided into separate tasks, the data accessed and manipulated by the tasks must be divided to run on separate cores.

• Data dependency. The data accessed by the tasks must be examined for dependencies between two or more tasks. When one task depends on data from another, programmers must ensure that the execution of the tasks is synchronized to accommodate the data dependency (a small synchronization sketch follows this list). We examine such strategies in Chapter 6.

• Testing and debugging. When a program is running in parallel on multiple cores, many different execution paths are possible. Testing and debugging such concurrent programs is inherently more difficult than testing and debugging single-threaded applications.
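
To illustrate the data-dependency point, here is a small sketch (an assumed example, not taken from the text): two threads update one shared counter, so their updates depend on each other, and a pthread mutex serializes the accesses so the final value is deterministic.

```c
/*
 * Illustrative sketch: two tasks share a counter, so their updates form a
 * data dependency. A pthread mutex synchronizes the accesses so neither
 * task sees or produces a partially updated value.
 */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                              /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* serialize the dependent update */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 200000 with synchronization */
    return 0;
}
```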

 

!! AMDAHL'S LAW

Amdahl’s Law is a formula that identifies potential performance gains from adding additional computing cores to an application that has both serial (nonparallel) and parallel components. 
If S is the portion of the application that must be performed serially on a system with N processing cores, the formula appears as follows:

speedup ≤ 1 / (S + (1 − S) / N)

[Figure: Amdahl's Law in several different scenarios]

An interesting fact about Amdahl's Law is that as N approaches infinity, the speedup converges to 1/S.
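
As a quick check of the formula (the values here are illustrative, not from the post): if an application is 25 percent serial (S = 0.25) and 75 percent parallel, then with N = 2 cores the speedup is at most 1 / (0.25 + 0.75/2) = 1.6, and no number of cores can push it past 1/S = 4. The small sketch below tabulates this bound.

```c
/*
 * Illustrative sketch: tabulate the Amdahl speedup bound 1 / (S + (1 - S) / N)
 * for a hypothetical serial fraction S = 0.25 and increasing core counts N.
 */
#include <stdio.h>

int main(void) {
    const double S = 0.25;                    /* assumed serial portion */
    for (int N = 1; N <= 32; N *= 2) {
        double speedup = 1.0 / (S + (1.0 - S) / N);
        printf("N = %2d  ->  speedup <= %.2f\n", N, speedup);
    }
    /* As N grows, the bound approaches 1 / S = 4.0. */
    return 0;
}
```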

 

Types of Parallelism

Data parallelism focuses on distributing subsets of the same data across multiple computing cores and performing the same operation on each core.

Task parallelism involves distributing not data but tasks (threads) across multiple computing cores, where each thread performs a unique operation.

[Figure: Data and task parallelism]
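
Here is a minimal sketch of the two forms, assuming a small hypothetical array and POSIX threads (the array, function names, and thread split are my own illustration): two threads apply the same operation (summing) to different halves of the data, which is data parallelism, while a third thread performs a different operation (finding the maximum) on the same data, which is task parallelism.

```c
/*
 * Illustrative sketch: data parallelism splits the same operation (summing)
 * over halves of one array; task parallelism runs a different operation
 * (finding the maximum) as a separate thread.
 */
#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {3, 1, 4, 1, 5, 9, 2, 6};

/* Data parallelism: same operation, different subset of the data. */
struct range { int lo, hi; long sum; };

static void *partial_sum(void *arg) {
    struct range *r = arg;
    r->sum = 0;
    for (int i = r->lo; i < r->hi; i++)
        r->sum += data[i];
    return NULL;
}

/* Task parallelism: a different operation on the same data. */
static void *find_max(void *arg) {
    int *max = arg;
    *max = data[0];
    for (int i = 1; i < N; i++)
        if (data[i] > *max)
            *max = data[i];
    return NULL;
}

int main(void) {
    pthread_t t1, t2, t3;
    struct range lo_half = {0, N / 2, 0}, hi_half = {N / 2, N, 0};
    int max = 0;

    /* Data parallelism: two threads, same operation, split data. */
    pthread_create(&t1, NULL, partial_sum, &lo_half);
    pthread_create(&t2, NULL, partial_sum, &hi_half);

    /* Task parallelism: a third thread performs a different task. */
    pthread_create(&t3, NULL, find_max, &max);

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_join(t3, NULL);

    printf("sum = %ld, max = %d\n", lo_half.sum + hi_half.sum, max);
    return 0;
}
```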

 
