Foundations of Multithreaded, Parallel, and Distributed Programming: Book Introduction
Category | Pages | Translator | User Rating | Year | Publisher
---|---|---|---|---|---
Book | 664 | | | 2020 | Addison Wesley
List Price | Publication Date | Last Visited | Visit Index
---|---|---|---
USD 91.00 | 2020-02-20 … | 2020-06-05 … | 1
Editorial Reviews
Product Description
Foundations of Multithreaded, Parallel, and Distributed Programming covers, and then applies, the core concepts and techniques needed for an introductory course in this topic. The book emphasizes the practice and application of parallel systems, using real-world examples throughout.
Greg Andrews teaches the fundamental concepts of multithreaded, parallel and distributed computing and relates them to the implementation and performance processes. He presents the appropriate breadth of topics and supports these discussions with an emphasis on performance.
From the Back Cover
Features
* Emphasizes how to solve problems, with correctness the primary concern and performance an important, but secondary, concern
* Includes a number of case studies covering the Pthreads, MPI, and OpenMP libraries, as well as programming languages such as Java, Ada, High Performance Fortran, Linda, Occam, and SR
* Provides examples using Java syntax and discusses how Java deals with monitors, sockets, and remote method invocation (a minimal illustrative sketch of the monitor style appears after this list)
* Covers current programming techniques such as semaphores, locks, barriers, monitors, message passing, and remote invocation
* Presents concrete examples as complete programs, both shared-variable and distributed
* Includes sample applications drawn from scientific computing and distributed systems
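To give a flavor of the monitor-style synchronization mentioned above, here is a minimal bounded-buffer sketch in Java using synchronized methods with wait/notifyAll. It illustrates the general technique only; the class and method names are ours, not taken from the book's own programs.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only (names are hypothetical, not from the book):
// a bounded buffer protected by Java's built-in monitor, i.e. synchronized
// methods plus wait/notifyAll, in the producer/consumer style.
public class BoundedBufferDemo {

    static class BoundedBuffer<T> {
        private final Deque<T> items = new ArrayDeque<>();
        private final int capacity;

        BoundedBuffer(int capacity) { this.capacity = capacity; }

        // Producer side: block while the buffer is full.
        synchronized void put(T item) throws InterruptedException {
            while (items.size() == capacity) {
                wait();
            }
            items.addLast(item);
            notifyAll();          // wake any waiting consumers
        }

        // Consumer side: block while the buffer is empty.
        synchronized T take() throws InterruptedException {
            while (items.isEmpty()) {
                wait();
            }
            T item = items.removeFirst();
            notifyAll();          // wake any waiting producers
            return item;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer<Integer> buf = new BoundedBuffer<>(4);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) buf.put(i);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) System.out.println(buf.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```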
Preface
Chapter 1: The Concurrent Computing Landscape
1.1 The Essence of Concurrent Programming
1.2 Hardware Architectures
1.2.1 Processors and Caches
1.2.2 Shared-Memory Multiprocessors
1.2.3 Distributed-Memory Multicomputers and Networks
1.3 Applications and Programming Styles
1.4 Iterative Parallelism: Matrix Multiplication
1.5 Recursive Parallelism: Adaptive Quadrature
1.6 Producers and Consumers: Unix Pipes
1.7 Clients and Servers: File Systems
1.8 Peers: Distributed Matrix Multiplication
1.9 Summary of Programming Notation
1.9.1 Declarations
1.9.2 Sequential Statements
1.9.3 Concurrent Statements, Processes, and Procedures
1.9.4 Comments
Historical Notes
References
Exercises
Part 1: Shared-Variable Programming
Chapter 2: Processes and Synchronization
2.1 States, Actions, Histories, and Properties
2.2 Parallelization: Finding Patterns in a File
2.3 Synchronization: The Maximum of an Array
2.4 Atomic Actions and Await Statements
2.4.1 Fine-Grained Atomicity
2.4.2 Specifying Synchronization: The Await Statement
2.5 Producer/Consumer Synchronization
2.6 A Synopsis of Axiomatic Semantics
2.6.1 Formal Logical Systems
2.6.2 A Programming Logic
2.6.3 Semantics of Concurrent Execution
2.7 Techniques for Avoiding Interference
2.7.1 Disjoint Variables
2.7.2 Weakened Assertions
2.7.3 Global Invariants
2.7.4 Synchronization
2.7.5 An Example: The Array Copy Problem Revisited
2.8 Safety and Liveness Properties
2.8.1 Proving Safety Properties
2.8.2 Scheduling Policies and Fairness
Historical Notes
References
Exercises
Chapter 3: Locks and Barriers
3.1 The Critical Section Problem
3.2 Critical Sections: Spin Locks
3.2.1 Test and Set
3.2.2 Test and Test and Set
3.2.3 Implementing Await Statements
3.3 Critical Sections: Fair Solutions
3.3.1 The Tie-Breaker Algorithm
3.3.2 The Ticket Algorithm
3.3.3 The Bakery Algorithm
3.4 Barrier Synchronization
3.4.1 Shared Counter
3.4.2 Flags and Coordinators
3.4.3 Symmetric Barriers
3.5 Data Parallel Algorithms
3.5.1 Parallel Prefix Computations
3.5.2 Operations on Linked Lists
3.5.3 Grid Computations: Jacobi Iteration
3.5.4 Synchronous Multiprocessors
3.6 Parallel Computing with a Bag of Tasks
3.6.1 Matrix Multiplication
3.6.2 Adaptive Quadrature
Historical Notes
References
Exercises
Chapter 4: Semaphores
4.1 Syntax and Semantics
4.2 Basic Problems and Techniques
4.2.1 Critical Sections: Mutual Exclusion
4.2.2 Barriers: Signaling Events
4.2.3 Producers and Consumers: Split Binary Semaphores
4.2.4 Bounded Buffers: Resource Counting
4.3 The Dining Philosophers
4.4 Readers and Writers
4.4.1 Readers/Writers as an Exclusion Problem
4.4.2 Readers/Writers Using Condition Synchronization
4.4.3 The Technique of Passing the Baton
4.4.4 Alternative Scheduling Policies
4.5 Resource Allocation and Scheduling
4.5.1 Problem Definition and General Solution Pattern
4.5.2 Shortest-Job-Next Allocation
4.6 Case Study: Pthreads
4.6.1 Thread Creation
4.6.2 Semaphores
4.6.3 Example: A Simple Producer and Consumer
Historical Notes
References
Exercises
Chapter 5: Monitors
5.1 Syntax and Semantics
5.1.1 Mutual Exclusion
5.1.2 Condition Variables
5.1.3 Signaling Disciplines
5.1.4 Additional Operations on Condition Variables
5.2 Synchronization Techniques
5.2.1 Bounded Buffers: Basic Condition Synchronization
5.2.2 Readers and Writers: Broadcast Signal
5.2.3 Shortest-Job-Next Allocation: Priority Wait
5.2.4 Interval Timer: Covering Conditions
5.2.5 The Sleeping Barber: Rendezvous
5.3 Disk Scheduling: Program Structures
5.3.1 Using a Separate Monitor
5.3.2 Using an Intermediary
5.3.3 Using a Nested Monitor
5.4 Case Study: Java
5.4.1 The Threads Class
5.4.2 Synchronized Methods
5.4.3 Parallel Readers/Writers
5.4.4 Exclusive Readers/Writers
5.4.5 True Readers/Writers
5.5 Case Study: Pthreads
5.5.1 Locks and Condition Variables
5.5.2 Example: Summing the Elements of a Matrix
Historical Notes
References
Exercises
Chapter 6: Implementations
6.1 A Single-Processor Kernel
6.2 A Multiprocessor Kernel
6.3 Implementing Semaphores in a Kernel
6.4 Implementing Monitors in a Kernel
6.5 Implementing Monitors Using Semaphores
Historical Notes
References
Exercises
Part 2: Distributed Programming
Chapter 7: Message Passing