Revisión | bae8bb6bc698192bbb3eca5810776a9c86192817 (tree) |
---|---|
Tiempo | 2022-07-12 20:11:43 |
Autor | Albert Mietus < albert AT mietus DOT nl > |
Committer | Albert Mietus < albert AT mietus DOT nl > |
Started with (8) BusyCores-concepts (analyse)
@@ -2,9 +2,9 @@ | ||
2 | 2 | |
3 | 3 | .. _BC-concepts: |
4 | 4 | |
5 | -======================= | |
6 | -Concepts for Busy Cores | |
7 | -======================= | |
5 | +============================= | |
6 | +Concurrent Computing Concepts | |
7 | +============================= | |
8 | 8 | |
9 | 9 | .. post:: |
10 | 10 | :category: Castle DesignStudy |
@@ -20,61 +20,92 @@ | ||
20 | 20 | As Castle is a new language we have the opportunity to select such a concept and incorporate it into the language ... |
21 | 21 | |
22 | 22 | In this blog, we explore a bit of theory. I will focus on semantics and the possibilities to implement them |
23 | - efficiently. The syntactic details come later. | |
23 | + efficiently. The exact syntax will come later. | |
24 | 24 | |
25 | +Basic terminology | |
26 | +================= | |
27 | + | |
28 | +As there is much theory available, and even more practical expertise, but only a limited set of “common words”, let us describe | 
29 | +some basic terms. As always, we use Wikipedia as common ground, and add links for a deep-dive. | |
30 | + | |
31 | +.. include:: BusyCores-sidebar-concurrency.irst | |
25 | 32 | |
26 | 33 | |
27 | 34 | |
28 | 35 | TODO |
29 | -**** | |
36 | +************************************************************ | |
30 | 37 | |
31 | -.. todo:: All below is draft and needs work | |
38 | +.. todo:: All below is draft and needs work!!!! | |
32 | 39 | |
33 | 40 | |
34 | ----------- | |
35 | - | |
36 | -CUT & PAST | |
37 | - | |
38 | ----------- | |
39 | - | |
40 | -Some concepts | |
41 | -============= | |
42 | - | |
43 | -Before we dive into the needs for Castle, lets define --shortly-- the available, theoretical concepts. Routinely, we add | |
44 | -wikipedia links for a deep-dive. | |
45 | - | |
46 | -.. include:: BusyCores-sidebar-concurrency.irst | |
47 | 41 | |
48 | 42 | Concurrency |
49 | 43 | ----------- |
50 | -Concurrency_ is the ability to “compute” multiple things at the same time, instead of doing them one after the other. It requires another mindset, but isn’t that complicated. | |
51 | -A typical example is a loop: suppose we have a sequence of numbers and we like to compute the square of each one. Most developers will loop over those numbers, get one number, calculate the square, store it in another list, and continue with the next element. It works, but we have also instructed the computer to do it in sequence — especially when the task is bit more complicated, the compiler does know whether the ‘next task’ depends on the current one, and can’t optimise it. | |
52 | 44 | |
53 | -A better plan is to tell the compiler about the tasks; most are independently: square a number. There is also one that has to be run at the end: combine the results into a new list. And one is bit funny: distribute the sequence-elements over the “square-tasks” — clearly, one has to start with this one, but it can be concurrent with many others too. | |
45 | +Concurrency_ is the ability to “compute” multiple things at the same time. | |
46 | +|BR| | |
47 | +Designing concurrent software isn’t that complicated, but it demands another mindset than when we write software that does | 
48 | +one thing after the other. | 
49 | + | |
50 | +A typical example is a loop: suppose we have a sequence of numbers and we like to compute the square of each one. Most | |
51 | +developers will loop over those numbers, get one number, calculate the square, store it in another list, and continue. | |
52 | +It works, but we have also instructed the computer to do it in sequence — especially when the | |
53 | +task is a bit more complicated, the compiler doesn’t know whether the ‘next task’ depends on the current one, and can’t | 
54 | +optimise it. | |
55 | + | |
56 | +A better plan is to tell the compiler about the tasks; most are independent: square a number. There is also one that | 
57 | +has to be run at the end: combine the results into a new list. And one is a bit funny: distribute the sequence-elements | 
58 | +over the “square-tasks” — clearly, one has to start with this one, but it can be concurrent with many others too. | |
59 | +|BR| | |
60 | +This is *not* a parallel algorithm. By not specifying the order, we allow parallel execution. We do not demand it; | 
61 | +sequential execution is allowed too. | 
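Since Castle's syntax is still open, a minimal sketch in Python (using the standard `concurrent.futures` module, purely as an illustration) can show the idea: we only declare the independent square-tasks and let the runtime decide on the order, sequential or parallel.

```python
# Concurrency sketch: each "square" task is independent, so we only
# describe the tasks; the runtime may run them in any order, or in parallel.
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

numbers = [1, 2, 3, 4, 5]

with ThreadPoolExecutor() as pool:
    # 'map' distributes the elements over the square-tasks and
    # combines the results into a new list, in the original order.
    squares = list(pool.map(square, numbers))

print(squares)  # [1, 4, 9, 16, 25]
```

Note that the result is the same as the sequential loop; only the freedom given to the runtime differs.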
62 | + | |
54 | 63 | |
55 | 64 | |
56 | 65 | Parallelism
57 | 66 | ------------
58 | -Parallelisme_ is about executing multiple tasks (apparently) at the same time. We will focus running multiple | |
59 | -concurrent task (of the same program) on as many cores as possible. And when we assume we have a thousand cores we need | |
60 | -(at least) a thousand independent tasks — at any moment— to gain maximal speed up. This is not trivial! | |
67 | + | |
68 | +Parallelisme_ is about executing multiple tasks (apparently) at the same time. We will focus on running multiple concurrent | 
69 | +tasks (of the same program) on “as many cores as possible”. When we assume a thousand cores, we need a thousand | 
70 | +independent tasks (at least) to gain maximal speed-up. A thousand at any moment! | 
61 | 71 | |BR| |
62 | -It’s not only about doing a thousand things at the same time (that is not to complicated, for a computer), but also — probably: mostly — about finishing a thousand times faster… | |
72 | +It’s not only about doing a thousand things at the same time (that is not too complicated, for a computer), but also — | 
73 | +probably: mostly — about finishing a thousand times faster… | |
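The arithmetic behind this can be sketched as a tiny (illustrative, not part of the original text) calculation: with equally sized, independent tasks, the number of "rounds" is the ceiling of tasks over cores, so every core must have work at any moment to reach maximal speed-up.

```python
import math

def rounds(tasks: int, cores: int) -> int:
    """Equally long, independent tasks finish in ceil(tasks/cores) 'rounds';
    only when there are at least as many ready tasks as cores is every core busy."""
    return math.ceil(tasks / cores)

print(rounds(1000, 1000))  # 1 round: maximal (1000x) speed-up
print(rounds(1000, 4))     # 250 rounds: speed-up limited by the 4 cores
print(rounds(1001, 1000))  # 2 rounds: one leftover task doubles the time
```

The last case shows why "a thousand at any moment" matters: a single straggler task can halve the gain.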
63 | 74 | |
64 | 75 | With many cores, multiple program-steps can be executed at the same time: changing the same variable, accessing the
65 | 76 | same memory, or competing for new memory. And when solving that, we introduce new hazards, like deadlocks_ and even
66 | 77 | livelocks_.
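As a small illustration (in Python, not Castle): the classic deadlock arises when two tasks take two locks in opposite order. One well-known remedy, sketched below, is to always acquire locks in a single global order.

```python
import threading

a = threading.Lock()
b = threading.Lock()

# Hazard: task 1 takes a then b, task 2 takes b then a -> deadlock possible.
# Classic remedy: every task acquires the locks in one fixed, global order.
def transfer(first: threading.Lock, second: threading.Lock) -> str:
    lo, hi = sorted((first, second), key=id)  # same order for every caller
    with lo:
        with hi:
            return "done"

# Two "tasks" that name the locks in opposite order, yet cannot deadlock:
t = threading.Thread(target=transfer, args=(b, a))
t.start()
result = transfer(a, b)
t.join()
print(result)  # "done" -- no deadlock, thanks to the fixed lock order
```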
67 | 78 | |
68 | -Locking | |
69 | - | |
70 | - | |
71 | 79 | |
72 | 80 | Distributed |
73 | 81 | ----------- |
82 | + | |
74 | 83 | A special form of parallelism is Distributed-Computing_: compute on many computers. Many experts consider this
75 | 84 | as an independent field of expertise; still --as Multi-Core_ is basically “many computers on a chip”-- there is an
76 | -analogue [#DistributedDiff]_, and we should the know-how that is available there to design out “best ever language”. | |
76 | 85 | +analogy [#DistributedDiff]_, and we should use the know-how that is available, to design our “best ever language”. | 
77 | 86 | |
87 | +Messages & shared-data | |
88 | +---------------------- | |
89 | + | |
90 | +Communication between two (concurrent) tasks (or processes, CPUs, computers) needs the passing of data (in one or two | 
91 | +directions). Roughly, there are two ways to do so: | 
92 | + | 
93 | +Shared-Data | 
94 | + Memory (variables) that can be written and/or read by both. As the access is typically not atomic, a bit of | 
95 | + care (locking, Critical-Sections_) is needed. | 
96 | + | 
97 | +Messages | 
98 | + Data is sent (copied) from one task to the other; as no memory is shared, no locking is needed. | 
99 | + | 
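Both styles can be sketched in Python (an illustration, not the Castle design): shared data needs a lock around the non-atomic access, whereas a message queue copies the data and shares nothing.

```python
import threading
import queue

# Shared-data: both tasks read/write the same variable; as the access is
# not atomic, a lock (critical section) must guard it.
counter = 0
lock = threading.Lock()

def add_shared():
    global counter
    with lock:          # critical section around the shared variable
        counter += 1

# Messages: the data is copied into a queue; the tasks share no memory.
inbox = queue.Queue()

def add_message():
    inbox.put(1)        # a copy of the data travels to the receiver

threads = [threading.Thread(target=f) for f in (add_shared, add_message)]
for t in threads:
    t.start()
for t in threads:
    t.join()

msg = inbox.get()
print(counter, msg)  # 1 1
```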
96 | + | |
97 | +Control | 
98 | +======= | 
99 | + | |
100 | +Models | |
101 | +====== | |
102 | + | |
103 | +Probably the oldest model to describe concurrency is the Petri-net_, which has the disadvantage that it is synchronous | 
104 | +(all tokens move in the same timeslot) -- which is hard to implement (efficiently) on Multi-Core_. | 
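A tiny, simplified Petri-net interpreter (illustrative Python; conflicts between transitions sharing an input place are ignored here) shows that synchronous behaviour: all enabled transitions move their tokens in the same timeslot.

```python
# Simplified Petri-net sketch: places hold tokens; a transition is enabled
# when all its input places have a token; one 'step' fires every enabled
# transition at once (synchronous: all tokens move in the same timeslot).
def step(marking, transitions):
    enabled = [t for t in transitions
               if all(marking[p] > 0 for p in t["in"])]
    new = dict(marking)
    for t in enabled:
        for p in t["in"]:
            new[p] -= 1   # consume one token per input place
        for p in t["out"]:
            new[p] += 1   # produce one token per output place
    return new

marking = {"A": 1, "B": 1, "C": 0}
transitions = [{"in": ["A"], "out": ["C"]},
               {"in": ["B"], "out": ["C"]}]
print(step(marking, transitions))  # {'A': 0, 'B': 0, 'C': 2}
```

The global, lock-step `step` is exactly what is awkward on a multi-core machine: every core would have to wait for the slowest transition in each timeslot.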
105 | + | |
106 | +Actors | |
107 | + | |
108 | +A well-known alternative is the Actor-Model_ (see also Actor-Model-Theory_). | 
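A minimal actor can be sketched in Python (an assumption for illustration, not the Castle design): an actor owns private state plus a mailbox, and handles one message at a time, so its state needs no locks.

```python
import threading
import queue

class CounterActor:
    """Minimal actor sketch: private state + a mailbox, one message at a time."""
    def __init__(self):
        self.count = 0                  # private state: only this actor touches it
        self.mailbox = queue.Queue()    # messages in, one by one
        self.thread = threading.Thread(target=self._run)
        self.thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()    # handle messages sequentially
            if msg == "stop":
                break
            self.count += msg           # no lock needed: single consumer

    def send(self, msg):
        self.mailbox.put(msg)           # the only way to reach the actor

actor = CounterActor()
for i in (1, 2, 3):
    actor.send(i)
actor.send("stop")
actor.thread.join()
print(actor.count)  # 6
```

Because all communication goes through the mailbox, many such actors can run in parallel without sharing any memory.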
78 | 109 | |
79 | 110 | -------- |
80 | 111 |
@@ -104,3 +135,8 @@ | ||
104 | 135 | .. _livelocks: https://en.wikipedia.org/wiki/Deadlock#Livelock |
105 | 136 | .. _Critical-Sections: https://en.wikipedia.org/wiki/Critical_section |
106 | 137 | .. _Distributed-Computing: https://en.wikipedia.org/wiki/Distributed_computing |
138 | + | |
139 | +.. _Actor-Model: https://en.wikipedia.org/wiki/Actor_model | |
140 | +.. _Actor-Model-Theory: https://en.wikipedia.org/wiki/Actor_model_theory | |
141 | + | |
142 | +.. _Petri-Net: https://en.wikipedia.org/wiki/Petri_net |