Cost Semantics for Parallelism
Cost semantics addresses the question: how long do programs run (abstractly)?
The idea behind cost semantics for parallelism is that we have the concrete ability to compute several parts of a program simultaneously.
Simple Example of Product Types
For sequential computation we have:
$$\frac{e_1 \mapsto_{seq} e_1'}{(e_1, e_2) \mapsto_{seq} (e_1', e_2)} \qquad \frac{e_1\ \mathsf{val} \quad e_2 \mapsto_{seq} e_2'}{(e_1, e_2) \mapsto_{seq} (e_1, e_2')}$$

For parallel computation we have:
$$\frac{e_1 \mapsto_{par} e_1' \quad e_2 \mapsto_{par} e_2'}{(e_1, e_2) \mapsto_{par} (e_1', e_2')}$$

Deterministic Parallelism
$$e \mapsto_{seq}^{*} v \iff e \Downarrow v \iff e \mapsto_{par}^{*} v$$

It means that we are getting the same answer, just (potentially) faster.
Given a closed program $e$, we can count the number of $\mapsto_{seq}$ steps (the "work" $w$, written $e \mapsto_{seq}^{w} v$) and the number of $\mapsto_{par}$ steps (the "span" $s$, written $e \mapsto_{par}^{s} v$).
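As an illustration (assuming each primitive addition counts as one step, which is not spelled out above), consider the pair $(1+1,\ 2+2)$:

$$(1+1,\ 2+2) \mapsto_{seq} (2,\ 2+2) \mapsto_{seq} (2,\ 4) \qquad\qquad (1+1,\ 2+2) \mapsto_{par} (2,\ 4)$$

so the work is $2$ while the span is $1$: both additions can happen in the same parallel step.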
Cost Semantics
We annotate the evaluation judgment as $e \Downarrow^{w,s} v$ to keep track of work and span.
$$\frac{e_1 \Downarrow^{w_1, s_1} v_1 \quad e_2 \Downarrow^{w_2, s_2} v_2}{(e_1, e_2) \Downarrow^{w_1 + w_2,\ \max(s_1, s_2)} (v_1, v_2)}$$

$$\frac{e_1 \Downarrow^{w_1, s_1} (v_1, v_2) \quad [v_1/x][v_2/y]e_2 \Downarrow^{w_2, s_2} v}{\mathtt{let}\ (x, y) = e_1\ \mathtt{in}\ e_2 \Downarrow^{w_1 + w_2 + 1,\ s_1 + s_2 + 1} v}$$

- If $e \Downarrow^{w,s} v$ then $e \mapsto_{seq}^{w} v$ and $e \mapsto_{par}^{s} v$
- If $e \mapsto_{seq}^{w} v$ then $\exists s.\ e \Downarrow^{w,s} v$
- If $e \mapsto_{par}^{s} v$ then $\exists w.\ e \Downarrow^{w,s} v$
- If $e \Downarrow^{w,s} v$ and $e \Downarrow^{w',s'} v$ then $w = w'$ and $s = s'$

Brent's Principle
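To make the rules above concrete, here is a minimal sketch of a cost-annotated evaluator in OCaml. The `exp` type, the `Add` base case (charged one unit of work and span), and all names are illustrative assumptions rather than part of the notes:

```ocaml
(* Sketch: evaluate an expression to a value plus its (work, span).
   Pairs combine work additively and span by max, following the rules above;
   Add is an assumed base case charged one unit of work and span. *)
type exp =
  | Num of int
  | Var of string
  | Add of exp * exp
  | Pair of exp * exp
  | LetPair of exp * string * string * exp   (* let (x, y) = e1 in e2 *)

let rec eval (env : (string * exp) list) (e : exp) : exp * int * int =
  match e with
  | Num n -> (Num n, 0, 0)
  | Var x -> (List.assoc x env, 0, 0)
  | Add (e1, e2) ->
      let (v1, w1, s1) = eval env e1 in
      let (v2, w2, s2) = eval env e2 in
      (match (v1, v2) with
       | (Num n1, Num n2) -> (Num (n1 + n2), w1 + w2 + 1, max s1 s2 + 1)
       | _ -> failwith "Add expects numbers")
  | Pair (e1, e2) ->
      (* components evaluate "in parallel": work adds, span takes the max *)
      let (v1, w1, s1) = eval env e1 in
      let (v2, w2, s2) = eval env e2 in
      (Pair (v1, v2), w1 + w2, max s1 s2)
  | LetPair (e1, x, y, e2) ->
      let (v1, w1, s1) = eval env e1 in
      (match v1 with
       | Pair (a, b) ->
           let (v, w2, s2) = eval ((x, a) :: (y, b) :: env) e2 in
           (v, w1 + w2 + 1, s1 + s2 + 1)
       | _ -> failwith "let expects a pair")

let () =
  (* let (x, y) = (1 + 1, 2 + 2) in x + y *)
  let prog =
    LetPair (Pair (Add (Num 1, Num 1), Add (Num 2, Num 2)),
             "x", "y", Add (Var "x", Var "y"))
  in
  match eval [] prog with
  | (Num n, w, s) -> Printf.printf "value = %d, work = %d, span = %d\n" n w s
  | _ -> ()
```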
In general, it is a principle about how work and span predict evaluation time on some machine.
For example, for a machine with $p$ processors:
If $e \Downarrow^{w,s} v$ then $e$ can be run to $v$ in time $O(\max(w/p,\ s))$.

Machine with States
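As a made-up numerical illustration: if $w = 10^6$ and $s = 10^3$, then $p = 10$ processors give a bound of $O(\max(10^5, 10^3)) = O(10^5)$, while $p = 1000$ processors give $O(\max(10^3, 10^3)) = O(10^3)$. Adding processors beyond $w/s = 1000$ no longer improves the bound, because the span term dominates.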
Local Transitions
$$\gamma\ \Sigma\ a_1, \ldots, a_n\ \{a_1 \hookrightarrow s_1 \otimes \cdots \otimes a_n \hookrightarrow s_n\}$$

The $a$'s are names for the tasks, and each local task state $s$ is of the form

$$s ::= e \mid \mathtt{join}[a](x.e) \mid \mathtt{join}[a_1, a_2](x, y.e)$$

where $\mathtt{join}[a](x.e)$ means to "wait for task $a$ to complete, and then plug its value in for $x$", and $\mathtt{join}[a_1, a_2](x, y.e)$ means to wait for two tasks.
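As a rough sketch of how these machine states could be represented, reusing the `exp` type from the evaluator sketch above (the type and constructor names here are illustrative assumptions):

```ocaml
type name = string

(* Local task states  s ::= e | join[a](x.e) | join[a1,a2](x,y.e) *)
type task_state =
  | Run of exp                                    (* an expression still running *)
  | Join1 of name * string * exp                  (* join[a](x.e)                *)
  | Join2 of name * name * string * string * exp  (* join[a1,a2](x,y.e)          *)

(* A global machine state: a finite map from task names to local states. *)
type machine = (name * task_state) list
```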
Suppose one of $e_1, e_2$ is not a value; then we have the following local transition, which is also called fork:

And we have join:

Similarly for let:
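Roughly, fork turns a pair at least one of whose components is not yet a value into two fresh tasks plus a join, and join substitutes the finished values back in. One possible way to write these two transitions (the exact notation and binding discipline here are assumptions, not the original rules):

$$a \hookrightarrow (e_1, e_2) \;\mapsto\; \gamma\ \Sigma\ a_1, a_2\ \{a_1 \hookrightarrow e_1 \otimes a_2 \hookrightarrow e_2 \otimes a \hookrightarrow \mathtt{join}[a_1, a_2](x, y.(x, y))\}$$

$$a_1 \hookrightarrow v_1 \otimes a_2 \hookrightarrow v_2 \otimes a \hookrightarrow \mathtt{join}[a_1, a_2](x, y.e) \;\mapsto\; a \hookrightarrow [v_1/x][v_2/y]e$$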
Global Transitions
- Select $1 \le k \le p$ tasks to make local transitions
- Step locally
- Each creates or garbage collects processes (global synchronization by $\alpha$-renaming); a sketch of one global step follows below
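A minimal sketch of one global transition, assuming a hypothetical `local_step : machine -> name -> machine option` that performs one local transition of the named task (fork, join, or an ordinary expression step) and returns the updated machine, or `None` if that task cannot step; the selection policy and all names are illustrative:

```ocaml
(* One global transition: pick at most p tasks that can step, then step them. *)
let global_step (p : int)
                (local_step : machine -> name -> machine option)
                (m : machine) : machine =
  (* names of tasks that can make a local transition right now *)
  let ready =
    List.filter_map (fun (a, _) -> if local_step m a <> None then Some a else None) m
  in
  (* select at most p of them; here simply the first p (a scheduling choice) *)
  let chosen = List.filteri (fun i _ -> i < p) ready in
  (* perform the chosen local steps (simulated one after another) *)
  List.fold_left
    (fun m' a -> match local_step m' a with Some m'' -> m'' | None -> m')
    m chosen
```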
Scheduling
How do we "select $1 \le k \le p$ tasks to make local transitions"? For example, in DFS or BFS order.
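Two illustrative selection policies for the "select at most $p$" step in the sketch above, assuming the machine list keeps the most recently created tasks at its head (again, purely an assumption for illustration):

```ocaml
let take p xs = List.filteri (fun i _ -> i < p) xs

(* DFS-like: prefer the newest ready tasks; BFS-like: prefer the oldest. *)
let select_dfs p ready = take p ready
let select_bfs p ready = take p (List.rev ready)
```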