Notes on Concepts and Notations for Concurrent Programming
The Three Main Issues
1. How to express concurrent execution
2. How processes synchronize
3. How processes communicate
Introduction
Processes
> Multiprogramming: processes share one or more processors
> Multiprocessing: each process runs on its own processor, and the
processes share a common memory in a multiprocessor
> Distributed processing: each process runs on its own processor, and
the processors are connected by a communications network
Process Interaction
> Interprocess communication is based on the use of shared variables or on
message passing
> Synchronization is a set of constraints on the ordering of events
> An execution of a concurrent program can be viewed as a sequence of
atomic actions, each resulting from the execution of an indivisible
operation
> Proof outline: Program text interleaved with assertions. When
concurrent execution is possible, the proof of a sequential process is
valid only if concurrent execution of other processes cannot invalidate
the assertions that appear in the proof. One way to establish this is to
assume that the code between any two assertions in a proof outline is
executed atomically
> Synchronization mechanisms control interference in two ways: (1) by
delaying the execution of a process until a given condition is true, and
(2) by ensuring that a block of statements is an indivisible operation.
Specifying Concurrent Execution
Coroutines
> Coroutines are like subroutines, but allow transfer of control in a
symmetric rather than strictly hierarchical way.
> Control is transferred by means of the resume statement.
> Execution of resume transfers control to the named routine, saving
enough state information for control to return later to the instruction
following the resume.
> Control is returned to the original routine by executing another resume
rather than by executing a procedure return.
> One coroutine can transfer control to any other coroutine that it
chooses.

> Coroutines are an acceptable way to organize concurrent programs that
share a single processor. However, they are not adequate for true
parallel processing because their semantics allow for execution of only
one routine at a time.
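The resume semantics described above can be sketched with Python generators (an illustration of ours; the notes do not tie coroutines to any language). A yield suspends the current routine, saving enough state for control to return to the instruction that follows it, and next resumes it, just as a resume would. The routine names and log messages are made up for the example.

```python
# Coroutine-style control transfer sketched with a Python generator.
# yield plays the role of "resume the other routine": it suspends this
# routine, saving its state; next() transfers control back, continuing
# at the instruction following the yield.

def coroutine_a(log):
    log.append("A: step 1")
    yield                       # "resume B": suspend here
    log.append("A: step 2")     # control returns to this instruction
    yield

def run():
    log = []
    a = coroutine_a(log)
    next(a)                     # start A; it runs until its first yield
    log.append("B: step 1")     # "B" executes while A is suspended
    next(a)                     # resume A after its first yield
    return log

if __name__ == "__main__":
    print(run())
```

Note that, as the notes observe, only one routine executes at a time: each next transfers control rather than creating parallelism.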
The fork and join statements
> The fork statement specifies that a designated routine should start
executing.
> The invoking and invoked routines then proceed concurrently.
> To synchronize with completion of the invoked routine, the invoking
routine can execute a join statement.
> Because fork and join can appear in conditionals and loops, a detailed
understanding of the program execution is necessary to understand which
routines will be executed concurrently.
> When used in a disciplined manner, the statements are practical and
powerful.
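The fork/join pattern can be sketched with Python threads (our choice of illustration): Thread.start() plays the role of fork, after which invoker and invoked routine run concurrently, and Thread.join() synchronizes with the invoked routine's completion. The summing worker is a made-up example.

```python
# fork/join sketched with Python threads: start() "forks" the invoked
# routine, join() synchronizes with its completion.
import threading

def fork_join_sum(numbers):
    results = {}

    def worker(chunk):
        results["child"] = sum(chunk)     # runs concurrently with invoker

    t = threading.Thread(target=worker, args=(numbers[2:],))
    t.start()                             # fork: child starts executing
    results["parent"] = sum(numbers[:2])  # invoker proceeds concurrently
    t.join()                              # join: wait for child to finish
    return results["parent"] + results["child"]

if __name__ == "__main__":
    print(fork_join_sum([1, 2, 3, 4]))
```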
The cobegin statement
> The cobegin statement is a structured way of denoting concurrent
execution of a set of statements: cobegin S1 | S2 | ... | Sn coend
> Execution of a cobegin statement terminates only when execution of all
the Si's has terminated.
> Although cobegin is not as powerful as fork/join it is sufficient for
specifying most concurrent computations.
> The syntax of the cobegin statement makes explicit which routines are
executed concurrently, and provides a single-entry, single-exit control
structure.
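A cobegin can be sketched as a helper over Python threads (the helper name `cobegin` is ours, not a library function): start one thread per component statement, then join them all, so the statement terminates only when every Si has.

```python
# cobegin S1 | S2 | ... | Sn sketched as a single-entry, single-exit
# helper: all Si start concurrently, and control leaves cobegin only
# after every Si has terminated.
import threading

def cobegin(*statements):
    threads = [threading.Thread(target=s) for s in statements]
    for t in threads:
        t.start()          # every Si begins executing concurrently
    for t in threads:
        t.join()           # terminate only when all Si have terminated

if __name__ == "__main__":
    done = []
    cobegin(lambda: done.append("S1"),
            lambda: done.append("S2"),
            lambda: done.append("S3"))
    print(sorted(done))    # order of completion may vary
```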
Process Declarations
> The structure of a concurrent program is much clearer if the declaration
of a routine states whether it will be executed concurrently.
> The process declaration provides such a facility.
> In some languages, a collection of process declarations is equivalent to
a single cobegin statement, where each of the declared processes is a
component of the cobegin. This means there is exactly one instance of
each declared process.
> Some languages provide an explicit mechanism, such as fork or something
similar, for activating instances of process declarations.
Synchronization Primitives Based On Shared Variables
> Two useful types of synchronization: mutual exclusion and condition
synchronization.
> Mutual exclusion: Mutually exclusive execution of critical
sections.
> Condition synchronization: Delaying a process attempting to
execute a particular operation for which a shared data object is in an
inappropriate state.
Busy-Waiting
> Processes set and test shared variables.
> Works reasonably well for condition synchronization but not for mutual
exclusion.
> Synchronization protocols that use only busy-waiting are difficult to
design, understand, and prove correct.
> Busy-waiting wastes processor cycles.
> The busy-waiting approach burdens the programmer with deciding both what
synchronization is required and how to provide it.
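Condition synchronization by busy-waiting can be sketched as follows (an illustration of ours; the shared flag and variable names are made up). The consumer spins, repeatedly testing a flag the producer sets; the spin loop is exactly where processor cycles are wasted.

```python
# Condition synchronization by busy-waiting: the consumer repeatedly
# tests a shared flag that the producer sets. The spin loop wastes
# processor cycles until the condition becomes true.
import threading

def busy_wait_demo():
    shared = {"value": None, "ready": False}

    def producer():
        shared["value"] = 42      # write the data first...
        shared["ready"] = True    # ...then set the flag

    def consumer(out):
        while not shared["ready"]:  # test: spin until the flag is set
            pass                    # busy-waiting burns cycles here
        out.append(shared["value"])

    out = []
    c = threading.Thread(target=consumer, args=(out,))
    p = threading.Thread(target=producer)
    c.start()
    p.start()
    c.join()
    p.join()
    return out[0]

if __name__ == "__main__":
    print(busy_wait_demo())
```

This relies on the interpreter preempting the spinning thread; on real hardware, memory-ordering guarantees would also have to be considered.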
Semaphores
> A semaphore is a nonnegative integer-valued variable on which two
operations are defined: P and V.
> Given a semaphore s, P(s) delays until s>0 and then executes s:=s-1. The
test and decrement are executed as an indivisible operation.
> V(s) executes s:=s+1 as an indivisible operation.
> Semaphores can be implemented by using busy-waiting. More commonly,
however, they are implemented by system calls to a kernel.
> A kernel implements processes on a processor. At all times, each
process is either ready to execute or is blocked, waiting to complete a P
operation. The kernel maintains a ready list. Descriptors for processes
that are blocked on a semaphore are stored on a queue associated with that
semaphore; they are not stored in the ready list, and hence the
processes will not be executed.
> Although semaphores can be used to program almost any kind of
synchronization, P and V are rather unstructured primitives, so it is
easy to err when using them. Execution of each critical section must
begin with a P and end with a V (on the same semaphore); omitting
either can have disastrous effects.
> A second difficulty with using semaphores is that both condition
synchronization and mutual exclusion are programmed using the same pair of
primitives. This makes it difficult to identify the purpose of P and V
operations without looking at the other operations on the corresponding
semaphore.
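Both uses can be sketched with Python's threading.Semaphore, whose acquire() and release() correspond to P and V (the producer/consumer scenario is our illustration). A semaphore initialized to 1 provides mutual exclusion; one initialized to 0 provides condition synchronization. That the same pair of primitives serves both purposes is exactly the difficulty the notes point out.

```python
# Two semaphores, two purposes: `mutex` (initially 1) gives mutually
# exclusive critical sections; `items` (initially 0) delays the consumer
# until the producer signals. acquire() is P, release() is V.
import threading

def semaphore_demo():
    mutex = threading.Semaphore(1)   # mutual exclusion
    items = threading.Semaphore(0)   # condition synchronization
    shared = {"count": 0, "data": None}

    def producer():
        mutex.acquire()              # P(mutex): enter critical section
        shared["count"] += 1
        mutex.release()              # V(mutex): leave critical section
        shared["data"] = "hello"
        items.acquire = items.acquire  # (no-op; data written before signal)
        items.release()              # V(items): signal data is available

    def consumer(out):
        items.acquire()              # P(items): delay until data exists
        out.append(shared["data"])

    out = []
    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer, args=(out,))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out[0]

if __name__ == "__main__":
    print(semaphore_demo())
```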
Conditional Critical Regions
> The conditional critical region proposal provides a structured
notation for specifying synchronization.
> Shared variables are explicitly placed into groups, called
resources. Each shared variable may be in at most one resource
and may be accessed only in conditional critical region (CCR)
statements.
> Mutual exclusion is provided by guaranteeing that execution of different
CCR statements, each naming the same resource, is not overlapped.
> Condition synchronization is provided by explicit boolean conditions in
CCR statements.
> A resource r containing variables v1, v2, ..., vn is declared as
resource r: v1, v2, ..., vn
> The variables in r may be accessed only within CCR statements
that name r. Such statements have the form region r when B do S,
where B is a boolean expression and S is a statement list.
> A CCR statement delays the executing process until B is true; S is then
executed. The evaluation of B and execution of S are uninterruptible by
other CCR statements that name the same resource. Thus B is guaranteed
to be true when execution of S begins.
> Because conditions in CCR statements can contain references to local
variables, each process must evaluate its own conditions.
> On a multiprogrammed processor, this evaluation results in numerous
context switches.
> If each process is executed on its own processor and memory is shared,
however, CCR statements can be implemented quite cheaply by using
busy-waiting.
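A CCR statement can be emulated with Python's threading.Condition (this emulation, and the one-slot buffer example, are ours, not part of the CCR notation). The condition's lock provides the mutual exclusion; wait_for delays until the boolean guard B is true, and since the lock is held from the test of B through the execution of S, B is guaranteed true when S begins.

```python
# Emulating "when B do S" on a shared resource with a Condition:
# delay until the guard holds, run the body under the resource's lock,
# then wake delayed processes so they re-evaluate their own guards.
import threading

def ccr(cond, guard, body):
    with cond:                   # mutual exclusion on the resource
        cond.wait_for(guard)     # delay the process until B is true
        result = body()          # B still holds: the lock was never dropped
        cond.notify_all()        # let delayed processes re-test their B
        return result

def ccr_demo():
    cond = threading.Condition()
    buf = []                     # the shared "resource"

    def producer():
        ccr(cond, lambda: True, lambda: buf.append(7))

    def consumer(out):
        # when len(buf) > 0 do: remove and keep an item
        out.append(ccr(cond, lambda: len(buf) > 0, buf.pop))

    out = []
    c = threading.Thread(target=consumer, args=(out,))
    p = threading.Thread(target=producer)
    c.start(); p.start()
    c.join(); p.join()
    return out[0]

if __name__ == "__main__":
    print(ccr_demo())
```

Because each process evaluates its own guard, every notify_all may wake processes whose guards are still false; they simply wait again, which mirrors the re-evaluation cost the notes describe.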
Monitors
> A monitor is formed by encapsulating both a resource definition
and operations that manipulate it.
> This allows a resource subject to concurrent access to be viewed as a
module. Consequently, a programmer can ignore the implementation details
of the resource when using it, and can ignore how it is used when
programming the monitor that implements it.
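A monitor can be sketched as a Python class (the bounded counter is our illustrative resource): the shared data is hidden inside the instance, and every exported operation acquires the same lock, so operation bodies execute mutually exclusively and callers never see the implementation details.

```python
# A monitor sketched as a class: the resource (_count) is encapsulated
# with the operations that manipulate it, and a single lock makes the
# operations mutually exclusive.
import threading

class CounterMonitor:
    """Encapsulates a counter and the only operations that touch it."""

    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0              # hidden behind the interface

    def increment(self):
        with self._lock:             # at most one operation body at a time
            self._count += 1

    def value(self):
        with self._lock:
            return self._count

if __name__ == "__main__":
    m = CounterMonitor()
    threads = [threading.Thread(target=m.increment) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(m.value())
```

Callers use increment and value without knowing how the counter is protected, and the monitor's author need not know how or when it is called, which is the separation the notes describe.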
Last updated: March 31 '03