
The CC++ Programming Language

Is parallel programming difficult? Many programmers complain that architecture dependencies, confusing notations, difficult correctness verification, and burdensome overhead make parallel programming seem tedious and arduous. These barriers cast doubt on the practicality of parallel programming, despite its many potential benefits.

Compositional C++ (CC++) was designed to alleviate the frustrations of parallel programming by adding a few simple extensions to the sequential language C++. It is a strict superset of the C++ language, so any valid C or C++ program is a valid CC++ program. Conversely, many classes of parallel CC++ programs can simply be rewritten as equivalent sequential programs. For these classes of programs, the development path can be very similar to that of the equivalent sequential program. This compatibility with the C and C++ languages eases the transition to parallel programming for programmers already familiar with those languages.

CC++ extends C++ with the following eight constructs (sketches of their use appear below):

par
blocks enclose statements that are executed in parallel.
parfor
denotes a loop whose iterations are executed in parallel.
spawn
statements create a new thread of control executing in parallel with the spawning thread.
sync
data items are single-assignment variables used for synchronization.
atomic
functions control the level of interleaving of actions composed in parallel.
processor objects
define the distribution of a computation.
global
pointers link distributed parts of the computation.
data transfer
functions describe how information is transmitted between address spaces.

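As a first illustration, the fragment below is a minimal sketch of how the par, parfor, spawn, and sync constructs appear in a program. The function names (produce, consume, work, log_value) and the values are placeholders invented for this sketch, not part of the language.

  sync int result;                       // single-assignment: a read blocks until
                                         // some thread has assigned a value

  void produce()        { result = 42; } // assigning the sync variable releases readers
  void consume()        { int r = result; } // blocks here until produce() has assigned
  void work(int i)      { /* one independent loop iteration */ }
  void log_value(int n) { /* runs independently of its caller */ }

  void example(int n)
  {
    par {                                // both calls execute in parallel; the block
      produce();                         // terminates only when both have returned
      consume();
    }

    parfor (int i = 0; i < n; i++)       // all n iterations may execute in parallel
      work(i);

    spawn log_value(n);                  // a new thread of control; example() does
                                         // not wait for it to complete
  }

Because result is a sync variable, consume() simply blocks until produce() supplies a value; no explicit locks or signals are needed to order the two parallel calls.
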
Despite their simplicity and small number, these extensions combine with C++ to yield an extremely rich and powerful parallel programming notation.

The richness of this language is reflected in its suitability for a wide spectrum of applications. In fact, CC++ integrates many seemingly disparate fields:

Sequential and parallel programming
Parallel blocks are notationally similar to sequential blocks.
Shared and distributed memory models
CC++ can be used on shared- or distributed-memory architectures, as well as across networks that may be heterogeneous (see the sketch following this list).
Granularity
CC++ can be used in computations involving a variety of granularities, ranging from fine-grain parallelism with frequent synchronization between small threads to coarse-grain parallelism with sparse synchronization between large, distributed processes.
Task and data parallelism
Task and data parallel applications can be expressed in CC++, as well as programs that combine the two.
Synchronization techniques
The synchronization mechanisms provided by CC++ are powerful enough to express any of the traditional imperative synchronization and communication paradigms.

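The distributed constructs can be sketched in the same style. The fragment below assumes default placement of the newly created processor object and uses illustrative names (Point, Accumulator, run) invented for this sketch; it shows the notation rather than a complete program.

  class Point {                          // a user-defined type that crosses address
  public:                                // spaces needs data transfer functions
    double x, y;
    friend CCVoid& operator<<(CCVoid&, const Point&);   // marshal for sending
    friend CCVoid& operator>>(CCVoid&, Point&);         // unmarshal on receipt
  };

  CCVoid& operator<<(CCVoid& v, const Point& p) { v << p.x << p.y; return v; }
  CCVoid& operator>>(CCVoid& v, Point& p)       { v >> p.x >> p.y; return v; }

  global class Accumulator {             // "global" declares a processor object class
    Point sum;
  public:
    Accumulator()            { sum.x = 0; sum.y = 0; }
    atomic void add(Point p) { sum.x += p.x; sum.y += p.y; }
    Point total()            { return sum; }
  };

  void run()
  {
    Accumulator *global acc = new Accumulator;  // global pointer to a processor object,
                                                // possibly in another address space
    Point a; a.x = 1; a.y = 2;
    Point b; b.x = 3; b.y = 4;

    par {                                // two remote calls issued in parallel; because
      acc->add(a);                       // add() is atomic, the updates never interleave
      acc->add(b);
    }

    Point t = acc->total();              // t.x == 4, t.y == 6
    delete acc;
  }

Declaring add() atomic serializes concurrent invocations within the processor object, so the two parallel remote calls update sum consistently without any explicit locking.
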
All of the object-oriented features of C++ are preserved in CC++. These features (especially generic classes and inheritance mechanisms) promote the reuse of code, structure, and verification arguments. Thus, CC++ provides an excellent framework for the development of libraries and for the use of software templates in program construction. This reuse is especially important and useful in the context of parallel programming.
