Parallel Algorithms

There are currently many significantly different types of parallel machines in use, ranging from clusters of workstations or PCs to teraflop parallel supercomputers. These systems have very different performance characteristics that must be considered in parallel algorithm design. For the following discussion we assume this abstract picture of a general parallel machine: the parallel system consists of a number of processing elements (PEs), each of which is capable of executing the required parallel codes or sections of parallel code. Each PE has access to (local) memory and can exchange data with other PEs through a communication device. The PEs also have access to both local and global filesystems for data storage: the local filesystem is private to each PE (and inaccessible to the other PEs), whereas the global filesystem can be accessed by any PE.

A PE can be a completely independent computer, such as a PC or workstation with a single CPU, memory, and disk, or it can be part of a shared-memory multiprocessor system. For the purposes of this paper, we assume that the parallel machine provides both global and local logical filesystem storage (possibly on the same physical device). The communication device could be realized, for example, by standard Ethernet, shared memory, or a special-purpose high-speed communication network.
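To make the abstract machine model concrete, the following C sketch shows one way a PE might identify itself, construct file names on its local and global filesystems, and exchange data through the communication device, here realized with MPI. The directory names LOCAL_SCRATCH and GLOBAL_SCRATCH and the file-naming scheme are illustrative assumptions, not part of the implementation described in this paper.

/*
 * Minimal sketch of the abstract parallel machine model, assuming
 * MPI as the communication device. LOCAL_SCRATCH and GLOBAL_SCRATCH
 * are hypothetical paths standing in for the local (PE-private) and
 * global (shared) filesystems.
 */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_SCRATCH  "/tmp"             /* private to each PE (assumed) */
#define GLOBAL_SCRATCH "/global/scratch"  /* visible to all PEs (assumed) */

int main(int argc, char **argv)
{
    int rank, n_pes;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this PE's identity  */
    MPI_Comm_size(MPI_COMM_WORLD, &n_pes);  /* total number of PEs */

    /* Each PE derives a private file name on its local filesystem
       and a shared file name on the global filesystem. */
    char local_file[256], global_file[256];
    snprintf(local_file,  sizeof local_file,
             "%s/pe%04d.tmp", LOCAL_SCRATCH, rank);
    snprintf(global_file, sizeof global_file,
             "%s/shared.tmp", GLOBAL_SCRATCH);

    printf("PE %d of %d: local file %s, global file %s\n",
           rank, n_pes, local_file, global_file);

    /* Communication device: e.g., a broadcast from PE 0 to all PEs. */
    double payload = (rank == 0) ? 42.0 : 0.0;
    MPI_Bcast(&payload, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}

The point of the sketch is only the separation of the three resources that the model assumes each PE can reach: local memory, local and global file storage, and the communication device.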

In the following description of the two algorithms considered here, we make use of the following features of the line databases:



 