We have discussed the methods that we have implemented in `PHOENIX` to parallelize the calculation of wavelength-dependent spectra (for
both spectral synthesis and model atmosphere generation). While the
algorithms are simple in the case of static stellar atmospheres,
for moving atmospheres, e.g., the expanding atmospheres of novae
and supernovae or stellar winds, the radiative transfer equation is
coupled between different wavelengths. Therefore, we have developed a
``pipelined'' approach that is used in expanding atmosphere models to
parallelize the spectrum calculation. Combined with the ``spatial''
and ``line'' data and task parallelization reported in paper I, this
new parallelization option can dramatically increase the speed of very
detailed and sophisticated NLTE and LTE stellar atmosphere calculations
with `PHOENIX`. The parallelization has become a standard feature of
the production version of `PHOENIX`, and we routinely use all three
parallelization options simultaneously to calculate model atmospheres
for a large variety of objects, from brown dwarfs and M dwarfs to novae and
supernovae on parallel supercomputers. This has drastically increased
our productivity with a comparatively small time and coding investment.
It also forms the basis for the much larger calculations that will be
required to properly analyze the greatly improved data that can be expected
from future ground- and space-based observatories.
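The timing behavior of such a pipeline can be illustrated with a minimal sketch. The model below is not PHOENIX code and all names are hypothetical; it only assumes what the text states: at each wavelength point there is some work that is independent of other wavelengths (here called `setup`, e.g. opacities) and a transfer step that must wait for the result of the previous wavelength point, so the clusters overlap the independent work while the coupled step proceeds point by point.

```python
def pipeline_finish_time(n_wl, n_clusters, setup=4.0, transfer=1.0):
    """Toy timing model of a pipelined wavelength calculation.

    Each wavelength point costs `setup` units of wavelength-independent
    work plus `transfer` units that depend on the previous point's
    transfer result.  Points are assigned round-robin to the clusters.
    Returns the total wall-clock time in these units.
    """
    transfer_done = 0.0                 # time the coupled step last finished
    cluster_free = [0.0] * n_clusters   # time each cluster becomes idle
    for i in range(n_wl):
        k = i % n_clusters              # cluster handling wavelength point i
        setup_done = cluster_free[k] + setup
        # the coupled step needs both its own setup and point i-1's result:
        transfer_done = max(transfer_done, setup_done) + transfer
        cluster_free[k] = transfer_done
    return transfer_done
```

With one cluster this reduces to the serial cost `n_wl * (setup + transfer)`; with many clusters the setup work is hidden and the total time approaches `setup + n_wl * transfer`, i.e., only the intrinsically sequential coupled step remains on the critical path. This is the qualitative behavior one expects from the pipelined scheme.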

Our wavelength parallelization builds on the methods described in paper
I by grouping a number of worker nodes (which employ the task and data
parallel algorithms discussed in paper I) into symmetric ``wavelength
clusters'' that work on different wavelengths and communicate
results between them as necessary. This scheme is relatively simple
to implement using the `MPI` standard and can be used on all parallel
computers, both distributed and shared-memory systems (including clusters
of workstations). It minimizes communication and allows us to tailor
the code's memory usage to the memory available on each individual node.
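The grouping itself is the kind of mapping that `MPI_Comm_split` provides: each global rank is assigned a cluster (the ``color'') and a rank within that cluster. The following is a minimal Python sketch of that bookkeeping, not PHOENIX code; the function name and the contiguous-block assignment are illustrative assumptions.

```python
def split_into_clusters(n_workers, n_clusters):
    """Map each global worker rank to (cluster_id, local_rank).

    Mimics an MPI_Comm_split with contiguous blocks of ranks per
    cluster; clusters are symmetric, i.e., all have the same size.
    Workers within a cluster run the task/data-parallel algorithms,
    while matching local ranks of neighboring clusters exchange
    wavelength results.
    """
    assert n_workers % n_clusters == 0, "symmetric clusters need equal sizes"
    size = n_workers // n_clusters
    return {r: (r // size, r % size) for r in range(n_workers)}
```

For example, 8 workers split into 4 clusters gives clusters of size 2, with global rank 5 becoming local rank 1 of cluster 2; inter-cluster communication then pairs it with local rank 1 of clusters 1 and 3.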

The behavior of the wavelength parallelization is easily understood,
and the measured speedups are as expected. The parallel scalability of
`PHOENIX` is comparable to, or even better than, that of many commercially
available scientific applications. The potential of parallel computing
for stellar atmosphere modeling is enormous, both in terms of problem
size and speed of model construction. The aggregate memory and computing power of
parallel supercomputers can be used to create extremely detailed models
that are impossible to calculate on vector supercomputers or workstations.

*Acknowledgments:*
We thank the referee, John Castor, for comments that greatly improved
the manuscript. We also thank David Lowenthal for helpful
discussions on parallel computing. This work was supported in part by
NASA ATP grant NAG 5-3018 and LTSA grant NAG 5-3619 to the University
of Georgia, and by NSF grant AST-9417242, NASA grant NAG5-3505 and an
IBM SUR grant to the University of Oklahoma. Some of the calculations
presented in this paper were performed on the IBM SP2 of the UGA UCNS,
at the San Diego Supercomputer Center (SDSC), the Cornell Theory Center
(CTC), and at the National Center for Supercomputing Applications (NCSA),
with support from the National Science Foundation, and at the NERSC with
support from the DoE. We thank all these institutions for a generous
allocation of computer time.