Astronomy is sometimes described as a ``passive science'' since it depends on observations of distant objects rather than on laboratory experiments. Thanks both to laboratory measurements of important astrophysical atomic and nuclear data and to advances in computational power that allow us to perform ``numerical experiments,'' that situation has changed in the last 50 years, and astronomy has matured into the modern subject of astrophysics. Still, our ability to understand the nature of astronomical objects is hampered by the fact that astronomical observations detect radiation emitted primarily from the surfaces of objects. Thus, in order to determine the structure of stellar objects, one must solve the radiation transport equation and compare ``synthetic spectra'' with observations. The numerical solution of the radiation transport problem is an important prerequisite for the calculation of model stellar atmospheres. The simulated spectrum is then compared to observed spectra of objects such as stars, where the radiation is mostly emitted from the outer layers. In the case of very low mass stars and brown dwarfs, the atmosphere is also crucial in determining the interior structure of these objects, since it serves as a boundary condition for the equations of stellar structure in a nearly fully convective ``star''.
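For orientation, the transport equation referred to above takes, in the simplest time-independent, static, plane-parallel case, the standard textbook form (the relativistic, expanding-media version treated below is considerably more involved):
\begin{equation}
\mu \frac{\partial I_\nu}{\partial z} = \eta_\nu - \chi_\nu I_\nu = \chi_\nu \left( S_\nu - I_\nu \right),
\end{equation}
where $I_\nu$ is the specific intensity, $\mu$ the cosine of the angle between the ray and the normal, $\chi_\nu$ the extinction coefficient, $\eta_\nu$ the emissivity, and $S_\nu \equiv \eta_\nu / \chi_\nu$ the source function.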
Additionally, in objects such as supernovae, where the explosion causes the ``atmosphere'' (here shorthand for ``the region where the spectrum forms'') to expand rapidly, a time series of spectra reveals the entire structure of the object as the ejected material expands and thins and the atmosphere moves inward through the material. In the case of expanding objects such as hot stars (many with strong stellar winds), novae, and supernovae, the radiation transport equation must be solved simultaneously with the hydrodynamical equations, an even more difficult computational problem than that posed by static stars. We focus here on the computation of model atmospheres and the numerical solution of the radiation transport equation in expanding media with known velocity fields. This is a frequently encountered situation, e.g., when the hydrodynamic behavior is known a priori or can be calculated separately from the radiation transport by using a nested iteration scheme. The feedback between detailed synthetic spectrum calculations and hydrodynamic simulations is often the primary tool for testing a specific hydrodynamical model.
Our group has developed the very general non-LTE (NLTE) stellar atmosphere computer code PHOENIX [1,2,3,4,5,6,7,8,9], which can handle very large model atoms as well as line blanketing by hundreds of millions of atomic and molecular lines. The code is designed to be both portable and very flexible: it is used to compute model atmospheres and synthetic spectra for, e.g., novae, supernovae, M dwarfs and brown dwarfs, O to M giants, white dwarfs, and accretion disks in Active Galactic Nuclei (AGN). The radiative transfer in PHOENIX is solved in spherical geometry and includes the effects of special relativity (including advection and aberration) in the modeling. PHOENIX allows us to include a large number of NLTE and LTE background spectral lines and solves the radiative transfer equation for each of them without using simple approximations such as the Sobolev approximation. Therefore, the profiles of spectral lines must be resolved in the co-moving (Lagrangian) frame. This requires many wavelength points (we typically use 150,000 to 300,000 points). Since the CPU time scales linearly with the number of wavelength points, the CPU time requirements of such calculations are large. In addition, (NLTE) radiative rates for both line and continuum transitions must be calculated and stored at every spatial grid point for each transition, which requires large amounts of storage and can cause significant performance degradation if the corresponding routines are not optimally coded.
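A rough estimate (with an assumed sampling step that is illustrative, not a quantity taken from the code) shows why so many wavelength points are required: resolving line profiles in the co-moving frame demands a roughly constant velocity spacing $\Delta v$ of a few km\,s$^{-1}$, so a logarithmic wavelength grid covering $\lambda_{\min}$ to $\lambda_{\max}$ contains
\begin{equation}
N_\lambda \approx \frac{c}{\Delta v}\,\ln\frac{\lambda_{\max}}{\lambda_{\min}}
\approx \frac{3\times 10^{5}~{\rm km\,s^{-1}}}{5~{\rm km\,s^{-1}}}\,\ln 100
\approx 2.8\times 10^{5}
\end{equation}
points for coverage from 0.1 to 10\,$\mu$m, consistent with the numbers quoted above.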
In strict LTE, the radiation and the matter are assumed to be in equilibrium with each other everywhere throughout the atmosphere, and the source function is given by the Planck function (see below). In NLTE, the radiation is no longer assumed to be in equilibrium with the matter, and hence the full coupling between matter and radiation must be computed in order to obtain the source function.
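Written out (in standard forms; the two-level expression is shown only as the simplest illustration of the coupling), the LTE assumption is
\begin{equation}
S_\nu = B_\nu(T) = \frac{2 h \nu^3}{c^2}
\left[ \exp\!\left(\frac{h\nu}{kT}\right) - 1 \right]^{-1},
\end{equation}
whereas for a two-level atom the NLTE line source function becomes
\begin{equation}
S = (1-\epsilon)\,\bar{J} + \epsilon\, B_\nu(T),
\end{equation}
where $\bar{J}$ is the profile-averaged mean intensity and $\epsilon$ the thermalization parameter. Since $\bar{J}$ is itself a functional of $S$, the problem is nonlocal, which is precisely the matter--radiation coupling described above.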
We concentrate here on the calculation of model atmospheres for expanding media and, in addition, describe some of the most important numerical algorithms used in PHOENIX: the numerical solution of the radiation transport equation, the solution of the non-LTE rate equations, and the parallelization of the code. An important problem in these calculations is to find a consistent solution of the very diverse equations that describe the various physical processes. We have developed a scheme of nested iterations that enables us to decouple many of the variables (e.g., separating the temperature correction procedure from the calculation of the NLTE occupation numbers). This allows us to compute far more detailed stellar atmosphere models than was previously possible. We give an outline of these methods in this paper.
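As an illustration only, the control flow of such a nested iteration might be sketched as follows; all function names, grid sizes, and convergence thresholds here are hypothetical placeholders, not PHOENIX internals:
\begin{verbatim}
/* Minimal sketch of a nested-iteration model-atmosphere driver.
 * All names and thresholds are hypothetical placeholders that
 * illustrate the separation of the temperature correction from
 * the NLTE occupation-number update described in the text. */
#include <stdio.h>

#define NLAYER 64          /* radial grid points (illustrative) */

typedef struct {
    double T[NLAYER];      /* temperature structure                */
    double n[NLAYER];      /* stand-in for NLTE occupation numbers */
} Model;

/* Stubs standing in for the expensive physics modules. */
static double solve_radiative_transfer(Model *m)    { (void)m; return 0.0;  }
static double update_rates_and_occupations(Model *m){ (void)m; return 1e-9; }
static double temperature_correction(Model *m)      { (void)m; return 1e-9; }

int main(void)
{
    Model m = {{0.0}, {0.0}};

    /* Outer loop: correct T(r) toward energy conservation. */
    for (int it = 0; it < 50; ++it) {
        double dn;
        /* Inner loop: with T(r) held fixed, iterate the transfer
         * and rate equations until the occupation numbers are
         * consistent with the radiation field. */
        do {
            solve_radiative_transfer(&m);   /* all wavelengths */
            dn = update_rates_and_occupations(&m);
        } while (dn > 1e-4);

        double dT = temperature_correction(&m);
        printf("iteration %d: max |dT/T| = %g\n", it, dT);
        if (dT < 1e-4) break;               /* converged */
    }
    return 0;
}
\end{verbatim}
The point of the structure is that each level of the nesting holds the variables of the outer levels fixed, so each sub-problem remains well conditioned and can be solved with a method tailored to it.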
In order to take advantage of the enormous computing power and vast aggregate memory of modern parallel supercomputers, which potentially allow both much faster model construction and more sophisticated models, we have developed a parallel version of PHOENIX. Since the code uses a modular design, we have implemented different parallelization strategies for different modules (e.g., radiative transfer, NLTE rates, atomic and molecular line opacities) in order to maximize the total parallel speed-up of the code. In addition, our implementation allows us to change the distribution of computational work across nodes, both via input files and dynamically during a model run, which gives a high degree of flexibility for optimizing performance both for a number of different parallel supercomputers (we currently use IBM SP2s, SGI Origin 2000s, HP/Convex SPP-2000s, and Cray T3Es) and for different model parameters. Since PHOENIX has both large CPU and memory requirements, we have developed the parallel version of the code using a MIMD approach. We use the MPI message passing library [10] for portability, and we simultaneously use both task and data parallelism in order to optimize the parallel speed-up [8,9].
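To make the data-parallel part of this strategy concrete, the sketch below shows one generic MPI pattern of the kind used in such modules: a block decomposition of a long line list among processes when accumulating line opacity, combined by a reduction. The names, numbers, and the simple block decomposition are illustrative assumptions, not the actual PHOENIX work distribution:
\begin{verbatim}
/* Minimal sketch of one data-parallel pattern: distributing a
 * line list among MPI processes and combining partial opacity
 * sums with a reduction. Illustrative only; the real code mixes
 * task and data parallelism across several modules. */
#include <mpi.h>
#include <stdio.h>

#define NLINES 1000000L    /* illustrative line-list size */

/* Stand-in for the opacity contribution of one spectral line
 * at the current wavelength point. */
static double line_opacity(long i) { (void)i; return 1.0e-6; }

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process handles a contiguous block of the line list. */
    long chunk = (NLINES + size - 1) / size;
    long lo = rank * chunk;
    long hi = (lo + chunk < NLINES) ? lo + chunk : NLINES;

    double local = 0.0, total = 0.0;
    for (long i = lo; i < hi; ++i)
        local += line_opacity(i);

    /* Combine the partial sums into the total line opacity,
     * available on every process. */
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);
    if (rank == 0)
        printf("total line opacity: %g\n", total);

    MPI_Finalize();
    return 0;
}
\end{verbatim}
A static block decomposition like this is the simplest choice; the flexibility described above (redistributing work via input files or at run time) corresponds to replacing the fixed \texttt{lo}/\texttt{hi} bounds with a configurable mapping of work units to nodes.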