For the model grid calculations, we use our multi-purpose stellar atmosphere code PHOENIX (version 9.1; fe2nova, parapap, nova_cno, parapap2). In the calculations presented in this paper, we will mostly use the static, plane-parallel LTE mode of PHOENIX, although we will compare LTE results with NLTE calculations to estimate the systematic errors made by LTE analyses. Details of the numerical methods are given in the above references, so we do not repeat the description here. We will, however, describe the changes in the input physics and data compared to our earlier ``base'' and ``Extended'' model grids for very low mass stars.
Both atomic and molecular lines are treated with a direct opacity sampling method. We do not use pre-computed opacity sampling tables, but instead dynamically select the relevant LTE background lines from master line lists at the beginning of each iteration for every model and sum the contribution of every line within a search window to compute the total line opacity at arbitrary wavelength points. The ability to compute the opacity at arbitrary wavelength points is crucial in NLTE calculations, in which the wavelength grid is both irregular and variable from iteration to iteration due to changes in the physical conditions. This approach also allows detailed and depth-dependent line profiles to be used during the iterations.
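The search-window summation described above can be sketched as follows. This is a minimal illustration, not PHOENIX itself: the line centers, strengths, Doppler width, and window size are made-up values, and a pure Doppler (Gaussian) profile stands in for the depth-dependent Voigt profiles used in the real code.

```python
import numpy as np

# Hypothetical line list: center wavelengths (Angstrom) and strengths.
# In PHOENIX these would come from the master atomic/molecular line
# lists; here they are illustration values only.
line_centers = np.array([5000.2, 5000.9, 5001.5, 5003.0])    # Angstrom
line_strengths = np.array([1.0e-3, 5.0e-4, 2.0e-3, 8.0e-4])  # arbitrary units
doppler_width = 0.1                                          # Angstrom

def line_opacity(wavelength, window=1.0):
    """Total line opacity at an arbitrary wavelength point.

    Only lines whose centers lie within `window` Angstrom of the
    requested point contribute, mimicking the search-window summation
    in the text.  A normalized Gaussian replaces the Voigt profile
    for simplicity.
    """
    near = np.abs(line_centers - wavelength) <= window
    dx = (wavelength - line_centers[near]) / doppler_width
    profiles = np.exp(-dx**2) / (np.sqrt(np.pi) * doppler_width)
    return np.sum(line_strengths[near] * profiles)

# Because the opacity is evaluated per wavelength point, the grid may
# be irregular and may change between iterations, as in NLTE runs:
grid = np.array([5000.0, 5000.95, 5002.7])
kappa = [line_opacity(w) for w in grid]
```

Because no pre-tabulated grid is involved, nothing constrains where the wavelength points fall, which is the property the NLTE iterations rely on.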
Although the direct line treatment seems at first glance computationally prohibitive, it can lead to more accurate models. The line-forming regions in cool dwarfs span a huge range of pressures and temperatures, so the line wings form in very different layers than the line cores. The physics of line formation is therefore best modeled by an approach that treats the variation of the line profile and the level excitation as accurately as possible. In addition, it is relatively straightforward to write an efficient computer code for the direct opacity sampling treatment by exploiting the data locality, vectorization, super-scalar execution, and caches found in modern workstations and supercomputers. On many machines, interpolation in the large tables required by the opacity distribution function (ODF) method takes more time per wavelength point than direct opacity sampling, because memory access is slow compared to the execution of floating point instructions. To make this method computationally more efficient, we employ modern numerical techniques, e.g., vectorized and parallelized block algorithms with high data locality [Hauschildt, Baron, & Allard (1997)], and we use high-end workstations or parallel supercomputers for the model calculations.
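The blocked, vectorized accumulation mentioned above can be sketched in a few lines. This is a toy version under our own assumptions (NumPy vectorization standing in for the cited block algorithms, a Gaussian profile, an arbitrary block size); the point is only that processing the line list in contiguous blocks keeps the working set small while the inner profile evaluation stays vectorized.

```python
import numpy as np

def summed_opacity_blocked(wavelengths, centers, strengths, width, block=1024):
    """Accumulate total line opacity on a wavelength grid, block by block.

    The line list is processed in contiguous chunks of `block` lines so
    that the arrays touched in the inner loop stay cache-resident, while
    the profile evaluation over each (wavelength, line) block is a single
    vectorized expression.  The result is independent of the block size.
    """
    kappa = np.zeros_like(wavelengths)
    for start in range(0, centers.size, block):
        c = centers[start:start + block]
        s = strengths[start:start + block]
        # dx has shape (n_wavelengths, block): one Gaussian per line.
        dx = (wavelengths[:, None] - c[None, :]) / width
        kappa += (np.exp(-dx**2) / (np.sqrt(np.pi) * width)) @ s
    return kappa
```

Since the sum over lines is associative, any block size gives the same opacity; the block size is purely a memory-locality tuning knob.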
In the calculations presented in this paper, we have included a constant statistical velocity field, $\xi$, which is treated like a microturbulence. The choice of lines is dictated by whether they are stronger than a threshold $\Gamma \equiv \chi_l/\kappa_c$, where $\chi_l$ is the extinction coefficient of the line at the line center and $\kappa_c$ is the local b-f absorption coefficient. This selects a large number of lines from the master line lists (with 47 million atomic and up to 350 million molecular lines). The profiles of these lines are assumed to be depth-dependent Voigt or Doppler profiles (for very weak lines). Details of the computation of the damping constants and the line profiles are given in [Schweitzer et al. (1996)]. We have verified in test calculations that the details of the line profiles and the threshold $\Gamma$ do not have a significant effect on either the model structure or the synthetic spectra. In addition, we include about 2000 photo-ionization cross sections for atoms and ions [Mathisen (1984); Verner & Yakovlev (1995)].
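The $\Gamma$ selection criterion amounts to a simple elementwise test. In this sketch the threshold value and the input arrays are illustrative placeholders (the paper does not fix them here), but the logic matches the criterion as stated: keep a line if its line-center extinction is at least a given fraction of the local b-f continuum opacity.

```python
import numpy as np

def select_lines(chi_line_center, kappa_bf, gamma_threshold=1e-4):
    """Line selection by the Gamma criterion.

    Gamma = chi_l / kappa_c compares the line-center extinction
    coefficient with the local bound-free absorption coefficient;
    lines with Gamma >= gamma_threshold are kept.  The default
    threshold here is an illustrative value, not the paper's.
    """
    gamma = np.asarray(chi_line_center) / np.asarray(kappa_bf)
    return gamma >= gamma_threshold

# Example: three candidate lines against a unit continuum opacity.
mask = select_lines(np.array([2e-4, 5e-6, 0.3]), np.array([1.0, 1.0, 1.0]))
```

Because $\kappa_c$ is the local continuum opacity, the same line can pass the test in some atmospheric layers and fail it in others, which is why the selection is redone dynamically for every model.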