Grids
General Considerations
The OpenGGCM numerical MHD grid is of the type "stretched cartesian", meaning that it is an orthogonal grid with variable spacing in each coordinate direction. This allows the grid to be adapted to the anticipated solution: grid cells can be made smaller in areas where higher resolution is desired, and larger in regions where coarse resolution is sufficient, in particular the distant tail and the solar wind regions outside of the tail. Although this strategy requires more total cells than AMR (Adaptive Mesh Refinement) to achieve a given resolution, it has a number of advantages:
- no computational overhead with coarsening/refinement
- very good load balancing
- no artificial boundaries in the computing domain
- very little overhead compared to an equidistant grid
- only requires three 1d arrays to store the grid coordinates (common notation: gx(1..nx), gy(1..ny), gz(1..nz))
- still large savings over an equidistant cartesian grid (by a factor of 100-1000)
- no singular points as with cylindrical/spherical grids
In each coordinate direction the code requires x(i) and (d/di)(x(i)) where i (the grid index) is considered a continuous variable.
In practice it is more convenient to specify dx(x), i.e., the cell size as a function of space, as opposed to a function of grid index. From a specification of dx(x) an integration is needed to obtain x(i) and (d/di)(x(i)).
For the x-coordinate, OpenGGCM uses the approach of specifying dx(x) and integrating numerically to obtain x(i) and (d/di)(x(i)).
For the y/z-coordinates, y(iy) (outlined for y here) is specified by the analytic function gy(iy) = dy*iy*(1.0 + alpha*(iy**beta))**gamma, where alpha, beta, and gamma are appropriate constants that depend on the grid size and the number of grid points, and iy is counted from the grid center. gy(iy) is designed to provide the highest resolution at the center, with a flat part where (d/diy)(y(iy)) is approximately dy, and (d/diy)(y(iy)) increasing towards either end. Note that decreasing dy makes the cells smaller in the center but also shrinks the area to which the smaller cell size applies; to keep that area constant one must also increase the number of grid points in that direction (often dramatically).

For the x-coordinate the function dx(x) is constructed as:
dx(x) = fak * ( hm + (h0-hm)*tanhm(b1*(x-x1)) + (hn-hm)*tanhp(b2*(x-x5))
                   - (hm-hmm)*(1 - tanhm(b3*(x-(x3-dmm))) - tanhp(b3*(x-(x3+dmm)))) )

where tanhm(x) = (1-tanh(x))/2 and tanhp(x) = (1+tanh(x))/2.
The value of fak is determined so as to match the length of the box; the other constants define the shape of the function. There is no way to specify the minimum resolution a priori, but the pre-compilation step will report it.
The shape of the dx curve is the sum of a constant value (hm), a negative tanh step (height h0-hm, width controlled by b1, centered at x1), a positive tanh step (height hn-hm, width controlled by b2, centered at x5), and a dip formed by a pair of tanh curves (depth hm-hmm, width controlled by b3, with the negative curve centered at x3-dmm and the positive curve centered at x3+dmm).
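As an illustration, the dx(x) shape and the integration to x(i) can be sketched as follows. This is a simplified, hypothetical reconstruction, not the actual OpenGGCM pre-compiler: the parameter names (h0, hn, hm, hmm, b1, b2, b3, x1, x3, x5, dmm) follow the formula above, but the handling of fak is approximated by a single linear rescaling after integration.

```python
import math

def tanhm(x):  # smooth step from 1 down to 0
    return (1.0 - math.tanh(x)) / 2.0

def tanhp(x):  # smooth step from 0 up to 1
    return (1.0 + math.tanh(x)) / 2.0

def dx_shape(x, h0, hn, hm, hmm, b1, b2, b3, x1, x3, x5, dmm):
    """Cell size as a function of x (before scaling by fak)."""
    return (hm
            + (h0 - hm) * tanhm(b1 * (x - x1))
            + (hn - hm) * tanhp(b2 * (x - x5))
            - (hm - hmm) * (1.0
                            - tanhm(b3 * (x - (x3 - dmm)))
                            - tanhp(b3 * (x - (x3 + dmm)))))

def build_xgrid(nx, x0, xn, **params):
    """Integrate dx(x) starting at x0, then rescale so x(nx) == xn."""
    # crude explicit integration: x_{i+1} = x_i + dx(x_i)
    xs = [x0]
    for _ in range(nx - 1):
        xs.append(xs[-1] + dx_shape(xs[-1], **params))
    # fak rescales the whole grid to match the requested box length
    fak = (xn - x0) / (xs[-1] - x0)
    return [x0 + fak * (x - x0) for x in xs]

# constants taken from the FCNG example grid below
p = dict(h0=2.0, hn=43.0, hm=1.7, hmm=0.70,
         b1=0.15, b2=0.025, b3=0.3,
         x1=-26.0, x3=10.0, x5=80.0, dmm=8.0)
gx = build_xgrid(455, -24.01, 350.01, **p)
print(len(gx), gx[0], gx[-1])
```

Since dx(x) stays positive for these constants, the resulting coordinates are strictly increasing; the minimum cell size (near x3) only falls out of the construction afterwards, which is why the text notes it cannot be specified a priori.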
Restrictions and performance considerations
The number of grid points in each coordinate direction must be divisible by the number of processors in that direction; for example, NX/NPX must be an integer. Furthermore, NY/NPY and NZ/NPZ must themselves be divisible by 2. Pre-compilation will fail if this is not the case.
The leading dimension of the F77 arrays is NL=((NX/NPX)+4). Most processors, including IA32, IA64, MIPS, and Power3/4 (that is, Intel, AMD, SGI, and IBM), suffer a significant performance drop if NL is divisible by 2, and an even worse one if NL is divisible by 16. This is due to "cache thrashing". A safe choice for NX is therefore NPX times an odd number.
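The constraints above can be collected into a small pre-flight check. This is an illustrative sketch, not the actual pre-compilation logic; the function name and messages are made up here.

```python
def check_grid(nx, ny, nz, npx, npy, npz):
    """Return a list of problems with a proposed grid/processor layout."""
    problems = []
    # each dimension must split evenly across processors
    if nx % npx: problems.append("NX not divisible by NPX")
    if ny % npy: problems.append("NY not divisible by NPY")
    if nz % npz: problems.append("NZ not divisible by NPZ")
    # the per-processor y/z counts must additionally be even
    if ny % npy == 0 and (ny // npy) % 2: problems.append("NY/NPY not divisible by 2")
    if nz % npz == 0 and (nz // npz) % 2: problems.append("NZ/NPZ not divisible by 2")
    # leading F77 array dimension; even values risk cache thrashing,
    # multiples of 16 are worst, so prefer NX/NPX odd
    nl = nx // npx + 4
    if nl % 16 == 0:
        problems.append("NL divisible by 16: severe cache thrashing risk")
    elif nl % 2 == 0:
        problems.append("NL divisible by 2: cache thrashing risk")
    return problems

# FCNG example grid: 455/7 = 65 (odd), so NL = 69 is odd -> no warnings
print(check_grid(455, 120, 120, 7, 4, 4))
```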
Example grids
The following are examples of how to specify grids in the "runme" files. These examples should cover most cases, and where they don't they are good starting points. Note that the resolution in the x-coordinate is simply proportional to NX. In the y/z directions, increasing NY and/or NZ widens the high-resolution area, whereas lowering DY0/DZ0 narrows it. The example grids are labelled with 4-letter codes for use by the CCMC. Note that you will probably need to change NPX/NPY/NPZ depending on your processor configuration, within the above constraints.
Small grids. Useful for some runs but generally not sufficient resolution for substorms or FTEs:
- (FCNG) fairly high cost, no specific region emphasis, 455 x 120 x 120 = 6,552,000 cells
npx 7        # number of processors in x
npy 4        # number of processors in y
npz 4        # number of processors in z
nx 455       # number of cells in x
ny 120       # number of cells in y
nz 120       # number of cells in z
dy0 0.25     # minimum cell size in y
dz0 0.25     # minimum cell size in z
xx1 -24.01   # sunward extension of box
xx2 350.01   # anti-sunward extension of box
yy2 48.01    # box size in +-y
zz2 48.01    # box size in +-z
ggx1 $v="-n $NX -x0 $XX1 -xn $XX2 "   # definition and constants for x-grid
ggx2 $v="-x1 -26 -x3 10 -dmm 8 -x5 80 "
ggx3 $v="-h0 2.0 -hn 43 -hm 1.7 -hmm 0.70 "
ggx4 $v="-b1 0.15 -b2 0.025 -b3 0.3 "
- (FCMP) fairly high cost, emphasis on dayside magnetopause, 455 x 120 x 120 = 6,552,000 cells
npx 7        # number of processors in x
npy 4        # number of processors in y
npz 4        # number of processors in z
nx 455       # number of cells in x
ny 120       # number of cells in y
nz 120       # number of cells in z
dy0 0.25     # minimum cell size in y
dz0 0.25     # minimum cell size in z
xx1 -24.01   # sunward extension of box
xx2 350.01   # anti-sunward extension of box
yy2 48.01    # box size in +-y
zz2 48.01    # box size in +-z
ggx1 $v="-n $NX -x0 $XX1 -xn $XX2 "   # definition and constants for x-grid
ggx2 $v="-x1 -26 -x3 -4 -dmm 8 -x5 50 "
ggx3 $v="-h0 2.0 -hn 43 -hm 1.7 -hmm 0.35 "
ggx4 $v="-b1 0.15 -b2 0.025 -b3 0.3 "
- (FCMT) fairly high cost, emphasis on near- and mid-tail, 426 x 144 x 144 = 8,833,536 cells
npx 6        # number of processors in x
npy 4        # number of processors in y
npz 4        # number of processors in z
nx 426       # number of cells in x
ny 144       # number of cells in y
nz 144       # number of cells in z
dy0 0.40     # minimum cell size in y
dz0 0.25     # minimum cell size in z
xx1 -24.01   # sunward extension of box
xx2 350.01   # anti-sunward extension of box
yy2 48.01    # box size in +-y
zz2 48.01    # box size in +-z
ggx1 $v="-n $NX -x0 $XX1 -xn $XX2 "   # definition and constants for x-grid
ggx2 $v="-x1 -10 -x3 40 -dmm 16 -x5 70 "
ggx3 $v="-h0 1.0 -hn 5 -hm 0.60 -hmm 0.60 "
ggx4 $v="-b1 0.15 -b2 0.075 -b3 0.3 "
Large grids. Useful and/or needed for runs that target mesoscale phenomena (FTEs, boundaries) or low diffusion (substorms):
- (HCFL) High cost, emphasis on dayside and flanks, 522 x 240 x 240 = 30,067,200 cells, dx_min=0.1 RE
npx 6        # number of processors in x
npy 1        # number of processors in y
npz 1        # number of processors in z
nx 522       # number of cells in x
ny 240       # number of cells in y
nz 240       # number of cells in z
dy0 0.25     # minimum cell size in y
dz0 0.25     # minimum cell size in z
xx1 -24.01   # sunward extension of box
xx2 350.01   # anti-sunward extension of box
yy2 48.01    # box size in +-y
zz2 48.01    # box size in +-z
ggx1 $v="-n $NX -x0 $XX1 -xn $XX2 "   # definition and constants for x-grid
ggx2 $v="-x1 -20 -x3 0 -dmm 16 -x5 30 "
ggx3 $v="-h0 2.0 -hn 5 -hm 0.30 -hmm 0.30 "
ggx4 $v="-b1 0.15 -b2 0.075 -b3 0.3 "
- (SUB1) for substorm runs, 630 x 200 x 300 = 37.8M cells
npx 14
npy 2
npz 2
nx 630
ny 200
nz 300
dy0 0.25
dz0 0.16
xx1 -20.01
xx2 500.01
yy2 36.01
zz2 36.01
ggx1 $v="-n $NX -x0 $XX1 -xn $XX2 "
ggx2 $v="-x1 -26 -x3 -10 -dmm 8 -x5 80 "
ggx3 $v="-h0 2.0 -hn 43 -hm 1.0 -hmm 1.00 "
ggx4 $v="-b1 0.15 -b2 0.100 -b3 0.3 "
- (FTE1) grid for FTE studies, 510 x 350 x 300 = 53.55M cells
npx 10
npy 5
npz 2
nx 510
ny 350
nz 300
dy0 0.18
dz0 0.18
xx1 -18.01
xx2 300.01
yy2 38.01
zz2 38.01
ggx1 $v="-n $NX -x0 $XX1 -xn $XX2 "
ggx2 $v="-x1 -12 -x3 15 -dmm 8 -x5 50 "
ggx3 $v="-h0 2.0 -hn 43 -hm 1.7 -hmm 0.5 "
ggx4 $v="-b1 0.15 -b2 0.025 -b3 0.3 "
Comments on optimizing grids, and a red flag
- There is no need to simulate a large stretch of solar wind propagation. In most cases "xx1" can be closer to Earth, say "-18.01" for common SW conditions. The only restriction is that the bow shock (BS) must not touch the inflow boundary. Even if the BS touches the boundary and then recedes, the code will not crash, but it will likely give bad results for that period. Moving xx1 closer to Earth saves significant resources because NX can be reduced while keeping the same resolution.
- The BS standoff distance depends only on the SW magnetosonic Mach number Mms. As Mms decreases the shock moves out, and, of course, for Mms=1 it goes to infinity. The code is not designed to handle a sub-magnetosonic inflow condition.
- Many SW monitors, in particular ACE, underestimate the SW density when the SW speed is either very high or very low. That leads to unrealistically low Mms. It is probably safe in many cases to floor the density or to add to it. SW density below 2 cm**-3 is very rare! Such low values should be viewed with suspicion, and the instrument PI should be contacted to find out whether they are real. In many cases (especially during storms) the SW measurements are contaminated by SEPs, and only measurements provided directly by the PI should be trusted. One should also be aware of the SW alpha particle content and raise the SW density accordingly, provided those measurements are available.
- The y/z dimensions in the examples are also on the safe side. In many cases yy2/zz2=35 is sufficient. However, not many grid cells are saved by making the box narrower, because the resolution near the y/z boundaries is already coarse.
- In cases where the tail becomes nearly closed and dense, the back boundary may become subsonic and unstable. Most of the time this leads to a "front" that propagates sunward from the back boundary and may spoil the solution. In that case one needs to rerun with a more distant back boundary, say xx2=1000. Since the resolution in the back of the tail is very coarse, this does not cost much.
- Adding 0.01 to an integer box size ensures that interpolation at that point does not fall outside the box.
- When making runs with very high resolution (say less than 0.1 RE) the grid coordinate information is written to the "$RUN.grid" and "$RUN.smf" files possibly with insufficient precision. If the high-precision coordinates are needed they can be extracted from the "gridx.$RUN.f" file. The CCMC can provide these files on request.
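As a rough check of the Mms values discussed above, the magnetosonic Mach number can be estimated from the usual MHD wave speeds. This is a generic physics sketch, not OpenGGCM code; the perpendicular fast speed sqrt(cs^2 + vA^2), equal electron and proton temperatures (the factor of 2 in the pressure), and gamma=5/3 are assumptions made here.

```python
import math

MU0 = 4.0e-7 * math.pi      # vacuum permeability [H/m]
K_B = 1.380649e-23          # Boltzmann constant [J/K]
M_P = 1.67262192e-27        # proton mass [kg]

def mach_ms(n_cc, v_kms, b_nt, t_k, gamma=5.0 / 3.0):
    """Magnetosonic Mach number from SW density [cm^-3], speed [km/s],
    field magnitude [nT], and temperature [K] (protons only, Te = Tp)."""
    rho = n_cc * 1.0e6 * M_P                 # mass density [kg/m^3]
    p = 2.0 * n_cc * 1.0e6 * K_B * t_k       # thermal pressure, e- + p+
    cs = math.sqrt(gamma * p / rho)          # sound speed
    va = (b_nt * 1.0e-9) / math.sqrt(MU0 * rho)   # Alfven speed
    vms = math.sqrt(cs * cs + va * va)       # perpendicular fast speed
    return (v_kms * 1.0e3) / vms

# typical solar wind: n=5/cc, v=400 km/s, B=5 nT, T=1e5 K
print(mach_ms(5.0, 400.0, 5.0, 1.0e5))
```

Dropping the density from 5 to 2 cm**-3 with the other values fixed noticeably lowers Mms, which illustrates why an underestimated SW density from a monitor can push the bow shock unrealistically far out.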