POT3D is a Fortran code that computes potential field solutions to approximate the solar coronal magnetic field using observed photospheric magnetic fields as a boundary condition. It can be used to generate potential field source surface (PFSS), potential field current sheet (PFCS), and open field (OF) models. It has been (and continues to be) used for numerous studies of coronal structure and dynamics. The code is highly parallelized using MPI and is GPU-accelerated using Fortran standard parallelism (do concurrent) and OpenMP Target for data movement and device selection, along with an option to use the NVIDIA cuSparse library. The HDF5 file format is used for input/output.
POT3D is the potential field solver for the WSA/DCHB model in the CORHEL software suite publicly hosted at the Community Coordinated Modeling Center (CCMC).
A version of POT3D that includes GPU-acceleration with both MPI+OpenACC and MPI+OpenMP was released as part of the Standard Performance Evaluation Corporation's (SPEC) beta version of the SPEChpc(TM) 2021 benchmark suites.
Details of the POT3D code can be found in these publications:
- Variations in Finite Difference Potential Fields.
  Caplan, R.M., Downs, C., Linker, J.A., and Mikic, Z. ApJ 915, 44 (2021)
- From MPI to MPI+OpenACC: Conversion of a legacy FORTRAN PCG solver for the spherical Laplace equation.
  Caplan, R.M., Mikic, Z., and Linker, J.L. arXiv:1709.01126 (2017)
The included build.sh script will take a configuration file and generate a Makefile and build the code.
The folder conf contains example configuration files for various compilers and systems.
We recommend copying the configuration file closest to your setup and then modifying it to conform to your compiler and system (such as HDF5 library paths/flags, compiler flags, etc.).
Given a configuration file conf/my_custom_build.conf, the build script is invoked as:
> ./build.sh ./conf/my_custom_build.conf
After building the code, it can be tested by running the testsuite.
Enter the testsuite directory and run:
./run_test_suite.sh -np=<N>
where <N> is the number of MPI ranks to use.
To see available options, run run_test_suite.sh -h
POT3D uses a namelist in an input text file called pot3d.dat to set all parameters of a run. See the provided pot3d_input_documentation.txt file for details on the various parameter options. For any run, an input 2D data set in HDF5 format is required for the lower radial magnetic field (Br) boundary condition. Examples of this file are contained in the examples and testsuite folders.
To run POT3D, set the desired run parameters in a pot3d.dat text file, then copy or link the pot3d executable into the same directory as pot3d.dat
and run the command:
<MPI_LAUNCHER> -np <N> ./pot3d
where <N> is the total number of MPI ranks to use (typically equal to the number of CPU cores) and <MPI_LAUNCHER> is your MPI run command (e.g. mpiexec, mpirun, ibrun, srun, etc.).
For example: mpiexec -np 1024 ./pot3d
POT3D uses a preconditioned Conjugate Gradient solver with two preconditioner options:
- ifprec=1: Diagonal scaling.
- ifprec=2: Non-overlapping ILU0.
Typically, using ifprec=2 will run POT3D faster than ifprec=1, but it uses much more memory and, for NVIDIA GPUs, requires building with the cuSparse library.
For Intel and AMD GPUs, it is currently not available.
POT3D will auto-detect how it is being built, and may override ifprec to the option best suited for the current build.
For standard cases, one should launch the code such that the number of MPI ranks per node is equal to the number of GPUs per node, e.g.
mpiexec -np <N> --ntasks-per-node 4 ./pot3d
or
mpiexec -np <N> --npersocket 2 ./pot3d
If the cuSparse library option was used to build the code, then ifprec=2 can be set in pot3d.dat.
For Intel GPUs, one must set the following ENV variable before running:
export I_MPI_OFFLOAD=1
For AMD GPUs with the amdflang compiler, currently, the no-gpu-mpi branch must be used.
To estimate how much memory (RAM) is needed for a run, compute:
memory-needed = nr*nt*np*8*13.6/1000/1000/1000 GB
where nr, nt, and np are the chosen problem sizes in the r, theta, and phi dimensions.
Note that this estimate is when using ifprec=1.
If using ifprec=2, the required memory is over 2x higher.
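As a quick sketch, the estimate above can be scripted; the 13.6 factor and the "over 2x" ifprec=2 overhead are taken directly from the text, with the doubling treated only as a lower bound:

```python
def pot3d_memory_gb(nr, nt, np_, ifprec=1):
    """Estimate POT3D RAM usage in GB for an nr x nt x np grid.

    Uses the rule of thumb above: 8 bytes per value times an
    empirical factor of 13.6.  For ifprec=2 the text only says
    "over 2x higher", so doubling gives a lower bound.
    """
    gb = nr * nt * np_ * 8 * 13.6 / 1000 / 1000 / 1000
    return gb if ifprec == 1 else 2.0 * gb

# The bench_tiny case from the benchmarks section (173x361x1171):
print(round(pot3d_memory_gb(173, 361, 1171), 2))  # 7.96
```

This reproduces the memory figures quoted for the benchmark cases below (~7.96 GB for bench_tiny, ~32.62 GB for isc2023).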
Depending on the input parameters, POT3D can have various outputs. Typically, the three components of the potential magnetic field are output as HDF5 files. In every run, the following two text files are output:
- pot3d.out: An output log showing grid information and magnetic energy diagnostics.
- timing.out: Time profile information of the run.
Some useful Python scripts for reading and plotting the POT3D input data, and for reading the output data, can be found in the bin folder.
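For a rough idea of what reading such a file looks like, here is a minimal h5py sketch. The dataset name 'Data' and the shape used here are illustrative assumptions only, not taken from the POT3D documentation; consult the scripts in the bin folder for the actual layout:

```python
import h5py
import numpy as np

# Write a small stand-in 2D "Br" file, then read it back.
# NOTE: the dataset name 'Data' is an assumption for illustration;
# see the Python scripts in bin/ for the layout POT3D actually uses.
with h5py.File('br_example.h5', 'w') as f:
    f.create_dataset('Data', data=np.zeros((181, 361)))

with h5py.File('br_example.h5', 'r') as f:
    br = f['Data'][:]

print(br.shape)  # (181, 361)
```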
In the benchmarks folder, we provide large cases of various sizes that can be used to benchmark the performance of POT3D.
The following is a list of the included benchmark runs, their problem size, and their memory requirements:
bench_tiny
- Grid size: 173x361x1171 = 73.2 million cells
- Memory (RAM) needed (using ifprec=1): ~7.96 GB

isc2023
- Grid size: 325x450x2050 = 299.8 million cells
- Memory (RAM) needed (using ifprec=1): ~32.62 GB
In the testsuite folder, we provide the following example runs of POT3D:
/potential_field_source_surface
A standard PFSS run with a source surface radius of 2.5 Rs.

/potential_field_current_sheet
A standard PFCS run using the outer boundary of the PFSS example as its inner boundary condition, with a domain that extends to 30 Rs. The magnetic field solution produced is unsigned.

/open_field
An example of computing the "open field" model from the solar surface out to 20 Rs using the same input surface Br as the PFSS example. The magnetic field solution produced is unsigned.
Each test case contains an input folder with the run input files, a run folder used to run the test, and a reference folder containing the output diagnostics used to validate the test.
The validation is done with the magnetic energy diagnostics in the pot3d.out file.
Note that, currently, all tests are set to use ifprec=1 only.
To run the testsuite, use the included script run_test_suite.sh
To see available options, run run_test_suite.sh -h
