POT3D: High Performance Potential Field Solver

Predictive Science Inc.

OVERVIEW

POT3D is a Fortran code that computes potential field solutions to approximate the solar coronal magnetic field using observed photospheric magnetic fields as a boundary condition. It can be used to generate potential field source surface (PFSS), potential field current sheet (PFCS), and open field (OF) models. It has been (and continues to be) used for numerous studies of coronal structure and dynamics. The code is highly parallelized using MPI and is GPU-accelerated using Fortran standard parallelism (do concurrent) and OpenMP Target for data movement and device selection, along with an option to use the NVIDIA cuSparse library. The HDF5 file format is used for input/output.

POT3D is the potential field solver for the WSA/DCHB model in the CORHEL software suite publicly hosted at the Community Coordinated Modeling Center (CCMC).
A version of POT3D that includes GPU-acceleration with both MPI+OpenACC and MPI+OpenMP was released as part of the Standard Performance Evaluation Corporation's (SPEC) beta version of the SPEChpc(TM) 2021 benchmark suites.

Details of the POT3D code can be found in these publications:

  • Variations in Finite Difference Potential Fields.
    Caplan, R.M., Downs, C., Linker, J.A., and Mikic, Z. Ap.J. 915, 44 (2021)
  • From MPI to MPI+OpenACC: Conversion of a legacy FORTRAN PCG solver for the spherical Laplace equation.
    Caplan, R.M., Mikic, Z., and Linker, J.A. arXiv:1709.01126 (2017)

HOW TO BUILD POT3D

The included build.sh script takes a configuration file, generates a Makefile, and builds the code.
The folder conf contains example configuration files for various compilers and systems.
We recommend copying the configuration file closest to your setup and then modifying it to conform to your compiler and system (such as HDF5 library paths/flags, compiler flags, etc.).

Given a configuration file conf/my_custom_build.conf, the build script is invoked as:

> ./build.sh ./conf/my_custom_build.conf
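
For example, a typical workflow is (the starting file name below is a placeholder; pick whichever file in conf is closest to your system):

> cp ./conf/<closest_system>.conf ./conf/my_custom_build.conf
> (edit ./conf/my_custom_build.conf to set HDF5 library paths/flags, compiler flags, etc.)
> ./build.sh ./conf/my_custom_build.conf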

Validate Installation

After building the code, it can be tested by running the testsuite.
Enter the testsuite directory and run:
./run_test_suite.sh -np=<N>
where <N> is the number of MPI ranks to use.
To see available options, run: ./run_test_suite.sh -h


HOW TO USE POT3D

Setting Input Options

POT3D uses a namelist in an input text file called pot3d.dat to set all parameters of a run. See the provided pot3d_input_documentation.txt file for details on the various parameter options. For any run, an input 2D data set in HDF5 format is required for the lower radial magnetic field (Br) boundary condition. Examples of this file are contained in the examples and testsuite folders.
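
As a minimal sketch, a pot3d.dat file has the following shape. The namelist group name and parameter names here (other than ifprec) are illustrative only; consult pot3d_input_documentation.txt for the actual names, options, and defaults:

&example_namelist
  nr      = 173           ! illustrative grid sizes in r, theta, and phi
  nt      = 361
  np      = 1171
  br_file = 'br_input.h5' ! illustrative name for the lower Br boundary file
  ifprec  = 1             ! preconditioner choice (see Solver Options below)
/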

Launching the Code

To run POT3D, set the desired run parameters in a pot3d.dat text file, then copy or link the pot3d executable into the same directory as pot3d.dat and run the command:
<MPI_LAUNCHER> -np <N> ./pot3d
where <N> is the total number of MPI ranks to use (typically equal to the number of CPU cores) and <MPI_LAUNCHER> is your MPI run command (e.g., mpiexec, mpirun, ibrun, srun, etc.).
For example: mpiexec -np 1024 ./pot3d

Solver Options

POT3D uses a preconditioned Conjugate Gradient solver with two preconditioner options:

  1. ifprec=1: Diagonal scaling.
  2. ifprec=2: Non-overlapping ILU0.
    Typically, ifprec=2 runs POT3D faster than ifprec=1, but it uses much more memory and, for NVIDIA GPUs, requires building with the cuSparse library.
    It is currently not available for Intel and AMD GPUs.
    POT3D auto-detects how it was built and may override ifprec to the option best suited for the current build.

Running POT3D on GPUs

For standard cases, one should launch the code such that the number of MPI ranks per node is equal to the number of GPUs per node (the exact flags depend on your MPI launcher), e.g.:
mpiexec -np <N> --ntasks-per-node 4 ./pot3d
or
mpiexec -np <N> --npersocket 2 ./pot3d

If the cuSparse library option was used to build the code, then ifprec=2 can be set in pot3d.dat.

For Intel GPUs, one must set the following environment variable before running:
export I_MPI_OFFLOAD=1

For AMD GPUs with the amdflang compiler, the no-gpu-mpi branch must currently be used.

Memory Requirements

To estimate how much memory (RAM) is needed for a run, compute:

memory-needed = nr*nt*np*8*13.6/1000/1000/1000 GB

where nr, nt, and np are the chosen problem sizes in the r, theta, and phi dimensions.
Note that this estimate applies when using ifprec=1.
If using ifprec=2, the required memory is over 2x higher.
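
For example, the bench_tiny benchmark below (173x361x1171, about 73.2 million cells) needs roughly 73.2e6*8*13.6/10^9 ≈ 7.96 GB of RAM with ifprec=1, and over 16 GB with ifprec=2.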

Solution Output

Depending on the input parameters, POT3D can produce various outputs. Typically, the three components of the potential magnetic field are output as HDF5 files. In every run, the following two text files are output:

  • pot3d.out: An output log showing grid information and magnetic energy diagnostics.
  • timing.out: Timing profile information for the run.

Helpful Scripts

Some useful Python scripts for reading and plotting POT3D's input data, and for reading its output data, can be found in the bin folder.
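
If you prefer not to use the bundled scripts, the 2D input data can also be inspected directly with h5py. The following is a minimal sketch; the file name and dataset name are assumptions, so list the file's keys first to discover the actual layout:

import h5py

# Open a POT3D 2D Br boundary file (file name is an example).
with h5py.File('br_input.h5', 'r') as f:
    print(list(f.keys()))   # discover the dataset names actually present
    br = f['Data'][:]       # read the 2D Br array ('Data' is an assumed name)
    print(br.shape, br.min(), br.max())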


BENCHMARKS AND TESTSUITE

Benchmarks

In the benchmarks folder, we provide large cases of various sizes that can be used to benchmark the performance of POT3D.

The following is a list of the included benchmark runs, their problem size, and their memory requirements:

  1. bench_tiny
    Grid size: 173x361x1171 = 73.2 million cells
    Memory (RAM) needed (using ifprec=1): ~7.96 GB
  2. isc2023
    Grid size: 325x450x2050 = 299.8 million cells
    Memory (RAM) needed (using ifprec=1): ~32.62 GB

Testsuite

In the testsuite folder, we provide the following example runs of POT3D:

  1. /potential_field_source_surface
    A standard PFSS run with a source surface radius of 2.5 Rs.
  2. /potential_field_current_sheet
    A standard PFCS run using the outer boundary of the PFSS example as its inner boundary condition, with a domain that extends to 30 Rs. The magnetic field solution produced is unsigned.
  3. /open_field
    An example of computing the "open field" model from the solar surface out to 20 Rs using the same input surface Br as the PFSS example. The magnetic field solution produced is unsigned.

Each test case contains an input folder with the run input files, a run folder used to run the test, and a reference folder containing the output diagnostics used to validate the test.
The validation is done with the magnetic energy diagnostics in the pot3d.out file.
Note that, currently, all tests are set to use ifprec=1 only.

To run the testsuite, use the included script run_test_suite.sh. To see available options, run: ./run_test_suite.sh -h