
Commit 158f2aa

Authored by: glesur, volodia99, pre-commit-ci-lite[bot], neutrinoceros
V2.1.00 (#223)
* fix Planet::RegisterInDump (#229)
* fix segFault with gridCoarsening+MHD when DIMENSIONS==2 (#230)
* add a comment on how to disable OMPIO on openmpi in the documentation
* Update faq.rst with known MacOs linker bug
* Fix parabolic MHD+coarsening (#233)
* adding a new test for grid Coarsening
* fix currents used with grid Coarsening
* reconstruct BX2s only on the axis (#234)
* update version & changelog
* dumpimage that can work on distributed data
* use correct bound for userdef slices
* vtk slice automatically handles all of the variables
* fix potential segfault in dumpimage
* produce slices when emergency outputs
* add compile time info
* force disable Kokkos_ENABLE_IMPL_CUDA_MALLOC_ASYNC in tests as it is now enabled by default in Kokkos > 4.1 (kokkos/kokkos@ac23f36)
* Bump up to Kokkos 4.3.1

Co-authored-by: volodia99 <gaylor.wafflard@univ-grenoble-alpes.fr>
Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com>
Co-authored-by: Clément Robert <cr52@protonmail.com>
1 parent: 18c2b73

18 files changed: 374 additions & 240 deletions


CHANGELOG.md

Lines changed: 11 additions & 0 deletions

```diff
@@ -4,6 +4,17 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [2.1.0] 2024-05-10
+### Changed
+- VTK slices are automatically produced along with standard VTK when an emergency abort is triggered.
+- Bump up Kokkos version to 4.3.01
+- Add compile time info on startup
+
+### Added
+- DumpImage can load dump and distribute dump in memory, following the domain decomposition used in the current simulation
+- Vtk slices now show all of the fluids (not only hydro)
+
+
 ## [2.0.5] 2024-03-29
 ### Added
 - implement per-fluid flux boundary conditions (experimental) following a suggestion from Paul Segretain (#235)
```

CMakeLists.txt

Lines changed: 3 additions & 3 deletions

```diff
@@ -3,10 +3,10 @@ set(CMAKE_BUILD_TYPE Release)
 set (CMAKE_CXX_STANDARD 17)
 
 set(Idefix_VERSION_MAJOR 2)
-set(Idefix_VERSION_MINOR 0)
-set(Idefix_VERSION_PATCH 05)
+set(Idefix_VERSION_MINOR 1)
+set(Idefix_VERSION_PATCH 00)
 
-project (idefix VERSION 2.0.05)
+project (idefix VERSION 2.1.00)
 option(Idefix_MHD "enable MHD" OFF)
 option(Idefix_MPI "enable Message Passing Interface parallelisation" OFF)
 option(Idefix_HIGH_ORDER_FARGO "Force Fargo to use a PPM reconstruction scheme" OFF)
```

doc/python_requirements.txt

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@
 # Doxygen https://www.doxygen.nl/download.html
 #
 # Install Sphinx and required plugins with the following command
-# python -m pip install -r python-requirements.txt
+# python -m pip install -r python_requirements.txt
 
 wheel>=0.38.4 # help forward compatibility for pip with old sphinx plugins
 sphinx==5.3.0
```

doc/source/conf.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -23,7 +23,7 @@
 author = 'Geoffroy Lesur'
 
 # The full version, including alpha/beta/rc tags
-release = '2.0.05'
+release = '2.1.00'
 
 
 
```
doc/source/faq.rst

Lines changed: 0 additions & 3 deletions

```diff
@@ -61,9 +61,6 @@ How can I stop the code without loosing the current calculation?
 I'm doing performance measures. How do I disable all outputs in *Idefix*?
   Add ``-nowrite`` when you call *Idefix* executable.
 
-I sometimes get incoherent values in my VTK files when running Idefix with OpenMPI>4, but the code seems to be running fine.
-  This is probably a bug ot the MPI library for some of the MPI I/O calls used by the VTK class. We recommend passing ``--mca io ^ompio`` to mpirun to avoid
-  this problem which seems to be a bug of the OMPIO layer of OpenMPI.
 
 Developement
 ------------
```

pytools/idfx_test.py

Lines changed: 2 additions & 0 deletions

```diff
@@ -116,6 +116,8 @@ def configure(self,definitionFile=""):
             comm.append("-DKokkos_ENABLE_CUDA=ON")
             # disable fmad operations on Cuda to make it compatible with CPU arithmetics
             comm.append("-DIdefix_CXX_FLAGS=--fmad=false")
+            # disable Async cuda malloc for tests performed on old UCX implementations
+            comm.append("-DKokkos_ENABLE_IMPL_CUDA_MALLOC_ASYNC=OFF")
 
         if self.hip:
             comm.append("-DKokkos_ENABLE_HIP=ON")
```

src/input.cpp

Lines changed: 1 addition & 0 deletions

```diff
@@ -420,4 +420,5 @@ void Input::PrintOptions() {
 void Input::PrintVersion() {
   idfx::cout << " Idefix version " << IDEFIX_VERSION << std::endl;
   idfx::cout << " Built against Kokkos " << KOKKOS_VERSION << std::endl;
+  idfx::cout << " Compiled on " << __DATE__ << " at " << __TIME__ << std::endl;
 }
```

src/kokkos

Submodule kokkos updated 724 files

src/main.cpp

Lines changed: 1 addition & 1 deletion

```diff
@@ -9,7 +9,7 @@
 //@HEADER
 // ************************************************************************
 //
-// IDEFIX v 2.0.05
+// IDEFIX v 2.1.00
 //
 // ************************************************************************
 //@HEADER
```

src/output/dump.cpp

Lines changed: 102 additions & 74 deletions

```diff
@@ -14,7 +14,7 @@
 #include <experimental/filesystem>
 namespace fs = std::experimental::filesystem;
 #else
-error "Missing the <filesystem> header."
+#error "Missing the <filesystem> header."
 #endif
 #include <iomanip>
 #include "dump.hpp"
```

```diff
@@ -62,6 +62,97 @@ void Dump::RegisterVariable(IdefixHostArray4D<real>& in,
 }
 
 
+
+void Dump::CreateMPIDataType(GridBox gb, bool read) {
+#ifdef WITH_MPI
+  int start[3];
+  int size[3];
+  int subsize[3];
+
+  // the grid is required to know the current MPI domain decomposition
+  Grid *grid = data->mygrid;
+
+  // Dimensions for cell-centered fields
+  for(int dir = 0; dir < 3 ; dir++) {
+    size[2-dir] = gb.sizeGlob[dir];
+    start[2-dir] = gb.start[dir];
+    subsize[2-dir] = gb.size[dir];
+  }
+  if(read) {
+    MPI_SAFE_CALL(MPI_Type_create_subarray(3, size, subsize, start,
+                                           MPI_ORDER_C, realMPI, &this->descCR));
+    MPI_SAFE_CALL(MPI_Type_commit(&this->descCR));
+  } else {
+    MPI_SAFE_CALL(MPI_Type_create_subarray(3, size, subsize, start,
+                                           MPI_ORDER_C, realMPI, &this->descCW));
+    MPI_SAFE_CALL(MPI_Type_commit(&this->descCW));
+  }
+
+  // Dimensions for face-centered field
+  for(int face = 0; face < 3 ; face++) {
+    for(int dir = 0; dir < 3 ; dir++) {
+      size[2-dir] = gb.sizeGlob[dir];
+      start[2-dir] = gb.start[dir];
+      subsize[2-dir] = gb.size[dir];
+    }
+    if(read) {
+      // Add the extra guy in the face direction
+      size[2-face]++;
+      subsize[2-face]++; // valid only for reading
+                         // since it involves an overlap of data between procs
+
+      MPI_SAFE_CALL(MPI_Type_create_subarray(3, size, subsize, start,
+                                             MPI_ORDER_C, realMPI, &this->descSR[face]));
+      MPI_SAFE_CALL(MPI_Type_commit(&this->descSR[face]));
+    } else {
+      // Now for writing, it is only the last proc which keeps one additional cell
+      size[2-face]++;
+      if(grid->xproc[face] == grid->nproc[face] - 1 ) subsize[2-face]++;
+      MPI_SAFE_CALL(MPI_Type_create_subarray(3, size, subsize, start,
+                                             MPI_ORDER_C, realMPI, &this->descSW[face]));
+      MPI_SAFE_CALL(MPI_Type_commit(&this->descSW[face]));
+    }
+  }
+  // Dimensions for edge-centered field
+  for(int nv = 0; nv < 3 ; nv++) {
+    // load the array size
+    for(int dir = 0; dir < 3 ; dir++) {
+      size[2-dir] = gb.sizeGlob[dir];
+      start[2-dir] = gb.start[dir];
+      subsize[2-dir] = gb.size[dir];
+    }
+
+    if(read) {
+      // Extra cell in the dirs perp to field
+      for(int i = 0 ; i < DIMENSIONS ; i++) {
+        if(i!=nv) {
+          size[2-i]++;
+          subsize[2-i]++; // valid only for reading
+                          // since it involves an overlap of data between procs
+        }
+      }
+      MPI_SAFE_CALL(MPI_Type_create_subarray(3, size, subsize, start,
+                                             MPI_ORDER_C, realMPI, &this->descER[nv]));
+      MPI_SAFE_CALL(MPI_Type_commit(&this->descER[nv]));
+    } else {
+      // Now for writing, it is only the last proc which keeps one additional cell,
+      // so we remove what we added for reads
+      for(int i = 0 ; i < DIMENSIONS ; i++) {
+        if(i!=nv) {
+          size[2-i]++;
+          if(grid->xproc[i] == grid->nproc[i] - 1 ) {
+            subsize[2-i]++;
+          }
+        }
+      }
+      MPI_SAFE_CALL(MPI_Type_create_subarray(3, size, subsize, start,
+                                             MPI_ORDER_C, realMPI, &this->descEW[nv]));
+      MPI_SAFE_CALL(MPI_Type_commit(&this->descEW[nv]));
+    }
+  }
+#endif
+}
+
 void Dump::Init(DataBlock *datain) {
   idfx::pushRegion("Dump::Init");
   this->data = datain;
```

```diff
@@ -102,79 +193,15 @@ void Dump::Init(DataBlock *datain) {
 
 #ifdef WITH_MPI
   Grid *grid = data->mygrid;
-
-  int start[3];
-  int size[3];
-  int subsize[3];
-
-  // Dimensions for cell-centered fields
+  GridBox gb;
   for(int dir = 0; dir < 3 ; dir++) {
-    size[2-dir] = grid->np_int[dir];
-    start[2-dir] = data->gbeg[dir]-data->nghost[dir];
-    subsize[2-dir] = data->np_int[dir];
+    gb.start[dir] = data->gbeg[dir]-data->nghost[dir];
+    gb.size[dir] = data->np_int[dir];
+    gb.sizeGlob[dir] = grid->np_int[dir];
   }
-
-  MPI_SAFE_CALL(MPI_Type_create_subarray(3, size, subsize, start,
-                                         MPI_ORDER_C, realMPI, &this->descC));
-  MPI_SAFE_CALL(MPI_Type_commit(&this->descC));
-
-  // Dimensions for face-centered field
-  for(int face = 0; face < 3 ; face++) {
-    for(int dir = 0; dir < 3 ; dir++) {
-      size[2-dir] = grid->np_int[dir];
-      start[2-dir] = data->gbeg[dir]-data->nghost[dir];
-      subsize[2-dir] = data->np_int[dir];
-    }
-    // Add the extra guy in the face direction
-    size[2-face]++;
-    subsize[2-face]++; // valid only for reading
-                       // since it involves an overlap of data between procs
-
-    MPI_SAFE_CALL(MPI_Type_create_subarray(3, size, subsize, start,
-                                           MPI_ORDER_C, realMPI, &this->descSR[face]));
-    MPI_SAFE_CALL(MPI_Type_commit(&this->descSR[face]));
-
-    // Now for writing, it is only the last proc which keeps one additional cell
-    if(grid->xproc[face] != grid->nproc[face] - 1 ) subsize[2-face]--;
-    MPI_SAFE_CALL(MPI_Type_create_subarray(3, size, subsize, start,
-                                           MPI_ORDER_C, realMPI, &this->descSW[face]));
-    MPI_SAFE_CALL(MPI_Type_commit(&this->descSW[face]));
-  }
-  // Dimensions for edge-centered field
-  for(int nv = 0; nv < 3 ; nv++) {
-    // load the array size
-    for(int dir = 0; dir < 3 ; dir++) {
-      size[2-dir] = grid->np_int[dir];
-      start[2-dir] = data->gbeg[dir]-data->nghost[dir];
-      subsize[2-dir] = data->np_int[dir];
-    }
-
-    // Extra cell in the dirs perp to field
-    for(int i = 0 ; i < DIMENSIONS ; i++) {
-      if(i!=nv) {
-        size[2-i]++;
-        subsize[2-i]++; // valid only for reading
-                        // since it involves an overlap of data between procs
-      }
-    }
-    MPI_SAFE_CALL(MPI_Type_create_subarray(3, size, subsize, start,
-                                           MPI_ORDER_C, realMPI, &this->descER[nv]));
-    MPI_SAFE_CALL(MPI_Type_commit(&this->descER[nv]));
-
-    // Now for writing, it is only the last proc which keeps one additional cell,
-    // so we remove what we added for reads
-    for(int i = 0 ; i < DIMENSIONS ; i++) {
-      if(i!=nv) {
-        if(grid->xproc[i] != grid->nproc[i] - 1 ) {
-          subsize[2-i]--;
-        }
-      }
-    }
-    MPI_SAFE_CALL(MPI_Type_create_subarray(3, size, subsize, start,
-                                           MPI_ORDER_C, realMPI, &this->descEW[nv]));
-    MPI_SAFE_CALL(MPI_Type_commit(&this->descEW[nv]));
-  }
-
+  // Create MPI datatypes for read/write
+  CreateMPIDataType(gb, false);
+  CreateMPIDataType(gb, true);
 #endif
 
   // Register variables that are needed in restart dumps
```

```diff
@@ -661,6 +688,7 @@ bool Dump::Read(Output& output, int readNumber ) {
     }
     // Todo: check that coordinates are identical
   }
+
   std::unordered_set<std::string> notFound {};
   for(auto it = dumpFieldMap.begin(); it != dumpFieldMap.end(); it++) {
     notFound.insert(it->first);
```

```diff
@@ -701,7 +729,7 @@
     }
   }
   if(scalar.GetLocation() == DumpField::ArrayLocation::Center) {
-    ReadDistributed(fileHdl, ndim, nx, nxglob, descC, scrch);
+    ReadDistributed(fileHdl, ndim, nx, nxglob, descCR, scrch);
   } else if(scalar.GetLocation() == DumpField::ArrayLocation::Face) {
     ReadDistributed(fileHdl, ndim, nx, nxglob, descSR[direction], scrch);
   } else if(scalar.GetLocation() == DumpField::ArrayLocation::Edge) {
```

```diff
@@ -887,7 +915,7 @@ int Dump::Write(Output& output) {
   }
 
   if(scalar.GetLocation() == DumpField::ArrayLocation::Center) {
-    WriteDistributed(fileHdl, 3, nx, nxtot, fieldName, this->descC, scrch);
+    WriteDistributed(fileHdl, 3, nx, nxtot, fieldName, this->descCW, scrch);
  } else if(scalar.GetLocation() == DumpField::ArrayLocation::Face) {
    WriteDistributed(fileHdl, 3, nx, nxtot, fieldName, this->descSW[dir], scrch);
  } else if(scalar.GetLocation() == DumpField::ArrayLocation::Edge) {
```
