
Commit 30f284c

updating distributed tutorial
1 parent b71a488 commit 30f284c

1 file changed: src/poisson_distributed.jl

Lines changed: 15 additions & 15 deletions
@@ -1,8 +1,8 @@
-# ## Introduction and caveat
+# ## Introduction

-# In this tutorial we will learn how to use `GridapDistributed.jl` and its satellite packages, `GridapP4est.jl`, `GridapGmsh.jl`, and `GridapPETSc.jl`, in order to solve a Poisson PDE problem on the unit square using grad-conforming Lagrangian Finite Elements for numerical discretization.
+# In this tutorial we will learn how to use [`GridapDistributed.jl`](https://github.com/gridap/GridapDistributed.jl) and its satellite packages, [`GridapP4est.jl`](https://github.com/gridap/GridapP4est.jl), [`GridapGmsh.jl`](https://github.com/gridap/GridapGmsh.jl), and [`GridapPETSc.jl`](https://github.com/gridap/GridapPETSc.jl), in order to solve a Poisson PDE problem on the unit square using grad-conforming Lagrangian Finite Elements for numerical discretization.

-# We will first solve the problem using solely the built-in tools in `GridapDistributed.jl`. While this is very useful for testing and debugging purposes, `GridapDistributed.jl` is **not** a library of parallel solvers. Indeed, the built-in linear solver kernel within `GridapDistributed.jl`, defined with the backslash operator `\`, is just a sparse LU solver applied to the global system gathered on a master task (thus not scalable). To address this, we will then illustrate which changes are required in the program to replace the built-in solver in `GridapDistributed.jl` by `GridapPETSc.jl`. This latter package provides the full set of scalable linear and nonlinear solvers in the [PETSc](https://petsc.org/release/) numerical package.
+# We will first solve the problem using solely the built-in tools in `GridapDistributed.jl`. While this is very useful for testing and debugging purposes, `GridapDistributed.jl` is *not* a library of parallel solvers. Indeed, the built-in linear solver kernel within `GridapDistributed.jl`, invoked with the backslash operator `\`, is just a sparse LU solver applied to the global system gathered on a master task (and thus not scalable). To address this, we will then illustrate which changes are required in the program to replace the built-in solver in `GridapDistributed.jl` with `GridapPETSc.jl`. The latter package provides the full set of scalable linear and nonlinear solvers in the [PETSc](https://petsc.org/release/) numerical package.

 # On the other hand, in real-world applications one typically needs to solve PDEs on domains more complex than simple boxes. To this end, we can leverage either `GridapGmsh.jl`, to automatically partition and distribute unstructured meshes read from disk in Gmsh format, or `GridapP4est.jl`, which allows one to mesh, in a very scalable way, computational domains that can be decomposed as forests of octrees. The last part of the tutorial presents the changes required in the program in order to use these packages.

@@ -14,9 +14,9 @@ using Gridap
 using GridapDistributed
 using PartitionedArrays

-# The first step in any `GridapDistributed.jl` program is to define a function (named `main` below) to be executed on each part on which the domain is distributed. This function receives a single argument (named `parts` below). The body of this function is equivalent to a sequential `Gridap` script, except for the `CartesianDiscreteModel` call, which in `GridapDistributed` also requires the `parts` argument passed to the `main` function. The domain is discretized using the parallel Cartesian-like mesh generator built-in in `GridapDistributed`.
+# The first step in any `GridapDistributed.jl` program is to define a function (named `main_ex1` below) to be executed on each part on which the domain is distributed. This function receives a single argument (named `parts` below). The body of this function is equivalent to a sequential `Gridap` script, except for the `CartesianDiscreteModel` call, which in `GridapDistributed` also requires the `parts` argument passed to the `main_ex1` function. The domain is discretized using the parallel Cartesian-like mesh generator built into `GridapDistributed`.

-function main(parts)
+function main_ex1(parts)
   domain = (0,1,0,1)
   mesh_partition = (4,4)
   model = CartesianDiscreteModel(parts,domain,mesh_partition)
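
This hunk shows only the first lines of `main_ex1`; the rest of the function (not visible in the diff) follows the standard serial `Gridap` workflow for the Poisson problem. A minimal sketch is given below; the source term, the `"boundary"` Dirichlet tag, the interpolation order, and the quadrature degree are illustrative assumptions rather than the actual values in the file:

function main_ex1(parts)
  domain = (0,1,0,1)
  mesh_partition = (4,4)
  model = CartesianDiscreteModel(parts,domain,mesh_partition)  # distributed Cartesian mesh
  order = 1
  f(x) = 1.0                                 # assumed source term
  g(x) = 0.0                                 # assumed Dirichlet boundary value
  reffe = ReferenceFE(lagrangian,Float64,order)
  V = TestFESpace(model,reffe,dirichlet_tags="boundary")
  U = TrialFESpace(V,g)
  Ω = Triangulation(model)
  dΩ = Measure(Ω,2*order)
  a(u,v) = ∫( ∇(v)⋅∇(u) )*dΩ                 # Poisson bilinear form
  l(v) = ∫( v*f )*dΩ
  op = AffineFEOperator(a,l,U,V)
  uh = solve(op)                             # built-in solver: sparse LU on the gathered system
  writevtk(Ω,"results_ex1",cellfields=["uh"=>uh,"grad_uh"=>∇(uh)])
end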
@@ -35,22 +35,22 @@ function main(parts)
   writevtk(Ω,"results_ex1",cellfields=["uh"=>uh,"grad_uh"=>∇(uh)])
 end

-# Once the `main` function has been defined, we have to trigger its execution on the different parts. To this end, one calls the `prun` function of [`PartitionedArrays.jl`](https://github.com/fverdugo/PartitionedArrays.jl) right at the beginning of the program.
+# Once the `main_ex1` function has been defined, we have to trigger its execution on the different parts. To this end, one calls the `prun` function of [`PartitionedArrays.jl`](https://github.com/fverdugo/PartitionedArrays.jl) right at the beginning of the program.

 partition = (2,2)
-prun(main, mpi, partition)
+prun(main_ex1, mpi, partition)

-# With this function, the programmer sets up the `PartitionedArrays.jl` communication backend (i.e., MPI in the example), specifies the number of parts and their layout (i.e., 2x2 Cartesian-like mesh partition in the example), and provides the `main` function to be run on each part.
+# With this function, the programmer sets up the `PartitionedArrays.jl` communication backend (i.e., MPI in the example), specifies the number of parts and their layout (i.e., a 2x2 Cartesian-like mesh partition in the example), and provides the `main_ex1` function to be run on each part.

 # Although not illustrated in this tutorial, we note that one may also use the `sequential` `PartitionedArrays.jl` backend instead of `mpi`. With this backend, the code executes serially on a single process (so there is no need to use `mpiexecjl` to launch the program), although the data structures are still partitioned into parts. This is very useful, among others, for interactive execution and debugging of the code before moving to MPI parallelism.

 # ## Second example: `GridapDistributed.jl` + `GridapPETSc.jl` for the linear solver

 using GridapPETSc

-# In this example we use `GridapPETSc.jl` to have access to a scalable linear solver. The code is almost identical as the one above (see below). The main difference is that now we are wrapping most of the code of the `main` function within a do-block syntax function call to the `GridapPETSc.with(args=split(options))` function. The `with` function receives as a first argument a function with no arguments with the instructions to be executed on each MPI task/subdomain (that we pass to it as an anonymous function with no arguments), along with the `options` to be passed to the PETSc linear solver. For a detailed explanation of possible options we refer to the PETSc library documentation. Note that the call to `PETScLinearSolver()` initializes the PETSc solver with these `options` (even though `options` is not actually passed to the linear solver constructor). Besides, we have to pass the created linear solver object `solver` to the `solve` function to override the default linear solver (i.e., a call to the backslash `\` Julia operator).
+# In this example we use `GridapPETSc.jl` to gain access to a scalable linear solver. The code is almost identical to the one above (see below). The main difference is that we now wrap most of the code of the `main_ex2` function within a do-block call to the `GridapPETSc.with(args=split(options))` function. The `with` function receives as its first argument a function with no arguments containing the instructions to be executed on each MPI task/subdomain (we pass it as an anonymous function via the do-block), along with the `options` to be passed to the PETSc linear solver. For a detailed explanation of the possible options we refer to the PETSc library documentation. Note that the call to `PETScLinearSolver()` initializes the PETSc solver with these `options` (even though `options` is not actually passed to the linear solver constructor). Besides, we have to pass the created linear solver object `solver` to the `solve` function to override the default linear solver (i.e., a call to the backslash `\` Julia operator).

-function main(parts)
+function main_ex2(parts)
   options = "-ksp_type cg -pc_type gamg -ksp_monitor"
   GridapPETSc.with(args=split(options)) do
     domain = (0,1,0,1)
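
The hunk above cuts off before the solver-related lines of `main_ex2`. Assuming the same discretization as in the first example (and an output name `results_ex2` mirroring the first example), the lines that actually change with respect to `main_ex1` amount to the following sketch, placed inside the `GridapPETSc.with` do-block:

    op = AffineFEOperator(a,l,U,V)
    solver = PETScLinearSolver()   # picks up the `options` passed to `GridapPETSc.with`
    uh = solve(solver,op)          # overrides the default backslash-based solver
    writevtk(Ω,"results_ex2",cellfields=["uh"=>uh,"grad_uh"=>∇(uh)])

Likewise, the `sequential` backend mentioned above is selected simply by changing the backend argument of `prun`, e.g., `prun(main_ex1, sequential, partition)`, which runs all parts on a single process and is convenient for interactive debugging.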
@@ -74,15 +74,15 @@ function main(parts)
 end

 partition = (2,2)
-prun(main, mpi, partition)
+prun(main_ex2, mpi, partition)

 # ## Third example: second example + `GridapP4est.jl` for mesh generation

 # In this example, we define the Cartesian mesh using `GridapP4est.jl` via recursive uniform refinement starting with a single cell. It only involves minor modifications compared to the previous example. First, one has to generate a coarse mesh of the domain. As the domain is just a simple box in the example, it suffices to use a coarse mesh with a single quadrilateral fitted to the box in order to capture the geometry of the domain with no geometrical error (see how the `coarse_discrete_model` object is generated). In more complex scenarios, one can read an unstructured coarse mesh from disk, generated, e.g., with an unstructured brick mesh generator. Second, when building the fine mesh of the domain (see the `UniformlyRefinedForestOfOctreesDiscreteModel` call), one has to specify the number of uniform refinements to be performed on the coarse mesh in order to generate the fine mesh. Finally, when calling `prun`, we no longer specify a Cartesian partition but just the number of parts.

 using GridapP4est

-function main(parts)
+function main_ex3(parts)
   options = "-ksp_type cg -pc_type gamg -ksp_monitor"
   GridapPETSc.with(args=split(options)) do
     domain = (0,1,0,1)
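
The lines of `main_ex3` that replace the `CartesianDiscreteModel` call are not visible in this hunk. Following the description above, the coarse model and the forest-of-octrees fine mesh would be built roughly as in the sketch below; the constructor argument order and the number of uniform refinements (2) are assumptions for illustration:

    domain = (0,1,0,1)
    coarse_discrete_model = CartesianDiscreteModel(domain,(1,1))  # serial single-cell coarse mesh of the box
    model = UniformlyRefinedForestOfOctreesDiscreteModel(parts,
                                                         coarse_discrete_model,
                                                         2)       # number of uniform refinements (assumed)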
@@ -110,14 +110,14 @@ function main(parts)
 end

 nparts = 4
-prun(main, mpi, nparts)
+prun(main_ex3, mpi, nparts)

 # ## Fourth example: second example + `GridapGmsh.jl` for mesh generation

 # In this example, we want to use an unstructured mesh. The mesh is read from disk and partitioned/distributed automatically by `GridapGmsh` inside the call to the `GmshDiscreteModel` constructor.

 using GridapGmsh
-function main(parts)
+function main_ex4(parts)
   options = "-ksp_type cg -pc_type gamg -ksp_monitor"
   GridapPETSc.with(args=split(options)) do
     model = GmshDiscreteModel(parts,"../models/demo.msh")
@@ -139,4 +139,4 @@ function main(parts)
 end

 nparts = 4
-prun(main, mpi, nparts)
+prun(main_ex4, mpi, nparts)
