
Commit b71a488

minor changes
1 parent a1187cc commit b71a488

1 file changed

Lines changed: 4 additions & 4 deletions


src/poisson_distributed.jl

@@ -38,7 +38,7 @@ end
 # Once the `main` function has been defined, we have to trigger its execution on the different parts. To this end, one calls the `prun` function of [`PartitionedArrays.jl`](https://github.com/fverdugo/PartitionedArrays.jl) right at the beginning of the program.
 
 partition = (2,2)
-prun(main, mpi,partition)
+prun(main, mpi, partition)
 
 # With this function, the programmer sets up the `PartitionedArrays.jl` communication backend (i.e., MPI in the example), specifies the number of parts and their layout (i.e., 2x2 Cartesian-like mesh partition in the example), and provides the `main` function to be run on each part.
 
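For reference, a minimal self-contained sketch of the driver snippet this hunk touches, assuming the `main` function from the tutorial is already defined and an MPI installation is available:

using PartitionedArrays  # provides `prun` and the `mpi` backend

partition = (2,2)           # 2x2 Cartesian-like layout of the parts
prun(main, mpi, partition)  # runs `main` on each part over the MPI backend
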
@@ -48,7 +48,7 @@ prun(main, mpi,partition)
 
 using GridapPETSc
 
-# The example code that leverages `GridapPETSc.jl` is almost identical as the one above (see below). The main difference is that now we are wrapping most of the code of the `main` function within a do-block syntax function call to the `GridapPETSc.with(args=split(options))` function. The `with` function receives as a first argument a function with no arguments with the instructions to be executed on each MPI task/subdomain (that we pass to it as an anonymous function with no arguments), along with the `options` to be passed to the PETSc linear solver. For a detailed explanation of possible options we refer to the PETSc library documentation. Note that the call to `PETScLinearSolver()` initializes the PETSc solver with these `options` (even though `options` is not actually passed to the linear solver constructor). Besides, we have to pass the created linear solver object `solver` to the `solve` function to override the default linear solver (i.e., a call to the backslash `\` Julia operator).
+# In this example we use `GridapPETSc.jl` to have access to a scalable linear solver. The code is almost identical to the one above (see below). The main difference is that we now wrap most of the body of the `main` function in a do-block call to `GridapPETSc.with(args=split(options))`. The `with` function receives as its first argument a zero-argument function (passed here as an anonymous function) with the instructions to be executed on each MPI task/subdomain, along with the `options` to be passed to the PETSc linear solver. For a detailed explanation of the possible options we refer to the PETSc library documentation. Note that the call to `PETScLinearSolver()` initializes the PETSc solver with these `options` (even though `options` is not explicitly passed to the linear solver constructor). In addition, we have to pass the created linear solver object `solver` to the `solve` function to override the default linear solver (i.e., a call to the backslash `\` Julia operator).
 
 function main(parts)
 options = "-ksp_type cg -pc_type gamg -ksp_monitor"
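
As a reading aid, a minimal sketch of the `GridapPETSc.with` pattern described in this hunk; the finite element problem setup is elided, and `op` stands for the FE operator assembled earlier in the tutorial:

using Gridap, GridapDistributed, GridapPETSc

function main(parts)
  options = "-ksp_type cg -pc_type gamg -ksp_monitor"
  GridapPETSc.with(args=split(options)) do
    # ... build the distributed model, FE spaces and the FE operator `op` here ...
    solver = PETScLinearSolver()  # configured from the PETSc `options` passed to `with`
    uh = solve(solver, op)        # overrides the default backslash linear solver
  end
end
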
@@ -78,7 +78,7 @@ prun(main, mpi, partition)
 
 # ## Third example: second example + `GridapP4est.jl` for mesh generation
 
-# Using `GridapP4est.jl` for mesh generation is very simple, and only involves minor modifications compared to the previous example. First, one has to generate a coarse mesh of the domain. As the domain is a just a simple box in the example, it suffices to use a coarse mesh with a single quadrilateral fitted to the box in order to capture the geometry of the domain with no geometrical error (see how the `coarse_discrete_model` object is generated). In more complex scenarios, one can read an unstructured coarse mesh from disk, generated, e.g., with an unstructured brick mesh generator. Second, when building the fine mesh of the domain (see `UniformlyRefinedForestOfOctreesDiscreteModel` call), one has to specify the number of uniform refinements to be performed on the coarse mesh in order to generate the fine mesh. Finally, when calling `prun`, we do not longer specify a Cartesian partition but just the number of parts.
+# In this example, we define the Cartesian mesh using `GridapP4est.jl` via recursive uniform refinement starting with a single cell. This only involves minor modifications compared to the previous example. First, one has to generate a coarse mesh of the domain. As the domain is just a simple box in the example, it suffices to use a coarse mesh with a single quadrilateral fitted to the box in order to capture the geometry of the domain with no geometrical error (see how the `coarse_discrete_model` object is generated). In more complex scenarios, one can read an unstructured coarse mesh from disk, generated, e.g., with an unstructured brick mesh generator. Second, when building the fine mesh of the domain (see the `UniformlyRefinedForestOfOctreesDiscreteModel` call), one has to specify the number of uniform refinements to be performed on the coarse mesh in order to generate the fine mesh. Finally, when calling `prun`, we no longer specify a Cartesian partition but just the number of parts.
 
 using GridapP4est
 
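A minimal sketch of the mesh-generation step described in this hunk; the unit-square box, the number of uniform refinements and the number of parts are illustrative values:

using PartitionedArrays, Gridap, GridapP4est

function main(parts)
  # coarse mesh: a single quadrilateral fitted to the box (no geometrical error)
  coarse_discrete_model = CartesianDiscreteModel((0,1,0,1), (1,1))
  num_uniform_refinements = 4  # illustrative value
  # fine mesh obtained by uniform refinement of the coarse mesh on each part
  model = UniformlyRefinedForestOfOctreesDiscreteModel(parts,
                                                       coarse_discrete_model,
                                                       num_uniform_refinements)
  # ... FE problem setup and solve as in the second example ...
end

nparts = 4               # illustrative value; only the number of parts is given
prun(main, mpi, nparts)
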
@@ -114,7 +114,7 @@ prun(main, mpi, nparts)
 
 # ## Fourth example: second example + `GridapGmsh.jl` for mesh generation
 
-# The only modification with respect to the second example driver above is that now the mesh is read from disk and partitioned/distributed automatically by `GridapGmsh` inside the call to the `GmshDiscreteModel` constructor.
+# In this example, we want to use an unstructured mesh. The mesh is read from disk and partitioned/distributed automatically by `GridapGmsh` inside the call to the `GmshDiscreteModel` constructor.
 
 using GridapGmsh
 function main(parts)
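
A minimal sketch of the change described in this hunk; the mesh file name is illustrative and has to exist on disk:

using Gridap, GridapDistributed, GridapGmsh

function main(parts)
  # the mesh is read from disk, partitioned and distributed among the parts
  model = GmshDiscreteModel(parts, "demo.msh")  # illustrative file name
  # ... FE problem setup and solve as in the second example ...
end
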
