
Commit 9bd0683

Merge pull request #170 from JordiManyer/partitioned-arrays-v0.3
Updating to PartitionedArrays v0.3
2 parents 2314714 + 56b1d02 commit 9bd0683

3 files changed: 37 additions & 22 deletions

.gitignore

Lines changed: 2 additions & 0 deletions
@@ -2,8 +2,10 @@ deps/build.log
 deps/deps.jl
 notebooks
 Manifest.toml
+LocalPreferences.toml
 .ipynb_checkpoints/
 *.vtu
+*.pvtu
 tmp/
 .vscode/
 *.code-workspace

Project.toml

Lines changed: 8 additions & 7 deletions
@@ -33,17 +33,18 @@ SpecialFunctions = "276daf66-3868-5448-9aa4-cd146d93841b"
 Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

 [compat]
-Gridap = "0.17.9"
-GridapDistributed = "0.2"
-GridapGmsh = "0.6"
-GridapP4est = "0.1"
-GridapPETSc = "0.4"
-MPI = "0.19"
-PartitionedArrays = "0.2.12"
+Gridap = "0.17.19"
+GridapDistributed = "0.3"
+GridapGmsh = "0.7"
+GridapP4est = "0.3"
+GridapPETSc = "0.5"
+MPI = "0.20"
+PartitionedArrays = "0.3"
 SpecialFunctions = "1"
 julia = "1.3"

 [extras]
+MPIPreferences = "3da0fdf6-3ccc-4f1b-acd9-58baa6c99267"
 Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

 [targets]
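The new `MPIPreferences` entry under `[extras]` pairs with the `MPI` compat bump: since MPI.jl v0.20, the MPI binary is selected through `MPIPreferences` rather than a build step. A minimal sketch of the usual one-off configuration; running it writes the `LocalPreferences.toml` file that the updated `.gitignore` above now excludes:

```julia
# Run once from the project environment to select the MPI backend.
# use_system_binary() points MPI.jl at a system-installed MPI;
# use_jll_binary() reverts to the Julia-artifact MPI.
using MPIPreferences
MPIPreferences.use_system_binary()
```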

src/poisson_distributed.jl

Lines changed: 27 additions & 15 deletions
@@ -16,10 +16,11 @@ using PartitionedArrays

 # The first step in any `GridapDistributed.jl` program is to define a function (named `main_ex1` below) to be executed on each part into which the domain is distributed. This function receives two arguments (named `rank_partition` and `distribute` below), from which the `parts` object is built. The body of this function is equivalent to a sequential `Gridap` script, except for the `CartesianDiscreteModel` call, which in `GridapDistributed` also requires the `parts` and `rank_partition` arguments. The domain is discretized using the parallel Cartesian-like mesh generator built into `GridapDistributed`.

-function main_ex1(parts)
+function main_ex1(rank_partition,distribute)
+  parts = distribute(LinearIndices((prod(rank_partition),)))
   domain = (0,1,0,1)
   mesh_partition = (4,4)
-  model = CartesianDiscreteModel(parts,domain,mesh_partition)
+  model = CartesianDiscreteModel(parts,rank_partition,domain,mesh_partition)
   order = 2
   u((x,y)) = (x+y)^order
   f(x) = -Δ(u,x)
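The core change in the hunk above is the new driver signature: instead of receiving `parts` directly, the function receives a `distribute` callback and builds `parts` from it. A minimal standalone sketch of this PartitionedArrays v0.3 idiom, assuming the `DebugArray` sequential backend that the package provides, so it runs in a single Julia process (the `println` is only there to show that each entry carries its rank):

```julia
using PartitionedArrays

# Emulate the distributed layout in one process with DebugArray;
# under with_mpi, distribute would instead hand out one entry per MPI rank.
rank_partition = (2,2)
parts = DebugArray(LinearIndices((prod(rank_partition),)))

# map visits each (emulated) rank.
map(parts) do rank
  println("part $rank of $(prod(rank_partition))")
end
```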
@@ -37,8 +38,10 @@ end

 # Once the `main_ex1` function has been defined, we have to trigger its execution on the different parts. To this end, one calls the `with_mpi` function of [`PartitionedArrays.jl`](https://github.com/fverdugo/PartitionedArrays.jl) right at the beginning of the program.

-partition = (2,2)
-with_backend(main_ex1, MPIBackend(), partition)
+rank_partition = (2,2)
+with_mpi() do distribute
+  main_ex1(rank_partition,distribute)
+end

 # With this function, the programmer sets up the `PartitionedArrays.jl` communication backend (i.e., MPI in the example), specifies the number of parts and their layout (i.e., a 2x2 Cartesian-like mesh partition in the example), and provides the `main_ex1` function to be run on each part.
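Besides `with_mpi`, PartitionedArrays v0.3 also provides `with_debug`, which passes a `distribute` function backed by plain arrays. Assuming that API, the same driver can be exercised sequentially, which is handy for debugging before an MPI launch:

```julia
# Sequential debug run of the same driver: all "ranks" live in one
# process, so no mpiexec launch is required.
rank_partition = (2,2)
with_debug() do distribute
  main_ex1(rank_partition,distribute)
end
```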
@@ -50,12 +53,13 @@ using GridapPETSc

 # In this example we use `GridapPETSc.jl` to have access to a scalable linear solver. The code is almost identical to the one above (see below). The main difference is that now we wrap most of the body of the `main_ex2` function in a do-block call to `GridapPETSc.with(args=split(options))`. The `with` function receives a function with no arguments containing the instructions to be executed on each MPI task/subdomain (we pass it as an anonymous function), along with the `options` to be handed to the PETSc linear solver. For a detailed explanation of the possible options we refer to the PETSc library documentation. Note that the call to `PETScLinearSolver()` initializes the PETSc solver with these `options` (even though `options` is not actually passed to the linear solver constructor). Besides, we have to pass the created linear solver object `solver` to the `solve` function to override the default linear solver (i.e., a call to the backslash `\` Julia operator).

-function main_ex2(parts)
+function main_ex2(rank_partition,distribute)
+  parts = distribute(LinearIndices((prod(rank_partition),)))
   options = "-ksp_type cg -pc_type gamg -ksp_monitor"
   GridapPETSc.with(args=split(options)) do
     domain = (0,1,0,1)
     mesh_partition = (4,4)
-    model = CartesianDiscreteModel(parts,domain,mesh_partition)
+    model = CartesianDiscreteModel(parts,rank_partition,domain,mesh_partition)
     order = 2
     u((x,y)) = (x+y)^order
     f(x) = -Δ(u,x)
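For context, the solver-related lines that the prose above refers to sit further down in `main_ex2` and are unchanged by this commit. A sketch of that pattern, with `op` standing in for the `AffineFEOperator` assembled earlier in the function (not shown in this hunk):

```julia
# Inside the GridapPETSc.with(args=split(options)) do ... end block:
solver = PETScLinearSolver()  # initialized from the -ksp_*/-pc_* options above
uh = solve(solver,op)         # overrides the default backslash solver
```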
@@ -73,23 +77,26 @@ function main_ex2(parts)
   end
 end

-partition = (2,2)
-with_backend(main_ex2, MPIBackend(), partition)
+rank_partition = (2,2)
+with_mpi() do distribute
+  main_ex2(rank_partition,distribute)
+end

 # ## Third example: second example + `GridapP4est.jl` for mesh generation

 # In this example, we define the Cartesian mesh using `GridapP4est.jl` via recursive uniform refinement starting from a single cell. It only involves minor modifications compared to the previous example. First, one has to generate a coarse mesh of the domain. As the domain is just a simple box in the example, it suffices to use a coarse mesh with a single quadrilateral fitted to the box in order to capture the geometry of the domain with no geometrical error (see how the `coarse_discrete_model` object is generated). In more complex scenarios, one can read an unstructured coarse mesh from disk, generated, e.g., with an unstructured brick mesh generator. Second, when building the fine mesh of the domain (see the `UniformlyRefinedForestOfOctreesDiscreteModel` call), one has to specify the number of uniform refinements to be performed on the coarse mesh in order to generate the fine mesh. Finally, when calling `with_mpi`, we no longer specify a Cartesian partition but just the number of parts.

 using GridapP4est

-function main_ex3(parts)
+function main_ex3(nparts,distribute)
+  parts = distribute(LinearIndices((nparts,)))
   options = "-ksp_type cg -pc_type gamg -ksp_monitor"
   GridapPETSc.with(args=split(options)) do
     domain = (0,1,0,1)
     coarse_mesh_partition = (1,1)
-    num_uniform_refinements=2
-    coarse_discrete_model=CartesianDiscreteModel(domain,coarse_mesh_partition)
-    model=UniformlyRefinedForestOfOctreesDiscreteModel(parts,
+    num_uniform_refinements = 2
+    coarse_discrete_model = CartesianDiscreteModel(domain,coarse_mesh_partition)
+    model = UniformlyRefinedForestOfOctreesDiscreteModel(parts,
                                                        coarse_discrete_model,
                                                        num_uniform_refinements)
     order = 2
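As a quick check on the refinement parameter: each uniform refinement splits every quadrilateral into four children, so the single-cell coarse model above yields 4^k fine cells after k refinements. A trivial sketch of that count:

```julia
# Fine-mesh size for one coarse quad refined k times: 4^k cells.
# k = 2 (as in the hunk above) gives 16 cells.
for k in 0:3
  println("num_uniform_refinements = $k -> $(4^k) cells")
end
```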
@@ -110,14 +117,17 @@ function main_ex3(parts)
 end

 nparts = 4
-with_backend(main_ex3, MPIBackend(), nparts)
+with_mpi() do distribute
+  main_ex3(nparts,distribute)
+end

 # ## Fourth example: second example + `GridapGmsh.jl` for mesh generation

 # In this example, we want to use an unstructured mesh. The mesh is read from disk and partitioned/distributed automatically by `GridapGmsh` inside the call to the `GmshDiscreteModel` constructor.

 using GridapGmsh
-function main_ex4(parts)
+function main_ex4(nparts,distribute)
+  parts = distribute(LinearIndices((nparts,)))
   options = "-ksp_type cg -pc_type gamg -ksp_monitor"
   GridapPETSc.with(args=split(options)) do
     model = GmshDiscreteModel(parts,"../models/demo.msh")
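Note that `GmshDiscreteModel` has both a serial and a distributed form; the commit only touches the driver signature, not the mesh loading itself. A sketch contrasting the two, where the serial call is standard `GridapGmsh` usage shown here only for comparison:

```julia
using GridapGmsh
model_serial = GmshDiscreteModel("../models/demo.msh")        # single process
model_dist   = GmshDiscreteModel(parts,"../models/demo.msh")  # partitioned over parts
```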
@@ -139,4 +149,6 @@ function main_ex4(parts)
 end

 nparts = 4
-with_backend(main_ex4, MPIBackend(), nparts)
+with_mpi() do distribute
+  main_ex4(nparts,distribute)
+end
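Since all four drivers are now launched via `with_mpi`, the script still has to be run under MPI. One way to do that, sketched under the assumption that the tutorial file lives at `src/poisson_distributed.jl` and that four ranks are wanted (matching `nparts = 4` and the 2x2 `rank_partition`):

```julia
# One-off: install the mpiexecjl launcher wrapper shipped with MPI.jl.
using MPI
MPI.install_mpiexecjl()

# Then, from the shell:
#   mpiexecjl -n 4 julia --project=. src/poisson_distributed.jl
```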
