src/poisson_distributed.jl: 27 additions & 15 deletions
@@ -16,10 +16,11 @@ using PartitionedArrays
# The first step in any `GridapDistributed.jl` program is to define a function (named `main_ex1` below) to be executed on each of the parts into which the domain is distributed. This function receives a single argument (named `parts` below). The body of this function is equivalent to a sequential `Gridap` script, except for the `CartesianDiscreteModel` call, which in `GridapDistributed` also requires the `parts` argument passed to the `main_ex1` function. The domain is discretized using the parallel Cartesian-like mesh generator built into `GridapDistributed`.
-function main_ex1(parts)
+function main_ex1(rank_partition,distribute)
+  parts = distribute(LinearIndices((prod(rank_partition),)))
  domain = (0,1,0,1)
  mesh_partition = (4,4)
-  model = CartesianDiscreteModel(parts,domain,mesh_partition)
+  model = CartesianDiscreteModel(parts,rank_partition,domain,mesh_partition)
  order = 2
  u((x,y)) = (x+y)^order
  f(x) = -Δ(u,x)
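The hunk above only shows the changed lines of `main_ex1`. For orientation, here is a minimal self-contained sketch of how the updated function could look end to end, assuming the standard `Gridap` Poisson workflow (Lagrangian reference FE, weak form, `AffineFEOperator`, `solve`); the `writevtk` output name is illustrative and not part of the diff.

```julia
using Gridap
using GridapDistributed
using PartitionedArrays

function main_ex1(rank_partition, distribute)
  parts = distribute(LinearIndices((prod(rank_partition),)))
  domain = (0,1,0,1)
  mesh_partition = (4,4)
  model = CartesianDiscreteModel(parts, rank_partition, domain, mesh_partition)
  order = 2
  u((x,y)) = (x+y)^order                        # manufactured solution
  f(x) = -Δ(u,x)                                # corresponding source term
  reffe = ReferenceFE(lagrangian, Float64, order)
  V = TestFESpace(model, reffe, dirichlet_tags="boundary")
  U = TrialFESpace(V, u)                        # impose u on the Dirichlet boundary
  Ω = Triangulation(model)
  dΩ = Measure(Ω, 2*order)
  a(u,v) = ∫( ∇(v)⋅∇(u) )dΩ                     # Poisson bilinear form
  l(v) = ∫( v*f )dΩ                             # linear form
  op = AffineFEOperator(a, l, U, V)
  uh = solve(op)                                # default solver (Julia backslash)
  writevtk(Ω, "results_ex1", cellfields=["uh"=>uh])
end
```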
@@ -37,8 +38,10 @@ end
# Once the `main_ex1` function has been defined, we have to trigger its execution on the different parts. To this end, one calls the `with_backend` function of [`PartitionedArrays.jl`](https://github.com/fverdugo/PartitionedArrays.jl) right at the beginning of the program.
-partition = (2,2)
-with_backend(main_ex1, MPIBackend(), partition)
+rank_partition = (2,2)
+with_mpi() do distribute
+  main_ex1(rank_partition,distribute)
+end
# With this function, the programmer sets up the `PartitionedArrays.jl` communication backend (i.e., MPI in the example), specifies the number of parts and their layout (i.e., 2x2 Cartesian-like mesh partition in the example), and provides the `main_ex1` function to be run on each part.
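For interactive development, the same driver can be exercised without an MPI launch by swapping the MPI backend for the debug backend of `PartitionedArrays.jl`; the sketch below assumes the `with_debug` function of recent `PartitionedArrays.jl` versions. The actual parallel run is launched with something like `mpiexec -n 4 julia ex1.jl` (script name illustrative).

```julia
using PartitionedArrays

# Single-process run that emulates the 2x2 data distribution without launching
# MPI; handy for debugging before the real `with_mpi` run shown in the diff.
rank_partition = (2,2)
with_debug() do distribute
  main_ex1(rank_partition, distribute)
end
```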
@@ -50,12 +53,13 @@ using GridapPETSc
# In this example we use `GridapPETSc.jl` to gain access to a scalable linear solver. The code is almost identical to the one above (see below). The main difference is that most of the body of the `main_ex2` function is now wrapped in a do-block call to `GridapPETSc.with(args=split(options))`. The `with` function receives as its first argument a zero-argument function (passed here via the do-block) with the instructions to be executed on each MPI task/subdomain, along with the `options` to be passed to the PETSc linear solver. For a detailed explanation of the possible options we refer to the PETSc library documentation. Note that the call to `PETScLinearSolver()` initializes the PETSc solver with these `options`, even though `options` is not passed explicitly to the linear solver constructor. In addition, we have to pass the created linear solver object `solver` to the `solve` function to override the default linear solver (i.e., a call to the backslash `\` Julia operator).
-function main_ex2(parts)
+function main_ex2(rank_partition,distribute)
+  parts = distribute(LinearIndices((prod(rank_partition),)))
-  model = CartesianDiscreteModel(parts,domain,mesh_partition)
+  model = CartesianDiscreteModel(parts,rank_partition,domain,mesh_partition)
  order = 2
  u((x,y)) = (x+y)^order
  f(x) = -Δ(u,x)
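Putting together the pieces described above, a sketch of the full `main_ex2` could look as follows; the PETSc `options` string (a CG Krylov solver preconditioned with algebraic multigrid) and the output file name are illustrative choices, not taken from the diff.

```julia
using Gridap
using GridapDistributed
using GridapPETSc
using PartitionedArrays

function main_ex2(rank_partition, distribute)
  parts = distribute(LinearIndices((prod(rank_partition),)))
  # Hypothetical PETSc options: CG Krylov solver + algebraic multigrid preconditioner.
  options = "-ksp_type cg -pc_type gamg -ksp_monitor"
  GridapPETSc.with(args=split(options)) do
    domain = (0,1,0,1)
    mesh_partition = (4,4)
    model = CartesianDiscreteModel(parts, rank_partition, domain, mesh_partition)
    order = 2
    u((x,y)) = (x+y)^order
    f(x) = -Δ(u,x)
    reffe = ReferenceFE(lagrangian, Float64, order)
    V = TestFESpace(model, reffe, dirichlet_tags="boundary")
    U = TrialFESpace(V, u)
    Ω = Triangulation(model)
    dΩ = Measure(Ω, 2*order)
    a(u,v) = ∫( ∇(v)⋅∇(u) )dΩ
    l(v) = ∫( v*f )dΩ
    op = AffineFEOperator(a, l, U, V)
    solver = PETScLinearSolver()   # initialized with the `options` passed to `with`
    uh = solve(solver, op)         # overrides the default backslash solver
    writevtk(Ω, "results_ex2", cellfields=["uh"=>uh])
  end
end
```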
@@ -73,23 +77,26 @@ function main_ex2(parts)
  end
end
-partition = (2,2)
-with_backend(main_ex2, MPIBackend(), partition)
+rank_partition = (2,2)
+with_mpi() do distribute
+  main_ex2(rank_partition,distribute)
+end
# ## Third example: second example + `GridapP4est.jl` for mesh generation
# In this example, we define the Cartesian mesh using `GridapP4est.jl` via recursive uniform refinement starting from a single cell. It only involves minor modifications compared to the previous example. First, one has to generate a coarse mesh of the domain. As the domain is just a simple box in this example, it suffices to use a coarse mesh with a single quadrilateral fitted to the box in order to capture the geometry of the domain with no geometrical error (see how the `coarse_discrete_model` object is generated). In more complex scenarios, one can read an unstructured coarse mesh from disk, generated, e.g., with an unstructured brick mesh generator. Second, when building the fine mesh of the domain (see the `UniformlyRefinedForestOfOctreesDiscreteModel` call), one has to specify the number of uniform refinements to be performed on the coarse mesh in order to generate the fine mesh. Finally, when calling `with_backend`, we no longer specify a Cartesian partition but just the number of parts.
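A sketch of the mesh-generation step just described is given below; the driver name `main_ex3`, the number of parts, and the number of uniform refinements are illustrative, while the `UniformlyRefinedForestOfOctreesDiscreteModel` call follows the constructor named in the paragraph.

```julia
using Gridap
using GridapDistributed
using GridapP4est
using PartitionedArrays

function main_ex3(nparts, distribute)
  # Only the number of parts is given here, not a Cartesian layout.
  parts = distribute(LinearIndices((nparts,)))

  # A coarse mesh with a single quadrilateral captures the box with no geometrical error.
  domain = (0,1,0,1)
  coarse_discrete_model = CartesianDiscreteModel(domain, (1,1))

  # The fine mesh is the coarse mesh recursively refined; 4 refinements is an example value.
  num_uniform_refinements = 4
  model = UniformlyRefinedForestOfOctreesDiscreteModel(parts,
                                                       coarse_discrete_model,
                                                       num_uniform_refinements)
  # ... the FE problem is then set up and solved on `model` as in the second example.
end
```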
# ## Fourth example: second example + `GridapGmsh.jl` for mesh generation
# In this example, we want to use an unstructured mesh. The mesh is read from disk and partitioned/distributed automatically by `GridapGmsh` inside the call to the `GmshDiscreteModel` constructor.
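A sketch of the corresponding driver is shown below; the function name `main_ex4` and the mesh file `demo.msh` are placeholders.

```julia
using Gridap
using GridapDistributed
using GridapGmsh
using PartitionedArrays

function main_ex4(nparts, distribute)
  parts = distribute(LinearIndices((nparts,)))
  # "demo.msh" is a placeholder file name; the mesh is read from disk and
  # partitioned/distributed across `parts` inside this constructor.
  model = GmshDiscreteModel(parts, "demo.msh")
  # ... the rest of the driver is identical to the second example.
end
```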