4.4.4. pde.solvers.explicit_mpi module

Defines an explicit solver using multiprocessing via MPI

class ExplicitMPISolver(pde, scheme='euler', decomposition=-1, *, backend='auto', adaptive=False, tolerance=0.0001)[source]

Bases: ExplicitSolver

class for solving partial differential equations explicitly using MPI

Warning

This solver can only be used if MPI is properly installed. In particular, python scripts then need to be started using mpirun or mpiexec. Please refer to the documentation of your MPI distribution for details.

The main idea of the solver is to take the full initial state on the main node (ID 0) and split the grid into roughly equal subgrids. The main node then distributes these subfields to all other nodes, and each node independently creates the right-hand side of the PDE for its subgrid. Each node then advances the PDE on its own, with the coupling to neighboring nodes ensured by special boundary conditions that exchange field values between subgrids. This is implemented by the get_boundary_conditions() method of the subgrids, which takes the boundary conditions for the full grid and creates conditions suitable for the specific subgrid on the given node. The trackers (and thus all input and output) are handled only on the main node.
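For instance, the solver can be constructed directly in order to choose the decomposition explicitly. The following is a minimal sketch, assuming the constructor signature documented under Parameters below and the usual Controller interface of pde; the decomposition [2, 1] (two subgrids along the first axis) is purely illustrative.

from pde import Controller, DiffusionPDE, ScalarField, UnitGrid
from pde.solvers.explicit_mpi import ExplicitMPISolver

grid = UnitGrid([64, 64])
state = ScalarField.random_uniform(grid, 0.2, 0.3)
eq = DiffusionPDE(diffusivity=0.1)

# split the grid into two subgrids along the first axis (illustrative choice)
solver = ExplicitMPISolver(eq, decomposition=[2, 1])
controller = Controller(solver, t_range=10)
result = controller.run(state, dt=0.1)

if result is not None:  # output is only available on the main node
    result.plot()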

Warning

The function providing the right-hand side of the PDE needs to support MPI. This is automatically the case for local evaluations (which only use the field value at the current position), for the differential operators provided by pde, and for the integration of fields. Similarly, modify_after_step can only be used for local modifications, since the field data supplied to the function is local to each MPI node.
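Consequently, a custom PDE whose evolution rate only combines local terms with the differential operators provided by pde works with this solver without further changes. The class below is a hypothetical sketch, not part of the library:

from pde import PDEBase

class LocalCubicDiffusionPDE(PDEBase):
    """hypothetical PDE combining a laplacian with a purely local reaction term"""

    def evolution_rate(self, state, t=0):
        # a differential operator provided by pde plus a local evaluation of the
        # field value -- both are safe to use with the MPI solver
        return 0.1 * state.laplace("auto_periodic_neumann") - state**3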

Example

A minimal example using the MPI solver is

from pde import DiffusionPDE, ScalarField, UnitGrid

grid = UnitGrid([64, 64])
state = ScalarField.random_uniform(grid, 0.2, 0.3)

eq = DiffusionPDE(diffusivity=0.1)
result = eq.solve(state, t_range=10, dt=0.1, method="explicit_mpi")

if result is not None:  # restrict the output to the main node
    result.plot()

If this script is saved as multiprocessing.py, a parallel simulation is started by

mpiexec -n 2 python3 multiprocessing.py

Here, the number 2 determines the number of cores that will be used. Note that macOS might require an additional hint on how to connect the processes even when they are run on the same machine (e.g., your workstation). In this case, it might help to run

mpiexec -n 2 -host localhost python3 multiprocessing.py

Parameters:
  • pde (PDEBase) – The instance describing the PDE that needs to be solved

  • scheme (str) – Defines the explicit scheme to use. Supported values are ‘euler’ and ‘runge-kutta’ (or ‘rk’ for short).

  • decomposition (list of ints) – Number of subdivisions in each direction. Should be a list of length grid.num_axes specifying the number of nodes along each axis. If one value is -1, it will be determined from the number of available nodes. The default value decomposes the first axis using all available nodes.

  • backend (str) – Determines how the function is created. Accepted values are ‘numpy’ and ‘numba’. Alternatively, ‘auto’ lets the code choose the most suitable backend.

  • adaptive (bool) – When enabled, the time step is adjusted during the simulation using the error tolerance set with tolerance (see the sketch after this parameter list).

  • tolerance (float) – The error tolerance used in adaptive time stepping. The time step is chosen small enough that the estimated truncation error of a single step stays below this tolerance.
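For instance, an adaptive Runge–Kutta variant of the solver might be configured as follows (a short sketch assuming the parameter names above; eq denotes a previously constructed PDE instance):

# hypothetical configuration: adaptive Runge-Kutta stepping with a tight tolerance
solver = ExplicitMPISolver(eq, scheme="rk", adaptive=True, tolerance=1e-5)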

info: Dict[str, Any]
make_stepper(state, dt=None)[source]

return a stepper function using an explicit scheme

Parameters:
  • state (FieldBase) – An example for the state from which the grid and other information can be extracted

  • dt (float) – Time step of the explicit stepping. If None, this solver specifies 1e-3 as a default value.

Returns:

Function that can be called to advance the state from time t_start to time t_end. The function call signature is (state: FieldBase, t_start: float, t_end: float), matching the return type below.

Return type:

Callable[[FieldBase, float, float], float]
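A minimal usage sketch, assuming the solver and state from the examples above; the stepper is expected to advance the state in place and return the time that was actually reached:

stepper = solver.make_stepper(state, dt=0.1)
t_reached = stepper(state, 0.0, 1.0)  # advance the state from t=0 to t=1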

name = 'explicit_mpi'