Krylov solver's option max_it and a few questions

I have a FEniCSx code that runs fine, except that NonlinearProblem has trouble converging (the problem is actually linear, but I use the nonlinear solver because it is more general, which spares me a rewrite if I add nonlinear terms in the future). I don't see any reason to stick to the usual linear solver; if I'm missing something, I'd be glad to know.

Newton's method fails to converge when I modify a parameter within reasonable bounds, even though I set an initial guess very close to the solution. In any case, I would like to increase the maximum number of Newton iterations.

The relevant part of the code is:

# Solve the PDE.
problem = NonlinearProblem(weak_form, TempVolt, bcs=bcs, J = Jac)
solver = NewtonSolver(MPI.COMM_WORLD, problem)
solver.convergence_criterion = "incremental"
solver.rtol = 1e-9
solver.report = True

ksp = solver.krylov_solver
opts = PETSc.Options()
#print(opts)
option_prefix = ksp.getOptionsPrefix()
#opts[f"{option_prefix}ksp_type"] = "cg"
#opts[f"{option_prefix}pc_type"] = "gamg"
opts[f"{option_prefix}ksp_type"] = "preonly"
opts[f"{option_prefix}pc_type"] = "lu"
opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
opts[f"{option_prefix}ksp_max_it"]= 10000
ksp.setFromOptions()

I get the following traceback, but it is the warning that I would like to tackle:

, line 130, in <module>
    n, converged = solver.solve(TempVolt)
  File "/usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/nls/petsc.py", line 46, in solve
    n, converged = super().solve(u.vector)
RuntimeError: Newton solver did not converge because maximum number of iterations reached
WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
There is one unused database option. It is:
Option left: name:-nls_solve_ksp_max_it value: 10000 source: code

I would like to increase ksp_max_it. I never had this warning before. I checked PETSc's Krylov source code to see how this parameter is set, and it is indeed ksp_max_it, just as I use it. So I don't understand why I'm seeing this warning, nor how to fix it. This code used to work, so I suspected a recent DOLFINx update changed the parameter's name, but I couldn't find any such change.

This is not an error from PETSc, but from the Newton solver in DOLFINx.
You can set solver.max_it = 50 to increase the number of iterations.
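For reference, a minimal sketch of the relevant NewtonSolver attributes (names as in dolfinx.nls.petsc.NewtonSolver; the exact defaults may differ between versions):

solver = NewtonSolver(MPI.COMM_WORLD, problem)
solver.max_it = 50                            # maximum number of Newton iterations
solver.rtol = 1e-9                            # relative tolerance
solver.atol = 1e-10                           # absolute tolerance
solver.convergence_criterion = "incremental"  # or "residual"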

Thanks a lot @dokken. It is strange that I never had this warning before, though. And the line opts[f"{option_prefix}ksp_max_it"] = 1000 can also be found in the DOLFINx tutorial; I guess it triggers this warning now?

Anyway, by increasing max_it as you suggested, I could push the resolution of my problem a little further, but it is still far from satisfactory. The Krylov solver now fails with the message:

Traceback (most recent call last):
  File "/root/shared/Nye_example.py", line 132, in <module>
    n, converged = solver.solve(TempVolt)
  File "/usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/nls/petsc.py", line 46, in solve
    n, converged = super().solve(u.vector)
RuntimeError: Failed to successfully call PETSc function 'KSPSolve'. PETSc error code is: 76, Error in external library

I am seriously considering converting my code to use a linear solver instead of the nonlinear one. Do you know of any drawbacks of using the nonlinear solver as opposed to the linear one?

By the way, my full code is:

import numpy as np
from dolfinx import log
from dolfinx.fem import (Constant, Function, FunctionSpace, assemble_scalar,
                         dirichletbc, form, locate_dofs_geometrical,
                         locate_dofs_topological)
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.io import XDMFFile, gmshio
from dolfinx.nls.petsc import NewtonSolver
from mpi4py import MPI
from petsc4py import PETSc
from petsc4py.PETSc import ScalarType
from ufl import (FiniteElement, Measure, MixedElement, SpatialCoordinate,
                 TestFunction, TrialFunction, as_tensor, dot, dx, grad, inner,
                 inv, split, derivative)
import gmsh
mesh, cell_markers, facet_markers = gmshio.read_from_msh("meshes/rectangles.msh", MPI.COMM_WORLD, gdim=2)


# Reminder of the mesh:
#Physical Surface("material", 11) = {2, 1, 3};
#Physical Curve("bottom_left", 12) = {10};
#Physical Curve("top_right", 13) = {6};
inj_current_curve = 12
out_current_curve = 13 # This corresponds to the curve through which the current leaves the material.
reading_voltage_surface_0 = 12
reading_voltage_surface_1 = 13

# Define ME function space
el = FiniteElement("CG", mesh.ufl_cell(), 2)
mel = MixedElement([el, el])
ME = FunctionSpace(mesh, mel)

u, v = split(TestFunction(ME))
TempVolt = Function(ME)
temp, volt = split(TempVolt)
dTV = TrialFunction(ME)

rho = 10
sigma = 1.0 / rho

kappa = 144

S_xx = 100e-6
S_xx = 0.0  # overrides the value above, effectively switching off the Seebeck effect
S_yy = 3 * S_xx
Seebeck_tensor = as_tensor([[S_xx, 0], [0, S_yy]])

# Define the boundary conditions
left_facets = facet_markers.find(inj_current_curve)
right_facets = facet_markers.find(out_current_curve)
left_dofs = locate_dofs_topological(
    ME.sub(1), mesh.topology.dim-1, left_facets)

left_dofs_temp = locate_dofs_topological(
    ME.sub(0), mesh.topology.dim-1, left_facets)
right_dofs_temp = locate_dofs_topological(
    ME.sub(0), mesh.topology.dim-1, right_facets)
T_cold = 300.0
bc_temp_left = dirichletbc(ScalarType(T_cold), left_dofs_temp, ME.sub(0))
bc_temp_right = dirichletbc(ScalarType(T_cold+10.0), right_dofs_temp, ME.sub(0))
bcs = [bc_temp_left, bc_temp_right]

x = SpatialCoordinate(mesh)
dx = Measure("dx", domain=mesh, subdomain_data=cell_markers)
ds = Measure("ds", domain=mesh, subdomain_data=facet_markers)
the_current = 0.21 # Current, in amperes.
J = the_current / \
    assemble_scalar(form(1 * ds(inj_current_curve, domain=mesh)))

# Weak form.
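# As far as the terms below encode it: F_T is the weak form of
# -div(kappa*grad(T)) = rho*J.J (steady heat equation with Joule heating),
# and F_V the weak form of div(J) = 0 for the current density
# J = -sigma*grad(V) - sigma*S*grad(T), with the current entering and
# leaving through the two marked boundary curves (the ds terms).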
J_vector = -sigma * grad(volt) - sigma * Seebeck_tensor * grad(temp)
Fourier_term = dot(-kappa * grad(temp), grad(u)) * dx
Joule_term = dot(rho * J_vector, J_vector) * u * dx
F_T = Fourier_term + Joule_term
F_V = (-dot(grad(v), sigma * grad(volt)) * dx
       - dot(grad(v), sigma * Seebeck_tensor * grad(temp)) * dx
       - v * J * ds(out_current_curve)
       + v * J * ds(inj_current_curve))

weak_form = F_T + F_V

# Jacobian of the weak form.
Jac = derivative(weak_form, TempVolt, dTV)

print(f''' ------- Pre-processing --------
Current density: {J}
Length of the side where current is injected: {assemble_scalar(form(1 * ds(inj_current_curve, domain=mesh)))}
Length of the side where current leaves the wire: {assemble_scalar(form(1 * ds(out_current_curve, domain=mesh)))}
This should correspond to the current injected: {assemble_scalar(form(J*ds(out_current_curve)))}
''')


# Solve the PDE.
problem = NonlinearProblem(weak_form, TempVolt, bcs=bcs, J = Jac)
solver = NewtonSolver(MPI.COMM_WORLD, problem)
solver.convergence_criterion = "incremental"
solver.rtol = 1e-9
solver.report = True
solver.max_it = 10000

ksp = solver.krylov_solver
opts = PETSc.Options()
option_prefix = ksp.getOptionsPrefix()
opts[f"{option_prefix}ksp_type"] = "preonly"
opts[f"{option_prefix}pc_type"] = "lu"
opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
ksp.setFromOptions()
    
log.set_log_level(log.LogLevel.WARNING)
n, converged = solver.solve(TempVolt)
assert converged
print(f'''------- Processing --------
Number of iterations: {n:d}''')

I cannot provide the mesh file here due to Discourse's upload limitations.

Edit: I fixed the error using the suggestion given in PETSc error 76 dolfinx - #4 by hawkspar. In short, using ksp instead of lu fixed the problem.
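For reference, the fix amounts to replacing the direct LU factorisation with an iterative Krylov method, e.g. the CG + GAMG combination that is commented out in my snippet above (a sketch; the linked post has the exact settings):

ksp = solver.krylov_solver
opts = PETSc.Options()
option_prefix = ksp.getOptionsPrefix()
opts[f"{option_prefix}ksp_type"] = "cg"   # instead of "preonly"
opts[f"{option_prefix}pc_type"] = "gamg"  # instead of "lu" + MUMPS
ksp.setFromOptions()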

ksp_max_it should still be a workable option: KSPSetTolerances — PETSc 3.20.1 documentation

Without the actual mesh, I cannot give you a lot of guidance.

The mesh file can be created with the command gmsh -3 mesh.geo, where the .geo file is:

SetFactory("OpenCASCADE");
//+
Mesh.CharacteristicLengthMax = 0.01;
Mesh.CharacteristicLengthMin = 0.01;
Rectangle(1) = {0, 0, 0, 0.1, 0.3, 0};
//+
Rectangle(2) = {0, 0.2, 0, 0.3, 0.1, 0};
//+
//BooleanFragments{ Surface{2}; Delete; }{Surface{1}; Delete; }
//+
//BooleanFragments{ Curve{10}; Delete; }{Surface{3}; Delete; }
//BooleanFragments{ Curve{6}; Delete; }{Surface{2}; Delete; }


//+
//+
BooleanFragments{ Surface{2}; Surface{1}; Delete; }{ }
//+
BooleanFragments{ Surface{1}; Surface{3}; Delete; }{ }
//+

Physical Surface("material", 11) = {2, 1, 3};
//+
Physical Curve("bottom_left", 12) = {10};
//+
Physical Curve("top_right", 13) = {6};

Ah, maybe I should have used -2 instead of -3, since it is a 2D mesh.
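That is, presumably something like this (assuming the .geo file is saved as rectangles.geo, to match the path used in the script):

gmsh -2 rectangles.geo -o meshes/rectangles.msh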

I also have some difficulties saving the results for postprocessing with Paraview. I am ditching XDMF in favor of ADIOS2.
I tried:

ME.sub(0).name = "Temp"
with VTXWriter(MPI.COMM_WORLD, "results/temperature.bp", [temp], engine="BP4") as vtx:
    vtx.write(0.0)

but this yields

Traceback (most recent call last):
  File "/usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/io/utils.py", line 76, in __init__
    dtype = output.geometry.x.dtype  # type: ignore
AttributeError: 'list' object has no attribute 'geometry'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/io/utils.py", line 79, in __init__
    dtype = output.function_space.mesh.geometry.x.dtype  # type: ignore
AttributeError: 'list' object has no attribute 'function_space'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/shared/test.py", line 156, in <module>
    with VTXWriter(MPI.COMM_WORLD, "results/temperature.bp", [temp], engine="BP4") as vtx:
  File "/usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/io/utils.py", line 81, in __init__
    dtype = output[0].function_space.mesh.geometry.x.dtype  # type: ignore
AttributeError: 'Indexed' object has no attribute 'function_space'

You need to collapse the function, i.e.

TempVolt = Function(ME)
Output_0 = TempVolt.sub(0).collapse()
Output_1 = TempVolt.sub(1).collapse()

and write output 0 and 1 to file.
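A sketch of writing both components in one file, with names set so that Paraview shows something more descriptive than the default f (the output path is illustrative):

from dolfinx.io import VTXWriter

Output_0 = TempVolt.sub(0).collapse()
Output_1 = TempVolt.sub(1).collapse()
Output_0.name = "Temp"
Output_1.name = "Volt"
with VTXWriter(MPI.COMM_WORLD, "results/TempVolt.bp", [Output_0, Output_1], engine="BP4") as vtx:
    vtx.write(0.0)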

Thanks, that’s very helpful. I added the following code at the end of my code:

Output_0 = TempVolt.sub(0).collapse()
Output_1 = TempVolt.sub(1).collapse()

with VTXWriter(MPI.COMM_WORLD, "results/temperature.bp", [Output_0], engine="BP4") as vtx:
    vtx.write(0.0)

Paraview just crashes when I load the .bp file. From a terminal, I see this message:

[openvkl] application requested ISPC device width 8via device name cpu_8
[openvkl] CPU device instantiated with width: 8, ISA: AVX2
/usr/include/c++/13.2.1/bits/stl_vector.h:1125: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](size_type) [with _Tp = int; _Alloc = std::allocator<int>; reference = int&; size_type = long unsigned int]: Assertion '__n < this->size()' failed.

Loguru caught a signal: SIGABRT
Stack trace:
51      0x55f250e16085 /opt/paraview/bin/paraview(+0xc085) [0x55f250e16085]
50      0x7f3031de7d8a __libc_start_main + 138
49      0x7f3031de7cd0 /usr/lib/libc.so.6(+0x27cd0) [0x7f3031de7cd0]
48      0x55f250e14e2f /opt/paraview/bin/paraview(+0xae2f) [0x55f250e14e2f]
47      0x7f303009c313 QCoreApplication::exec() + 147
46      0x7f303009ae74 QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 308
45      0x7f30300eaf7c QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 108
44      0x7f3028d0c162 g_main_context_iteration + 50
43      0x7f3028d6c327 /usr/lib/libglib-2.0.so.0(+0xb8327) [0x7f3028d6c327]
42      0x7f3028d0df69 /usr/lib/libglib-2.0.so.0(+0x59f69) [0x7f3028d0df69]
41      0x7f3008f2f570 /usr/lib/libQt5XcbQpa.so.5(+0x65570) [0x7f3008f2f570]
40      0x7f303052a6f5 QWindowSystemInterface::sendWindowSystemEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 181
39      0x7f303054196c QGuiApplicationPrivate::processMouseEvent(QWindowSystemInterfacePrivate::MouseEvent*) + 1772
38      0x7f303009c168 QCoreApplication::notifyInternal2(QObject*, QEvent*) + 296
37      0x7f30313788ff QApplicationPrivate::notify_helper(QObject*, QEvent*) + 143
36      0x7f30313cec07 /usr/lib/libQt5Widgets.so.5(+0x1cec07) [0x7f30313cec07]
35      0x7f30313cd9b4 /usr/lib/libQt5Widgets.so.5(+0x1cd9b4) [0x7f30313cd9b4]
34      0x7f303137c0ea QApplicationPrivate::sendMouseEvent(QWidget*, QMouseEvent*, QWidget*, QWidget*, QWidget**, QPointer<QWidget>&, bool, bool) + 458
33      0x7f303009c168 QCoreApplication::notifyInternal2(QObject*, QEvent*) + 296
32      0x7f303137ddaf QApplication::notify(QObject*, QEvent*) + 4351
31      0x7f30313788ff QApplicationPrivate::notify_helper(QObject*, QEvent*) + 143
30      0x7f30313af1a1 QWidget::event(QEvent*) + 2257
29      0x7f30314f43fd /usr/lib/libQt5Widgets.so.5(+0x2f43fd) [0x7f30314f43fd]
28      0x7f30314f42db /usr/lib/libQt5Widgets.so.5(+0x2f42db) [0x7f30314f42db]
27      0x7f303137160b QAction::activate(QAction::ActionEvent) + 187
26      0x7f303136bbb7 QAction::triggered(bool) + 71
25      0x7f30300d1253 /usr/lib/libQt5Core.so.5(+0x2d1253) [0x7f30300d1253]
24      0x7f3031af1b1b /opt/paraview/lib/libpqApplicationComponents.so.1(+0xf1b1b) [0x7f3031af1b1b]
23      0x7f3031b899b2 pqLoadDataReaction::loadData() + 50
22      0x7f3031b892bc pqLoadDataReaction::loadData(QSet<QPair<QString, QString> > const&) + 5836
21      0x7f3031b84cd5 pqLoadDataReaction::loadFilesForSupportedTypes(QList<QStringList>) + 1461
20      0x7f3031b8ee5f pqLoadDataReaction::loadData(QStringList const&, QString const&, QString const&, pqServer*) + 1615
19      0x7f3031b8e32a pqLoadDataReaction::loadData(QList<QStringList> const&, QString const&, QString const&, pqServer*) + 1946
18      0x7f3031b8bd67 pqLoadDataReaction::DetermineFileReader(QString const&, pqServer*, vtkSMReaderFactory*, QPair<QString, QString>&) + 151
17      0x7f302fb9ccf4 vtkSMReaderFactory::GetReaders(char const*, vtkSMSession*) + 324
16      0x7f302fb9c7ee vtkSMReaderFactory::vtkInternals::vtkValue::CanReadFile(char const*, bool, std::vector<std::string> const&, vtkSMSession*, bool) + 334
15      0x7f302fb9c40d vtkSMReaderFactory::CanReadFile(char const*, vtkSMProxy*) + 365
14      0x7f302fb78ea2 vtkSMProxy::UpdateVTKObjects() + 50
13      0x7f302fbc4dea vtkSMSourceProxy::CreateVTKObjects() + 26
12      0x7f302fb797a1 vtkSMProxy::CreateVTKObjects() + 1201
11      0x7f302fab9ad8 vtkPVSessionBase::PushState(paraview_protobuf::Message*) + 88
10      0x7f302fae4c90 vtkSIProxy::Push(paraview_protobuf::Message*) + 64
9       0x7f302fae4829 vtkSIProxy::InitializeAndCreateVTKObjects(paraview_protobuf::Message*) + 2345
8       0x7f302fae9618 vtkSISourceProxy::ReadXMLAttributes(vtkPVXMLElement*) + 24
7       0x7f302fadea8b vtkSIProxy::ReadXMLAttributes(vtkPVXMLElement*) + 715
6       0x7f302fae5121 vtkSIProxy::ReadXMLProperty(vtkPVXMLElement*) + 385
5       0x7f302faf0070 vtkSIStringVectorProperty::ReadXMLAttributes(vtkSIProxy*, vtkPVXMLElement*) + 2400
4       0x7f302b0dd3b2 /usr/lib/libstdc++.so.6(+0xdd3b2) [0x7f302b0dd3b2]
3       0x7f3031de64b8 abort + 215
2       0x7f3031dfe668 raise + 24
1       0x7f3031e4e83c /usr/lib/libc.so.6(+0x8e83c) [0x7f3031e4e83c]
0       0x7f3031dfe710 /usr/lib/libc.so.6(+0x3e710) [0x7f3031dfe710]
(  22.025s) [paraview        ]                       :0     FATL| Signal: SIGABRT
zsh: IOT instruction (core dumped)  paraview

What version of Paraview are you running?

What is the output of bpls -a -l results/temperature.bp?

Paraview 5.11.2.

bpls -a -l results/temperature.bp
  uint32_t  NumberOfEntities     {1} = 1216 / 1216
  uint32_t  NumberOfNodes        {1} = 2553 / 2553
  int64_t   connectivity         [1]*{1216, 7} = 0 / 2552
  double    f                    [1]*{2553, 1} = 300 / 310
  double    geometry             [1]*{2553, 3} = -9.61481e-19 / 0.3
  double    step                 scalar = 0
  uint32_t  types                scalar = 69
  string    vtk.xml              attr   = 
<VTKFile type="UnstructuredGrid" version="0.1">
  <UnstructuredGrid>
    <Piece NumberOfPoints="NumberOfNodes" NumberOfCells="NumberOfCells">
      <Points>
        <DataArray Name="geometry" />
      </Points>
      <Cells>
        <DataArray Name="connectivity" />
        <DataArray Name="types" />
      </Cells>
      <PointData>
        <DataArray Name="TIME">step</DataArray>
        <DataArray Name="vtkOriginalPointIds" />
        <DataArray Name="vtkGhostType" />
        <DataArray Name="f" />
      </PointData>
    </Piece>
  </UnstructuredGrid>
</VTKFile>

  uint8_t   vtkGhostType         [1]*{2553} = 0 / 0
  int64_t   vtkOriginalPointIds  [1]*{2553} = 0 / 2552

If the problem is linear, and you’re failing to converge in one iteration, something is wrong with your formulation.
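If you do want the linear path, a minimal sketch with dolfinx.fem.petsc.LinearProblem (this assumes the residual is first rewritten in terms of a TrialFunction and split into a bilinear part a and a linear part L, which the code above does not do):

from dolfinx.fem.petsc import LinearProblem

# a and L are the assumed bilinear/linear parts of the weak form
problem = LinearProblem(a, L, bcs=bcs,
                        petsc_options={"ksp_type": "preonly",
                                       "pc_type": "lu",
                                       "pc_factor_mat_solver_type": "mumps"})
TempVolt = problem.solve()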


I'm not sure what is wrong with your .bp file.
If you can produce a minimal reproducible example, for instance:

  1. Create a standard unit square
  2. Define your function spaces
  3. Define a function in the mixed space and set data for the functions
  4. Store the collapsed components to file

I might be able to reproduce the issue and debug it.

I fixed the issue by not using the LU preconditioner. Now it converges in very few steps without any problem.

I am trying. What do you mean exactly by “set data for functions”?
Here’s my MWE:

import numpy as np
from dolfinx import log
from dolfinx.fem import (Constant, Function, FunctionSpace, assemble_scalar,
                         dirichletbc, form, locate_dofs_geometrical,
                         locate_dofs_topological)
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.io import XDMFFile, gmshio, VTXWriter
from dolfinx.nls.petsc import NewtonSolver
from mpi4py import MPI
from petsc4py import PETSc
from petsc4py.PETSc import ScalarType
from ufl import (FiniteElement, Measure, MixedElement, SpatialCoordinate,
                 TestFunction, TrialFunction, as_tensor, dot, dx, grad, inner,
                 inv, split, derivative)
import gmsh
from dolfinx.mesh import create_unit_square


mesh = create_unit_square(MPI.COMM_WORLD, 15, 15)

# Define ME function space
el = FiniteElement("CG", mesh.ufl_cell(), 2)
mel = MixedElement([el, el])
ME = FunctionSpace(mesh, mel)

# Define the functions in the ME space.
TempVolt = Function(ME)
temp, volt = split(TempVolt)


Output_0 = TempVolt.sub(0).collapse()
Output_1 = TempVolt.sub(1).collapse()

with VTXWriter(MPI.COMM_WORLD, "results/temperature.bp", [Output_0], engine="BP4") as vtx:
    vtx.write(0.0)

For a linear problem, any Newton iteration count greater than one would require thorough investigation before I could trust my formulation to be correctly discretised.

By “set data for functions” I mean interpolating something into each sub-function, e.g.

TempVolt.sub(0).interpolate(lambda x: x[0])
TempVolt.sub(1).interpolate(lambda x: x[1])

I cannot reproduce the error message on my system (Docker, DOLFINx v0.7.1, Paraview 5.11.0):

from dolfinx.fem import Function, FunctionSpace
from mpi4py import MPI
from ufl import FiniteElement, MixedElement, split
from dolfinx.io import VTXWriter
from dolfinx.mesh import create_unit_square


mesh = create_unit_square(MPI.COMM_WORLD, 15, 15)

# Define ME function space
el = FiniteElement("CG", mesh.ufl_cell(), 2)
mel = MixedElement([el, el])
ME = FunctionSpace(mesh, mel)

# Define the functions in the ME space.
TempVolt = Function(ME)
TempVolt.sub(0).interpolate(lambda x: x[0])
TempVolt.sub(1).interpolate(lambda x: x[1])
temp, volt = split(TempVolt)


Output_0 = TempVolt.sub(0).collapse()
Output_1 = TempVolt.sub(1).collapse()

with VTXWriter(MPI.COMM_WORLD, "results/temperature.bp", [Output_0], engine="BP4") as vtx:
    vtx.write(0.0)

It converges in 4 steps now that I stopped using the LU preconditioner. The results seem to make sense at first glance, and they scale as they should when I modify several parameters. I haven't compared with the analytical solution, but they do seem to hold up well.

I also use Docker, but with FEniCSx 0.7.0. In the Docker container, pip list returns:

Package                       Version
----------------------------- -----------
alabaster                     0.7.13
asttokens                     2.4.0
attrs                         23.1.0
Babel                         2.12.1
backcall                      0.2.0
certifi                       2023.7.22
cffi                          1.16.0
charset-normalizer            3.3.0
clang-format                  17.0.1
cmakelang                     0.6.13
contourpy                     1.1.1
cppimport                     22.8.2
cycler                        0.12.0
decorator                     5.1.1
docutils                      0.18.1
exceptiongroup                1.1.3
execnet                       2.0.2
executing                     2.0.0
fastjsonschema                2.18.1
fenics-basix                  0.7.0
fenics-dolfinx                0.7.0
fenics-ffcx                   0.7.0
fenics-ufl                    2023.2.0
filelock                      3.12.4
flake8                        6.1.0
fonttools                     4.43.0
gmsh                          4.11.1.dev1
idna                          3.4
imagesize                     1.4.1
iniconfig                     2.0.0
ipython                       8.16.1
isort                         5.12.0
jedi                          0.19.1
Jinja2                        3.1.2
jsonschema                    4.19.1
jsonschema-specifications     2023.7.1
jupyter_core                  5.3.2
jupytext                      1.15.2
kiwisolver                    1.4.5
llvmlite                      0.41.0
Mako                          1.2.4
markdown-it-py                3.0.0
MarkupSafe                    2.1.3
matplotlib                    3.8.0
matplotlib-inline             0.1.6
mccabe                        0.7.0
mdit-py-plugins               0.4.0
mdurl                         0.1.2
mpi4py                        3.1.4
mypy                          1.5.1
mypy-extensions               1.0.0
myst-parser                   2.0.0
nbformat                      5.9.2
numba                         0.58.0
numpy                         1.23.2
packaging                     23.1
parso                         0.8.3
petsc4py                      3.20.0
pexpect                       4.8.0
pickleshare                   0.7.5
Pillow                        10.0.1
pip                           23.2.1
platformdirs                  3.10.0
pluggy                        1.3.0
prompt-toolkit                3.0.39
ptyprocess                    0.7.0
pure-eval                     0.2.2
pybind11                      2.11.1
pycodestyle                   2.11.0
pycparser                     2.21
pyflakes                      3.1.0
Pygments                      2.16.1
pyparsing                     3.1.1
pytest                        7.4.2
pytest-xdist                  3.3.1
python-dateutil               2.8.2
PyYAML                        6.0.1
referencing                   0.30.2
requests                      2.31.0
rpds-py                       0.10.3
scipy                         1.11.3
setuptools                    68.2.2
six                           1.16.0
slepc4py                      3.20.0
snowballstemmer               2.2.0
Sphinx                        7.2.6
sphinx-rtd-theme              1.3.0
sphinxcontrib-applehelp       1.0.7
sphinxcontrib-devhelp         1.0.5
sphinxcontrib-htmlhelp        2.0.4
sphinxcontrib-jquery          4.1
sphinxcontrib-jsmath          1.0.1
sphinxcontrib-qthelp          1.0.6
sphinxcontrib-serializinghtml 1.1.9
stack-data                    0.6.3
toml                          0.10.2
tomli                         2.0.1
traitlets                     5.10.1
typing_extensions             4.8.0
urllib3                       2.0.5
wcwidth                       0.2.8
wheel                         0.37.1

Paraview on my system shows:

Client Information:
Version: 5.11.2
VTK Version: 9.2.20220823
Qt Version: 5.15.11
vtkIdType size: 64bits
Embedded Python: On
Python Library Path: /usr/lib/python3.11
Python Library Version: 3.11.6 (main, Nov 14 2023, 09:36:21) [GCC 13.2.1 20230801]
Python Numpy Support: On
Python Numpy Path: /usr/lib/python3.11/site-packages/numpy
Python Numpy Version: 1.26.2
Python Matplotlib Support: On
Python Matplotlib Path: /usr/lib/python3.11/site-packages/matplotlib
Python Matplotlib Version: 3.8.1
Python Testing: Off
MPI Enabled: On
Disable Registry: Off
Test Directory: 
Data Directory: 
SMP Backend: TBB
SMP Max Number of Threads: 8
OpenGL Vendor: Intel
OpenGL Version: 4.6 (Core Profile) Mesa 23.2.1-arch1.2
OpenGL Renderer: Mesa Intel(R) HD Graphics 630 (KBL GT2)
Accelerated filters overrides available: No

Connection Information:
Remote Connection: No

And I also get a core dump with your code…

Cannot reproduce with the binary (5.12) from the paraview repository:

Client Information:
Version: 5.12.0-RC1
VTK Version: 9.3.20231030
Qt Version: 5.15.10
vtkIdType size: 64bits
Embedded Python: On
Python Library Path: /home/dokken/Downloads/ParaView-5.12.0-RC1-MPI-Linux-Python3.10-x86_64/lib/python3.10
Python Library Version: 3.10.13 (main, Nov  7 2023, 20:18:03) [GCC 10.2.1 20210130 (Red Hat 10.2.1-11)]
Python Numpy Support: On
Python Numpy Path: /home/dokken/Downloads/ParaView-5.12.0-RC1-MPI-Linux-Python3.10-x86_64/lib/python3.10/site-packages/numpy
Python Numpy Version: 1.25.2
Python Matplotlib Support: On
Python Matplotlib Path: /home/dokken/Downloads/ParaView-5.12.0-RC1-MPI-Linux-Python3.10-x86_64/lib/python3.10/site-packages/matplotlib
Python Matplotlib Version: 3.7.2
Python Testing: Off
MPI Enabled: On
ParaView Build ID: superbuild b027a1fecb213766f402119f1f8b66d05ec6fc92 (!1129)
Disable Registry: Off
Test Directory: 
Data Directory: 
SMP Backend: TBB
SMP Max Number of Threads: 8
OpenGL Vendor: Intel
OpenGL Version: 4.6 (Core Profile) Mesa 21.2.6
OpenGL Renderer: Mesa Intel(R) HD Graphics 630 (KBL GT2)
Accelerated filters overrides available: No

Connection Information:
Remote Connection: No

or with 5.11.2:

Client Information:
Version: 5.11.2
VTK Version: 9.2.20220823
Qt Version: 5.15.2
vtkIdType size: 64bits
Embedded Python: On
Python Library Path: /home/dokken/Downloads/ParaView-5.11.2-MPI-Linux-Python3.9-x86_64/lib/python3.9
Python Library Version: 3.9.13 (main, Sep 22 2023, 19:15:44)  [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)]
Python Numpy Support: On
Python Numpy Path: /home/dokken/Downloads/ParaView-5.11.2-MPI-Linux-Python3.9-x86_64/lib/python3.9/site-packages/numpy
Python Numpy Version: 1.21.1
Python Matplotlib Support: On
Python Matplotlib Path: /home/dokken/Downloads/ParaView-5.11.2-MPI-Linux-Python3.9-x86_64/lib/python3.9/site-packages/matplotlib
Python Matplotlib Version: 3.2.1
Python Testing: Off
MPI Enabled: On
ParaView Build ID: superbuild 73cd3c6ca7e0a1a711136de246a826d8857bdaed (!1119)
Disable Registry: Off
Test Directory: 
Data Directory: 
SMP Backend: TBB
SMP Max Number of Threads: 8
OpenGL Vendor: Intel
OpenGL Version: 4.6 (Core Profile) Mesa 21.2.6
OpenGL Renderer: Mesa Intel(R) HD Graphics 630 (KBL GT2)
Accelerated filters overrides available: No

Connection Information:
Remote Connection: No

To me it looks like you installed Paraview yourself. Did you compile Paraview with ADIOS2 support?

Yes, I installed it outside the Docker container, on my Arch Linux system. It seems to have ADIOS2 support: the adios2 package is installed and listed as a dependency of Paraview (Arch Linux - paraview 5.11.2-4 (x86_64)).

Note that I didn't compile Paraview myself; I took it from the official Arch repository.