Parallelize zeroing rows operation

Hello,

I wonder how to change the following lines of code to make them work in parallel, as I could not find the corresponding PETSc functions for local rows and matrices. Any help would be appreciated, thank you very much.

zero_rows = A.findZeroRows().array
diagonal_values = A.getDiagonal().array
diagonal_values[zero_rows] = 1.0
diagonal_petsc = PETSc.Vec().createWithArray(diagonal_values)

A.setOption(PETSc.Mat.Option.NEW_NONZERO_ALLOCATION_ERR, False)
A.setDiagonal(diagonal_petsc)
diagonal_petsc.destroy()

see:

and the more efficient version, which leverages a priori information about where the non-zeros are, as shown in: Modify matrix diagonal -- dolfinx version for A.ident_zeros() - #3 by dokken

Hi @dokken, and thank you. I see that in the examples you provided, the zero blocks do correspond to the boundary conditions, which is not the case for me. This is why I am using findZeroRows(), but apparently it does not have a parallel/local counterpart. In the code above, I get an out-of-range error when writing diagonal_values[zero_rows] = 1.0

In the original post where findZeroRows is used (Modify matrix diagonal -- dolfinx version for A.ident_zeros()), it works in parallel, as it finds the local indices:

start = time.perf_counter()
IS_zeros = A.findZeroRows()
end = time.perf_counter()
print(f"Find zero rows: {end-start}s")
idx_zeros = IS_zeros.getIndices()
local_range = A.getOwnershipRange()
local_idx = idx_zeros - local_range[0]

Out of interest, what do your zeros correspond to / stem from? What kind of problem leaves you with a system with zero rows?

Hi @dokken and thanks for the reply. Then perhaps .getDiagonal() does not return what I expect?

To answer your question, it is an FSI problem; the zeros stem from the pressure in the solid domain.

You should rather use submeshes, as done in:
