Hello all,
I know this topic (reading a mesh from gmsh) has been posted and answered several times, and the documentation explains the subdomains very clearly. I have tried this with different geometries before and it worked fine.
However, for this particular case I am now hitting a weird malloc/segfault while reading the mesh tags of the mesh imported from gmsh, even though the tags are written without any problem, which I verified (see the quick check sketched after the MWE).
Here is the MWE:
import dolfinx
from mpi4py import MPI
import meshio
import numpy as np
from dolfinx.io import XDMFFile
comm = MPI.COMM_WORLD
rank = comm.rank
def create_mesh(mesh, cell_type, prune_z=False):
    cells = mesh.get_cells_type(cell_type)
    cell_data = mesh.get_cell_data("gmsh:physical", cell_type)
    points = mesh.points[:, :2] if prune_z else mesh.points
    out_mesh = meshio.Mesh(points=points, cells={cell_type: cells},
                           cell_data={"name_to_read": [cell_data.astype(np.int32)]})
    return out_mesh

if rank == 0:
    domain = meshio.read("dp.msh")
    # Create and save one file for the mesh, and one file for the facets
    triangle_mesh = create_mesh(domain, "triangle", prune_z=True)
    line_mesh = create_mesh(domain, "line", prune_z=True)
    meshio.write("mesh.xdmf", triangle_mesh)
    meshio.write("mt.xdmf", line_mesh)

MPI.COMM_WORLD.barrier()

with XDMFFile(MPI.COMM_WORLD, "mesh.xdmf", "r") as xdmf:
    mesh = xdmf.read_mesh(name="Grid")
    ct = xdmf.read_meshtags(mesh, name="Grid")
mesh.topology.create_connectivity(mesh.topology.dim, mesh.topology.dim - 1)

with XDMFFile(MPI.COMM_WORLD, "mt.xdmf", "r") as xdmf:
    ft = xdmf.read_meshtags(mesh, name="Grid")
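For reference, the quick check mentioned above is roughly the following (only a sketch of the kind of inspection, not the exact script): read the converted files back with meshio and look at the cell blocks and the "name_to_read" tag arrays.

import meshio

for fname in ("mesh.xdmf", "mt.xdmf"):
    m = meshio.read(fname)
    # cell blocks present in the file (e.g. "triangle" or "line")
    print(fname, list(m.cells_dict.keys()))
    # shapes of the tag arrays written under the name used in create_mesh above
    print({ctype: data.shape for ctype, data in m.cell_data_dict["name_to_read"].items()})

Both files look as expected when read back this way, which is why I think the writing step itself is fine.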
I don’t see an option to upload my .msh file here; in case you need it to try on your end, please let me know and I will paste it in the comments.
I get the following error while reading the tags:
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is causing the crash.
Abort(59) on node 0 (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
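In case it helps to narrow things down, a possible cross-check (just a sketch, assuming a dolfinx version where dolfinx.io.gmshio.read_from_msh returns a (mesh, cell_tags, facet_tags) tuple) would be to read dp.msh directly with the built-in gmsh reader, bypassing the meshio conversion, and see whether the tags can be read that way:

from mpi4py import MPI
from dolfinx.io import gmshio

# gdim=2 because the geometry is planar (the z-coordinate is pruned above)
mesh, cell_tags, facet_tags = gmshio.read_from_msh("dp.msh", MPI.COMM_WORLD, rank=0, gdim=2)
print(mesh.topology.index_map(mesh.topology.dim).size_local,
      cell_tags.indices.shape, facet_tags.indices.shape)

If that works, the problem is probably in the conversion/reading of the XDMF files rather than in the .msh file itself.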
Thanks!