Below, I have attached part of the syntax. Is it fine to compute the drag and lift coefficients like this? The problem I am facing is in plotting: c_drag contains FEniCS form objects (printed as f_1 and so on), not numbers, so I don't know how to plot these quantities against time. I need a plot similar to the one in the link above.

Hope to hear from you soon.

F = (idt*dot(v, u)*dx - idt*dot(v, u_n)*dx
     + nu*0.5*inner(D(u), D(v))*dx
     + inner(dot(u_n, nabla_grad(u)), v)*dx
     - p*div(v)*dx - div(u)*q*dx
     - dot(f1, v)*dx - f2*q*dx)
n = FacetNormal(mesh)
dObs = Measure("ds", domain=mesh, subdomain_data=3)
c_drag = []
c_lift = []
Time = []
for z in range(num_steps):
    t = (z + 1)*dt
    u_exact.t = t
    solve(F == 0, up, bc,
          solver_parameters={"newton_solver": {"linear_solver": "mumps",
                                               "relative_tolerance": 1e-6}})
    u, p, rho = up.split()
    u_t = inner(as_vector((-n[1], n[0], 0)), u)
    drag = 2 / 0.1 * (nu * inner(grad(u_t), n) * n[1] - p * n[0]) * dObs
    lift = -2 / 0.1 * (nu * inner(grad(u_t), n) * n[0] + p * n[1]) * dObs
    c_drag.append(drag)
    c_lift.append(lift)
    Time.append(t)
    assign(u_n, u)
    assign(p_n, p)
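For reference, the drag and lift integrals above are intended to be the DFG benchmark force coefficients with density $\rho = 1$, mean inflow velocity $\bar{U} = 1$ and cylinder diameter $L = 0.1$ (hence the factor $2/0.1$):

$$C_D = \frac{2}{\rho \bar{U}^2 L} \oint_S \left( \rho \nu \frac{\partial u_t}{\partial n} n_y - p\, n_x \right) \mathrm{d}S, \qquad C_L = -\frac{2}{\rho \bar{U}^2 L} \oint_S \left( \rho \nu \frac{\partial u_t}{\partial n} n_x + p\, n_y \right) \mathrm{d}S,$$

where $u_t$ is the tangential velocity on the obstacle boundary $S$.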

No, this is not sufficient.
Please look through the rest of the example, as it shows that you need to call dolfinx.fem.assemble_scalar(drag) and do the relevant MPI communication if running in parallel.

# Compute physical quantities.
# For this to work in parallel, we gather contributions from all processors
# to processor zero and sum the contributions.
drag_coeff = mesh.comm.gather(assemble_scalar(drag), root=0)
lift_coeff = mesh.comm.gather(assemble_scalar(lift), root=0)
if mesh.comm.rank == 0:
    t_u[i] = t
    t_p[i] = t - dt / 2
    C_D[i] = sum(drag_coeff)
    C_L[i] = sum(lift_coeff)

You are not appending values. You are appending a symbolic expression. To convert the symbolic expression into a scalar value, you need to call assemble_scalar.
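Here is a minimal sketch of that pattern in DOLFINx, assuming drag and lift are the UFL expressions from your loop (in recent DOLFINx versions the expression must be wrapped with dolfinx.fem.form before assembling):

from mpi4py import MPI
from dolfinx import fem

# Compile the UFL expressions into forms (do this once if the
# expressions do not change between time steps).
drag_form = fem.form(drag)
lift_form = fem.form(lift)

# assemble_scalar returns only this rank's contribution, so gather the
# contributions on rank 0 and sum them, as in the tutorial snippet above.
drag_coeff = mesh.comm.gather(fem.assemble_scalar(drag_form), root=0)
lift_coeff = mesh.comm.gather(fem.assemble_scalar(lift_form), root=0)
if mesh.comm.rank == 0:
    c_drag.append(sum(drag_coeff))
    c_lift.append(sum(lift_coeff))
    Time.append(t)

Once c_drag and c_lift hold plain floats, plotting them against Time with matplotlib is straightforward.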

Could you refer me to a link that shows how to plot the drag and lift using legacy DOLFIN?
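In legacy DOLFIN there is no assemble_scalar; plain assemble on a scalar form returns the value directly as a float, so the same idea would look like:

drag_coeff = assemble(drag)   # a float, already reduced over MPI ranks
c_drag.append(drag_coeff)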

The trick you suggested gives an error:

for z in range(num_steps):
    t = (z + 1)*dt
    solve(F == 0, up, bc,
          solver_parameters={"newton_solver": {"linear_solver": "mumps",
                                               "relative_tolerance": 1e-6}})
    u, p, rho = up.split()
    u_t = inner(as_vector((-n[1], n[0], 0)), u)
    drag = 2 / 0.1 * (nu * inner(grad(u_t), n) * n[1] - p * n[0]) * dObs
    lift = -2 / 0.1 * (nu * inner(grad(u_t), n) * n[0] + p * n[1]) * dObs
    if MPI.rank(mesh.mpi_comm()) == 0:
        C_D = np.zeros(num_steps, dtype=PETSc.ScalarType)
        C_L = np.zeros(num_steps, dtype=PETSc.ScalarType)
        t_u = np.zeros(num_steps, dtype=np.float64)
        t_p = np.zeros(num_steps, dtype=np.float64)
    drag_coeff = assemble(drag)
    lift_coeff = assemble(lift)
    if mesh.comm.rank == 0:
        t_u[z] = t
        t_p[z] = t - dt / 2
        C_D[z] = sum(drag_coeff)
        C_L[z] = sum(lift_coeff)

error:

  drag_coeff = assemble(drag)
  File "/usr/lib/petsc/lib/python3/dist-packages/dolfin/fem/assembling.py", line 202, in assemble
    dolfin_form = _create_dolfin_form(form, form_compiler_parameters)
  File "/usr/lib/petsc/lib/python3/dist-packages/dolfin/fem/assembling.py", line 60, in _create_dolfin_form
    return Form(form,
  File "/usr/lib/petsc/lib/python3/dist-packages/dolfin/fem/form.py", line 85, in __init__
    self.set_exterior_facet_domains(subdomains)
TypeError: set_exterior_facet_domains(): incompatible function arguments. The following argument types are supported:
    1. (self: dolfin.cpp.fem.Form, arg0: dolfin.cpp.mesh.MeshFunctionSizet) -> None
Invoked with: <dolfin.fem.form.Form object at 0x7fdd8e074680>, 3
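The TypeError itself points at the problem: subdomain_data must be a facet MeshFunction (a MeshFunctionSizet), not the integer tag 3. A minimal sketch of the fix, assuming the obstacle facets carry the tag 3 in a facet MeshFunction (here hypothetically named boundary_markers, e.g. built from the gmsh physical groups during mesh conversion):

boundary_markers = MeshFunction("size_t", mesh, mesh.topology().dim() - 1, 0)
# ... mark the obstacle facets with 3, via a SubDomain or from the .msh data ...
dObs = Measure("ds", domain=mesh, subdomain_data=boundary_markers)
drag = 2 / 0.1 * (nu * inner(grad(u_t), n) * n[1] - p * n[0]) * dObs(3)
lift = -2 / 0.1 * (nu * inner(grad(u_t), n) * n[0] + p * n[1]) * dObs(3)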

I am getting an error regarding MPI. I just want to confirm that the way I am computing these is fine.

from dolfin import *
from ufl import JacobianInverse, indices
from dolfin import project
from petsc4py import PETSc
import numpy as np
import meshio
from mpi4py import MPI
# Initialize MPI
MPI.Init()
# Get the MPI communicator
comm = MPI.COMM_WORLD
# Your code using MPI goes here
# Finalize MPI
MPI.Finalize()
dim = 3
msh = meshio.read("test.msh")
.
.
.
.
c_drag = []
c_lift = []
Time = []
for z in range(num_steps):
    t = (z + 1)*dt
    solve(F == 0, up, bc,
          solver_parameters={"newton_solver": {"linear_solver": "mumps",
                                               "relative_tolerance": 1e-6}})
    u, p, rho = up.split()
    u_t = inner(as_vector((-n[1], n[0], 0)), u)
    drag = 2 / 0.1 * (nu * inner(grad(u_t), n) * n[1] - p * n[0]) * ds(3)
    lift = -2 / 0.1 * (nu * inner(grad(u_t), n) * n[0] + p * n[1]) * ds(3)
    if MPI.rank(mesh.mpi_comm()) == 0:
        C_D = np.zeros(num_steps, dtype=PETSc.ScalarType)
        C_L = np.zeros(num_steps, dtype=PETSc.ScalarType)
        t_u = np.zeros(num_steps, dtype=np.float64)
        t_p = np.zeros(num_steps, dtype=np.float64)
    drag_coeff = assemble(drag)
    lift_coeff = assemble(lift)
    comm = MPI.COMM_WORLD
    if comm.rank == 0:
        t_u[z] = t
        t_p[z] = t - dt / 2
        C_D[z] = sum(drag_coeff)
        C_L[z] = sum(lift_coeff)

error:

hwloc/linux: Ignoring PCI device with non-16bit domain.
Pass --enable-32bits-pci-domain to configure to support such devices
(warning: it would break the library ABI, don't enable unless really needed).
hwloc/linux: Ignoring PCI device with non-16bit domain.
Pass --enable-32bits-pci-domain to configure to support such devices
(warning: it would break the library ABI, don't enable unless really needed).
--------------------------------------------------------------------------
Open MPI has detected that this process has attempted to initialize
MPI (via MPI_INIT or MPI_INIT_THREAD) more than once. This is
erroneous.
--------------------------------------------------------------------------
[ashini-Precision-5820-Tower:36038] *** An error occurred in MPI_Init
[ashini-Precision-5820-Tower:36038] *** reported by process [2019164161,0]
[ashini-Precision-5820-Tower:36038] *** on a NULL communicator
[ashini-Precision-5820-Tower:36038] *** Unknown error
[ashini-Precision-5820-Tower:36038] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[ashini-Precision-5820-Tower:36038] *** and potentially your MPI job)

Solving nonlinear variational problem.
Newton iteration 0: r (abs) = 8.808e+01 (tol = 1.000e-10) r (rel) = 1.000e+00 (tol = 1.000e-06)
Newton iteration 1: r (abs) = 8.754e-14 (tol = 1.000e-10) r (rel) = 9.938e-16 (tol = 1.000e-06)
Newton solver finished in 1 iterations and 1 linear solver iterations.
Traceback (most recent call last):
  File "/home/ayush/Downloads/aparna/flow_past_cylinder_noslip.py", line 140, in <module>
    C_D[z] = sum(drag_coeff)
TypeError: 'float' object is not iterable
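Two separate problems show up in this run. The MPI abort happens because you call MPI.Init() yourself: importing dolfin (or mpi4py) already initializes MPI, so drop the explicit MPI.Init()/MPI.Finalize() calls. The final TypeError happens because, in legacy DOLFIN, assemble() on a scalar form already returns a single float summed over all MPI ranks, so there is nothing to sum(); store it directly. A minimal sketch of the corrected loop under those assumptions (F, up, bc, u_n, p_n, nu, n, dt and num_steps as in your script, and ds(3) defined with a facet MeshFunction as sketched earlier):

C_D = np.zeros(num_steps)
C_L = np.zeros(num_steps)
t_u = np.zeros(num_steps)
for z in range(num_steps):
    t = (z + 1)*dt
    solve(F == 0, up, bc,
          solver_parameters={"newton_solver": {"linear_solver": "mumps",
                                               "relative_tolerance": 1e-6}})
    u, p, rho = up.split()
    u_t = inner(as_vector((-n[1], n[0], 0)), u)
    drag = 2 / 0.1 * (nu * inner(grad(u_t), n) * n[1] - p * n[0]) * ds(3)
    lift = -2 / 0.1 * (nu * inner(grad(u_t), n) * n[0] + p * n[1]) * ds(3)
    # assemble() returns the globally reduced scalar, so assign it directly:
    C_D[z] = assemble(drag)
    C_L[z] = assemble(lift)
    t_u[z] = t
    assign(u_n, u)
    assign(p_n, p)

if MPI.comm_world.rank == 0:  # dolfin's MPI wrapper, not mpi4py's MPI
    import matplotlib.pyplot as plt
    plt.plot(t_u, C_D, label="C_D")
    plt.plot(t_u, C_L, label="C_L")
    plt.xlabel("t")
    plt.legend()
    plt.savefig("coefficients.png")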