Error: install fenics on Ubuntu 22.04.3

Hello,
When I try to install the "fenics" package in PyCharm, I face this error: "Error occurred when installing package 'fenics'".
I have attached the terminal history below:

```bash
mhh@mhh:~/pyadjoint$ sudo apt-get install software-properties-common
[sudo] password for mhh:
Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
software-properties-common is already the newest version (0.99.22.7).
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
mhh@mhh:~/pyadjoint$ sudo add-apt-repository ppa:fenics-packages/fenics
Repository: 'deb https://ppa.launchpadcontent.net/fenics-packages/fenics/ubuntu jammy main'
Description:
This PPA provides packages for the FEniCS project (http://fenicsproject.org).
More info: https://launchpad.net/~fenics-packages/+archive/ubuntu/fenics
Adding repository.
Press [ENTER] to continue or Ctrl-c to cancel.
Found existing deb entry in /etc/apt/sources.list.d/fenics-packages-ubuntu-fenics-jammy.list
Adding deb entry to /etc/apt/sources.list.d/fenics-packages-ubuntu-fenics-jammy.list
Found existing deb-src entry in /etc/apt/sources.list.d/fenics-packages-ubuntu-fenics-jammy.list
Adding disabled deb-src entry to /etc/apt/sources.list.d/fenics-packages-ubuntu-fenics-jammy.list
Adding key to /etc/apt/trusted.gpg.d/fenics-packages-ubuntu-fenics.gpg with fingerprint 2C5275D7EF63D9DE2D28D3702940F5212B746472
Get:1 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy InRelease
Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates InRelease [119 kB]
Hit:5 https://ppa.launchpadcontent.net/fenics-packages/fenics/ubuntu jammy InRelease
Get:6 http://security.ubuntu.com/ubuntu jammy-security/main amd64 DEP-11 Metadata [43.0 kB]
Get:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-backports InRelease [109 kB]
Get:7 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/main amd64 DEP-11 Metadata [101 kB]
Get:8 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/restricted amd64 Packages [1,105 kB]
Get:9 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 DEP-11 Metadata [55.1 kB]
Get:10 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/restricted i386 Packages [32.8 kB]
Get:11 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/universe amd64 DEP-11 Metadata [304 kB]
Get:12 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/multiverse amd64 Packages [41.6 kB]
Get:13 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/multiverse amd64 DEP-11 Metadata [940 B]
Get:14 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-backports/main amd64 DEP-11 Metadata [4,944 B]
Get:15 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-backports/universe amd64 DEP-11 Metadata [18.8 kB]
Fetched 2,045 kB in 3s (609 kB/s)
Reading package lists… Done
mhh@mhh:~/pyadjoint$ sudo apt-get update
Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy InRelease
Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates InRelease
Hit:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu jammy-security InRelease
Hit:5 https://ppa.launchpadcontent.net/fenics-packages/fenics/ubuntu jammy InRelease
Reading package lists… Done
mhh@mhh:~/pyadjoint$ sudo apt-get install fenics
Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
fenics is already the newest version (2:0.7.0.2~ppa1~jammy1).
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
mhh@mhh:~/pyadjoint$ pip install git+https://github.com/dolfin-adjoint/pyadjoint.git@2019.1.0
Defaulting to user installation because normal site-packages is not writeable
Collecting git+https://github.com/dolfin-adjoint/pyadjoint.git@2019.1.0
Cloning https://github.com/dolfin-adjoint/pyadjoint.git (to revision 2019.1.0) to /tmp/pip-req-build-mcppjylj
Running command git clone --filter=blob:none --quiet https://github.com/dolfin-adjoint/pyadjoint.git /tmp/pip-req-build-mcppjylj
Running command git checkout -q 6c2363bc23f0260a7a1988852c358105f31d38c7
Resolved https://github.com/dolfin-adjoint/pyadjoint.git to commit 6c2363bc23f0260a7a1988852c358105f31d38c7
Preparing metadata (setup.py) … done
Requirement already satisfied: scipy>=1.0 in /usr/lib/python3/dist-packages (from dolfin-adjoint==2019.1.0) (1.8.0)
mhh@mhh:~/pyadjoint$ git clone https://bitbucket.org/dolfin-adjoint/pyadjoint.git
fatal: destination path 'pyadjoint' already exists and is not an empty directory.
mhh@mhh:~/pyadjoint$ rm -rf pyadjoint
mhh@mhh:~/pyadjoint$ git clone https://bitbucket.org/dolfin-adjoint/pyadjoint.git
Cloning into 'pyadjoint'...
Receiving objects: 100% (10836/10836), 35.18 MiB | 309.00 KiB/s, done.
Resolving deltas: 100% (7373/7373), done.
mhh@mhh:~/pyadjoint$
```

There is no error message in what you have presented here. Please present the error message and use 3x` encapsulation, i.e.

```bash
Add output here
```

There is nothing wrong with your fenics installation. Your log says you have installed it.

You got an error from pyadjoint, not from fenics. Actually, not even that:

```bash
$ git clone https://bitbucket.org/dolfin-adjoint/pyadjoint.git
Cloning into 'pyadjoint'...
Receiving objects: 100% (10836/10836), 35.18 MiB | 309.00 KiB/s, done.
Resolving deltas: 100% (7373/7373), done.
```

You’ve successfully cloned pyadjoint.

What is your error exactly?

Thank you for your response. The error appears when I run the code in Python, as you can see in the attached screenshot.


the packages that have been imported in the code:

This is the whole code:
```python
“”"
This code is made available under a BSD 3-clause license. See LICENSE for further information.

This script is a minimal working example of how to perform gradient-based topology optimisation
based on a homogenisation interpolation and GCMMA with FEniCS. This exaple minimises the temporal
average variance of the spatial average temperature at the heat source boundary by placing phase
change material (PCM) and highly thermally conductive material (HCM).

To run this you need

If you do not have a linux computer available a virtual machine with Linux can be used.
I have had good experiences with WMware and Oracle VM Virtualbox.

By Mark Christensen
28-07-23

Disclaimer:
The author does not guarantee that the code is free from errors.
Furthermore, the author shall not be liable in any event caused by the use of the program.
“”"

#####################################################
# LIBRARIES
#####################################################

from fenics import *
from fenics_adjoint import * # Note: It is important that fenics_adjoint is called after fenics
import matplotlib.pyplot as plt
import numpy as np
import statistics as stat

from mma import gcmmasub,subsolv,kktcheck,asymp,concheck,raaupdate # Import MMA functions

set_log_level(LogLevel.ERROR)       # Only print a message if it is an error message

#####################################################
# PREPROCESSING
#####################################################

# ----- PHYSICAL MODEL DATA -----

# Material constants
khcm = 10               # Thermal conductivity of HCM [W/(m*K)]
cphcm = 1               # Specific heat capacity of HCM [J/(K*kg)]
rhohcm = 1              # Mass density of HCM [kg/m³]

kpcm = 1e-3*khcm        # Thermal conductivity of PCM [W/(m*K)]
cppcm = 1               # Specific heat capacity of PCM [J/(K*kg)]
rhopcm = 1              # Mass density of PCM [kg/m³]
Tmelt = 0.5             # Melting temperature of PCM [K]
dTmelt = 0.5            # Melting temperature range of PCM [K]
Lheat = 10              # Latent heat of fusion [J/kg]

# Thermal data
qheat = 1               # Heat production rate in electronics [W]
w = 1                   # Oscillation frequency of heat source [Hz]
hconv = 5               # Heat transfer coefficient [W/(m²*K)]
Tinf = 0                # Temperature of surroundings [K]
Tinitial = 0            # Initial temperature [K]

# Temporal data
tfin = 20               # Final time [s]
num_steps = 500         # Number of time steps
dt = tfin/num_steps     # Time step size [s]

# Finite element data
nx = ny = 100           # Number of elements in x and y direction
lx = ly = 1             # Dimensions of the 2D heatsink [m]
lz = 1                  # Out-of-plane thickness of the 2D heatsink [m]
degree_phys = 1         # Element order for physical problem

# ----- OPTIMIZATION ELEMENT DATA -----

# Loop
niter = 300             # Max number of optimisation iterations
tol_opt = 1E-3          # Convergence tolerance, based on the absolute error

# Problem parameters
volfrac = 0.3           # Maximum allowable volume fraction used in the optimisation

# Filter data
r = 0.01                # Length scale for the Helmholtz PDE filter

# ----- DEFINING DOMAIN AND FUNCTION SPACES -----

# Create mesh
mesh = RectangleMesh(Point(-lx/2, -ly/2), Point(lx/2, ly/2), nx, ny, "crossed")

# Function spaces
V0 = FunctionSpace(mesh, "DG", 0)                   # Discontinuous function space
Vphys = FunctionSpace(mesh, "CG", degree_phys)      # Continuous function space

# ----- BOUNDARY CONDITIONS -----

# Define the heat source and convection boundaries where the Neumann BCs are applied.
class HeatSourceBoundary(SubDomain):
    def inside(self, x, on_boundary):
        tol_bc = 1E-14
        return on_boundary and near(x[1], -ly/2, tol_bc) and x[0]+lx/4 >= -tol_bc and x[0]-lx/4 <= tol_bc
HS_boundary = HeatSourceBoundary()

# Applying a Neumann boundary condition to simulate the heat loss due to convection
class ConvectionBoundary(SubDomain):
    def inside(self, x, on_boundary):
        tol_bc = 1E-14
        return on_boundary and near(x[1], ly/2, tol_bc)
C_boundary = ConvectionBoundary()

# Marking the facets (boundaries) that correspond to HeatSourceBoundary and ConvectionBoundary
boundaries = MeshFunction('size_t', mesh, mesh.topology().dim()-1)  # Store all facets in an array
HS_boundary.mark(boundaries, 1)     # Mark all facets that are on HS_boundary with 1
C_boundary.mark(boundaries, 2)      # Mark all facets that are on C_boundary with 2

# Define the line integrals
dsHS = ds(subdomain_id=1, subdomain_data=boundaries)
dsC = ds(subdomain_id=2, subdomain_data=boundaries)

# Define volume and area for later use
voltot = assemble(Constant(lz)*dx(domain=mesh))     # Compute total volume of design domain
AHS = assemble(Constant(lz)*dsHS(domain=mesh))      # Compute cross-sectional area of heat source

#####################################################
# FUNCTIONS
#####################################################

# Apparent heat capacity method
def cppcmmod(T_, cppcm_, Lheat_):
    """
    Incorporates the latent heat of fusion into the heat capacity with
    the apparent heat capacity method using Heaviside step functions.
    This function assumes no change in specific heat capacity due to phase change.

    Inputs: T_ is the temperature field, cppcm_ is the heat capacity without
    phase change, Lheat_ is the latent heat of fusion.
    Outputs: Field of the modified heat capacity
    """
    k_H = 25        # Steepness of Heaviside step function
    return cppcm_ + Lheat_ / (dTmelt) * \
        (
            1 / (1 + exp(-2 * k_H * (T_ - (Tmelt - dTmelt/2))))
            - (1 / (1 + exp(-2 * k_H * (T_ - (Tmelt + dTmelt/2)))))
        )

def PDEfilter(u_, r_, mesh_):
    """
    Applies filtering to a field by solving a Helmholtz-type PDE,
    based on the work of B.S. Lazarov and O. Sigmund, DOI: 10.1002/nme.3072

    Inputs: u_ is the unfiltered field, r_ is the characteristic length
    Outputs: filtered field u_tilde
    """
    Vold = u_.function_space()              # Saving function space from u_
    VCG = FunctionSpace(mesh_, "CG", 1)     # Defining continuous function space

    # Defining variational problem
    u_tilde = TrialFunction(VCG)
    vf = TestFunction(VCG)
    # HELMHOLTZ-TYPE PDE (u_ is projected to a continuous function space)
    F = u_tilde*vf*dx + r_**2*dot(grad(u_tilde), grad(vf))*dx - project(u_, VCG)*vf*dx(mesh_)
    # a: unknowns, L: knowns
    a, L = lhs(F), rhs(F)

    u_temp = Function(VCG)
    u_tilde = Function(Vold)

    # Compute solution and project onto the initial function space
    solve(a == L, u_temp)
    u_tilde.assign(project(u_temp, Vold))

    return u_tilde

def forward(rho_tilde_):
    """
    Solves the forward problem (solving the PDE) using homogenisation

    Inputs: rho_tilde is the filtered material density variable field
    Outputs: Variables used for computing the objective function
    """
# ----- DEFINING PROBLEM -----
# Material interpolations
def rhocppcmphys(rho_,T_):
    return (rhohcm*cphcm)*(rho_)+(1-rho_)*(rhopcm*cppcmmod(T_,cppcm, Lheat))

def kphys(rho_):
    a = 1-(1-rho_)**0.5
    return 1/(a/khcm + (1-a)/(kpcm*(1-a)+khcm*a))

# Defining the fluctuating heat source
qelec = Expression("qheat/(AHS)*(1+sin(2*DOLFIN_PI*w*t))",
                    qheat=qheat, AHS=AHS, lz=lz, w=w, t=0, degree=0)

# Define initial values
T_ini = Constant(Tinitial)
T_n = interpolate(T_ini, Vphys)

# Defining the trial and test functions
T = TrialFunction(Vphys)
v = TestFunction(Vphys)

# Weak form rho_k=0 -> PCM, rho_k = 1 -> HCM
F = rhocppcmphys(rho_tilde_,T_n)*(T-T_n)/dt*v*dx \
    + kphys(rho_tilde_)*dot(grad(T),grad(v))*dx \
    - qelec*v*dsHS \
    + hconv*(T-Tinf)*v*dsC
# a: unknowns, L: knowns
a, L = lhs(F), rhs(F)

# ------ SOLVING THE PROBLEM -----

# Initialise the field for the Temperature
T = Function(Vphys) 

# Set initial conditions
T_n.assign(interpolate(T_ini, Vphys))
T.assign(interpolate(T_ini, Vphys))
t = 0
qelec.t = t

# Save the average temperature at the heat source
T_elec_array = [assemble(T*dsHS)/AHS]   

# ------ Solving Physics  -----
for timestep in range(num_steps):
    t += float(dt)      # Update time
    qelec.t = t         # Update heat source
    solve(a == L, T)    # Solve problem for current time step with linear solver

    T_n.assign(T)       # Update previous temperature
    T_elec_array.append(assemble(T*dsHS)/AHS)

# Compute the variance of the average temperature at the heat source over time
T_time_mean = stat.mean(T_elec_array)        # Average over time
T_var = []

for timesteps in range(len(T_elec_array)):
    T_var.append((T_elec_array[timesteps] - T_time_mean)**2)

return T_var

def Optimization(rho_, rho_tilde_, rf_f0_, rf_h_, f0_history_, Mnd_history_):
    """
    Performs gradient-based topology optimisation using the Method of Moving Asymptotes,
    based on an MMA and GCMMA script by Krister Svanberg (https://people.kth.se/~krille/mmagcmma.pdf)
    which was translated to Python by Arjen Deetman (https://github.com/arjendeetman/GCMMA-MMA-Python).

    Inputs: rho is the density variable field, rho_tilde is the filtered density variable field,
    rf_f0 is the reduced functional for f0, rf_h is the reduced functional for h, f0_history is
    an empty list for storing the history of f0, and Mnd_history is an empty list for storing the
    history of h.
    Outputs: rho and rho_tilde at the final design and f0_history, Mnd_history
    """
    with pyadjoint.stop_annotating() as _:    # Stop saving operations with dolfin-adjoint
    
    # ----- INITIALISING LOOP -----
    # Preallocate rho and rho_tilde for each iteration
    rho_k = rho_
    rho_tilde_k = rho_tilde_

    # Initialize GCMMA parameters
    m = 1                                       # Number of constraints
    n = rho.vector()[:].size                    # Number of design variables
    xval = rho.vector()[:].reshape(-1, 1)       # Initial design variables
    epsimin = 0.0000001
    eeen = np.ones((n,1))
    eeem = np.ones((m,1))
    zeron = np.zeros((n,1))
    zerom = np.zeros((m,1))
    xold1 = xval.copy()
    xold2 = xval.copy()
    xmin = 0*eeen.copy()
    xmax = 1*eeen.copy()
    low = xmin.copy()
    upp = xmax.copy()
    c = 1000*eeem
    d = eeem.copy()
    a0 = 1
    a = zerom.copy()
    raa0 = 0.01
    raa = 0.01*eeem
    raa0eps = 0.000001
    raaeps = 0.000001*eeem
    outeriter = 0	

    # Initialize preliminary design
    f0_k = rf_f0_(rho_k)
    h_k = rf_h_(rho_k)
    Mnd = assemble(4*rho_tilde_k*(1-rho_tilde_k)*dx)/(lx*ly)*100
    # Calculate Sensitivities
    df0drho = Function(V0, name="Object function Sensitivity")
    dhdrho = Function(V0, name="Constraint function Sensitivity")
    df0drho.assign(rf_f0_.derivative())                     
    dhdrho.assign(rf_h_.derivative())

    # Scaling functions and storing them in matrices for the MMA solver
    xval = rho_k.vector()[:].reshape(-1, 1)                       # Saving design variable in xval
    scale = np.abs(f0_k/10)
    f0val = 1 + f0_k/scale                                        # Scaling f0 at xval
    df0dx = df0drho.vector()[:].reshape(-1, 1)/scale           # nx1 matrix of sensitivities of f0 at xval
    fval = h_k                                                    # Array of constraint functions at xval
    dfdx = dhdrho.vector()[:].reshape(m, n)  

    # Initialize the outer iteration and convergence counter
    outit = 0
    cc = 0 

    # ----- OPTIMIZATION LOOP -----
    while (outit < niter):  
        outit += 1
        outeriter += 1

        if outit > 1:   # update prev values for convergence check
            f0_kprev = f0_k
            Mndprev = Mnd
        
        # The parameters low, upp, raa0 and raa are calculated:
        low,upp,raa0,raa= \
            asymp(outeriter,n,xval,xold1,xold2,xmin,xmax,low,upp,raa0,raa,raa0eps,raaeps,df0dx,dfdx)
        # The MMA subproblem is solved at the point xval:
        xmma,ymma,zmma,lam,xsi,eta,mu,zet,s,f0app,fapp= \
            gcmmasub(m,n,iter,epsimin,xval,xmin,xmax,low,upp,raa0,raa,f0val,df0dx,fval,dfdx,a0,a,c,d)

        # Evaluate the functionals with the new design
        rho_k.vector()[:] = xmma.flatten()          # Overwrite rho with the new rho from the MMA solver
        f0_knew = rf_f0_(rho_k)
        h_knew = rf_h_(rho_k)
        f0valnew = np.array([1 + f0_knew/scale])    # Scaling f0 at xval
        fvalnew = np.array([h_knew])

        # It is checked if the approximations are conservative:
        conserv = concheck(m,epsimin,f0app,f0valnew,fapp,fvalnew)

        # While the approximations are non-conservative (conserv=0), repeated inner iterations are made:
        innerit = 0
        if conserv == 0:
            while conserv == 0 and innerit < 2:
                innerit += 1
                # New values on the parameters raa0 and raa are calculated:
                raa0,raa = raaupdate(xmma,xval,xmin,xmax,low,upp,f0valnew,fvalnew,f0app,fapp,raa0, \
                    raa,raa0eps,raaeps,epsimin)
                # The GCMMA subproblem is solved with these new raa0 and raa:
                xmma,ymma,zmma,lam,xsi,eta,mu,zet,s,f0app,fapp = gcmmasub(m,n,iter,epsimin,xval,xmin, \
                    xmax,low,upp,raa0,raa,f0val,df0dx,fval,dfdx,a0,a,c,d)
                
                # Evaluate the functionals with the new design
                rho_k.vector()[:] = xmma.flatten()    # Overwrite rho with the new rho from the MMA solver
                f0_knew = rf_f0_(rho_k)
                h_knew = rf_h_(rho_k)
                f0valnew = np.array([1 + f0_knew/scale])                                        # Scaling f0 at xval
                fvalnew = np.array([h_knew])
                # It is checked if the approximations have become conservative:
                conserv = concheck(m,epsimin,f0app,f0valnew,fapp,fvalnew)

        # Update vectors:
        xold2 = xold1.copy()
        xold1 = xval.copy()
        xval = xmma.copy()

        # Re-calculate function values and gradients of the objective and constraints functions
        rho_k.vector()[:] = xmma.flatten()    # Overwrite rho with the new rho from the MMA solver
        rho_tilde_k.assign(PDEfilter(rho_k, r, mesh))
        f0_k = rf_f0_(rho_k)
        h_k = rf_h_(rho_k)
        df0drho.assign(rf_f0_.derivative())                     
        dhdrho.assign(rf_h_.derivative())

        # Scaling functions and storing them in matrices for the MMA solver
        f0val = 1 + f0_k/scale                                        # Scaling f0 at xval
        df0dx = df0drho.vector()[:].reshape(-1, 1)/scale           # nx1 matrix of sensitivities of f0 at xval
        fval = h_k                                                    # Array of constraint functions at xval
        dfdx = dhdrho.vector()[:].reshape(m, n)  

        # Store f0
        f0_history_.append(f0_k)   # Save f0 in f0 history

        Mnd = assemble(4*rho_tilde_k*(1-rho_tilde_k)*dx)/(lx*ly)*100
        Mnd_history_.append(Mnd)

        # Check for convergence
        if outit > 1:
             # Check if converged
            f0ichange = abs((f0_k-f0_kprev)/(f0_k))
            Mndchange = abs((Mnd-Mndprev)/(Mnd) )

            if f0ichange <= tol_opt and Mndchange <= tol_opt:
                cc += 1
                if cc > 3:
                    print("Tolerance Reached!!")
                    break 
            else:
                cc = 0
            print("Iteration " + str(outeriter) + "." + str(innerit) + ": f0 = " + str(f'{f0_k:.3f}') + ", Mnd = " + str(f'{Mnd:.2f}')+", f0ichange = " + str("{:.2e}".format(f0ichange))+", Mndchange = " + str("{:.2e}".format(Mndchange)))               

        
        # ----- PLOTS While Solving -----               
        plt.figure(1)       # plot of current design
        plt.clf()
        plt.colorbar(plot(rho_tilde_k, cmap="Greys", vmin=0, vmax=1))
        plt.xlim([-lx/2, lx/2])
        plt.ylim([-ly/2, ly/2])
        plt.title("rho_tilde at iter " + str(outit)+": f0 = "+ str(f'{f0_history_[-1]:.2f}')+ ", Mnd = " + str(f'{Mnd_history_[-1]:.2f}'))
        plt.pause(0.05)

        plt.figure(2)       # plot of f0 and Mnd history
        plt.clf()
        plt.plot(np.arange(1, len(f0_history_)+1)-1, f0_history_, color="red")
        ax = plt.gca()
        plt.xlabel("Number of iterations")
        plt.ylabel("Objective", color="red")
        plt.tick_params(axis="y", which="both", labelcolor="red")
        plt.title("Convergence history")
        ax2 = ax.twinx()
        ax2.plot(np.arange(1, len(Mnd_history_)+1)-1, Mnd_history_, color="blue")
        ax2.set_ylabel("Measure of non-discreteness", color="blue")
        ax2.tick_params(axis="y", which="both", labelcolor="blue")
        plt.pause(0.05)

return rho_k,rho_tilde_k, f0_history_, Mnd_history_

if __name__ == '__main__':
# ----- INITIALIZE DENSITIES -----
rho = interpolate(Constant(volfrac),V0) # Define initial design
rho_tilde = PDEfilter(rho, r, mesh) # Apply filter

T_var = forward(rho_tilde) # Solve forward problem (physical model)

# Objective and Constraint function
h = assemble(rho_tilde*lz*dx)/(volfrac*voltot) - 1
f0 = sum(T_var)/len(T_var)

# Reduced functionals
control = Control(rho)  
rf_f0 = ReducedFunctional(f0, control)  
rf_h = ReducedFunctional(h, control)  

# Initiate f0 and Mnd history lists
f0_history = []
Mnd_history = []

rho,rho_tilde, f0_history, Mnd_history = Optimization(rho, rho_tilde, rf_f0, rf_h, (f0_history), (Mnd_history))

# ----- PLOTS -----  
plt.figure(1)   # The final design
plt.clf()
plt.colorbar(plot(rho_tilde, cmap="Greys", vmin=0, vmax=1))
plt.xlim([-lx/2, lx/2])
plt.ylim([-ly/2, ly/2])
plt.title("rho_tilde at iter " + str(iter)+": f0 = "+ str(f'{f0_history[-1]:.2f}')+ ", Mnd = "+ str(f'{Mnd_history[-1]:.2f}'))

plt.figure(2)   # The f0 and Mnd history
plt.clf()
plt.plot(np.arange(1, len(f0_history)+1)-1, f0_history, color="red")
ax = plt.gca()
plt.xlabel("Number of iterations")
plt.ylabel("Objective", color="red")
plt.tick_params(axis="y", which="both", labelcolor="red")
plt.title("Convergence history")
ax2 = ax.twinx()
ax2.plot(np.arange(1, len(Mnd_history)+1)-1, Mnd_history, color="blue")
ax2.set_ylabel("Measure of non-discreteness", color="blue")
ax2.tick_params(axis="y", which="both", labelcolor="blue")
plt.show()

```

You haven't used the Ubuntu packages to install fenics; you've got an inconsistent installation. Note the paths in your error message: some refer to /usr/lib (system-installed packages), while others (ffc, ufl) refer to /home/mhh/.local. You need to eliminate all the packages in .local. Don't use pip to install packages; use apt only.
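For example, something along these lines (just a quick diagnostic) shows which copy of each module your interpreter actually picks up and what pip has put into ~/.local:

```bash
# Where does Python resolve each component from?
# Paths under /usr/lib come from apt; paths under /home/mhh/.local come from pip --user.
python3 -c "import dolfin, ffc, ufl; print(dolfin.__file__); print(ffc.__file__); print(ufl.__file__)"

# What has pip installed into the user site (~/.local)?
pip list --user
```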

It is the first time for me with Ubuntu (and Linux in general). Could you please guide me on how to "eliminate all the packages in .local" and "use apt only"?
Thank you

Do you remember which command you used to install ffc and ufl?

Yes, these commands:

```bash
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:fenics-packages/fenics
sudo apt-get update
sudo apt-get install fenics

pip install git+https://github.com/dolfin-adjoint/pyadjoint.git@2019.1.0
git clone https://bitbucket.org/dolfin-adjoint/pyadjoint.git
```

Only that, nothing else? In that case the interference could only come from pyadjoint. Delete .local completely

```bash
rm -rf /home/mhh/.local/lib/python3.10/site-packages/*
```

Then, if pip needs to be used to install pyadjoint, use it while strictly banning the installation of the other packages it depends on:

```bash
pip install --no-deps git+https://github.com/dolfin-adjoint/pyadjoint.git@2019.1.0
```
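Afterwards you can check, for instance like this, that only dolfin-adjoint ended up in ~/.local and that ffc and ufl resolve to the apt-installed copies under /usr/lib again:

```bash
# Only dolfin-adjoint should be listed here after the cleanup
pip list --user

# Both should now point somewhere under /usr/lib, not /home/mhh/.local
python3 -c "import ffc, ufl; print(ffc.__file__); print(ufl.__file__)"
```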

Why are you installing pyadjoint from GitHub and cloning it from Bitbucket?

I faced almost the same error. I have attached the error and the terminal output:
```bash
/usr/bin/python3.10 /home/mhh/Documents/12heatSinkPCM-main/heatSinkPCM-main/heatSinkPCM-main/TOPCMheatsink.py
Unknown ufl object type FiniteElement
Traceback (most recent call last):
  File "/home/mhh/Documents/12heatSinkPCM-main/heatSinkPCM-main/heatSinkPCM-main/TOPCMheatsink.py", line 97, in <module>
    V0 = FunctionSpace(mesh, "DG", 0)               # Discontinous function space
  File "/usr/lib/petsc/lib/python3/dist-packages/dolfin/function/functionspace.py", line 33, in __init__
    self._init_convenience(*args, **kwargs)
  File "/usr/lib/petsc/lib/python3/dist-packages/dolfin/function/functionspace.py", line 100, in _init_convenience
    self._init_from_ufl(mesh, element, constrained_domain=constrained_domain)
  File "/usr/lib/petsc/lib/python3/dist-packages/dolfin/function/functionspace.py", line 42, in _init_from_ufl
    ufc_element, ufc_dofmap = ffc_jit(element, form_compiler_parameters=None,
  File "/usr/lib/petsc/lib/python3/dist-packages/dolfin/jit/jit.py", line 50, in mpi_jit
    return local_jit(*args, **kwargs)
  File "/usr/lib/petsc/lib/python3/dist-packages/dolfin/jit/jit.py", line 100, in ffc_jit
    return ffc.jit(ufl_form, parameters=p)
  File "/home/mhh/.local/lib/python3.10/site-packages/ffc/jitcompiler.py", line 214, in jit
    kind, module_name = compute_jit_prefix(ufl_object, parameters)
  File "/home/mhh/.local/lib/python3.10/site-packages/ffc/jitcompiler.py", line 156, in compute_jit_prefix
    error("Unknown ufl object type %s" % (ufl_object.__class__.__name__,))
  File "<string>", line 1, in
  File "/home/mhh/.local/lib/python3.10/site-packages/ufl/log.py", line 172, in error
    raise self._exception_type(self._format_raw(*message))
Exception: Unknown ufl object type FiniteElement

Process finished with exit code 1
```

the terminal:
```bash
mhh@mhh:~$ rm -rf /home/mhh/.local/lib/python3.10/site-package/*
mhh@mhh:~$ pip install --no-deps git+https://github.com/dolfin-adjoint/pyadjoint.git@2019.1.0
Defaulting to user installation because normal site-packages is not writeable
Collecting git+https://github.com/dolfin-adjoint/pyadjoint.git@2019.1.0
Cloning https://github.com/dolfin-adjoint/pyadjoint.git (to revision 2019.1.0) to /tmp/pip-req-build-vu_a5q9p
Running command git clone --filter=blob:none --quiet https://github.com/dolfin-adjoint/pyadjoint.git /tmp/pip-req-build-vu_a5q9p
Running command git checkout -q 6c2363bc23f0260a7a1988852c358105f31d38c7
Resolved https://github.com/dolfin-adjoint/pyadjoint.git to commit 6c2363bc23f0260a7a1988852c358105f31d38c7
Preparing metadata (setup.py) … done
Building wheels for collected packages: dolfin-adjoint
Building wheel for dolfin-adjoint (setup.py) … done
Created wheel for dolfin-adjoint: filename=dolfin_adjoint-2019.1.0-py3-none-any.whl size=85035 sha256=0936abaa53b9df18ca4b23e92bd3dad4f07a0390c51bec3fd0e56b5973ff2f6f
Stored in directory: /tmp/pip-ephem-wheel-cache-99od0ttu/wheels/76/5c/97/bc7878a1f3044631f88e73f6ffd86386c55546e86d964057ff
Successfully built dolfin-adjoint
Installing collected packages: dolfin-adjoint
Successfully installed dolfin-adjoint-2019.1.0
mhh@mhh:~$
```

Unfortunately, I do not know why. I just followed the guidance for the code I want to run.

There is a mismatch between ufl, ffc and dolfin here.
You have the latest version of ufl, which is not compatible with legacy dolfin and dolfin-adjoint.
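You can see the mismatch by printing the versions and locations of the three packages, e.g. with something like:

```bash
# ufl must be the legacy version matching dolfin and ffc; the latest ufl will not work with them
python3 -c "import ufl, ffc, dolfin; print(ufl.__version__, ffc.__version__, dolfin.__version__)"

# apt installs live under /usr/lib, pip --user installs under ~/.local
python3 -c "import ufl, ffc, dolfin; print(ufl.__file__); print(ffc.__file__); print(dolfin.__file__)"
```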

Please, how can I solve this problem?

You are trying to install a very old version of dolfin-adjoint, not compatible with what is on apt.
The following instructions (here executed in a Dockerfile)

```dockerfile
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update &&\
    apt-get install -y software-properties-common && \
    add-apt-repository ppa:fenics-packages/fenics && \
    apt-get update &&\
    apt-get install -y fenics

ENV DEB_PYTHON_INSTALL_LAYOUT=deb_system

RUN apt-get install -y python3-pip git
RUN python3 -m pip install  -U pip setuptools
RUN python3 -m pip install git+https://github.com/dolfin-adjoint/dolfin-adjoint@2023.2.0

RUN python3 -c "from dolfin import *; from dolfin_adjoint import *"
```

give you a working environment. So I would strongly suggest you uninstall FEniCS, ufl and dolfin-adjoint, and follow these instructions.
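If you go the Docker route, building and running the image could look roughly like this (the Dockerfile name, image tag, mount point and script name are just example values, not something the Dockerfile above fixes):

```bash
# Build an image from the Dockerfile above
sudo docker build -t myfenics -f myfenics.Dockerfile .

# Run the script inside the container, mounting the directory that contains it
sudo docker run -ti -v "$(pwd)":/root/shared -w /root/shared myfenics python3 TOPCMheatsink.py
```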

I have installed it as you described. I have also opened PyCharm and installed 'fenics' and 'dolfin-adjoint' from the Python Interpreter settings (picture 1), but when I come back to the code, I find a red underline below 'fenics' as shown in picture 2.

This is PyCharm configuration, which I unfortunately cannot help you with (as I don't use PyCharm). Are you able to run the script?

When I run the code, I face this error:

```bash
/home/mhh/Downloads/11heatSinkPCM-main/heatSinkPCM-main/heatSinkPCM-main/venv/bin/python /home/mhh/Downloads/11heatSinkPCM-main/heatSinkPCM-main/heatSinkPCM-main/TOPCMheatsink.py
Traceback (most recent call last):
  File "/home/mhh/Downloads/11heatSinkPCM-main/heatSinkPCM-main/heatSinkPCM-main/TOPCMheatsink.py", line 35, in <module>
    from fenics import *
ModuleNotFoundError: No module named 'fenics'

Process finished with exit code 1
```

What are your Python paths?

To me it seems like you are using the wrong Python interpreter. If you installed fenics with apt, you shouldn't use a local interpreter.

What happens if you open a terminal and run `python3 -c "import fenics"`?
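If that import works in the terminal but not in PyCharm, comparing the interpreter and module search paths on both sides usually shows the problem, for example:

```bash
# Which interpreter is being used, and where does it look for modules?
python3 -c "import sys; print(sys.executable); print('\n'.join(sys.path))"

# Does the apt-installed FEniCS resolve for that interpreter?
python3 -c "import fenics; print(fenics.__file__)"
```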

Do you mean this? When I select this interpreter, the "+" icon does not work, maybe because of the error that appears below:

I have installed 'fenics' using your Dockerfile, as shown in the terminal below:
```bash
mhh@mhh-virtual-machine:~$ sudo docker build -t myfenics -f myfenics.Dockerfile .
[sudo] password for mhh:
[+] Building -27451.7s (10/10) FINISHED                            docker:default
 => [internal] load build definition from myfenics.Dockerfile                0.0s
 => => transferring dockerfile: 674B                                         0.0s
 => [internal] load .dockerignore                                            0.0s
 => => transferring context: 2B                                              0.0s
 => [internal] load metadata for docker.io/library/ubuntu:22.04              4.9s
 => CACHED [1/6] FROM docker.io/library/ubuntu:22.04@sha256:2b7412e6465c3    0.0s
 => [2/6] RUN apt-get update &&    apt-get install -y software-properties    0.0s
 => [3/6] RUN apt-get install -y python3-pip git                            20.9s
 => [4/6] RUN python3 -m pip install  -U pip setuptools                    167.0s
 => [5/6] RUN python3 -m pip install git+https://github.com/dolfin-adjo    140.9s
 => [6/6] RUN python3 -c "from dolfin import *; from dolfin_adjoint impor    2.2s
 => exporting to image                                                      37.4s
 => => exporting layers                                                     37.3s
 => => writing image sha256:cc3db73defb95a90d74c9cc9a415983cd525556272b13    0.0s
 => => naming to docker.io/library/myfenics                                  0.0s
mhh@mhh-virtual-machine:~$
```