Efficient way of using an ANN as the constitutive law

Hi,

Basically, I need to find the simplest and fastest way of getting the spatial coordinates of a point in the cell where the stress is being computed.

I want to use an Artificial Neural Network (ANN) as the constitutive equation, e.g. for computing stress.
I have tried several approaches that give the correct response, but they are not efficient, and for a larger problem I need efficiency. In fact, I only need the spatial coordinates at the point where the stress is being computed (it could be one integration point or any node of the element; it is not that sensitive).
I tried the following ways:
1- Define a UserExpression and project it onto the domain, which is time-consuming.
2- Define x = SpatialCoordinate(mesh) and

def stress(x, u):
    phi, nu = 0.03*x[1] + 0.25, x[0]
    INP = torch.tensor([phi, nu]).reshape(1, 1, 1, 2)
    E = ANN(INP)
    ...

which gives the following error:

NotImplementedError: Cannot take length of non-vector expression.

The ANN does not accept UFL expressions; it needs floats as its input.
If I use eval_cell() in a UserExpression subclass, it becomes too expensive.

Is there any faster way to obtain what I need?

Please provide a minimal working example (even if not efficient) to increase the likelihood of receiving help. Some pointers on how to write a minimal working example can be found at: Read before posting: How do I get my question answered?

Dear dokken,

Thank you for your response. Let me make my question simpler to understand: is it possible to extract the coordinates of a node (as floats, not as an expression) of the element in which grad(u) is being computed? It could be any of its nodes or Gauss points.

The dof coordinates can be extracted with
x = V.tabulate_dof_coordinates()
which returns them ordered by dof index. To get the dofs of a given cell, use indices = V.dofmap().cell_dofs(cell_index). Then
coordinates = x[indices]
gives the coordinates of that cell's dofs.
I cannot make a minimal example right now, as I'm not at a computer.
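For reference, the lookup pattern described above can be illustrated with plain Python lists standing in for the dolfin arrays (the data here is made up; in a real script, x and indices would come from tabulate_dof_coordinates() and cell_dofs()):

```python
# Toy illustration of the dof-coordinate lookup pattern.
# x mimics V.tabulate_dof_coordinates(): one coordinate per dof,
# ordered by dof index.
x = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
# indices mimics V.dofmap().cell_dofs(cell_index) for a single cell.
indices = [0, 1, 3]
# The gather from the post: plain-float coordinates of that cell's dofs.
coordinates = [x[i] for i in indices]
print(coordinates)  # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
```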

Many thanks for your response; it was very helpful, so there is no need for a minimal example. I only need to know how I can extract the cell index. In other words, is it possible to extract the cell index of the element in which grad(u) is being computed?

As you have not provided a minimal example of the intended usage, I do not know where in the pipeline of solving the PDE you require this information. Using it during assembly versus pre-/post-assembly requires different strategies.

It is during assembly. Let us go with a simple example:

def sigma(x, v):
    # Here I need x[0] and x[1] as floats; for this, I need to know the
    # index of the cell in which sym(grad(v)) is being computed.
    mu, lmbda = ANN(x[0], x[1])
    return 2.0*mu*sym(grad(v)) + lmbda*tr(sym(grad(v)))*Identity(3)

bc = ...  # some boundary conditions
u = TrialFunction(V)
v = TestFunction(V)
x = SpatialCoordinate(mesh)
a = inner(sigma(x, u), sym(grad(v))) * dx
Ln = inner(v, -n) * h_ds(1)
u = Function(V)

solve(a == Ln, u, bc)

So, I think this is during assembly. In fact, I do not want to use class ANN(UserExpression); I want to evaluate mu, lmbda = ANN(x[0], x[1]) in real time, while sym(grad(v)) is being computed.

But a UserExpression does real-time evaluation. You do not need to project the UserExpression onto any function space, as it can be used directly in the variational form.
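To illustrate what real-time evaluation means here, a toy sketch (all names hypothetical; no dolfin involved): the assembler calls the expression back at each quadrature point, so no projected field is ever built:

```python
# Toy stand-in for a coefficient evaluated lazily during assembly.
class ToyExpression:
    def __init__(self, fn):
        self.fn = fn
        self.calls = 0

    def eval(self, x):
        # Called by the "assembler" once per quadrature point.
        self.calls += 1
        return self.fn(x)

def toy_assemble(expr, quad_points):
    # Stand-in for the assembly loop: evaluate the coefficient on the fly.
    return sum(expr.eval(x) for x in quad_points)

# Same linear map as in the post: phi = 0.03*x[1] + 0.25.
expr = ToyExpression(lambda x: 0.03 * x[1] + 0.25)
total = toy_assemble(expr, [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)])
print(expr.calls)  # 3 -- one evaluation per quadrature point, no projection
```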

Maybe the problem is with my UserExpression. Here it is:

class ANN(UserExpression):
    def __init__(self, mesh, **kwargs):
        self.mesh = mesh
        super().__init__(**kwargs)
    def eval_cell(self, values, x, ufc_cell):
        phi = 0.03*x[2] + 0.25
        nu = 0.03*x[2] + 0.02
        INP = torch.tensor([nu, phi]).reshape(1, 1, 1, 2)
        output = net(INP)
        for i in range(6):
            output[:, :, :, i] = output[:, :, :, i]*outCoef[i]
        K1 = Constant((output[0, 0, 0, 5]*(d**2)/muf).detach().numpy())
        Ep1 = Constant((output[0, 0, 0, 0]*(E/13.5)).detach().numpy())  # to recover the dimensional output
        nup1 = Constant((output[0, 0, 0, 1]).detach().numpy())
        mup1 = Constant((output[0, 0, 0, 2]*(E/13.5)).detach().numpy())
        biotM1 = Constant((output[0, 0, 0, 3]*(E/13.5)).detach().numpy())
        alpha1 = Constant((output[0, 0, 0, 4]).detach().numpy())
        values[0] = Ep1
        values[1] = nup1
        values[2] = mup1
        values[3] = biotM1
        values[4] = alpha1
        values[5] = K1
    def value_shape(self):
        return (6,)

I would not use Constant inside a UserExpression. What error message do you obtain with this?

There is no error message here; it is just inefficient. Maybe that is what it takes!

Actually, after deleting those Constants, it is now around 20% faster!
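For reference, a self-contained sketch of the Constant-free pattern (torch and dolfin are omitted so it runs on its own; scaled stands in for the rescaled network output, the names E, d, muf follow the post, and the numbers are made up):

```python
# Sketch of the leaner eval_cell body: write plain floats into `values`
# instead of wrapping each entry in Constant.
def fill_values(values, scaled, E, d, muf):
    values[0] = scaled[0] * (E / 13.5)   # Ep, made dimensional
    values[1] = scaled[1]                # nu
    values[2] = scaled[2] * (E / 13.5)   # mu, made dimensional
    values[3] = scaled[3] * (E / 13.5)   # Biot modulus, made dimensional
    values[4] = scaled[4]                # alpha
    values[5] = scaled[5] * d**2 / muf   # K

values = [0.0] * 6
fill_values(values, [1.0, 0.3, 0.5, 2.0, 0.8, 4.0], E=13.5, d=2.0, muf=8.0)
print(values)  # [1.0, 0.3, 0.5, 2.0, 0.8, 2.0]
```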