Simple PDE solution becomes garbage with larger mesh sizes

I’ve been working on a problem that involves a wave equation and got some unexpected results. As the number of elements increases (starting precisely at nx=27 and ny=27), the solution develops sharp spikes that look like noise. I’ve narrowed the code down to a version with no time component, and I believe it has something to do with the Laplacian.

Here is SlaterModel.ufl:

element = FiniteElement("Lagrange", "triangle", 1)

u = TrialFunction(element)
v = TestFunction(element)

# Forcing term, supplied from the C++ code
p = Coefficient(element)

a = inner(grad(v), grad(u)) * dx
L = p * v * dx
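
(With the time component stripped out, this is just the standard weak form of a Poisson problem with forcing p: find u, with u = 0 on the boundary, such that

∫ ∇u · ∇v dx = ∫ p v dx   for all test functions v,

i.e. the discrete counterpart of -Δu = p.)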

And here is my cpp file:

#include <dolfin.h>
#include "SlaterModel.h"

// Forcing term p(x, y) = cos(pi x) * cos(pi y)
class Forcing : public dolfin::Expression
{
    void eval(dolfin::Array<double>& values, const dolfin::Array<double>& x) const override
    {
        values[0] = std::cos(M_PI * x[0]) * std::cos(M_PI * x[1]);
    }
};

// Homogeneous Dirichlet condition applied on the whole boundary
class DirichletBoundary : public dolfin::SubDomain
{
    bool inside(const dolfin::Array<double>& x, bool on_boundary) const override
    {
        return on_boundary;
    }
};

int main()
{
    int nx = 100;
    int ny = 100;

    // Square domain [-0.5, 0.5] x [-0.5, 0.5]
    auto mesh = std::make_shared<dolfin::RectangleMesh>(
        dolfin::Point(-0.5, -0.5, 0.0), dolfin::Point(0.5, 0.5, 0.0), nx, ny);
    auto V = std::make_shared<SlaterModel::FunctionSpace>(mesh);

    SlaterModel::BilinearForm a(V, V);
    SlaterModel::LinearForm L(V);

    // u = 0 on the whole boundary
    dolfin::DirichletBC bcs(V, std::make_shared<dolfin::Constant>(0.0),
                            std::make_shared<DirichletBoundary>());

    // Attach the forcing coefficient to the linear form
    auto p = std::make_shared<Forcing>();
    L.p = p;

    dolfin::File file("test.pvd");

    dolfin::Function u(V);
    dolfin::solve(a == L, u, bcs);

    file << u;

    return 0;
}

Here is an image of the problem when nx = ny = 100:

I’m at my wits’ end. I have a Python implementation that works fine, but I need to migrate to C++ for reasons. I have a feeling it’s a simple error I’m making, but after three days I think it’s best I ask for a second set of eyes!

Thanks.


This is puzzling. I don’t see anything obviously wrong. How’re you compiling / running the C++ version? Against a native build on your machine or in a Docker container?

If you haven’t tried the latter, it may help you work out whether this is a weird allocation / initialisation issue caused by the compiler you’re using.

Another thing to try is to move the std::make_shared instantiations out of the DirichletBC constructor call and up to main's scope, as sketched below, to make sure the memory isn’t getting accidentally freed. Although I doubt that’s the problem.
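
Something like this, for instance (just a sketch against the objects already in your main; zero and boundary are placeholder names):

auto zero = std::make_shared<dolfin::Constant>(0.0);
auto boundary = std::make_shared<DirichletBoundary>();

// Both shared_ptrs now live for the whole of main(), so they cannot be
// released while the DirichletBC still needs them.
dolfin::DirichletBC bcs(V, zero, boundary);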

I’m linking and including against the libraries and headers supplied by a conda environment, the same conda environment in which the Python version works fine. I’ve noticed that the spikes change depending on which optimization level I tell the compiler to use. That is disturbing… possibly some memory issue? I’m going to try stripping away all the other “.cpp” files that get linked into the final executable to see if that changes anything.

And I modified my code to make sure there is no accidental freeing of memory. Still the same issue.

The stripped-down version of the code still gives the same issue, so it has to be something in the two files posted above.

Running your code using the stable Docker image (quay.io/fenicsproject/stable:latest) works fine for me.


I guess it looks like it’s time for me to migrate to Docker then. Thanks for verifying that it works there.

Apologies if I’m telling you what you already know.

In principle it should be trivial. Navigate to the directory with your code, then run

docker run -ti -v $(pwd):/home/fenics/shared quay.io/fenicsproject/stable

Inside your container, navigate to /home/fenics/shared, which points to your host machine’s working directory.

Then run ffc -l dolfin SlaterModel.ufl && cmake . && make (or however you prefer to build it) and run the resulting executable as usual.

You should then be able to plot the output files in your current working directory on your host machine.


Thanks. I knew it would be simple, but your post saved me a bit of googling.

FEniCS in Docker — FEniCS Containers 1.0 documentation is an excellent resource.
