Difference in behavior for 3d/2d mesh vs 2d/1d

Hello all,

I have been struggling to understand the following behaviour, and any input would be very much appreciated. Essentially, I am confused as to why I can assemble a function defined on the function space of a boundary mesh in 3d, but not in 2d. In the 2d case I can get the assembly to work by instead using a measure defined as "dx" of the boundary mesh, but then I cannot create variational forms that also contain terms defined on the full domain.

Here is a MWE:

from dolfin import *

def example_3d_2d():
    # 3d bulk mesh, 2d boundary mesh
    m  = UnitCubeMesh(10, 10, 10)
    mb = BoundaryMesh(m, 'exterior')
    Vb = FunctionSpace(mb, "P", 1)

    ub = interpolate(Constant(5.0), Vb)
    ub.set_allow_extrapolation(True)

    ds = Measure('ds', m)

    # surface area of the unit cube is 6, so the integral should be 5*6 = 30
    print(assemble(ub*ds))
    assert abs(30.0 - assemble(ub*ds)) < 1e-6

def example_2d_1d():
    # 2d bulk mesh, 1d boundary mesh
    m  = UnitSquareMesh(10, 10)
    mb = BoundaryMesh(m, 'exterior')
    Vb = FunctionSpace(mb, "P", 1)

    ub = interpolate(Constant(5.0), Vb)
    ub.set_allow_extrapolation(True)

    ds  = Measure('ds', m)
    dxb = Measure('dx', mb)  # assembling ub*dxb works, but ub*ds does not

    # perimeter of the unit square is 4, so the integral should be 5*4 = 20
    print(assemble(ub*ds))
    assert abs(20.0 - assemble(ub*ds)) < 1e-6


example_3d_2d() # this works
example_2d_1d() # this fails

Thank you for your help!

Here is a little more information…

The error produced is:

  File "/Users/justin/anaconda3/envs/fenicsproject/lib/python3.7/site-packages/FIAT/expansions.py", line 150, in <listcomp>
    ref_pts = numpy.array([self.mapping(pt) for pt in pts])
  File "/Users/justin/anaconda3/envs/fenicsproject/lib/python3.7/site-packages/FIAT/expansions.py", line 141, in <lambda>
    self.mapping = lambda x: numpy.dot(self.A, x) + self.b
ValueError: shapes (1,1) and (2,) not aligned: 1 (dim 1) != 2 (dim 0)

So I added some print statements, and it looks like LineExpansionSet builds an array A of shape (1, 1) and an array b of shape (1,) for its mapping function. However, the "x" passed into mapping is a (2,) array, so the dot product fails. Is mapping supposed to go from global (physical) coordinates to reference-element coordinates?
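
To illustrate, the failing call boils down to something like the following (A, b, and x here are stand-ins with the shapes I printed; the values do not matter, only the shapes):

import numpy

A = numpy.ones((1, 1))        # LineExpansionSet mapping matrix, shape (1, 1)
b = numpy.zeros((1,))         # mapping offset, shape (1,)
x = numpy.array([0.5, 0.5])   # a 2d physical point handed in by dolfin

numpy.dot(A, x) + b           # ValueError: shapes (1,1) and (2,) not aligned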

Also,

assemble(Constant(5.0)*ds)

works. The error only occurs when the integrand is a Function defined on the boundary function space Vb.
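
For completeness, the workaround I mentioned at the top is to integrate against the cell measure of the boundary mesh itself; in example_2d_1d this assembles fine:

print(assemble(ub*dxb))  # 20.0, i.e. 5 times the perimeter of the unit square

But a form written with dxb cannot also contain terms defined on the full mesh m, which is what I actually need.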

I am not sure how you intend to use this in a bigger example, but in general you should not assemble a function defined on a different mesh.
Have you had a look at @cdaversin's Mixed dimensional FEM?
For an example, see Stress continuity using mixed-dimensional branch or
https://bitbucket.org/fenics-project/dolfin/raw/f0a90b8fffc958ed612484cd3465e59994b695f9/python/demo/undocumented/meshview-3D2D/demo_meshview-3D2D.py


Hi Jorgen,

Thanks for the quick reply and the references! I am somewhat familiar with Cecile's branch. In my application I am using fixed-point iterations to decouple and solve bulk-surface PDEs, so I only really need the ability to use (known) functions defined on the surface in variational forms for the bulk, e.g.

Fbulk = inner(grad(u), grad(v))*dx + ub*v*ds

where ub holds the current (known) values of some function living on the boundary and u is the unknown function in the bulk. I figured an alternative (albeit possibly inefficient) approach would be to interpolate ub onto the bulk function space V using the LagrangeInterpolator class. If possible, I would really like to keep the problems decoupled rather than solving them monolithically; I will look more into the arXiv paper you sent. Thanks again!
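
Something like this untested sketch, continuing example_2d_1d above (V and ub_bulk are names I am introducing here, and I am assuming LagrangeInterpolator can evaluate ub across the two meshes since allow_extrapolation is set):

V = FunctionSpace(m, "P", 1)               # bulk space on the full mesh
u, v = TrialFunction(V), TestFunction(V)
ub_bulk = Function(V)                      # receives ub mapped onto V

# Transfer the boundary-mesh function onto the bulk space
LagrangeInterpolator().interpolate(ub_bulk, ub)

# Only the boundary values of ub_bulk enter the ds term, so the
# extrapolated values at interior dofs should not affect the form
Fbulk = inner(grad(u), grad(v))*dx + ub_bulk*v*ds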

Yes, I think the solution is to use a LagrangeInterpolator…

But I do not see why you do not want to solve this problem monolithically. The boundary space is much smaller than the full space, so solving the coupled problem (using Cécile's approach) should not require large amounts of memory.

Hi Jorgen,

I'm still looking into the possibility of a monolithic solver for my application. In general, the actual problems we are solving are on very large meshes with many vertices on the boundaries. Sometimes the coupling is weak and it makes more sense to use an iterative approach; other times the coupling is stronger and a monolithic solve may make more sense.

I tried LagrangeInterpolator, and that pretty much fixed the issue! I had totally forgotten this class existed until I stumbled upon some old forum threads :").

Thanks,
Justin