Nanobind throws bad_cast after dolfinx and nanobind update

I just updated dolfinx and nanobind, along with everything else in my conda environment; they are now dolfinx 0.10.0 and nanobind 2.9.2, respectively. The previous versions were installed this summer. With them, the following C++ code, applied to the _cpp_object inside the Python-side Function, used to work:

    #include <nanobind/nanobind.h>
    #include <dolfinx/fem/Function.h>

    using dolfinx::fem::Function;
    typedef Function<double> Function_f64;

    // Wrap the borrowed PyObject* in a nanobind handle and cast it back
    // to the bound C++ object.
    const Function_f64* cast_Function_f64(const PyObject* obj) {
        auto handle = nanobind::handle(obj);
        return nanobind::cast<const Function_f64*>(handle);
    }

Now, after the update, it throws std::bad_cast.
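As a diagnostic, the throwing cast can be replaced by nanobind's non-throwing variant. This is a sketch only: it assumes nanobind 2.x's public nanobind::try_cast API, and the Function_f64 alias and helper name are the ones from the snippet above.

    #include <nanobind/nanobind.h>
    #include <dolfinx/fem/Function.h>

    namespace nb = nanobind;
    typedef dolfinx::fem::Function<double> Function_f64;

    // Diagnostic variant: returns nullptr instead of throwing when the
    // handle does not map to a registered Function_f64 binding.
    const Function_f64* try_cast_Function_f64(PyObject* obj) {
        const Function_f64* out = nullptr;
        if (nb::try_cast<const Function_f64*>(nb::handle(obj), out))
            return out;
        return nullptr;  // same failure condition that now throws
    }

Whether try_cast succeeds or fails tells you if the problem is the type lookup itself rather than anything about the pointer.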

Does anybody have an idea what changed? Is there any other way to get the C++ object pointer for use in my code?

The following code, added to the function above, confirms that obj is a Function_float64:

        // Print the Python type name (PyType_GetName requires Python >= 3.11).
        const char* cstr = PyUnicode_AsUTF8(PyType_GetName(obj->ob_type));
        std::cout << cstr << std::endl;
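If the type name matches but the typed cast still fails, nanobind's low-level instance API offers an escape hatch. A sketch, with a strong caveat: nb::inst_ptr performs no type check at all, so this is only safe if you have already verified (for example by the type-name check above) that the object really is a nanobind instance wrapping a Function_f64.

    #include <nanobind/nanobind.h>
    #include <dolfinx/fem/Function.h>

    namespace nb = nanobind;
    typedef dolfinx::fem::Function<double> Function_f64;

    // Unchecked extraction: returns the C++ payload of a nanobind instance
    // without consulting nanobind's type database.
    const Function_f64* unsafe_get_Function_f64(PyObject* obj) {
        nb::handle h(obj);
        return nb::inst_ptr<Function_f64>(h);
    }

This bypasses the machinery that is failing, but it also bypasses its safety, so treat it as a workaround rather than a fix.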

I’m using raw PyObject pointers here because my own bindings run Python ← pyo3 → Rust ← cxx → C++, with the main part of my abstract code written in Rust and some FEniCS extensibility in Python. But I also need some higher-performance parts that require the C++ pointer.

I would probably ask the nanobind developers, as this seems very nanobind-specific. Do you have a minimal reproducible example?

I’ve posted the question to the nanobind people as well, at github/jakob/nanobind/discussions/1234. (Sorry, this forum doesn’t allow links.)

Downgrading to nanobind 2.8.0 and dolfinx 0.9.0 from conda-forge makes the problem disappear. Did something significant happen between those versions, either in dolfinx or in nanobind? Is ._cpp_object still the nanobind-generated Python-side container for the C++ Function<double>? Did something move in the dolfinx headers? I’ve read that nanobind can be sensitive to the correct headers being #included.