Closed — francesco-ballarin closed this issue 4 years ago
Could be due to changes in FFC with cffi now being used in place of dijitso. @chrisrichardson ?
It's possible. It would be good to add this as a test. @francesco-ballarin - can you push it in a branch for me?
However, it works OK for me.
Yes, I will add a test. I am on current master for both ffcx and dolfinx.
Apparently there is no need to add a new test; I get very similar failures from

```
test_assembler.py::test_matrix_assembly_block
  Expected element: FiniteElement('Lagrange', triangle, 1)
  Input element:    FiniteElement('Lagrange', triangle, 2)
test_assembler.py::test_assembly_solve_taylor_hood[mesh0]
  Expected element: VectorElement(FiniteElement('Lagrange', triangle, 2), dim=2)
  Input element:    FiniteElement('Lagrange', triangle, 1)
test_assembler.py::test_assembly_solve_taylor_hood[mesh1]
  Expected element: VectorElement(FiniteElement('Lagrange', triangle, 2), dim=2)
  Input element:    VectorElement(FiniteElement('Lagrange', tetrahedron, 2), dim=3)
```
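For context on what these failures mean: the generated code records which element each form argument was compiled for, and assembly aborts when the function space passed in carries a different element. A minimal, self-contained Python sketch of that kind of check (invented names; this is not DOLFINx source):

```python
from dataclasses import dataclass

# Hypothetical sketch of the consistency check behind the
# "Expected element / Input element" failures: the compiled form records
# the element each argument was generated for, and assembly is rejected
# when the supplied function space uses a different element.

@dataclass(frozen=True)
class Element:
    family: str
    cell: str
    degree: int

def check_argument(expected: Element, provided: Element) -> None:
    if expected != provided:
        raise ValueError(
            f"Expected element: {expected}\nInput element: {provided}")

p1 = Element("Lagrange", "triangle", 1)
p2 = Element("Lagrange", "triangle", 2)

check_argument(p1, p1)  # matching elements: no error
try:
    check_argument(p1, p2)  # mismatched degree: reproduces the failure mode
except ValueError:
    print("element mismatch detected")
```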
I also get segmentation faults for

```
test_assembler.py::test_basic_assembly
test_assembler.py::test_assembly_bcs
test_assembler.py::test_assembly_solve_block
test_assembler.py::test_projection
```

at `Eigen::internal::handmade_aligned_free` when called from `cpp/dolfin/fem/assemble_vector_impl.cpp:280`.
Is this affecting anybody else? (since CI is clearly not affected) If not, any suggestion on what could be the issue in my environment? (which, in any case, used to work with master at 0068c9bb0)
Maybe you can describe your environment, or better: provide a docker container for it? Most testing we do is on ubuntu/CI or MacOS.
Hi,
the system is CentOS 7. Relevant dependencies are:

```
boost/c++14/1.68.0
cmake/3.13.1
Cython/0.28.5
eigen/3.3.5
fenicsx/2019-02-13-6c0a5ad12/petsc-2018-12-05-d566698e4b-3.10.2
gcc/7.3.1
hdf5/1.8.21
mpi4py/2018-12-05-59c4262-3.0.0
numpy/1.16.0-pre
openmpi/2.1.3/gcc/7.3.1
petsc/2018-12-05-d566698e4b-3.10.2/opt
petsc4py/2018-12-05-3619809-3.10.0
pybind11/2018-12-05-e2b884c
python3/3.6.3
scipy/1.2.0-pre
slepc/2018-12-05-e8be09e9c-3.10.1/opt
slepc4py/2018-12-05-b59f91a-3.10.0
sympy/1.2
```
I will install dolfinx on another machine (debian) and test it there as well.
I can confirm that the code is working on the Debian machine. The most relevant differences are a newer compiler (g++ 8.2.0) and newer python (3.7.2).
It would be useful if you could create a suitable CentOS container (docker). I tried, but CentOS 7 seems quite far behind, and I wasn't sure how to "yum install" a recent gcc etc.
Yes I will, although it will probably take me a few days. I will let you know when the docker container is ready.
We have a fix in the pipeline for this, thanks for reporting it. There shouldn't be any need to make the centos container.
OK, thanks. When the fix is ready, please let me know the name of the branch and I will test it too.
@francesco-ballarin You can test hotfix at FFCX branch https://github.com/FEniCS/ffcx/tree/michal/static-methods
But a proper solution will most likely be different.
Thanks @michalhabera, I confirm that the fix is working on my centos machine. Should I leave this issue open until the "proper solution" is merged?
Has this been solved? I can't reproduce.
It isn't solved yet. You need a specific environment to reproduce this. I will have a look again. My fix of adding the static keyword to all methods is probably not the right thing to do.
Thanks @michalhabera, I am available for testing future PRs on this issue.
@francesco-ballarin Can you test with current ffcx master? I've pushed a fix in https://github.com/FEniCS/ffcx/pull/151 , which solved the issue for me on our CentOS HPC system.
Thanks, I confirm that this solved the issue on my CentOS machine as well. I am closing the issue.
I have the same problem.
```yaml
spack:
  specs:
  - fenics-dolfinx+adios2
  - py-fenics-dolfinx cflags=-O3 fflags=-O3
  view: true
  concretizer:
    unify: true
    reuse: false
```
I installed FEniCS with Spack on Ubuntu 20.04 and compiled it with gcc 14.1.0.
Please specify what exact code you are running and add the full stack trace of the error message.
With the spack.yml file above, run `spack install`, then `cd dolfinx/cpp/demo/poisson` and `cmake . && make`. The compilation is OK, but when I run `./demo_poisson`, the program terminates with the error message `Cannot create form. Wrong type of function space for argument`. This happens for codes with more than one function space.
I solved the problem with the steps from GitHub Actions:

```
spack env create cpp-main
spack env activate cpp-main -p
spack add fenics-dolfinx@main+adios2 cmake py-fenics-ffcx@main
spack install
git clone https://github.com/FEniCS/dolfinx.git
cd dolfinx/cpp/demo/poisson
cmake .
export VERBOSE=1
make -j 4
mpirun -np 2 ./demo_poisson
```
For anyone with the same problem: if you want to develop a C++ program, the package py-fenics-dolfinx must not be installed in the same environment.
I don't have an installation of Ubuntu 20.04 with gcc 14 (as I would have to install gcc manually). It seems like you have resolved your issue by following the GitHub Actions test.
Hi, I believe some of the recent PRs have broken the following code
which prints
Swapping the two assemblies prints instead
Assembling one form at a time (commenting the other) does not raise any error.
I guess that some incorrect caching is happening behind the scenes, but I am not able to debug this any further.
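To make the suspected failure mode concrete, here is a minimal, self-contained Python analogy (not the actual FFCx caching code) of a JIT cache keyed on an incomplete signature, where two distinct forms collide on the same cache entry:

```python
import functools

# Analogy only, not FFCx code: a "JIT compiler" cached on an incomplete
# key. Because the cache key omits the polynomial degree, the second form
# silently receives the kernel compiled for the first one.

@functools.lru_cache(maxsize=None)
def compile_kernel(family: str) -> str:
    return f"kernel<{family}>"

def jit(family: str, degree: int) -> str:
    # BUG: `degree` never reaches the cache key
    return compile_kernel(family)

k1 = jit("Lagrange", 1)
k2 = jit("Lagrange", 2)  # cache hit: reuses the degree-1 kernel
print(k1 == k2)  # True, even though the forms differ
```

Assembling one form in isolation never triggers the collision, which matches the observation that commenting out either assembly makes the error disappear.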
Thanks,
Francesco