WuKyrin opened 8 months ago
I get the same error followed by a "Segmentation fault". Did you find a solution?
It's hard to reproduce your problem. One thing I'm curious about: how many correspondences are you giving to TEASER++?
Hi, thanks for your reply. I'm giving about 600 putative correspondences to TEASER++, but the segmentation fault only happens in cases with a high noise bound, e.g. above 1.0. For low noise bounds, the algorithm works as intended. Funnily enough, I also get a segmentation fault at the end of the algorithm even after successful registration.
I'd happily give you more detailed information, but I'm not sure what info to give.
If I run an example in C++ with an extremely high noise bound, it takes long but it still works.
I'm using Windows 11 with WSL2 (Ubuntu 20.04), Python 3.6, and PyCharm, with open3d==0.15.2.
Hmmm, it's weird. So you mean you only have the problem when you run TEASER++ from Python? Please use gdb and check the lines where the TEASER++ code breaks down.
Hi @LimHyungTae, yes, only if I run the Python examples with a high noise bound. If I run the C++ version, even with a high noise bound, it runs perfectly. As I'm having issues with VS Code and such, I started using the Python implementation.
I'm running with os.environ["OMP_NUM_THREADS"] = "12". This is the output from gdb:
```
Starting program: /home/armin/anaconda3/envs/teasertest/bin/python teaser_python_ply.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[Detaching after fork from child process 76685]
[Detaching after fork from child process 76686]
[Detaching after fork from child process 76689]
[Detaching after fork from child process 76690]
[Detaching after fork from child process 76691]
[Detaching after fork from child process 76694]
[New Thread 0x7f3509278700 (LWP 76695)]
[New Thread 0x7f34dacfd700 (LWP 76696)]
[New Thread 0x7f34d9714700 (LWP 76697)]
[New Thread 0x7f34d8f13700 (LWP 76698)]
[Thread 0x7f34d8f13700 (LWP 76698) exited]
[New Thread 0x7f34d39f8780 (LWP 76699)]
[New Thread 0x7f34d31f6800 (LWP 76700)]
[New Thread 0x7f34d29f4880 (LWP 76701)]
[New Thread 0x7f34d21f2900 (LWP 76702)]
[New Thread 0x7f34d19f0980 (LWP 76703)]
[New Thread 0x7f34d11eea00 (LWP 76704)]
[New Thread 0x7f34d09eca80 (LWP 76705)]
[New Thread 0x7f34c5ffeb00 (LWP 76706)]
[New Thread 0x7f34c57fcb80 (LWP 76707)]
[New Thread 0x7f34c4ffac00 (LWP 76708)]
[New Thread 0x7f34c47f8c80 (LWP 76709)]
[Detaching after fork from child process 76710]
[Detaching after fork from child process 76712]
[New Thread 0x7f34d8f13700 (LWP 76713)]
[New Thread 0x7f34c1586700 (LWP 76714)]
[New Thread 0x7f34c0d85700 (LWP 76715)]
[New Thread 0x7f34802b2700 (LWP 76716)]
[New Thread 0x7f347dab1700 (LWP 76717)]
[New Thread 0x7f347b2b0700 (LWP 76718)]
[New Thread 0x7f3478aaf700 (LWP 76719)]
[New Thread 0x7f34782ae700 (LWP 76720)]
[New Thread 0x7f3473aad700 (LWP 76721)]
[New Thread 0x7f34712ac700 (LWP 76722)]
[New Thread 0x7f346eaab700 (LWP 76723)]
TEASER++ Python registration example
[Detaching after vfork from child process 76725]
[New Thread 0x7f345a74e700 (LWP 76727)]
[New Thread 0x7f344beb8700 (LWP 76728)]
[New Thread 0x7f344b6b7700 (LWP 76729)]
[New Thread 0x7f344aeb6700 (LWP 76730)]
[New Thread 0x7f344a6b5700 (LWP 76731)]
Starting scale solver (only selecting inliers if scale estimation has been disabled).
Scale estimation complete.
--Type <RET> for more, q to quit, c to continue without paging--
Thread 1 "python" received signal SIGSEGV, Segmentation fault.
0x00007f345cc13f10 in teaser::RobustRegistrationSolver::solve(Eigen::Matrix const&, Eigen::Matrix const&) ()
   from /home/armin/PYTEASER/build/teaser/libteaser_registration.so
```
How large a noise bound are you using now? A 'large' noise bound is not intended; it should be no larger than the voxel sampling size (if you conduct voxel sampling as preprocessing).
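To make the rule of thumb concrete: if the clouds are voxel-downsampled first, the noise bound can simply be tied to the voxel size. A minimal NumPy sketch of voxel downsampling (one centroid per occupied voxel; this illustrates the preprocessing, it is not Open3D's or TEASER++'s internal implementation):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse.ravel(), points)
    return sums / counts[:, None]

rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))   # synthetic points in the unit cube
voxel_size = 0.25
down = voxel_downsample(cloud, voxel_size)
noise_bound = voxel_size        # rule of thumb: noise bound <= voxel size
print(down.shape, noise_bound)
```

With Open3D (already in your environment), `pcd.voxel_down_sample(voxel_size)` does the same job on a real point cloud.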
Well, I'm trying to register two point clouds from different sources. One point cloud is highly accurate, extracted from a 3D model; the other is noisy/incomplete, reconstructed from SLAM. I extracted the columns from both point clouds and then the center point of each column. Now I'm trying to register these column center points with all-to-all correspondences. It works very well when I use the same cloud as source and target, but I could not get it to work in the cross-source case with a noise bound above 2.5. I have around 25 column center points in each point cloud.
My idea was to increase the noise bound to compensate for the inaccuracies in the SLAM column centers... If I could just get the translation, that would be beneficial and I could do more testing. Maybe I am not fully understanding the paper yet.
@arminkmbr All-to-all correspondences are going to be difficult for your case with a large number of points. The segfault might be caused by the program using too much memory.
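To see the scaling: all-to-all matching of n source and m target points yields n·m putative correspondences, and inlier selection reasons over pairs of correspondences, so a dense pairwise-consistency structure grows roughly with (n·m)². A back-of-the-envelope sketch (the 8-bytes-per-entry dense-matrix estimate is an illustrative assumption, not TEASER++'s exact internal layout):

```python
def all_to_all_stats(n_src, n_dst, bytes_per_entry=8):
    """Correspondence count for all-to-all matching, plus a rough size
    for a dense pairwise-consistency matrix over those correspondences."""
    k = n_src * n_dst  # number of putative correspondences
    return k, k * k * bytes_per_entry

k_small, mem_small = all_to_all_stats(25, 25)    # ~25 column centers per cloud
k_big, mem_big = all_to_all_stats(1000, 1000)    # modest full-size clouds
print(k_small, mem_small / 2**20)  # 625 correspondences, ~3 MiB
print(k_big, mem_big / 2**40)      # 1,000,000 correspondences, ~7 TiB
```

So 25 centers per side is still cheap, but all-to-all matching on anything cloud-sized blows up very quickly.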
```
[New Thread 0x7fff53ff7700 (LWP 1431930)]
[New Thread 0x7fff537f6700 (LWP 1431931)]
[New Thread 0x7fff547f8700 (LWP 1431932)]
[New Thread 0x7fff51ff3700 (LWP 1431933)]
[New Thread 0x7fff577fe700 (LWP 1431934)]
[New Thread 0x7fff56ffd700 (LWP 1431935)]
[New Thread 0x7fff567fc700 (LWP 1431936)]
[New Thread 0x7fff55ffb700 (LWP 1431937)]
[New Thread 0x7fff557fa700 (LWP 1431938)]
[New Thread 0x7fff54ff9700 (LWP 1431939)]
[New Thread 0x7fff52ff5700 (LWP 1431940)]
[New Thread 0x7fff527f4700 (LWP 1431941)]
Starting scale solver (only selecting inliers if scale estimation has been disabled).
Scale estimation complete.
Max core number: 2
Num vertices: 53
Max Clique of scale estimation inliers:
Using chain graph for GNC rotation.
Starting rotation solver.
GNC rotation estimation noise bound:0.6
GNC rotation estimation noise bound squared:0.36
GNC-TLS solver terminated due to cost convergence.
Cost diff: 0
Iterations: 5
Rotation estimation complete.
Starting translation solver.
Translation estimation complete.
double free or corruption (out)
```
Hello, this always appears before each segmentation fault. Do you have any good solutions?
Did you set `OMP_NUM_THREADS=16` in front of your command?
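Setting the cap inside Python also works, but only if the environment variable is in place before the OpenMP runtime initializes; a minimal sketch (the commented-out import marks where the TEASER++ bindings would be loaded):

```python
import os

# OpenMP reads OMP_NUM_THREADS when its runtime starts up, so set it
# before importing any library that pulls in OpenMP.
os.environ["OMP_NUM_THREADS"] = "16"

# import teaserpp_python  # import the bindings only *after* the line above

print(os.environ["OMP_NUM_THREADS"])  # → 16
```

From the shell, the equivalent is prefixing the command, e.g. `OMP_NUM_THREADS=16 python teaser_python_ply.py`.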
I encountered an "Edge exists" error message while running the TEASER++ solver on a point cloud dataset in my project. The issue arises under specific circumstances, which leads me to believe there might be a problem with how edges are processed or identified within the dataset. The dataset consists of an STL file of a femur and a PLY file we captured with our RGBD camera of the operating scenes of patients.