cartographer-project / point_cloud_viewer

View billions of points in your browser.
Apache License 2.0
339 stars 98 forks

ply to octree fails due to missing position data #374

Closed ajprax closed 5 years ago

ajprax commented 5 years ago

I'm running into an issue while trying to convert a ply to an octree. When I run the command below I get normal-seeming output for a while, and then it consistently crashes due to a missing .xyz file for a particular node. The splitting process seems to reach a point where it stops making progress: a large number of nodes are created with the same set of 878063 points (more than half the cloud), and the first of these nodes is the one that causes the error. I'm wondering if a file is not created because it would have no points, or something like that. It's also strange that so many points end up in the same node; I've checked my point cloud, and there are no large concentrations of points — they're fairly evenly distributed throughout the entire volume.

Command I'm running:

./target/release/build_octree ~/tmp/1.nonans.ply --output_directory ~/tmp/1_octree

Output

Splitting r which has 1602518 points (16.03x MAX_POINTS_PER_NODE).
Splitting r4 which has 135377 points (1.35x MAX_POINTS_PER_NODE).
Splitting r1 which has 135509 points (1.36x MAX_POINTS_PER_NODE).
Splitting r5 which has 1308914 points (13.09x MAX_POINTS_PER_NODE).
Splitting r12 which has 106121 points (1.06x MAX_POINTS_PER_NODE).
Splitting r42 which has 106078 points (1.06x MAX_POINTS_PER_NODE).
Splitting r50 which has 106336 points (1.06x MAX_POINTS_PER_NODE).
Splitting r52 which has 1199278 points (11.99x MAX_POINTS_PER_NODE).
Splitting r522 which has 1193464 points (11.93x MAX_POINTS_PER_NODE).
Splitting r5222 which has 1187137 points (11.87x MAX_POINTS_PER_NODE).
Splitting r52222 which has 1182802 points (11.83x MAX_POINTS_PER_NODE).
Splitting r522222 which has 1176433 points (11.76x MAX_POINTS_PER_NODE).
Splitting r5222222 which has 1171077 points (11.71x MAX_POINTS_PER_NODE).
Splitting r52222222 which has 1164831 points (11.65x MAX_POINTS_PER_NODE).
Splitting r522222222 which has 1160570 points (11.61x MAX_POINTS_PER_NODE).
Splitting r5222222222 which has 1153695 points (11.54x MAX_POINTS_PER_NODE).
Splitting r52222222222 which has 1142597 points (11.43x MAX_POINTS_PER_NODE).
Splitting r522222222220 which has 1134263 points (11.34x MAX_POINTS_PER_NODE).
Splitting r5222222222200 which has 1130290 points (11.30x MAX_POINTS_PER_NODE).
Splitting r52222222222000 which has 1122819 points (11.23x MAX_POINTS_PER_NODE).
Splitting r522222222220000 which has 1116353 points (11.16x MAX_POINTS_PER_NODE).
Splitting r5222222222200002 which has 1107310 points (11.07x MAX_POINTS_PER_NODE).
Splitting r52222222222000020 which has 1101742 points (11.02x MAX_POINTS_PER_NODE).
Splitting r522222222220000200 which has 1094439 points (10.94x MAX_POINTS_PER_NODE).
Splitting r5222222222200002002 which has 1085260 points (10.85x MAX_POINTS_PER_NODE).
Splitting r52222222222000020020 which has 1076004 points (10.76x MAX_POINTS_PER_NODE).
Splitting r522222222220000200205 which has 1066903 points (10.67x MAX_POINTS_PER_NODE).
Splitting r5222222222200002002052 which has 1052999 points (10.53x MAX_POINTS_PER_NODE).
Splitting r52222222222000020020527 which has 1046108 points (10.46x MAX_POINTS_PER_NODE).
Splitting r522222222220000200205275 which has 1040038 points (10.40x MAX_POINTS_PER_NODE).
Splitting r5222222222200002002052755 which has 998032 points (9.98x MAX_POINTS_PER_NODE).
Splitting r52222222222000020020527552 which has 992318 points (9.92x MAX_POINTS_PER_NODE).
Splitting r522222222220000200205275522 which has 986783 points (9.87x MAX_POINTS_PER_NODE).
Splitting r5222222222200002002052755222 which has 980000 points (9.80x MAX_POINTS_PER_NODE).
Splitting r52222222222000020020527552220 which has 972787 points (9.73x MAX_POINTS_PER_NODE).
Splitting r522222222220000200205275522202 which has 966082 points (9.66x MAX_POINTS_PER_NODE).
Splitting r5222222222200002002052755222022 which has 960214 points (9.60x MAX_POINTS_PER_NODE).
Splitting r52222222222000020020527552220220 which has 952283 points (9.52x MAX_POINTS_PER_NODE).
Splitting r522222222220000200205275522202200 which has 945983 points (9.46x MAX_POINTS_PER_NODE).
Splitting r5222222222200002002052755222022002 which has 938236 points (9.38x MAX_POINTS_PER_NODE).
Splitting r52222222222000020020527552220220020 which has 928568 points (9.29x MAX_POINTS_PER_NODE).
Splitting r522222222220000200205275522202200202 which has 922601 points (9.23x MAX_POINTS_PER_NODE).
Splitting r5222222222200002002052755222022002022 which has 915815 points (9.16x MAX_POINTS_PER_NODE).
Splitting r52222222222000020020527552220220020222 which has 895203 points (8.95x MAX_POINTS_PER_NODE).
Splitting r522222222220000200205275522202200202225 which has 891581 points (8.92x MAX_POINTS_PER_NODE).
Splitting r5222222222200002002052755222022002022255 which has 885605 points (8.86x MAX_POINTS_PER_NODE).
Splitting r000002222222222000020020527552220220020222555 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000002222222220000200205275522202200202225551 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000002222222200002002052755222022002022255511 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000002222222000020020527552220220020222555113 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000002222220000200205275522202200202225551136 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000002222200002002052755222022002022255511360 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000002222000020020527552220220020222555113604 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000002220000200205275522202200202225551136040 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000002200002002052755222022002022255511360400 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000000002000020020527552220220020222555113604000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000200205275522202200202225551136040000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000002002052755222022002022255511360400000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000000000000020020527552220220020222555113604000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000200205275522202200202225551136040000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000002002052755222022002022255511360400000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000000000020527552220220020222555113604000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000000000000000000205275522202200202225551136040000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000000002052755222022002022255511360400000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000000000000527552220220020222555113604000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000000000000005275522202200202225551136040000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000000000000000000000002755222022002022255511360400000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000000000000007552220220020222555113604000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000000000000000005522202200202225551136040000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000000000000000000000005222022002022255511360400000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000000000000000000000002220220020222555113604000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000000000000000000000000002202200202225551136040000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000000000000000000000000000002022002022255511360400000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000000000000000000000000000000000220020222555113604000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000000000000000000000000000000000000000002200202225551136040000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000000000000000000000000000000000000000000002002022255511360400000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000000000000000000000000000000000000020222555113604000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000000000000000000000000000000000000000202225551136040000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000000000000000000000000000000000000000000000002022255511360400000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000000000000000000000000000000000000000222555113604000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000000000000000000000000000000000000000002225551136040000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000000000000000000000000000000000000000000002255511360400000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000000000000000000000000000000000000000000000000000002555113604000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000000000000000000000000000000000000000000000000000000005551136040000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000000000000000000000000000000000000000000000005511360400000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000000000000000000000000000000000000000000000000000005113604000000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000000000000000000000000000000000000000000000000000000000000000001136040000000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000000000000000000000000000000000000000000000000000000000001360400000000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000000000000000000000000000000000000000000000000000000000003604000000000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000000000000000000000000000000000000000000000000000000000000000006040000000000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Splitting r0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE).
Node r00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 which has 878063 points (8.78x MAX_POINTS_PER_NODE) is too small to be split, keeping all points.
Building level 44: 1 / 2 [====================================================================================================================>--------------------------------------------------------------------------------------------------------------------] 50.00 % 3728.38/s 0s

Error

thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: Error(NodeNotFound, State { next_error: None, backtrace: InternalBacktrace { backtrace: Some(stack backtrace:
   0: error_chain::backtrace::imp::InternalBacktrace::new
   1: <error_chain::State as core::default::Default>::default
   2: <point_viewer::data_provider::on_disk::OnDiskDataProvider as point_viewer::data_provider::common::DataProvider>::data
   3: point_viewer::read_write::node_iterator::NodeIterator::from_data_provider
   4: point_viewer::octree::generation::subsample_children_into
   5: <F as scoped_pool::Task>::run
   6: scoped_pool::Pool::run_thread
   7: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:80
   8: core::ops::function::FnOnce::call_once{{vtable.shim}}
   9: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
             at /rustc/237d54ff6c4fb3577e02d4c5af02813c11b63d01/src/liballoc/boxed.rs:932
  10: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
             at /rustc/237d54ff6c4fb3577e02d4c5af02813c11b63d01/src/liballoc/boxed.rs:932
      std::sys_common::thread::start_thread
             at src/libstd/sys_common/thread.rs:13
      std::sys::unix::thread::Thread::new::thread_start
             at src/libstd/sys/unix/thread.rs:79
  11: start_thread
  12: __clone
) } })', src/libcore/result.rs:1165:5
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:77
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:61
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1028
   5: std::io::Write::write_fmt
             at src/libstd/io/mod.rs:1412
   6: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:65
   7: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:50
   8: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:189
   9: std::panicking::default_hook
             at src/libstd/panicking.rs:206
  10: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:469
  11: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:376
  12: rust_begin_unwind
             at src/libstd/panicking.rs:303
  13: core::panicking::panic_fmt
             at src/libcore/panicking.rs:84
  14: core::result::unwrap_failed
             at src/libcore/result.rs:1165
  15: <F as scoped_pool::Task>::run
  16: scoped_pool::Pool::run_thread
  17: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:80
  18: core::ops::function::FnOnce::call_once{{vtable.shim}}
  19: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
             at /rustc/237d54ff6c4fb3577e02d4c5af02813c11b63d01/src/liballoc/boxed.rs:932
  20: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
             at /rustc/237d54ff6c4fb3577e02d4c5af02813c11b63d01/src/liballoc/boxed.rs:932
  21: std::sys_common::thread::start_thread
             at src/libstd/sys_common/thread.rs:13
  22: std::sys::unix::thread::Thread::new::thread_start
             at src/libstd/sys/unix/thread.rs:79
  23: start_thread
  24: __clone
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread 'main' panicked at 'WaitGroup explicitly poisoned!', /home/aaron/.cargo/registry/src/github.com-1ecc6299db9ec823/scoped-pool-1.0.0/src/lib.rs:457:13
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:77
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:61
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1028
   5: std::io::Write::write_fmt
             at src/libstd/io/mod.rs:1412
   6: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:65
   7: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:50
   8: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:189
   9: std::panicking::default_hook
             at src/libstd/panicking.rs:206
  10: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:469
  11: std::panicking::begin_panic
  12: scoped_pool::Scope::join
  13: scoped_pool::Scope::zoom
  14: point_viewer::octree::generation::build_octree_from_file
  15: build_octree::main
  16: std::rt::lang_start::{{closure}}
  17: std::rt::lang_start_internal::{{closure}}
             at src/libstd/rt.rs:48
  18: std::panicking::try::do_call
             at src/libstd/panicking.rs:288
  19: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:80
  20: std::panicking::try
             at src/libstd/panicking.rs:267
  21: std::panic::catch_unwind
             at src/libstd/panic.rs:396
  22: std::rt::lang_start_internal
             at src/libstd/rt.rs:47
  23: main
  24: __libc_start_main
  25: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

ls of the node that causes the error (also the first node with the final number of points):

ls ~/tmp/1_octree/r000002222222222000020020527552220220020222555*
~/tmp/1_octree/r000002222222222000020020527552220220020222555.rgb
feuerste commented 5 years ago

Hi @ajprax! Sorry to hear that you ran into problems. Would it be possible for you to provide the ply file for reproduction?

ajprax commented 5 years ago

https://drive.google.com/file/d/1G94EA74FkXseDXFQKj2Y8itmhmx0U3MJ/view?usp=sharing is the cloud in question.

I also just tried a second similar point cloud and got the same issue.

feuerste commented 5 years ago

Thanks for sharing the file. Did you verify that your points are in the correct format? When reading the first 4 points, I get the following coordinates:

[
    -35.082672119140625,
    -20.7841796875,
    262.5119934082031,
]
[
    0.00000000000000000000000000000003094276995682936,
    -0.00000000000016122782904755273,
    -0.00000000000000000000000000000000000110002288181905,
]
[
    NaN,
    0.0000000000000010807879944333592,
    -129058759245824.0,
]
[
    -631795853673368500000000000000.0,
    -0.00000000000000008414934179537062,
    0.000000000000000000000000000000000018215204755882552,
]

This looks pretty suspicious to me...
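(Editor's note: for anyone who wants to reproduce this check without pulling in a PLY library, here is a rough stdlib-only sketch. The function name and the in-memory sample file are illustrative, not part of point_cloud_viewer; it assumes a binary_little_endian ply whose vertex element starts with float x/y/z properties.)

```python
import io
import struct

def read_first_positions(f, count=4):
    """Parse a minimal PLY header, then read the first `count` float32
    x/y/z triples. Assumes binary_little_endian and that the vertex
    element's first three properties are x, y, z."""
    assert f.readline().strip() == b"ply"
    num_vertices = 0
    props = []          # property names of the vertex element
    in_vertex = False
    line = b""
    while line.strip() != b"end_header":
        line = f.readline()
        parts = line.split()
        if parts[:2] == [b"element", b"vertex"]:
            num_vertices = int(parts[2])
            in_vertex = True
        elif parts and parts[0] == b"element":
            in_vertex = False
        elif in_vertex and parts and parts[0] == b"property":
            props.append(parts[2])
    assert props[:3] == [b"x", b"y", b"z"], "vertex must start with x, y, z"
    return [struct.unpack("<fff", f.read(12))
            for _ in range(min(count, num_vertices))]

# Exercise the reader on a tiny binary ply built in memory.
header = (b"ply\nformat binary_little_endian 1.0\n"
          b"element vertex 2\n"
          b"property float x\nproperty float y\nproperty float z\n"
          b"end_header\n")
body = struct.pack("<fff", -35.08, -20.78, 262.51) + struct.pack("<fff", 1.0, 2.0, 3.0)
pts = read_first_positions(io.BytesIO(header + body))
print(pts)
```

Running the same kind of dump against the shared file is what surfaces the NaN and denormal-scale coordinates above.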

ajprax commented 5 years ago

The first one looks about like I'd expect, but the rest are definitely wrong. I generated the ply from a pcd (which I've confirmed looks normal) using pcl's tools, so maybe that conversion didn't work correctly for some reason. In particular, a lot of the points are coming out extremely close to 0, which would explain why so many end up in the same smallest-size octree node. I'll look into fixing the ply and reopen the issue if I still can't generate the octree. Thanks for your help.
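(Editor's note: a back-of-the-envelope calculation shows why the tree recurses so deep before giving up. The numbers below are rough reads off the coordinates printed above, not values from the tool, but the order of magnitude is what matters: a few huge outlier coordinates force an enormous root cube, while the bulk of the corrupted points sit within a tiny neighborhood of the origin.)

```python
import math

# A cell must be halved roughly log2(root_edge / cluster_scale) times
# before its edge shrinks to the scale of the near-zero cluster, so the
# 878063 clustered points cannot separate any earlier.
root_edge = 1.26e30      # assumed: ~twice the largest |coordinate| seen above
cluster_scale = 1e-13    # assumed: typical magnitude of the suspicious points
levels = math.ceil(math.log2(root_edge / cluster_scale))
print(levels)            # on the order of 140+ levels
```

That depth is consistent with the very long node names in the log and with the builder eventually reporting a node "too small to be split" while it still holds 878063 points.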

feuerste commented 5 years ago

Ok, please let us know whether the conversion was correct. If it was, we may have to dig into the ply reader to see if anything is going wrong there.

ajprax commented 5 years ago

Unless the other binary ply readers I tried are also broken, it seems the problem is that pcl's pcd2ply binary encoding is wrong. pcd2ply's ascii output works fine, but I guess you don't support ascii ply yet.
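(Editor's note: for reference, an ascii ply carries the same header fields but stores vertices as human-readable rows, so it sidesteps any binary-encoding mismatch. The two vertex rows below are made-up sample values.)

```text
ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
end_header
-35.082672 -20.784180 262.511993
1.000000 2.000000 3.000000
```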

feuerste commented 5 years ago

@ajprax Thanks for the information! It shouldn't be too difficult to integrate ascii support. How about you create a PR for this? As a starting point you would need to integrate ascii support into https://github.com/googlecartographer/point_cloud_viewer/blob/master/src/read_write/ply.rs#L339 and https://github.com/googlecartographer/point_cloud_viewer/blob/master/src/read_write/ply.rs#L243...