zy-cuhk opened this issue 7 months ago
These two files are not used to generate the dataset. For maps, you can refer to https://github.com/KumarRobotics/kr_param_map to generate your own .pcd files. For trajectories, please refer to https://github.com/ZJU-FAST-Lab/GCOPTER to generate sample trajectories.
We will add a guideline for dataset generation later.
@yuwei-wu I am unable to build your package. The issue is specific to the libtorch directory inside planner, where the code produces compile errors. I have attached the build log below to help figure out the problem:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h: In substitution of ‘template<class T> decltype ((T::hash(o), size_t())) c10::_hash_detail::dispatch_hash(const T&) [with T = unsigned int]’:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:295:39:   required from ‘size_t c10::hash<T>::operator()(const T&) const [with T = unsigned int; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:354:24:   required from ‘size_t c10::_hash_detail::simple_get_hash(const T&) [with T = unsigned int; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:306:40:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<idx, Ts>::operator()(const std::tuple<_Args2 ...>&) const [with long unsigned int idx = 1; Ts = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:319:56:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::operator()(const std::tuple<_Tps ...>&) const [with Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:368:50:   required from ‘size_t c10::get_hash(const Types& ...) [with Types = {std::shared_ptr<torch::autograd::Node>, unsigned int}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/autograd/edge.h:53:54:   required from here
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:51: error: ‘hash’ is not a member of ‘unsigned int’
  285 | auto dispatch_hash(const T& o) -> decltype(T::hash(o), size_t()) {
      |                                            ~~~~~~~^~~
In file included from /usr/include/c++/9/ext/alloc_traits.h:36,
                 from /usr/include/c++/9/bits/stl_tree.h:67,
                 from /usr/include/c++/9/set:60,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/root_finder.hpp:31,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/trajectory.hpp:28,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/visualizer.hpp:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/src/learning_planning.cpp:1:
/usr/include/c++/9/bits/alloc_traits.h: In instantiation of ‘static std::allocator_traits<std::allocator<_Tp1> >::size_type std::allocator_traits<std::allocator<_Tp1> >::max_size(const allocator_type&) [with _Tp = void; std::allocator_traits<std::allocator<_Tp1> >::size_type = long unsigned int; std::allocator_traits<std::allocator<_Tp1> >::allocator_type = std::allocator<void>]’:
/usr/include/c++/9/bits/stl_vector.h:1780:51:   required from ‘static std::vector<_Tp, _Alloc>::size_type std::vector<_Tp, _Alloc>::_S_max_size(const _Tp_alloc_type&) [with _Tp = void; _Alloc = std::allocator<void>; std::vector<_Tp, _Alloc>::size_type = long unsigned int; std::vector<_Tp, _Alloc>::_Tp_alloc_type = std::allocator<void>]’
/usr/include/c++/9/bits/stl_vector.h:921:27:   required from ‘std::vector<_Tp, _Alloc>::size_type std::vector<_Tp, _Alloc>::max_size() const [with _Tp = void; _Alloc = std::allocator<void>; std::vector<_Tp, _Alloc>::size_type = long unsigned int]’
/usr/include/c++/9/bits/vector.tcc:69:23:   required from ‘void std::vector<_Tp, _Alloc>::reserve(std::vector<_Tp, _Alloc>::size_type) [with _Tp = void; _Alloc = std::allocator<void>; std::vector<_Tp, _Alloc>::size_type = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/functional.h:20:3:   required from ‘std::vector<decltype (fn((* inputs.begin())))> c10::fmap(const T&, const F&) [with F = torch::jit::Object::get_properties() const::<lambda(c10::ClassType::Property)>; T = std::vector<c10::ClassType::Property>; decltype (fn((* inputs.begin()))) = void]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/api/object.h:153:6:   required from here
/usr/include/c++/9/bits/alloc_traits.h:505:20: error: ‘const allocator_type’ {aka ‘const class std::allocator<void>’} has no member named ‘max_size’
  505 | { return __a.max_size(); }
      |          ~~~~^~~~~~~~
In file included from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/Dict_inl.h:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/Dict.h:397,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/ivalue_inl.h:8,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/ivalue.h:1555,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/List_inl.h:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/List.h:490,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/IListRef_inl.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/IListRef.h:631,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/DeviceGuard.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/ATen.h:9,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/script.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/planner/learning_planner.hpp:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/src/learning_planning.cpp:10:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h: In instantiation of ‘size_t c10::hash<T>::operator()(const T&) const [with T = double; size_t = long unsigned int]’:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:354:24:   required from ‘size_t c10::_hash_detail::simple_get_hash(const T&) [with T = double; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:306:40:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<idx, Ts>::operator()(const std::tuple<_Args2 ...>&) const [with long unsigned int idx = 1; Ts = {const double&, const double&}; Types = {const double&, const double&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:319:56:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::operator()(const std::tuple<_Tps ...>&) const [with Types = {const double&, const double&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:368:50:   required from ‘size_t c10::get_hash(const Types& ...) [with Types = {double, double}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:375:20:   required from ‘size_t c10::hash<c10::complex<U> >::operator()(const c10::complex<U>&) const [with T = double; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/Dict_inl.h:48:70:   required from here
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:295:39: error: no matching function for call to ‘dispatch_hash(const double&)’
  295 | return _hash_detail::dispatch_hash(o);
      |        ~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:273:6: note: candidate: ‘template<class T> decltype (((std::hash<_Tp>()(o), <expression error>), <expression error>)) c10::_hash_detail::dispatch_hash(const T&)’
  273 | auto dispatch_hash(const T& o)
      |      ^~~~~~~~~~~~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:273:6: note:   template argument deduction/substitution failed:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:6: note: candidate: ‘template<class T> decltype ((T::hash(o), size_t())) c10::_hash_detail::dispatch_hash(const T&)’
  285 | auto dispatch_hash(const T& o) -> decltype(T::hash(o), size_t()) {
      |      ^~~~~~~~~~~~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:6: note:   template argument deduction/substitution failed:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h: In substitution of ‘template<class T> decltype ((T::hash(o), size_t())) c10::_hash_detail::dispatch_hash(const T&) [with T = double]’:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:295:39:   required from ‘size_t c10::hash<T>::operator()(const T&) const [with T = double; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:354:24:   required from ‘size_t c10::_hash_detail::simple_get_hash(const T&) [with T = double; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:306:40:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<idx, Ts>::operator()(const std::tuple<_Args2 ...>&) const [with long unsigned int idx = 1; Ts = {const double&, const double&}; Types = {const double&, const double&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:319:56:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::operator()(const std::tuple<_Tps ...>&) const [with Types = {const double&, const double&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:368:50:   required from ‘size_t c10::get_hash(const Types& ...) [with Types = {double, double}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:375:20:   required from ‘size_t c10::hash<c10::complex<U> >::operator()(const c10::complex<U>&) const [with T = double; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/Dict_inl.h:48:70:   required from here
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:51: error: ‘hash’ is not a member of ‘double’
  285 | auto dispatch_hash(const T& o) -> decltype(T::hash(o), size_t()) {
      |                                            ~~~~~~~^~~
In file included from /usr/include/c++/9/bits/stl_tempbuf.h:60,
                 from /usr/include/c++/9/bits/stl_algo.h:62,
                 from /usr/include/c++/9/algorithm:62,
                 from /usr/include/eigen3/Eigen/Core:288,
                 from /usr/include/eigen3/Eigen/Dense:1,
                 from /usr/include/eigen3/Eigen/Eigen:1,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/root_finder.hpp:32,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/trajectory.hpp:28,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/visualizer.hpp:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/src/learning_planning.cpp:1:
/usr/include/c++/9/bits/stl_construct.h: In instantiation of ‘void std::_Construct(_T1*, _Args&& ...) [with _T1 = at::Tensor; _Args = {}]’:
/usr/include/c++/9/bits/stl_uninitialized.h:545:18:   required from ‘static _ForwardIterator std::__uninitialized_default_n_1<_TrivialValueType>::__uninit_default_n(_ForwardIterator, _Size) [with _ForwardIterator = at::Tensor*; _Size = long unsigned int; bool _TrivialValueType = false]’
/usr/include/c++/9/bits/stl_uninitialized.h:601:20:   required from ‘_ForwardIterator std::__uninitialized_default_n(_ForwardIterator, _Size) [with _ForwardIterator = at::Tensor*; _Size = long unsigned int]’
/usr/include/c++/9/bits/stl_uninitialized.h:663:44:   required from ‘_ForwardIterator std::__uninitialized_default_n_a(_ForwardIterator, _Size, std::allocator<_Tp>&) [with _ForwardIterator = at::Tensor*; _Size = long unsigned int; _Tp = at::Tensor]’
/usr/include/c++/9/bits/stl_vector.h:1603:36:   required from ‘void std::vector<_Tp, _Alloc>::_M_default_initialize(std::vector<_Tp, _Alloc>::size_type) [with _Tp = at::Tensor; _Alloc = std::allocator<at::Tensor>; std::vector<_Tp, _Alloc>::size_type = long unsigned int]’
/usr/include/c++/9/bits/stl_vector.h:509:9:   required from ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>::size_type, const allocator_type&) [with _Tp = at::Tensor; _Alloc = std::allocator<at::Tensor>; std::vector<_Tp, _Alloc>::size_type = long unsigned int; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<at::Tensor>]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/ExpandUtils.h:436:46:   required from here
/usr/include/c++/9/bits/stl_construct.h:75:7: error: use of deleted function ‘at::Tensor::Tensor()’
   75 | { ::new(static_cast<void*>(__p)) _T1(std::forward<_Args>(__args)...); }
      |   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/Dict_inl.h:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/Dict.h:397,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/ivalue_inl.h:8,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/ivalue.h:1555,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/List_inl.h:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/List.h:490,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/IListRef_inl.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/IListRef.h:631,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/DeviceGuard.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/ATen.h:9,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/script.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/planner/learning_planner.hpp:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/src/learning_planning.cpp:10:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h: In instantiation of ‘size_t c10::hash<T>::operator()(const T&) const [with T = std::shared_ptr<torch::autograd::Node>; size_t = long unsigned int]’:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:354:24:   required from ‘size_t c10::_hash_detail::simple_get_hash(const T&) [with T = std::shared_ptr<torch::autograd::Node>; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:314:43:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<0, Ts ...>::operator()(const std::tuple<_Args1 ...>&) const [with Ts = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:307:39:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<idx, Ts>::operator()(const std::tuple<_Args2 ...>&) const [with long unsigned int idx = 1; Ts = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:319:56:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::operator()(const std::tuple<_Tps ...>&) const [with Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:368:50:   required from ‘size_t c10::get_hash(const Types& ...) [with Types = {std::shared_ptr<torch::autograd::Node>, unsigned int}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/autograd/edge.h:53:54:   required from here
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:295:39: error: no matching function for call to ‘dispatch_hash(const std::shared_ptr<torch::autograd::Node>&)’
  295 | return _hash_detail::dispatch_hash(o);
      |        ~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:273:6: note: candidate: ‘template<class T> decltype (((std::hash<_Tp>()(o), <expression error>), <expression error>)) c10::_hash_detail::dispatch_hash(const T&)’
  273 | auto dispatch_hash(const T& o)
      |      ^~~~~~~~~~~~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:273:6: note:   template argument deduction/substitution failed:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:6: note: candidate: ‘template<class T> decltype ((T::hash(o), size_t())) c10::_hash_detail::dispatch_hash(const T&)’
  285 | auto dispatch_hash(const T& o) -> decltype(T::hash(o), size_t()) {
      |      ^~~~~~~~~~~~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:6: note:   template argument deduction/substitution failed:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h: In substitution of ‘template<class T> decltype ((T::hash(o), size_t())) c10::_hash_detail::dispatch_hash(const T&) [with T = std::shared_ptr<torch::autograd::Node>]’:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:295:39:   required from ‘size_t c10::hash<T>::operator()(const T&) const [with T = std::shared_ptr<torch::autograd::Node>; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:354:24:   required from ‘size_t c10::_hash_detail::simple_get_hash(const T&) [with T = std::shared_ptr<torch::autograd::Node>; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:314:43:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<0, Ts ...>::operator()(const std::tuple<_Args1 ...>&) const [with Ts = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:307:39:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<idx, Ts>::operator()(const std::tuple<_Args2 ...>&) const [with long unsigned int idx = 1; Ts = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:319:56:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::operator()(const std::tuple<_Tps ...>&) const [with Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:368:50:   required from ‘size_t c10::get_hash(const Types& ...) [with Types = {std::shared_ptr<torch::autograd::Node>, unsigned int}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/autograd/edge.h:53:54:   required from here
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:51: error: ‘hash’ is not a member of ‘std::shared_ptr<torch::autograd::Node>’
  285 | auto dispatch_hash(const T& o) -> decltype(T::hash(o), size_t()) {
      |                                            ~~~~~~~^~~
In file included from /usr/include/x86_64-linux-gnu/c++/9/bits/c++allocator.h:33,
                 from /usr/include/c++/9/bits/allocator.h:46,
                 from /usr/include/c++/9/bits/stl_tree.h:64,
                 from /usr/include/c++/9/set:60,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/root_finder.hpp:31,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/trajectory.hpp:28,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/visualizer.hpp:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/src/learning_planning.cpp:1:
/usr/include/c++/9/ext/new_allocator.h: In instantiation of ‘void __gnu_cxx::new_allocator<_Tp>::construct(_Up*, _Args&& ...) [with _Up = torch::jit::BuiltinModule; _Args = {const char (&)[5]}; _Tp = torch::jit::BuiltinModule]’:
/usr/include/c++/9/bits/alloc_traits.h:483:4:   required from ‘static void std::allocator_traits<std::allocator<_Tp1> >::construct(std::allocator_traits<std::allocator<_Tp1> >::allocator_type&, _Up*, _Args&& ...) [with _Up = torch::jit::BuiltinModule; _Args = {const char (&)[5]}; _Tp = torch::jit::BuiltinModule; std::allocator_traits<std::allocator<_Tp1> >::allocator_type = std::allocator<torch::jit::BuiltinModule>]’
/usr/include/c++/9/bits/shared_ptr_base.h:548:39:   required from ‘std::_Sp_counted_ptr_inplace<_Tp, _Alloc, _Lp>::_Sp_counted_ptr_inplace(_Alloc, _Args&& ...) [with _Args = {const char (&)[5]}; _Tp = torch::jit::BuiltinModule; _Alloc = std::allocator<torch::jit::BuiltinModule>; __gnu_cxx::_Lock_policy _Lp = __gnu_cxx::_S_atomic]’
/usr/include/c++/9/bits/shared_ptr_base.h:679:16:   required from ‘std::__shared_count<_Lp>::__shared_count(_Tp*&, std::_Sp_alloc_shared_tag<_Alloc>, _Args&& ...) [with _Tp = torch::jit::BuiltinModule; _Alloc = std::allocator<torch::jit::BuiltinModule>; _Args = {const char (&)[5]}; __gnu_cxx::_Lock_policy _Lp = __gnu_cxx::_S_atomic]’
/usr/include/c++/9/bits/shared_ptr_base.h:1344:71:   required from ‘std::__shared_ptr<_Tp, _Lp>::__shared_ptr(std::_Sp_alloc_shared_tag<_Tp>, _Args&& ...) [with _Alloc = std::allocator<torch::jit::BuiltinModule>; _Args = {const char (&)[5]}; _Tp = torch::jit::BuiltinModule; __gnu_cxx::_Lock_policy _Lp = __gnu_cxx::_S_atomic]’
/usr/include/c++/9/bits/shared_ptr.h:359:59:   required from ‘std::shared_ptr<_Tp>::shared_ptr(std::_Sp_alloc_shared_tag<_Tp>, _Args&& ...) [with _Alloc = std::allocator<torch::jit::BuiltinModule>; _Args = {const char (&)[5]}; _Tp = torch::jit::BuiltinModule]’
/usr/include/c++/9/bits/shared_ptr.h:701:14:   required from ‘std::shared_ptr<_Tp> std::allocate_shared(const _Alloc&, _Args&& ...) [with _Tp = torch::jit::BuiltinModule; _Alloc = std::allocator<torch::jit::BuiltinModule>; _Args = {const char (&)[5]}]’
/usr/include/c++/9/bits/shared_ptr.h:717:39:   required from ‘std::shared_ptr<_Tp> std::make_shared(_Args&& ...) [with _Tp = torch::jit::BuiltinModule; _Args = {const char (&)[5]}]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/resolver.h:53:52:   required from here
/usr/include/c++/9/ext/new_allocator.h:146:4: error: no matching function for call to ‘torch::jit::BuiltinModule::BuiltinModule(const char [5])’
  146 | { ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
      |   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/resolver.h:5,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/script_type_parser.h:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/serialization/unpickler.h:7,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/serialization/pickle.h:8,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/script.h:10,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/planner/learning_planner.hpp:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/src/learning_planning.cpp:10:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/sugared_value.h:308:3: note: candidate: ‘torch::jit::BuiltinModule::BuiltinModule(std::string, int)’
  308 | BuiltinModule(std::string name, c10::optional<int64_t> version = at::nullopt)
      |   ^~~~~~~~~~~~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/sugared_value.h:308:3: note:   candidate expects 2 arguments, 1 provided
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/sugared_value.h:307:18: note: candidate: ‘torch::jit::BuiltinModule::BuiltinModule(const torch::jit::BuiltinModule&)’
  307 | struct TORCH_API BuiltinModule : public SugaredValue {
      |                  ^~~~~~~~~~~~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/sugared_value.h:307:18: note:   no known conversion for argument 1 from ‘const char [5]’ to ‘const torch::jit::BuiltinModule&’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/sugared_value.h:307:18: note: candidate: ‘torch::jit::BuiltinModule::BuiltinModule(torch::jit::BuiltinModule&&)’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/sugared_value.h:307:18: note:   no known conversion for argument 1 from ‘const char [5]’ to ‘torch::jit::BuiltinModule&&’
make[2]: *** [CMakeFiles/learning_planning.dir/build.make:63: CMakeFiles/learning_planning.dir/src/learning_planning.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:764: CMakeFiles/learning_planning.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
cd /home/kkg/codes_rl/AllocNet/build/planner; catkin build --get-env planner | catkin env -si /usr/bin/make --jobserver-auth=3,4; cd -
...............................................................................
Failed   << planner:make             [ Exited with code 2 ]
Failed  <<< planner                  [ 1 minute and 50.3 seconds ]
[build] Summary: 2 of 3 packages succeeded.
[build]   Ignored:   None.
[build]   Warnings:  None.
[build]   Abandoned: None.
[build]   Failed:    1 packages failed.
[build] Runtime: 1 minute and 52.8 seconds total.
It looks like a dependency issue. Are you using the GPU or CPU version?
@yuwei-wu thank you for the reply. I downloaded the CPU version of libtorch from the link and placed it in the planner folder, as you may see above. Then I changed the device to CPU in the learning_planner.hpp file in the include directory inside the planner folder. My gcc version is 9.4.0.
Hi, there is a PyTorch update that causes this issue. You can try downloading this version:
I will make sure to update the readme accordingly. Thank you for bringing this issue up.
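For reference, a minimal sketch of how a CMake target is typically pointed at a vendored libtorch like the one in src/planner/libtorch (the paths, target name, and C++ standard here are assumptions based on this thread, not the package's actual CMakeLists.txt):

```cmake
# Point find_package at the unpacked libtorch directory (hypothetical path).
set(Torch_DIR "${CMAKE_CURRENT_SOURCE_DIR}/libtorch/libtorch/share/cmake/Torch")
find_package(Torch REQUIRED)

add_executable(learning_planning src/learning_planning.cpp)
target_link_libraries(learning_planning "${TORCH_LIBRARIES}")

# Recent libtorch headers require C++17; compiling them with an older
# standard (e.g. -std=c++14 under GCC 9) can produce template errors
# like the ones earlier in this thread.
set_property(TARGET learning_planning PROPERTY CXX_STANDARD 17)
```

Matching the downloaded libtorch version to the version the model was exported with, as suggested above, is still the key step.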
Hi @yuwei-wu, thanks, that libtorch version works.
@yuwei-wu Thank you for sharing the source code of this paper. I have successfully compiled the code, but after running it, the program fails to compute a result when given two target points, and an error occurs. I haven't been able to identify where the issue lies, and I hope to get your assistance. Below is the error log. Thank you very much.
process[learning_planning_node-4]: started with pid [117827]
[ INFO] [1728459359.161506086]: rviz version 1.14.20
[ INFO] [1728459359.161538487]: compiled against Qt version 5.12.8
[ INFO] [1728459359.161554244]: compiled against OGRE version 1.9.0 (Ghadamon)
[ INFO] [1728459359.168224787]: Forcing OpenGl version 0.
frame_id odom
[set up the model]
optOrder 3
[ INFO] [1728459359.861809834]: Stereo is NOT SUPPORTED
[ INFO] [1728459359.861849441]: OpenGL device: NVIDIA GeForce RTX 4060 Laptop GPU/PCIe/SSE2
[ INFO] [1728459359.861859904]: OpenGl version: 4.6 (GLSL 4.6).
++++++++++++++++++++++++++++++++++++++
+++++++Grid Map Information+++++++++++
+++ resolution : 0.1
+++ map volume : 2000
+++ origin     : -10 -10 0
+++ size       : 20 20 5
++++++++++++++++++++++++++++++++++++++
error loading the model
Error: Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
CPU: registered at aten/src/ATen/RegisterCPU.cpp:31085 [kernel]
Meta: registered at aten/src/ATen/RegisterMeta.cpp:26824 [kernel]
QuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:929 [kernel]
BackendSelect: registered at aten/src/ATen/RegisterBackendSelect.cpp:734 [kernel]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at ../aten/src/ATen/FunctionalizeFallbackKernel.cpp:290 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: fallthrough registered at ../aten/src/ATen/ConjugateFallback.cpp:21 [kernel]
Negative: fallthrough registered at ../aten/src/ATen/native/NegateFallback.cpp:23 [kernel]
ZeroTensor: fallthrough registered at ../aten/src/ATen/ZeroTensorFallback.cpp:90 [kernel]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradHIP: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradMPS: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradIPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradVE: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradMeta: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradMTIA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_2.cpp:16868 [kernel]
AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at ../aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ../aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ../aten/src/ATen/LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ../aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]
Exception raised from reportError at ../aten/src/ATen/core/dispatch/OperatorEntry.cpp:549 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits
++++++++++++++++++++++++++++++++++++++
+++ Finished generate random map ! +++
+++ The ratios for geometries are: +++
+++ cylinders : 12.03% +++
+++ circles : 0.51% +++
+++ gates : 0.21% +++
+++ ellipsoids : 1.14% +++
+++ polytopes : 1.01% +++
++++++++++++++++++++++++++++++++++++++
[ WARN] [1728459363.211652491]: GRID OBS: 298117
[ WARN] [1728459363.636415996]: GRID OBS: 298117
[ WARN] [1728459364.057875391]: GRID OBS: 298117
[ WARN] [1728459364.476742100]: GRID OBS: 298117
[ WARN] [1728459364.894841533]: GRID OBS: 298117
[ WARN] [1728459365.315261510]: GRID OBS: 298117
[ WARN] [1728459365.736091932]: GRID OBS: 297825
[ WARN] [1728459366.157382648]: GRID OBS: 297294
[ INFO] [1728459366.571006419]: Setting goal: Frame:odom, Position(2.751, 3.602, 0.000), Orientation(0.000, 0.000, -0.313, 0.950) = Angle: -0.636
[ WARN] [1728459366.576724889]: GRID OBS: 297506
[ WARN] [1728459366.997799207]: GRID OBS: 297380
[ WARN] [1728459367.420854779]: GRID OBS: 297380
[ WARN] [1728459367.840933026]: GRID OBS: 297380
[ WARN] [1728459368.258019060]: GRID OBS: 297055
[ WARN] [1728459368.676310048]: GRID OBS: 296854
[ WARN] [1728459369.093412528]: GRID OBS: 296651
[ WARN] [1728459369.510933459]: GRID OBS: 296585
[ INFO] [1728459369.521487203]: Setting goal: Frame:odom, Position(4.008, -1.260, 0.000), Orientation(0.000, 0.000, -0.392, 0.920) = Angle: -0.806
============================ New Try ===================================
[ WARN] [1728459369.930596198]: GRID OBS: 296517
[learning_planning_node-4] process has died [pid 117827, exit code -6, cmd /home/penghui/AllocNet/devel/lib/planner/learning_planning name:=learning_planning_node log:=/home/penghui/.ros/log/20544c20-8611-11ef-b210-9d62ecb80ba5/learning_planning_node-4.log]. log file: /home/penghui/.ros/log/20544c20-8611-11ef-b210-9d62ecb80ba5/learning_planning_node-4*.log
[ WARN] [1728459370.346978741]: GRID OBS: 296517
[ WARN] [1728459370.765971635]: GRID OBS: 296043
[ WARN] [1728459371.186035702]: GRID OBS: 295908
[ WARN] [1728459371.603406476]: GRID OBS: 295908
[ WARN] [1728459372.023163742]: GRID OBS: 295748
[ WARN] [1728459372.440859526]: GRID OBS: 295748
[ WARN] [1728459372.858567931]: GRID OBS: 295371
[ WARN] [1728459373.276415321]: GRID OBS: 295371
[ WARN] [1728459373.692699892]: GRID OBS: 295371
[ WARN] [1728459374.111118668]: GRID OBS: 295371
[ WARN] [1728459374.527947539]: GRID OBS: 295217
Hi, are you using the CPU version of the model? This error usually occurs because the model was not loaded correctly or the library versions do not match.
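A quick way to check for the version-mismatch case (this is a hedged sketch, not part of the repository): standard libtorch archives ship a `build-version` file at their root, which you can compare against the PyTorch version that exported the TorchScript model, also confirming that both are CPU (or both CUDA) builds. The directory path and helper name below are illustrative assumptions:

```python
from pathlib import Path

def libtorch_build_version(libtorch_dir):
    """Return the version string bundled with a libtorch distribution, or None.

    Assumes the standard libtorch archive layout, which places a
    `build-version` file (e.g. "2.0.1+cpu") at the archive root.
    """
    f = Path(libtorch_dir) / "build-version"
    return f.read_text().strip() if f.is_file() else None

# Example (path is hypothetical, matching the layout from the error messages):
# print(libtorch_build_version("src/planner/libtorch/libtorch"))
```

If the reported build ends in `+cpu` while the model was exported from a CUDA PyTorch install (or the version numbers differ), re-exporting the model with a matching CPU PyTorch is the usual fix.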
Thank you very much for your response. I am using the CPU version, and the issue has been resolved. I was using the wrong version, but now it's working correctly. Thanks again.
When I try to obtain some training data: (1) running "python3 sample_on_single_map.py" fails with "[Open3D WARNING] Read PCD failed: unable to open file: datasets/single_map_dataset/map.pcd"; (2) running "python3 sample_trajs.py" fails with "File "sample_trajs.py", line 8, in from utils.bilevel_traj_opt import BiLevelTrajOpt ModuleNotFoundError: No module named 'utils.bilevel_traj_opt'". I looked for a script named bilevel_traj_opt in the repository's "utils" directory, but could not find it.
Please help me with these two problems. Thank you!
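Regarding the first error: Open3D only warns about a file it cannot open, so the usual causes are that map.pcd has not been generated yet (e.g. with kr_param_map) or that the script is run from a working directory where the relative path does not resolve. A minimal, hypothetical pre-flight check (the path is taken verbatim from the error message; the helper name is made up):

```python
import sys
from pathlib import Path

def check_pcd(path):
    """Return True if the .pcd file exists and is non-empty; otherwise
    print a diagnostic showing the absolute path that was actually tried."""
    path = Path(path)
    if not path.is_file():
        print(f"Missing PCD file: {path.resolve()}", file=sys.stderr)
        print("Generate it first (e.g. with kr_param_map) or run the "
              "script from the directory the relative path expects.",
              file=sys.stderr)
        return False
    return path.stat().st_size > 0

# Path copied from the Open3D warning:
check_pcd("datasets/single_map_dataset/map.pcd")
```

Printing the resolved absolute path makes it obvious whether the problem is a missing file or the wrong working directory.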