We have been able to run the Python version of FoundationPose released by NVLabs on an RTX 3060 with 12 GB of VRAM, but when we try the ROS version of FoundationPose, the GPU memory usage shoots up and execution ultimately stops with the errors below:
[component_container_mt-1] 2024-06-21 14:47:16.877 ERROR gxf/std/block_memory_pool.cpp@77: Failure in cudaMalloc. cuda_error: cudaErrorMemoryAllocation, error_str: out of memory
[component_container_mt-1] 2024-06-21 14:47:16.877 ERROR gxf/std/entity_warden.cpp@437: Failed to initialize component 00157 (pool)
[component_container_mt-1] 2024-06-21 14:47:16.877 ERROR gxf/core/runtime.cpp@702: Could not initialize entity 'YNWEMQPEYV_inference' (E152): GXF_OUT_OF_MEMORY
[component_container_mt-1] 2024-06-21 14:47:16.877 ERROR gxf/std/program.cpp@283: Failed to activate entity 00152 named YNWEMQPEYV_inference: GXF_OUT_OF_MEMORY
[component_container_mt-1] 2024-06-21 14:47:16.877 ERROR gxf/std/program.cpp@285: Deactivating...
[component_container_mt-1] 2024-06-21 14:47:16.877 ERROR gxf/core/runtime.cpp@1452: Graph activation failed with error: GXF_OUT_OF_MEMORY
Is there a fix or workaround for this? Is there any reason for the memory requirement of this package to be higher compared to the original FoundationPose?
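For reference, we identified which component fails by pulling the entity name out of the GXF log programmatically. This is just an illustrative sketch; the regex is our own assumption based on the log format shown above, not an official GXF parsing API:

```python
import re

# Example GXF log line copied from the failure output above.
log_line = ("[component_container_mt-1] 2024-06-21 14:47:16.877 ERROR "
            "gxf/core/runtime.cpp@702: Could not initialize entity "
            "'YNWEMQPEYV_inference' (E152): GXF_OUT_OF_MEMORY")

def find_oom_entity(line):
    """Return the entity name from a GXF_OUT_OF_MEMORY log line, or None.

    The pattern assumes the "Could not initialize entity '<name>' ...
    GXF_OUT_OF_MEMORY" shape seen in the log above.
    """
    m = re.search(r"entity '([^']+)'.*GXF_OUT_OF_MEMORY", line)
    return m.group(1) if m else None

print(find_oom_entity(log_line))  # → YNWEMQPEYV_inference
```

In our runs it is consistently the inference entity's memory pool that fails to allocate.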
The requirement mentioned in your documentation is already satisfied: https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_pose_estimation/index.html#supported-platforms