Open fadonoso opened 5 years ago
Hi @fadonoso,
Thank you for your kind suggestion.
Currently we have neither Jetson TX2 nor Xavier machines, so unfortunately we are not able to support the Jetson in the short term. However, we do plan to support ONNX export, which should make the models easier to deploy. We will also adjust the project's dependencies so that it runs on a wider range of environments (e.g. machines without GUI support). We hope these plans will help your project.
@ZwwWayne can a model trained with mmdetection be converted to TensorRT, so that it can run on the Jetson?
Hi @Hemantr05, you can try it and it should be possible, but we have not tried it ourselves yet due to limited resources. PRs are welcome.
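For anyone who wants to experiment, here is a minimal sketch of the generic PyTorch to ONNX step (usually the first stage before building a TensorRT engine). It exports only the backbone as a smoke test; the config/checkpoint paths and input size are placeholders, and a full detector with custom ops generally needs dedicated export tooling rather than a plain torch.onnx.export call:

# Hedged sketch: export just the backbone of an mmdetection model to ONNX.
# Paths and input size below are placeholders, not from this thread.
import torch
from mmdet.apis import init_detector

config_path = 'configs/mask_rcnn/mask_rcnn_r50_fpn_2x_coco.py'  # placeholder
checkpoint_path = 'checkpoint.pth'                               # placeholder

model = init_detector(config_path, checkpoint_path, device='cpu')
model.eval()

dummy = torch.randn(1, 3, 800, 800)  # assumed static input size
# Exporting the full detector usually fails on custom ops; the backbone
# alone (plain convolutions) exports cleanly and verifies the toolchain.
torch.onnx.export(model.backbone, dummy, 'backbone.onnx', opset_version=11)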
I am about to get an NVIDIA Xavier device. I am quite familiar with mmdetection and would like to run its models on the device. Are they supported?
Just install mmcv-full and mmdet, and run the demo.

Environment
➜ ~ python -m mmdet.utils.collect_env
/usr/lib/python3.6/runpy.py:125: RuntimeWarning: 'mmdet.utils.collect_env' found in sys.modules after import of package 'mmdet.utils', but prior to execution of 'mmdet.utils.collect_env'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
fatal: not a git repository (or any of the parent directories): .git
sys.platform: linux
Python: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
CUDA available: True
GPU 0: Xavier
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.2, V10.2.89
GCC: gcc (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.9.0
PyTorch compiling details: PyTorch built with:
- GCC 7.5
- C++ Version: 201402
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: NO AVX
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_53,code=sm_53;-gencode;arch=compute_62,code=sm_62;-gencode;arch=compute_72,code=sm_72
- CuDNN 8.0
- Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=8.0.0, CXX_COMPILER=/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -DMISSING_ARM_VST1 -DMISSING_ARM_VLD1 -Wno-stringop-overflow, FORCE_FALLBACK_CUDA_MPI=1, LAPACK_INFO=open, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=ON, USE_NCCL=0, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.10.0
OpenCV: 4.1.1
MMCV: 1.3.13
MMCV Compiler: GCC 7.5
MMCV CUDA Compiler: 10.2
MMDetection: 2.16.0+
PyTorch for Jetson: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-9-0-now-available/72048

Prepare model
git clone https://github.com/open-mmlab/mmdetection
cd mmdetection
wget https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_2x_coco/mask_rcnn_r50_fpn_2x_coco_bbox_mAP-0.392__segm_mAP-0.354_20200505_003907-3e542a40.pth
Run Demo

import time
from mmcv.runner.fp16_utils import wrap_fp16_model
from mmdet.apis import init_detector, inference_detector

image_path = 'demo/demo.jpg'
config_path = 'configs/mask_rcnn/mask_rcnn_r50_fpn_2x_coco.py'
checkpoint_path = 'mask_rcnn_r50_fpn_2x_coco_bbox_mAP-0.392__segm_mAP-0.354_20200505_003907-3e542a40.pth'

model = init_detector(config_path, checkpoint_path, device='cuda:0')
wrap_fp16_model(model)  # run the model in FP16 to speed up inference

for i in range(5):
    start = time.time()
    result = inference_detector(model, image_path)
    inference_time = time.time() - start
    print(f'Inference time: {inference_time * 1000: .2f}ms')

model.show_result(
    image_path, result=result, score_thr=0.5,
    font_size=13, thickness=1, show=True, win_name='model_result')
Outputs:
Inference time: 2970.71ms
Inference time: 528.09ms
Inference time: 422.58ms
Inference time: 416.77ms
Inference time: 412.91ms
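The first run above is much slower than the rest because it includes CUDA context setup and cuDNN warm-up. For a steadier number, a sketch like the following (same config, checkpoint, and image as the demo above) averages several runs after discarding the first one and synchronises the GPU around each measurement:

import time
import torch
from mmdet.apis import init_detector, inference_detector

model = init_detector(
    'configs/mask_rcnn/mask_rcnn_r50_fpn_2x_coco.py',
    'mask_rcnn_r50_fpn_2x_coco_bbox_mAP-0.392__segm_mAP-0.354_20200505_003907-3e542a40.pth',
    device='cuda:0')

times = []
for _ in range(10):
    torch.cuda.synchronize()          # make sure previous GPU work is done
    start = time.time()
    inference_detector(model, 'demo/demo.jpg')
    torch.cuda.synchronize()          # wait for this inference to finish
    times.append(time.time() - start)

# Discard the first (warm-up) run before averaging.
avg_ms = sum(times[1:]) / len(times[1:]) * 1000
print(f'Average inference time excluding warm-up: {avg_ms:.2f} ms')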
Wanted to confirm that mmcv and mmdetection work without issues on the Xavier (forked l4t-ml Docker image).
ENV MM_INSTALL_DIR=/opt/mmlibs
ENV MMCV_VERSION=v1.3.14
ENV MMDET_VERSION=v2.17.0

RUN mkdir ${MM_INSTALL_DIR}

WORKDIR ${MM_INSTALL_DIR}
RUN git clone 'https://github.com/open-mmlab/mmcv.git' && \
    cd mmcv && \
    git checkout tags/${MMCV_VERSION} && \
    MMCV_WITH_OPS=1 pip3 install -e .

WORKDIR ${MM_INSTALL_DIR}
RUN git clone 'https://github.com/open-mmlab/mmdetection.git' && \
    cd mmdetection && \
    git checkout tags/${MMDET_VERSION} && \
    pip3 install -r requirements/build.txt && \
    pip3 install -e .

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        python3-tk
@anuragxel I'm trying to reproduce your Dockerfile to run an mmdetection container on Xavier. Can you point me to the correct l4t-ml image? I'm using the latest NGC NVIDIA L4T ML image, but I get the following error when the docker build command reaches the installation of mmcv: libcurand.so.10: cannot open shared object file: No such file or directory
@ypwhs I can see you are using PyTorch 1.9.0, but without a CUDA suffix, e.g. torch==1.9.1+cu111? I'm struggling to install PyTorch with GPU support.
Hey @AndreaMacri-sys,
Here's my Dockerfile: https://gist.github.com/anuragxel/6cf5183a21155e1b5dee788f5f58fc20
Working with nvidia containers has been a pain, haven't been able to make mmdetection work with TensorRT within docker yet. Hope this helps!
Hey @anuragxel Thanks for sharing the Dockerfile... I have successfully installed mmcv and mmdet on my Jetson using this. However, when I want to import modules like "from mmdet.apis import init_detector, inference_detector" I always get the following error: ModuleNotFoundError: No module named 'mmcv._ext'
Did anyone encounter the same problems and how did you solve it? Thanks!
Hello,
I have exactly the same issue. I have successfully compiled and installed mmcv and mmdet for use with mmtracking, but when running a demo or just experimenting with mmcv, I always get the same import error, namely:
ModuleNotFoundError: No module named 'mmcv._ext'
I want to specify that I installed these libraries in my regular Python 3.8 environment (no Docker image or Anaconda virtual environment), as I will have to use them in a ROS system.
If anyone has an idea on how to tackle this problem, it would be awesome !
Thanks a lot
Hi @miniferretti, I used an older version of mmcv, and when I installed the newest one it suddenly worked. Maybe that also helps in your case!
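If anyone else hits the mmcv._ext error, a quick sanity check is to confirm that what you installed is mmcv-full with its compiled ops rather than the lite mmcv package. A small sketch (the two helpers below are standard mmcv.ops utilities, nothing project-specific):

import mmcv
print('mmcv version:', mmcv.__version__)

try:
    # These helpers only exist when the compiled ops (mmcv._ext) were built.
    from mmcv.ops import get_compiler_version, get_compiling_cuda_version
    print('compiler used to build ops:', get_compiler_version())
    print('CUDA used to build ops  :', get_compiling_cuda_version())
except ImportError as err:
    print('Compiled ops are missing (plain mmcv instead of mmcv-full?):', err)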
Excuse me, how do I build it? My device is a Jetson AGX Xavier. I followed https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048 and installed, for Python 3.6, torch-1.9.0-cp36-cp36m-linux_aarch64.whl (https://nvidia.box.com/shared/static/h1z9sw4bb1ybi0rm3tu8qdj8hs05ljbm.whl), but when I import torch it fails with Illegal instruction (core dumped).
Hi, I used an older version of mmcv, and when I installed the newest one it suddenly worked. Maybe that also helps in your case! I had the same problem: I got an error when running the program in PyCharm, but when I run it from the system terminal it works. I don't fully understand the reason for this yet. You can try this, I hope it helps.
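When a script works in the terminal but not in PyCharm, the two are usually pointing at different interpreters or site-packages. A quick check you can run in both places (nothing here is specific to this project) is to print which interpreter and which mmcv installation are actually being used:

import sys
import mmcv

print('interpreter :', sys.executable)  # compare this between PyCharm and the terminal
print('mmcv module :', mmcv.__file__)
print('mmcv version:', mmcv.__version__)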
Thanks!!!
Describe the feature: Make mmdetection compatible with Jetson TX2 and Xavier.
Motivation: NVIDIA Jetson is among the most widely used single-board computers for embedded image processing.
Related resources: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/
Regards, Felipe D.