GeekAlexis / FastMOT

High-performance multiple object tracking based on YOLO, Deep SORT, and KLT 🚀

installation problems #27

Closed juanluisrosaramos closed 3 years ago

juanluisrosaramos commented 3 years ago

I really want to test your project as it seems quite accurate in tracking, but I couldn't create a proper environment to test it.

Can you please provide version numbers of the software you're using? To my understanding, CUDA and the libraries around it change quickly, and you are using quite a lot of frameworks (ONNX, TensorRT, ...) that become possible points of failure.

My best attempt was to use an NVIDIA TensorRT docker image, but I finally hit this error on ONNX and couldn't run it:

```
[TensorRT] ERROR: (Unnamed Layer* 486) [Shuffle]: at most one dimension may be inferred
[CRITICAL] Failed to parse the ONNX file
[ERROR] In node -1 (scaleHelper): UNSUPPORTED_NODE: Assertion failed: dims.nbDims == 4 || dims.nbDims == 5
```

I'm using this image

Maybe you can build a docker image :)

Thank you very much and kind regards, Juan Luis

GeekAlexis commented 3 years ago

@juanluisrosaramos Hi, thanks for your interest. Please try an older image tag like 20.09. TensorRT 7.2 seems to have issues with one of the ONNX models. Let me know if it works. I do plan to provide a Docker image if time permits.
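
For reference, the TensorRT version a container ships can be checked from Python (the NGC 20.09 image should carry a 7.1.x build, while 20.10 and later moved to 7.2):

```python
# Confirm which TensorRT build this container ships before building engines.
import tensorrt as trt

print(trt.__version__)  # expect 7.1.x in the 20.09 image, 7.2.x in 20.10+
```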

juanluisrosaramos commented 3 years ago

Hi @GeekAlexis, I got it working, but with some problems. TensorRT images newer than 20.03 fail to load the models with "Unable to load the engine file":

```
[TensorRT] ERROR: ../rtSafe/coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
```
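
For anyone reproducing this: the failing step is plain engine deserialization, which looks roughly like the sketch below with the standard TensorRT Python API (the engine filename is a placeholder). An engine serialized by one TensorRT build fails the header check when loaded by another:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# Engines are tied to the exact TensorRT build that serialized them;
# a container with a different version fails verifyHeader as above.
with open('model.trt', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())  # None on mismatch
if engine is None:
    raise RuntimeError('Unable to load the engine file')
```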

In fact, I couldn't get it working with YOLO and had to use SSD from the config file, so you need to download PascalVOC.

OpenCV with GStreamer couldn't be installed in that Docker image, so set WITH_GSTREAMER = False here.
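
Whether a given OpenCV build has GStreamer can be checked from its build report before flipping that flag:

```python
import cv2

# Prints the GStreamer line from OpenCV's build info, e.g. "GStreamer: NO";
# WITH_GSTREAMER in FastMOT should match this.
for line in cv2.getBuildInformation().splitlines():
    if 'GStreamer' in line:
        print(line.strip())
```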

To launch the container:

```
xhost +
docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
    -v $(pwd):/workspace nvcr.io/nvidia/tensorrt:20.03-py3
```

Then proceed with the install: download the models and build YOLOv4...

Installing uff is also needed, as in the how-to you already provided:

```
/workspace/TensorRT-7.2.1.6/uff# pip install uff-0.6.9-py2.py3-none-any.whl
```
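
A quick sanity check that the wheel landed in the container's Python (reading the installed version via pkg_resources is just one way to do it):

```python
import pkg_resources
import uff  # raises ImportError if the wheel was not installed

# The version should match the wheel shipped in the TensorRT tarball,
# e.g. 0.6.9 for TensorRT 7.2.1.6.
print(pkg_resources.get_distribution('uff').version)
```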

Hope it helps. Please don't hesitate to reach out if anything else or any improvement is needed.

Thanks, Juan Luis

GeekAlexis commented 3 years ago

@juanluisrosaramos Thanks for your detailed instructions! CUDA 11 might be the issue if TensorRT images above 20.03 failed. Not sure why YOLOv4 doesn't work though.

juanluisrosaramos commented 3 years ago

You're welcome. I tried to compile the model following https://github.com/jkjung-avt/tensorrt_demos#yolov4 but it's still not working. Maybe YOLO not working is related to TensorRT versions?

> Was this engine (opt.engine) also created using TensorRT 6.0? If the engine was created and run on different versions, this may happen. TensorRT engines are not compatible across different TensorRT versions.

Mentioned here
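
That matches the behavior here. For reference, rebuilding an engine with the TensorRT version installed in the container looks roughly like this sketch against TensorRT 7's Python API; the paths are placeholders, and FastMOT's actual YOLO build path differs since it goes through a plugin:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

def build_engine(onnx_path, engine_path):
    """Parse an ONNX model and serialize an engine with the installed TensorRT."""
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        with open(onnx_path, 'rb') as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        builder.max_workspace_size = 1 << 30  # 1 GiB
        engine = builder.build_cuda_engine(network)
        if engine is not None:
            with open(engine_path, 'wb') as f:
                f.write(engine.serialize())  # only loadable by this TRT version
        return engine
```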

GeekAlexis commented 3 years ago

@juanluisrosaramos The deserialization error seems strange. The engine should be both created and run using your installed version in the container. Did you switch containers after creating the engine? Can you delete yolov4_crowdhuman.trt and rebuild the engine?
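
The delete-and-rebuild amounts to something like the following sketch; `build_fn` is a hypothetical stand-in for whatever regenerates the engine, not FastMOT's actual code:

```python
import os
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

def load_or_rebuild(engine_path, build_fn):
    """Deserialize a cached engine, rebuilding it if it came from another TensorRT build."""
    if os.path.exists(engine_path):
        with open(engine_path, 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
            engine = runtime.deserialize_cuda_engine(f.read())
        if engine is not None:
            return engine
        os.remove(engine_path)  # stale engine serialized by a different version
    return build_fn(engine_path)  # regenerate with the installed TensorRT
```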

juanluisrosaramos commented 3 years ago

@GeekAlexis you're right, deleting and rebuilding the engine makes it work with YOLO. The problem now is that YOLO is not detecting anyone, while SSD works like a charm. I reviewed the configuration and there isn't any error there. I will let you know when I have YOLO detecting something. But definitely, the TensorRT containers are a good starting point for getting your project working. Thanks again for sharing your code.

GeekAlexis commented 3 years ago

@juanluisrosaramos Thanks for the update. Check #30 about YOLO

Jordan-5i commented 3 years ago

> I really want to test your project as it seems quite accurate in tracking, but I couldn't create a proper environment to test it. [...]
>
> ```
> [TensorRT] ERROR: (Unnamed Layer* 486) [Shuffle]: at most one dimension may be inferred
> [CRITICAL] Failed to parse the ONNX file
> [ERROR] In node -1 (scaleHelper): UNSUPPORTED_NODE: Assertion failed: dims.nbDims == 4 || dims.nbDims == 5
> ```

I also encountered the same problem. Do you mean that the version of TensorRT is wrong? I am using TensorRT 7.2.1.6 with CUDA 10.2. I have not used TensorRT before, so I am very confused about this.

GeekAlexis commented 3 years ago

@Jordan-5i Yes, at this time please use TensorRT < 7.2. If you can't, I will support a Docker container soon, so stay tuned.
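
A minimal startup guard along those lines (hypothetical, not part of FastMOT) could fail fast on an affected version:

```python
import tensorrt as trt

# Assumes, per the advice above, that the ONNX parser issue affects 7.2+.
major, minor = (int(v) for v in trt.__version__.split('.')[:2])
if (major, minor) >= (7, 2):
    raise RuntimeError(f'TensorRT {trt.__version__} hits an ONNX parser issue; use < 7.2')
```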

GeekAlexis commented 3 years ago

Docker image is added, closing this now.

Jordan-5i commented 3 years ago

I used the docker image you provided, but running it in Docker prompted an error. Mine is Ubuntu 18.04 with an RTX 2080 Ti. What is the error?

```
root@43fb18ff6fcb:/FastMOT# python3 app.py --input_uri test.flv --mot
[INFO] 1280x720 stream @ 25 FPS
[INFO] Loading detector model...
[TensorRT] ERROR: coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "app.py", line 109, in <module>
    main()
  File "app.py", line 47, in main
    draw=draw, verbose=args.verbose)
  File "/FastMOT/fastmot/mot.py", line 50, in __init__
    self.detector = YoloDetector(self.size, config['yolo_detector'])
  File "/FastMOT/fastmot/detector.py", line 181, in __init__
    self.backend = InferenceBackend(self.model, self.batch_size)
  File "/FastMOT/fastmot/utils/inference.py", line 45, in __init__
    raise RuntimeError('Unable to load the engine file')
RuntimeError: Unable to load the engine file
```


GeekAlexis commented 3 years ago

@Jordan-5i Hi, you are using the old TensorRT engines generated on your host. I just fixed .dockerignore on master so old engines won't be copied over to the container. You can remove and rebuild the image, then repeat the steps you did to download the models and compile the plugin.