Closed dpwolfe closed 3 years ago
Hello!
If you look at your error log, you can actually see the issue:
File "/home/dpwolfe/repo/OpenPCDet/pcdet/datasets/augmentor/data_augmentor.py", line 84, in random_image_flip
It looks like your environment is using the source code from OpenPCDet rather than CaDDN, which are currently slightly different. (I'm currently working on bringing this repo in line with OpenPCDet.) It might be that you are using the same conda environment for both, and since you installed OpenPCDet more recently, its code is being used.
If you are already using the OpenPCDet repo, my recommendation is to stick with the CaDDN implementation over there. However, if you only require CaDDN, then I would recommend using the source code here and just making sure the two repos are in separate virtual environments.
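The separate-environments setup can be sketched like this (paths and environment names are illustrative; shown with the stdlib `venv` module, but the same idea applies to two conda environments):

```shell
# Keep each repo's "pcdet" in its own environment so one cannot shadow
# the other. Environment paths here are illustrative.
python3 -m venv "$HOME/envs/caddn"       # environment for the CaDDN repo
python3 -m venv "$HOME/envs/openpcdet"   # environment for the OpenPCDet repo

# Then install each repo only inside its own environment, e.g.:
#   source "$HOME/envs/caddn/bin/activate"
#   cd ~/repo/CaDDN && pip install -r requirements.txt && python setup.py develop
```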
Thank you @codyreading for the fast reply! I'll give this a shot and let you know how it goes here soon.
Also FYI, refer to https://github.com/TRAILab/CaDDN/issues/23 for running on torch versions > 1.4.0. This has been fixed in OpenPCDet but not in CaDDN.
Thank you again @codyreading! This worked great along with the patch mentioned in #23.
Hello,
Thank you for maintaining this project. I am trying out the test.py and train.py scripts with the KITTI dataset on an NVIDIA Jetson AGX Xavier and running into an error I am having trouble resolving. Shortly after I get to these log lines, an exception is thrown:
This is the command:
python train.py --cfg_file cfgs/kitti_models/CaDDN.yaml --batch_size 2
The exception is this:
While going through the setup process, I've needed to use some slightly different versions of dependencies declared in the requirements file. I've done this to use available pre-built versions for aarch64 and avoid having to build them myself. Those are:
- numpy 1.19.5 instead of 1.20.1
- scikit-image 0.18.0rc1 instead of 0.18.1
- scipy 1.5.4 instead of 1.6.1
- tifffile 2020.9.3 instead of 2021.2.26
I've also needed to make a couple major version changes that I'm concerned might be the problem:
- kornia 0.5.3 instead of 0.2.2, since 0.2.2 was not readily available for aarch64
- torch 1.6.0, since kornia 0.5.3 requires torch >= 1.6.0
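The conflict between those pins can be checked mechanically. A minimal sketch, where the `version_tuple` helper is my own illustration, not part of either repo:

```python
# Compare dotted version strings numerically, ignoring suffixes like
# "rc1" or local tags like "+cu102". Helper is illustrative only.
def version_tuple(v):
    parts = []
    for piece in v.split("+")[0].split("."):
        num = ""
        for ch in piece:
            if not ch.isdigit():
                break
            num += ch
        parts.append(int(num) if num else 0)
    return tuple(parts)

# kornia 0.5.3 requires torch >= 1.6.0, while CaDDN's requirements pin
# torch 1.4.0 and kornia 0.2.2 -- the two constraints cannot both hold.
print(version_tuple("1.4.0") >= version_tuple("1.6.0"))  # False
print(version_tuple("1.6.0") >= version_tuple("1.6.0"))  # True
```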
Hardware and Environment:
- NVIDIA Jetson AGX Xavier
- JetPack 4.5 (Ubuntu 18.04)
- Python 3.6 (using miniforge)
- CUDA 10.2
My `conda list` output:

Also, if I run
python test.py --cfg_file cfgs/kitti_models/CaDDN.yaml --batch_size 2 --ckpt ../checkpoints/caddn.pth
I will get the following:

Followed by this error:
Have you seen these errors before, or do you know if they're caused by the change in one of the dependencies, such as PyTorch 1.6.0 instead of 1.4.0?
I greatly appreciate your help. Otherwise, my next path is to unwind my setup and rebuild it with torch 1.4.0 (NVIDIA provides a build), since the exception originates from torch. I'll also have to see whether I can build kornia 0.2.2 for aarch64.
Thank you!