Changes:
After these changes I was able to run the DaSiam Tracker on my CPU-only machine.
With your changes, can we still use the DaSiam Tracker on GPU? I was thinking that we could add an argument to main.py, `--gpu`, which would use the GPU version if CUDA was detected and the CPU version if not. What do you think?
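Roughly something like this, just as a sketch (the flag name and the fallback logic are assumptions here, not existing main.py code):

```python
import argparse
import torch

# Hypothetical sketch of the proposed flag; main.py's real argument
# parsing may look different.
parser = argparse.ArgumentParser()
parser.add_argument('--gpu', action='store_true',
                    help='use the CUDA version of the tracker if available')
args = parser.parse_args()

# Fall back to the CPU version when CUDA is missing, even if --gpu was passed.
use_gpu = args.gpu and torch.cuda.is_available()
```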
No, I think it won't run on GPU anymore, tbh. We should definitely add a switch like this:

`device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')`

and replace every `.cuda()` with `.to(device)` (from here: https://github.com/pytorch/pytorch/issues/1668).
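For illustration, the pattern would look roughly like this (the layer and tensor below are placeholders, not DaSiamRPN code):

```python
import torch

# Pick the first CUDA device when available, otherwise fall back to the CPU.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Placeholder module and input, just to show the .cuda() -> .to(device) swap.
net = torch.nn.Linear(10, 2).to(device)   # was: torch.nn.Linear(10, 2).cuda()
x = torch.randn(1, 10, device=device)     # was: torch.randn(1, 10).cuda()
y = net(x)
```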
One issue was that the DaSiam Tracker is in a submodule, so I couldn't change that part directly and had to fork it as well. Do you have a preferred way to handle that?
We could have two submodules:
- `main/DaSiamRPN` from the original repo
- `main/DaSiamRPN_no_CUDA` from your fork

What do you think? Then we could test `if torch.cuda.is_available():` and import the functions from your fork when no CUDA is available (see the sketch below).
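Something along these lines (just a sketch; the module paths are assumptions about where the two submodules would live, not the actual repo layout):

```python
import torch

# Sketch of the proposed import switch; paths and names are illustrative.
if torch.cuda.is_available():
    from DaSiamRPN.code import run_SiamRPN as dasiam          # original (CUDA) submodule
else:
    from DaSiamRPN_no_CUDA.code import run_SiamRPN as dasiam  # CPU-only fork
```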
I will take care of this as soon as I have some time.
Perfect! Thank you!
I implemented the change, but now the DaSiam Tracker is going all over the place. I'm not sure if it's because of my changes, though, since I can't really test it on a GPU right now. As far as I know it shouldn't have any impact on tracking quality (only on runtime); can you test this?
I also updated the README a bit, since the current one is outdated.
Great PR! I will test it! Sorry for the super late response!
Can I also recommend that you change the README to instruct the user to use the install shell script in the dasiamrpn submodule to install the dasiamrpn dependencies? This makes sure that you get the correct version of PyTorch. It may still work with a different version, but it's not guaranteed.
@WillJStone yes, I agree!
The rest is working fine on the CPU! I need to test on GPU (:
Should the GPU part work the same way as the official version? I haven't checked your fork yet.
I fixed the issues you pointed out and uploaded the model to the no_cuda repository. When I run the OpenLabeling tool now I get this error: `MESA-LOADER: failed to open r600 (search paths /usr/lib/dri)`, and KCF and MOSSE only run for 3 frames and then stop working. GOTURN will not work at all. Do you guys also get these errors? I've tried it on two different setups.
> The rest is working fine on the CPU! I need to test on GPU (:
> Should the GPU part work the same way as the official version? I haven't checked your fork yet.
I added this code: `device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')`, so everything should work fine with a GPU as well (it will use the first CUDA device).
> Can I also recommend that you change the README to instruct the user to use the install shell script in the dasiamrpn submodule to install the dasiamrpn dependencies? This makes sure that you get the correct version of PyTorch. It may still work with a different version, but it's not guaranteed.
Can you give me the text to insert? Then I'll do that. I'm not sure what the submodule prompt says; otherwise I would just write "Install the version prompted by DaSiamRPN, or use this link to generate your required install command/file".
> I fixed the issues you pointed out and uploaded the model to the no_cuda repository. When I run the OpenLabeling tool now I get this error: `MESA-LOADER: failed to open r600 (search paths /usr/lib/dri)`, and KCF and MOSSE only run for 3 frames and then stop working. GOTURN will not work at all. Do you guys also get these errors? I've tried it on two different setups.
I do not get that error.
I merged, but there is still one issue: the file `SiamRPNVOT.model` should be inside the folder `fogx/DaSiamRPN_noCUDA/code`.
I fixed the other errors.
Should I still move the model, since you closed the PR?
Yes please, I assumed you would do that ASAP (:
OK, I changed it. I think you need to update the submodule, though (`git submodule sync && git submodule update`).
Done! Should be working fine now (:
I created a fork with fixes to the issues explained in Issue #57.