[Open] VisionZQ opened this issue 6 years ago
I'll test it this week and make sure this repo works fine in CPU mode.
Thank you very much.
Hi, some nets work fine in GPU mode, but others do not, such as 'fssd_lite_mobilenetv2_train_voc.yml', because of the line 'x = torch.autograd.Variable(x, volatile=True).cuda()' in model_builder.py. If I modify that code for CPU mode, the error 'Fan in and fan out can not be computed for tensor with less than 2 dimensions' occurs in ssds_train.py, line 189, in initialize: getattr(self.model, module).apply(self.weights_init).
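For reference, a device-agnostic sketch of that line in modern PyTorch. `prepare_input` and the `device` choice are assumptions for illustration, not the repo's actual code; `Variable` and `volatile=True` have been deprecated since PyTorch 0.4 in favor of `torch.no_grad()`:

```python
import torch

# Choose the device once at startup; falls back to CPU when CUDA is absent.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def prepare_input(x):
    # Replaces `Variable(x, volatile=True).cuda()`: Variable is deprecated,
    # and `volatile=True` was superseded by the `torch.no_grad()` context.
    return x.to(device)

with torch.no_grad():
    x = prepare_input(torch.randn(1, 3, 300, 300))
```

With this pattern the same code path runs on both CPU and GPU, so the hard-coded `.cuda()` call is no longer needed.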
Do you mean fssd_lite_mobilenetv2_train_voc.yaml has a problem in CPU mode rather than GPU mode?
Yes, that part is used to infer the feature map size. Let me try whether it works fine in CPU mode.
I just committed an update for CPU mode in inference. I tested it by running 'sh time_benchmark.sh'. This script tests the speed in both CPU and GPU modes.
It seems that all the models are OK. I'll keep this issue open; please let me know if you have any questions.
Thanks a lot for your kindness and efforts. I did the same work by running 'sh time_benchmark.sh', and all is OK. But the error happened again when I ran 'python train.py --cfg=./experiments/cfgs/fssd_lite_mobilenetv2_train_voc.yml'. The error says 'Fan in and fan out can not be computed for tensor with less than 2 dimensions'; it seems to occur in lib/ssds_train.py, in initialize: getattr(self.model, module).apply(self.weights_init).
OK, thanks for that. I'll take a look at it this week.
That's weird, I don't hit this problem in either GPU or CPU mode. Could you share your error?
OK. Firstly, I don't have the COCO dataset; only the VOC dataset is available to me.
Step 1. I ran 'python train.py --cfg=./experiments/cfgs/fssd_lite_mobilenetv2_train_voc.yml' and got:
No module named 'lib.utils.pycocotools._mask'
So I commented out some code in dataset_factory.py.
Then I ran 'python train.py --cfg=./experiments/cfgs/fssd_lite_mobilenetv2_train_voc.yml' again, and the fan in/fan out error appeared.
Maybe I should change the weight-initialization mode?
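That error usually means a fan-based initializer (xavier/kaiming) is being applied to 1-D parameters such as BatchNorm weights, since fan in/fan out can only be computed for tensors with at least 2 dimensions. A minimal sketch of a guarded initializer, using standard PyTorch modules; this `weights_init` is a hypothetical stand-in for the one in ssds_train.py, not the repo's actual code:

```python
import torch
import torch.nn as nn

def weights_init(m):
    # Apply xavier init only to modules whose weights are >= 2-D;
    # 1-D parameters (BatchNorm weight/bias) get constant init instead,
    # which avoids "Fan in and fan out can not be computed ...".
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.BatchNorm2d):
        nn.init.ones_(m.weight)
        nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model.apply(weights_init)  # no error on the 1-D BatchNorm parameters
```

The key point is the isinstance guard: `model.apply` visits every submodule, so an unguarded `nn.init.xavier_uniform_(m.weight)` would also hit BatchNorm's 1-D weight and raise exactly this error.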
It just works on one GPU; no CPU, no multi-GPU :/
CPU occupancy is at 100%; how can I fix it?
Reduce the batch size.
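Besides the batch size, full CPU occupancy during training often comes from PyTorch's intra-op thread pool and the DataLoader worker processes. A hedged sketch of the usual knobs, using plain PyTorch APIs rather than this repo's config keys (the dataset here is a synthetic placeholder):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Cap the thread pool PyTorch uses for CPU kernels.
torch.set_num_threads(2)

# Synthetic stand-in for a real detection dataset.
dataset = TensorDataset(torch.randn(64, 3, 32, 32),
                        torch.zeros(64, dtype=torch.long))

# A smaller batch_size and fewer workers both lower CPU pressure.
loader = DataLoader(dataset, batch_size=8, num_workers=0)

images, labels = next(iter(loader))
```

In this repo the batch size would be set in the experiment .yml file instead, but the thread cap applies regardless of config format.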
Hello! Did you ever manage to modify the project code to support multi-GPU training and testing? @isalirezag
Currently, it checks whether a GPU can be detected, so it should work in CPU mode. But we still haven't had time to test it.
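A minimal sketch of such a detection check, with optional multi-GPU wrapping via DataParallel; this is an illustration built on standard PyTorch APIs and is not necessarily how the repo implements it:

```python
import torch
import torch.nn as nn

# Detect whether any CUDA device is present; otherwise stay on CPU.
use_gpu = torch.cuda.is_available()
device = torch.device("cuda" if use_gpu else "cpu")

model = nn.Linear(4, 2).to(device)

# Wrap for multi-GPU only when more than one device is visible.
if use_gpu and torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

out = model(torch.randn(3, 4).to(device))
```

Because every branch falls back gracefully, the same script runs unchanged on CPU-only machines, single-GPU boxes, and multi-GPU servers.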