Closed nabi-rony closed 1 year ago
14538
14539
Labeled set size: 3011
Doing PL (pseudo-label)
0
Traceback (most recent call last):
  File "train.py", line 393, in <module>
    main()
  File "train.py", line 387, in main
    unsupervised_dataset, labeled_set, unlabeled_set, indices, net_name, pseudo=args.do_PL)
  File "/work/mn918/AL-SSL-main/loaders.py", line 87, in change_loaders
    voc=voc, num_classes=num_classes)
  File "/work/mn918/AL-SSL-main/pseudo_labels.py", line 42, in predict_pseudo_labels
    boxes = get_pseudo_labels(testset, net, labels, unlabeled_set=unlabeled_set, threshold=threshold, voc=voc)
  File "/work/mn918/AL-SSL-main/pseudo_labels.py", line 62, in get_pseudo_labels
    y = net(xx)
  File "/work/mn918/miniconda3/envs/newenv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/mn918/miniconda3/envs/newenv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/work/mn918/miniconda3/envs/newenv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/work/mn918/miniconda3/envs/newenv/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
    output.reraise()
  File "/work/mn918/miniconda3/envs/newenv/lib/python3.6/site-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.

Original Traceback (most recent call last):
  File "/work/mn918/miniconda3/envs/newenv/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
    output = module(*input, **kwargs)
  File "/work/mn918/miniconda3/envs/newenv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/mn918/AL-SSL-main/ssd.py", line 109, in forward
    self.priors.type(type(x.data))
  File "/work/mn918/miniconda3/envs/newenv/lib/python3.6/site-packages/torch/autograd/function.py", line 151, in __call__
    "Legacy autograd function with non-static forward method is deprecated. "
RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)
I got this error. Any idea how to fix it? I am using Python 3.6.13 and PyTorch 1.1.0.
I found the solution: detection.py and ssd.py need to be updated to use new-style autograd functions (with a static forward method). These two versions work with recent PyTorch and CUDA releases: https://github.com/miyamotok0105/pytorch_handbook/blob/master/chapter7/layers/functions/detection.py https://github.com/miyamotok0105/pytorch_handbook/blob/master/chapter7/ssd.py
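For anyone hitting the same error: the fix boils down to converting the legacy autograd function (a class you instantiate and whose forward is an instance method) into a new-style torch.autograd.Function with a @staticmethod forward, invoked through .apply(...). A minimal sketch of the pattern — the class name matches the repo's Detect layer, but the placeholder computation and the example call are illustrative, not the repo's actual box-decoding/NMS logic:

```python
import torch

class Detect(torch.autograd.Function):
    """New-style autograd function: forward is a @staticmethod and the
    class is never instantiated. You call Detect.apply(...) instead of
    Detect(...)(...) as the legacy API did."""

    @staticmethod
    def forward(ctx, loc, conf, priors):
        # Placeholder standing in for the real detection post-processing
        # (decoding predicted offsets against priors, then NMS).
        return loc + priors

# In ssd.py's forward, the legacy call
#     output = self.detect(loc, conf, priors)
# becomes
#     output = Detect.apply(loc, conf, priors)
loc = torch.ones(2, 3)
priors = torch.zeros(2, 3)
output = Detect.apply(loc, None, priors)
print(tuple(output.shape))  # (2, 3)
```

The key difference is that .apply() routes through the static forward, which is what the "non-static forward method is deprecated" error is asking for.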
Thanks