Closed · talenz closed this issue 5 years ago
(.haq) [root@b6706b6a30d7 HAQ-master]# python rl_quantize.py --gpu_id 2
support models: ['alexnet', 'densenet121', 'densenet161', 'densenet169', 'densenet201', 'googlenet', 'inception_v3', 'mobilenet_v2', 'resnet101', 'resnet152', 'resnet18', 'resnet34', 'resnet50', 'resnext101_32x8d', 'resnext50_32x4d', 'shufflenet_v2_x0_5', 'shufflenet_v2_x1_0', 'shufflenet_v2_x1_5', 'shufflenet_v2_x2_0', 'squeezenet1_0', 'squeezenet1_1', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn', 'vgg19', 'vgg19_bn', 'mobilenet_v3']
==> Output path: ../../save/mobilenet_v2_imagenet...
Traceback (most recent call last):
  File "rl_quantize.py", line 210, in <module>
    model = torch.nn.DataParallel(model).cuda()
  File "/aiml/.haq/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 265, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/aiml/.haq/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply
    module._apply(fn)
  File "/aiml/.haq/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply
    module._apply(fn)
  File "/aiml/.haq/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/aiml/.haq/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 199, in _apply
    param.data = fn(param.data)
  File "/aiml/.haq/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 265, in <lambda>
    return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA error: out of memory
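The trace fails while torch.nn.DataParallel(model).cuda() is copying the model's parameters onto the default CUDA device, so the selected card runs out of memory before training even starts. A quick way to see how much memory each visible card offers is a small check like the sketch below (only PyTorch with CUDA support is assumed; nothing here comes from the HAQ code base):

import torch

# Print the total memory of every GPU this process can see, to tell
# whether the card chosen for the run has room for the model at all.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"cuda:{i} {props.name}: {props.total_memory / 1024**3:.1f} GiB total")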
I think you should use more GPUs. (PS: this code was evaluated on 4x 2080Ti GPUs.)
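For reference, a hedged sketch of what spreading the model over several cards at the failing call site could look like; the device_ids list [0, 1, 2, 3] is an assumption for a 4x 2080Ti machine and should be adjusted to whichever GPUs are free, and the mobilenet_v2 constructor just stands in for however rl_quantize.py actually builds its model:

import torch
import torchvision

# Hypothetical stand-in for the model built by rl_quantize.py.
model = torchvision.models.mobilenet_v2()

# DataParallel keeps the master copy on device_ids[0] and replicates it to the
# other listed GPUs during forward passes; .cuda() moves the parameters onto
# device_ids[0], which is the step that raised the out-of-memory error above.
model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3]).cuda()

If other cards on the machine are busy, restricting visibility from the shell before launching the script (e.g. CUDA_VISIBLE_DEVICES=2,3 python rl_quantize.py) is another common way to steer the model onto GPUs with free memory.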