Describe the bug:
I'm using https://github.com/princeton-vl/RAFT with the pre-trained raft-things.pt weights. When speeding up the model I get an error:

```
File "/usr/local/lib/python3.8/dist-packages/nni/compression/pytorch/speedup/jit_translate.py", line 245, in __call__
    result = self.func(*self.positional, **self.keyword)
TypeError: linspace(): argument 'layout' must be torch.layout, not NoneType
```

The `layout` keyword argument is indeed None.
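A possible workaround (a sketch, not an official fix; the helper name is hypothetical) is to drop None-valued keyword arguments before the traced op is re-dispatched, so that `torch.linspace` falls back to its default layout instead of receiving an explicit `layout=None`:

```python
def call_without_none_kwargs(func, positional, keyword):
    # Hypothetical helper: drop keyword arguments whose value is None,
    # so that e.g. torch.linspace is not handed an explicit layout=None
    # and its own default is used instead.
    filtered = {k: v for k, v in keyword.items() if v is not None}
    return func(*positional, **filtered)
```

This mirrors the failing line `self.func(*self.positional, **self.keyword)` but filters the keyword dict first.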
Environment:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc):
- Python version: 3.8
- PyTorch version: 1.10
- CPU or CUDA version: CUDA 11.3
Reproduce the problem
Code|Example:
```python
import torch

# Assumed imports for this snippet: RAFT from the repo linked above,
# pruner and speedup from NNI 2.10.
from nni.compression.pytorch.pruning import L1NormPruner
from nni.compression.pytorch.speedup import ModelSpeedup
from raft import RAFT

model = torch.nn.DataParallel(RAFT(args))
model.load_state_dict(torch.load(args.model))
model.cuda()
print(model)

# dimensions from chairs
dummy_input_1 = torch.rand(1, 3, 384, 512).to('cuda')
dummy_input_2 = torch.rand(1, 3, 384, 512).to('cuda')

traced_model = torch.jit.trace(model, [dummy_input_1, dummy_input_2])

config_list = [
    {'sparsity_per_layer': 0.5, 'op_types': ['Linear', 'Conv2d']},
    {'exclude': True, 'op_names': ['module.cnet.conv2']},
]

pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()
for name, mask in masks.items():
    print(name, ' sparsity : ', '{:.2}'.format(mask['weight'].sum() / mask['weight'].numel()))
pruner._unwrap_model()

ModelSpeedup(model, [dummy_input_1, dummy_input_2], masks).speedup_model()
```
How to reproduce: Clone RAFT from GitHub (or use the RAFT implementation in torchvision), then run the speedup sample code above.