jonbarron / robust_loss_pytorch

A pytorch port of google-research/google-research/robust_loss/
Apache License 2.0

TypeError #20

Closed pengzhangzhi closed 2 years ago

pengzhangzhi commented 3 years ago

Hi there! When I ran the following code:

import torch
import robust_loss_pytorch.general

adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(
    num_dims = 3, float_dtype=torch.cuda.FloatTensor, device='cuda:0')

I got the error:


---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
in <module>()
      2 
      3 adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(
----> 4     num_dims = 3, float_dtype=torch.cuda.FloatTensor, device='cuda:0')
      5 
      6 params = list(model.parameters()) + list(adaptive.parameters())

~/anaconda3/envs/torch36/lib/python3.6/site-packages/robust_loss_pytorch/adaptive.py in __init__(self, num_dims, float_dtype, device, alpha_lo, alpha_hi, alpha_init, scale_lo, scale_init)
    154             latent_alpha_init.clone().detach().to(
    155                 dtype=self.float_dtype,
--> 156                 device=self.device)[np.newaxis, np.newaxis].repeat(
    157                     1, self.num_dims),
    158             requires_grad=True))

TypeError: to() received an invalid combination of arguments - got (device=str, dtype=torch.tensortype, ), but expected one of:
 * (torch.device device, torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)
 * (torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)
 * (Tensor tensor, bool non_blocking, bool copy, *, torch.memory_format memory_format)

I am confused about why this type error occurs.

Could you please suggest a solution?

Thanks a million!
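For context, the signatures listed in the error message show that Tensor.to() accepts a torch.dtype (such as torch.float32) for its dtype argument, but not a legacy tensor type such as torch.cuda.FloatTensor, which is why the combination (device=str, dtype=torch.tensortype) matches none of the overloads. A tiny standalone illustration of the difference (not taken from this library; it assumes a CUDA-capable machine):

import torch

x = torch.zeros(3)

# Matches the first accepted signature: (torch.device device, torch.dtype dtype, ...)
y = x.to(dtype=torch.float32, device='cuda:0')

# Reproduces the TypeError above: a tensor *type* is not a torch.dtype.
# y = x.to(dtype=torch.cuda.FloatTensor, device='cuda:0')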

pengzhangzhi commented 3 years ago

When I tried this instead:

adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(
    num_dims = 3, float_dtype=torch.cuda.FloatTensor, device=torch.device('cuda'))

I still got an error:


ValueError                                Traceback (most recent call last)
<ipython-input-352-9429c9b297ff> in <module>()
      2 
      3 adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(
----> 4     num_dims = 3, float_dtype=torch.cuda.FloatTensor, device=torch.device('cuda'))

~/anaconda3/envs/torch36/lib/python3.6/site-packages/robust_loss_pytorch/adaptive.py in __init__(self, num_dims, float_dtype, device, alpha_lo, alpha_hi, alpha_init, scale_lo, scale_init)
    130        (isinstance(device, str) and 'cuda' in device) or\
    131        (isinstance(device, torch.device) and device.type == 'cuda'):
--> 132         torch.cuda.set_device(self.device)
    133 
    134     self.distribution = distribution.Distribution()

~/anaconda3/envs/torch36/lib/python3.6/site-packages/torch/cuda/__init__.py in set_device(device)
    241             if this argument is negative.
    242     """
--> 243     device = _get_device_index(device)
    244     if device >= 0:
    245         torch._C._cuda_setDevice(device)

~/anaconda3/envs/torch36/lib/python3.6/site-packages/torch/cuda/_utils.py in _get_device_index(device, optional)
     32         else:
     33             raise ValueError('Expected a cuda device with a specified index '
---> 34                              'or an integer, but got: '.format(device))
     35     return device_idx

ValueError: Expected a cuda device with a specified index or an integer, but got: 

I am totally freaked out by this error! (I have been stuck on it for 2 days!)
I hope to find out what the correct usage is.
Thank you for such amazing work!
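For anyone landing here with the same pair of errors: the second traceback shows that the constructor calls torch.cuda.set_device() with the device you pass in, and set_device() needs a CUDA device with an explicit index (or a plain integer), which torch.device('cuda') does not provide. Combining that with the dtype issue from the first traceback, a minimal sketch that avoids both errors might look like the following (the argument values are illustrative, and it assumes the constructor accepts a plain torch.dtype for float_dtype):

import torch
import robust_loss_pytorch.adaptive

# Sketch: pass a torch.dtype rather than a tensor type, and an indexed CUDA
# device so torch.cuda.set_device() can resolve it.
adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(
    num_dims=3,
    float_dtype=torch.float32,
    device=torch.device('cuda:0'))  # or simply device='cuda:0'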