yoyololicon / pytorch-NMF

A pytorch package for non-negative matrix factorization.
https://pytorch-nmf.readthedocs.io/
MIT License
226 stars 24 forks

'NoneType' object has no attribute 't' #8

Open schonkopf opened 3 years ago

schonkopf commented 3 years ago

While running the following code:

```python
net = NMF(data.shape, rank=self.basis_num, W=torch.Tensor(self.W), trainable_W=False).cuda()
net.fit(data.cuda(), verbose=True, max_iter=200, tol=1e-18, beta=1)
```

I received the following error message. Any idea?

```
/content/pytorchNMF/torchnmf/nmf.py in fit(self, V, beta, tol, max_iter, verbose, alpha, l1_ratio)
    281
    282         with torch.no_grad():
--> 283             WH = self.reconstruct(H, W)
    284             loss_init = previous_loss = beta_div(WH, V, beta).mul(2).sqrt().item()
    285

/content/pytorchNMF/torchnmf/nmf.py in reconstruct(H, W)
    486     @staticmethod
    487     def reconstruct(H, W):
--> 488         return F.linear(H, W)
    489
    490

/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
   1690         ret = torch.addmm(bias, input, weight.t())
   1691     else:
-> 1692         output = input.matmul(weight.t())
   1693     if bias is not None:
   1694         output += bias

AttributeError: 'NoneType' object has no attribute 't'
```

yoyololicon commented 3 years ago

@schonkopf If the Vshape argument (the first argument) is specified, it will override other arguments like W and H. To use a custom template tensor, as in your case, you have to set W and H manually.

Example:

```python
net = NMF(W=torch.Tensor(self.W), trainable_W=False, H=(data.shape[0], self.basis_num)).cuda()
```

It seems I didn't write the documentation clearly enough. I will add some assertions in the module to let the user know what happened.
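The failure mode in the traceback can be sketched in plain Python. `FakeTensor` and `reconstruct` below are hypothetical stand-ins for illustration, not torchnmf's actual classes: the point is that once the shape argument causes the user's `W` to be dropped, `reconstruct` effectively receives `W=None` and `F.linear`'s call to `weight.t()` fails.

```python
# Hypothetical stdlib-only sketch of the error mechanism (not torchnmf code).
class FakeTensor:
    """Mimics just enough of torch.Tensor for this illustration."""
    def __init__(self, rows, cols):
        self.shape = (rows, cols)

    def t(self):
        # Transpose, as F.linear uses on the weight.
        return FakeTensor(self.shape[1], self.shape[0])

def reconstruct(H, W):
    # Mirrors the shape logic of F.linear(H, W): output is H @ W.t().
    Wt = W.t()
    return (H.shape[0], Wt.shape[1])

# With a real template, reconstruction works:
H = FakeTensor(100, 16)
W = FakeTensor(50, 16)
print(reconstruct(H, W))  # (100, 50)

# But if W was silently discarded and is None, we get exactly the
# AttributeError from the traceback:
try:
    reconstruct(H, None)
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 't'
```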

yoyololicon commented 3 years ago

Hi @schonkopf,

In the newest commit e4563b2 I added a warning note to the class docstring. I also changed the behavior of the trainable_* arguments so that they only take effect when a template tensor is given; this is also documented in the docstring.

> While running the following code:
>
> ```python
> net = NMF(data.shape, rank=self.basis_num, W=torch.Tensor(self.W), trainable_W=False).cuda()
> net.fit(data.cuda(), verbose=True, max_iter=200, tol=1e-18, beta=1)
> ```

This will now run without error, though W would be replaced by random weights.