irowberry closed this issue 1 year ago
@irowberry oh yes, thank you
can you check if 0.1.21 works?
Yes, that worked. However, I am now getting this issue, a conflict of data types:
```
File "/home/remote/Documents/Isaac/train_perfusion.py", line 85, in __call__
  key = self.to_k_custom_diffusion(encoder_hidden_states.to(self.to_k_custom_diffusion.weight.dtype),
File "/home/remote/Documents/Isaac/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
  return forward_call(*args, **kwargs)
File "<@beartype(perfusion_pytorch.perfusion.Rank1EditModule.forward) at 0x7f8b481b41f0>", line 28, in forward
beartype.roar.BeartypeCallHintParamViolation: Method perfusion_pytorch.perfusion.Rank1EditModule.forward() parameter text_enc="tensor([[[-0.1979, -0.1260, -0.2684, ..., -0.3120, -0.5919, -0.1097],
[ 2.5177, -1... violates type hint <class 'torch.FloatTensor'>, as <protocol "torch.Tensor"> "tensor([[[-0.1979, -0.1260, -0.2684, ..., -0.3120, -0.5919, -0.1097],
[ 2.5177, -1... not instance of <protocol "torch.FloatTensor">.
```
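For context on why this check trips: `torch.FloatTensor` matches only float32 tensors on the CPU, so a half-precision (or CUDA-resident) encoder output fails beartype's literal check of that hint. A minimal sketch of the mismatch, using a hypothetical stand-in tensor rather than the actual CLIP output:

```python
import torch

# a half-precision tensor, standing in for the mixed-precision encoder output
t = torch.randn(2, 3, dtype=torch.float16)

print(isinstance(t, torch.FloatTensor))          # False: it is a HalfTensor
print(isinstance(t.float(), torch.FloatTensor))  # True once cast to float32 on CPU

# note: on a GPU, t.float() would be a torch.cuda.FloatTensor,
# which also fails an isinstance check against torch.FloatTensor
```

This is why loosening the hint to `torch.Tensor` (or `jaxtyping`-style annotations) is the usual fix on the library side.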
@irowberry ahh ok, yea, i'll fight that battle another day
want to try 0.1.22?
That fixed it
@lucidrains I've finished a training and inference loop; however, the issue now is getting all the tensors onto the same device. I've run around 10ish loops on a CPU, and the loss is going down, so it's working. Currently it's getting stuck in embedding.py at line 176.
`self.embed.weight` is on GPU, and `x` is on CPU. `x` is coming from the CLIP tokenizer.
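The usual workaround for that kind of mismatch is to move the tokenizer output to whatever device the embedding weight lives on before the lookup. A minimal sketch (the `embed` module and token ids here are hypothetical stand-ins, not the actual perfusion-pytorch objects):

```python
import torch
import torch.nn as nn

# stand-in for the embedding whose weight lives on the GPU in the real run
embed = nn.Embedding(49408, 768)

# stand-in for CLIP tokenizer output, which arrives on the CPU
token_ids = torch.tensor([[49406, 320, 1125, 49407]])

# move the ids to the same device as the embedding weight before the lookup
token_ids = token_ids.to(embed.weight.device)
out = embed(token_ids)  # no cross-device error now
```

The same pattern (`tensor.to(module.weight.device)`) works whether the model is on CPU, a single GPU, or has been moved around by an accelerator wrapper.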