FoundationVision / LlamaGen

Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation
https://arxiv.org/abs/2406.06525

When I set an ipdb breakpoint in gpt.py, I encounter this error: torch._dynamo.exc.InternalTorchDynamoError: `example_value` needs to be a `FakeTensor` wrapped by this instance of Dynamo. Found: tensor(..., device='meta', size=(2,)) #58

Open BinZhu-ece opened 3 months ago

BinZhu-ece commented 3 months ago

```
> /storage/zhubin/LlamaGen/autoregressive/models/gpt.py(343)forward()
    342     if idx is not None and cond_idx is not None: # training or naive inference
--> 343         import ipdb; ipdb.set_trace()
    344         cond_embeddings = self.cls_embedding(cond_idx, train=self.training)[:,:self.cls_token_num]
```

```
ipdb> n
torch._dynamo.exc.InternalTorchDynamoError: `example_value` needs to be a `FakeTensor` wrapped by this instance of Dynamo. Found: tensor(..., device='meta', size=(2,))

from user code:
  File "/storage/zhubin/LlamaGen/autoregressive/models/gpt.py", line 344, in torch_dynamo_resume_in_forward_at_343
    cond_embeddings = self.cls_embedding(cond_idx, train=self.training)[:,:self.cls_token_num]
  File "/storage/miniconda3/envs/motionctrl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/storage/zhubin/LlamaGen/autoregressive/models/gpt.py", line 113, in forward
    caption = self.token_drop(caption, force_drop_ids)
  File "/storage/zhubin/LlamaGen/autoregressive/models/gpt.py", line 104, in token_drop
    drop_ids = torch.rand(caption.shape[0], device=caption.device) < self.uncond_prob

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information

You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True
```
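One quick workaround, taken straight from the error footer above, is to let Dynamo fall back to eager execution instead of raising. A minimal sketch, placed before the compiled model runs:

```python
import torch._dynamo

# Fall back to eager execution on Dynamo errors (such as a breakpoint
# inside a compiled region) instead of raising InternalTorchDynamoError.
torch._dynamo.config.suppress_errors = True
```

Alternatively, if the breakpoint is only needed for debugging, you can keep Dynamo from tracing the code that contains it. The sketch below assumes `torch._dynamo.disable` is available in your PyTorch build (it is in recent releases); `inspect_point` and `forward_like` are hypothetical stand-ins for the breakpointed code in gpt.py. The decorated helper is skipped by Dynamo and runs eagerly, so `ipdb.set_trace()` works normally inside it:

```python
import torch
import torch._dynamo


@torch._dynamo.disable
def inspect_point(t: torch.Tensor) -> torch.Tensor:
    # Dynamo skips this function entirely, so a debugger is safe here.
    # import ipdb; ipdb.set_trace()
    return t


def forward_like(x: torch.Tensor) -> torch.Tensor:
    x = x * 2
    x = inspect_point(x)  # graph break: this call runs in eager mode
    return x + 1


compiled = torch.compile(forward_like)
print(compiled(torch.ones(2)))  # tensor([3., 3.])
```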