lucidrains / meshgpt-pytorch

Implementation of MeshGPT, SOTA Mesh generation using Attention, in Pytorch
MIT License
744 stars 59 forks

Can't run MeshTransformer.generate(return_codes = True) #19

Closed · fire closed this issue 10 months ago

fire commented 10 months ago

MeshTransformer.generate(return_codes = True) fails for me. No need to rush for the weekend.
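
Roughly what my inference script is doing (a minimal sketch; the model hyperparameters below are placeholders, only the generate / decode calls match my actual code):

from meshgpt_pytorch import MeshAutoencoder, MeshTransformer

autoencoder = MeshAutoencoder(num_discrete_coors = 128)   # placeholder config

transformer = MeshTransformer(
    autoencoder,
    dim = 512,
    max_seq_len = 8192
)                                                          # placeholder config

# ask generate for the raw quantized codes instead of decoded faces
codes = transformer.generate(return_codes = True)

# decoding the codes back to face coordinates is the step that fails
continuous_coors = transformer.autoencoder.decode_from_codes_to_faces(codes)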

lucidrains commented 10 months ago

oh what kind of error do you see? i can put in one last fix before i head out with doggo

lucidrains commented 10 months ago

hmm, it runs for me

i'll address this Monday if you are hitting some edge case

fire commented 10 months ago

I captured some logs. Probably using the decoder wrong.

██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3612/3612 [03:03<00:00, 19.66it/s]
Traceback (most recent call last):
  File "C:\Users\ernes\scoop\apps\python310\current\lib\site-packages\einops\einops.py", line 523, in reduce
    return _apply_recipe(
  File "C:\Users\ernes\scoop\apps\python310\current\lib\site-packages\einops\einops.py", line 234, in _apply_recipe
    init_shapes, axes_reordering, reduced_axes, added_axes, final_shapes, n_axes_w_added = _reconstruct_from_shape(
  File "C:\Users\ernes\scoop\apps\python310\current\lib\site-packages\einops\einops.py", line 187, in _reconstruct_from_shape_uncached
    raise EinopsError(f"Shape mismatch, can't divide axis of length {length} in chunks of {known_product}")
einops.EinopsError: Shape mismatch, can't divide axis of length 3611 in chunks of 3

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\meshgpt-pytorch\inference.py", line 61, in <module>
    continuous_coors = transformer.autoencoder.decode_from_codes_to_faces(codes)
  File "<@beartype(meshgpt_pytorch.meshgpt_pytorch.MeshAutoencoder.decode_from_codes_to_faces) at 0x198ba48d1b0>", line 53, in decode_from_codes_to_faces
  File "C:\Users\ernes\scoop\apps\python310\current\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "F:\meshgpt-pytorch\meshgpt_pytorch\meshgpt_pytorch.py", line 685, in decode_from_codes_to_faces
    face_mask = reduce(codes != self.pad_id, 'b (nf nv q) -> b nf', 'all', nv = 3, q = self.num_quantizers)
  File "C:\Users\ernes\scoop\apps\python310\current\lib\site-packages\einops\einops.py", line 533, in reduce
    raise EinopsError(message + "\n {}".format(e))
einops.EinopsError:  Error while processing all-reduction pattern "b (nf nv q) -> b nf".
 Input tensor shape: torch.Size([1, 3611]). Additional info: {'nv': 3, 'q': 1}.
 Shape mismatch, can't divide axis of length 3611 in chunks of 3
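
Going by the trace, the reduce in decode_from_codes_to_faces expects the code sequence length to be a multiple of nv * q (3 * 1 = 3 here), and 3611 isn't, so generate seems to be handing back a sequence that's off by a token. A standalone sketch of the same einops check (shapes are made up):

import torch
from einops import reduce

pad_id = -1
nv, q = 3, 1   # vertices per face, quantizers per vertex (values from the error above)

good_codes = torch.zeros(1, 3612, dtype = torch.long)   # length divisible by nv * q
bad_codes  = torch.zeros(1, 3611, dtype = torch.long)   # off by one, like my run

face_mask = reduce(good_codes != pad_id, 'b (nf nv q) -> b nf', 'all', nv = nv, q = q)
print(face_mask.shape)   # torch.Size([1, 1204])

# this line raises: Shape mismatch, can't divide axis of length 3611 in chunks of 3
reduce(bad_codes != pad_id, 'b (nf nv q) -> b nf', 'all', nv = nv, q = q)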

lucidrains commented 10 months ago

are you on the most recent version?

fire commented 10 months ago

I'm on an older version; there's another report that the transformer broke, but take the weekend off, take care.

fire commented 10 months ago

I fixed it; the pip environment needed a newer version of the pooled attention package.

lucidrains commented 10 months ago

@fire oh nice, which package was it?

fire commented 10 months ago

One of your attention ones

lucidrains commented 10 months ago

@fire not very specific lol

fire commented 10 months ago

I’ll check when I’m at a computer

lucidrains commented 10 months ago

not a big deal, was just curious

glad it fixed itself!

fire commented 10 months ago

'gateloop-transformer>=0.1.5' was a few versions behind.
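
In case anyone else hits this, a quick way to check the installed version against that pin (the package name comes from the requirement above; nothing else here is specific to meshgpt-pytorch):

# prints the installed gateloop-transformer version; it should satisfy >= 0.1.5
from importlib.metadata import version
print(version('gateloop-transformer'))

# if it's older, upgrading should be enough:
#   pip install --upgrade 'gateloop-transformer>=0.1.5'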

lucidrains commented 10 months ago

@fire ohh got it, that is actually not attention, but a newly emerging technique (the "transformer" in the name is a misnomer)