I am trying to run the example in Google Colab, but I get a RuntimeError when running the part that obtains the conditioning embeddings:
import torch
from musiclm_pytorch import MuLaNEmbedQuantizer

# setup the quantizer with the namespaced conditioning embeddings, unique per quantizer as well as namespace (per transformer)
quantizer = MuLaNEmbedQuantizer(
    mulan = mulan,                          # pass in trained mulan from above
    conditioning_dims = (1024, 1024, 1024), # say all three transformers have model dimensions of 1024
    namespaces = ('semantic', 'coarse', 'fine')
)

# now say you want the conditioning embeddings for the semantic transformer
wavs = torch.randn(2, 1024)
conds = quantizer(wavs = wavs, namespace = 'semantic') # (2, 8, 1024) - 8 is number of quantizers
The quantizer call fails with:

RuntimeError: The size of tensor a (20) must match the size of tensor b (2560) at non-singleton dimension 3
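For reference, the mulan I pass in was set up following the earlier example in the README, roughly like this (a sketch; the exact hyperparameters are assumed from the README defaults, not verified against my notebook):

import torch
from musiclm_pytorch import MuLaN, AudioSpectrogramTransformer, TextTransformer

# audio tower: spectrogram transformer (parameters assumed from the README example)
audio_transformer = AudioSpectrogramTransformer(
    dim = 512,
    depth = 6,
    heads = 8,
    dim_head = 64,
    spec_n_fft = 128,
    spec_win_length = 24,
    spec_aug_stretch_factor = 0.8
)

# text tower
text_transformer = TextTransformer(
    dim = 512,
    depth = 6,
    heads = 8,
    dim_head = 64
)

# contrastive model joining the two towers
mulan = MuLaN(
    audio_transformer = audio_transformer,
    text_transformer = text_transformer
)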