Closed: @pseudotensor closed this issue 5 months ago.
@pseudotensor hi! Can you share a minimal reproducible example? If you're calling generate(), all special tokens are converted to tensors here, before the attention mask is prepared.
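For context, here is a minimal sketch of that kind of normalization. This is only an illustration of the idea, not the actual transformers implementation, and `normalize_special_token` is a hypothetical helper name:

```python
import torch

def normalize_special_token(token_id, device="cpu"):
    """Sketch: coerce a scalar special-token id (e.g. pad_token_id=0)
    into a tensor on the target device, leaving None untouched."""
    if token_id is None:
        return None
    if isinstance(token_id, torch.Tensor):
        # Already a tensor: just move it to the right device.
        return token_id.to(device)
    # Plain Python number: wrap it in a 0-dim long tensor.
    return torch.tensor(token_id, device=device, dtype=torch.long)
```

Code that bypasses generate() and calls the lower-level helpers directly never goes through this step, which is one way a plain int can reach code that expects a tensor.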
+1
@zucchini-nlp I have the same issue. This is effectively what I am doing:

```python
# Imports for the Coqui TTS XTTS API
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

model_name = 'xtts-v2'
prompt = 'This is a test prompt'

# Load the configuration from the model's config.json file
config = XttsConfig()
config.load_json(path)

model = Xtts.init_from_config(config)
model.load_checkpoint(
    config, checkpoint_dir=checkpoint_dir, eval=True
)

gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(
    audio_path=samples
)

gen = model.inference_stream(
    prompt,
    language=language,
    gpt_cond_latent=gpt_cond_latent,
    speaker_embedding=speaker_embedding,
)

for chunk in gen:
    print(chunk)
```
And I get the error:
I opened https://github.com/idiap/coqui-ai-TTS/issues/31 in our Coqui fork. This is due to the XTTS streaming code modifying generate() and calling internal methods that have been changed in #30624. PRs welcome to fix it on our side; I'm not very familiar with that code.
@eginhard Are you aware of an older version of transformers where this streaming works?
Anything below 4.41.0 should work. I've just released coqui-tts version 0.24.1, which limits transformers to lower versions to temporarily fix this until someone properly updates the streaming code. This issue can probably be closed, because I don't think any action is needed here.
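Until the streaming code is updated, the same pin can be applied in a downstream project's own environment. A sketch of a requirements fragment, with the version bounds taken from the comments above:

```
# requirements.txt (sketch; bounds taken from this thread)
coqui-tts>=0.24.1      # 0.24.1 caps transformers below 4.41 internally
transformers<4.41.0    # generate() internals changed in 4.41.0 (#30624)
```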
No problem, I patched coqui too to handle
System Info

transformers 4.41.0, Python 3.10
Who can help?
@ArthurZucker @gante
Information

Tasks
- examples folder (such as GLUE/SQuAD, ...)

Reproduction
This change: https://github.com/huggingface/transformers/commit/7130a22db9033e47b34a5e836b6014d531179f02
specifically here:
https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L486-L493
is causing a lot of downstream software to fail with the error below:

A work-around patch is:

E.g. Coqui XTTS (no longer maintained) fails like this without the above patch. What is going on?
Expected behavior

No failure. I expect a number (which the signature says is allowed) to be converted properly to a tensor on the same device, so there should be no failure.