huggingface / parler-tts

Inference and training library for high-quality TTS models.
Apache License 2.0
2.6k stars · 265 forks

Won't work #36

Open Kwisss opened 2 weeks ago

Kwisss commented 2 weeks ago

First of all, congrats on your accomplishments!

I must be doing something wrong, because I can't get it to work. I want to install it in my text-generation-webui environment, but I get this error:

```
C:\text-generation-webui-snapshot-2024-04-21\installer_files\env\Lib\site-packages\torch\nn\utils\weight_norm.py:28: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
  warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Using the model-agnostic default `max_length` (=2580) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.
Calling `sample` directly is deprecated and will be removed in v4.41. Use `generate` or a custom generation loop instead.
--- Logging error ---
Traceback (most recent call last):
  File "C:\text-generation-webui-snapshot-2024-04-21\installer_files\env\Lib\logging\__init__.py", line 1110, in emit
    msg = self.format(record)
          ^^^^^^^^^^^^^^^^^^^
  File "C:\text-generation-webui-snapshot-2024-04-21\installer_files\env\Lib\logging\__init__.py", line 953, in format
    return fmt.format(record)
           ^^^^^^^^^^^^^^^^^^
  File "C:\text-generation-webui-snapshot-2024-04-21\installer_files\env\Lib\logging\__init__.py", line 687, in format
    record.message = record.getMessage()
                     ^^^^^^^^^^^^^^^^^^^
  File "C:\text-generation-webui-snapshot-2024-04-21\installer_files\env\Lib\logging\__init__.py", line 377, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "C:\text-generation-webui-snapshot-2024-04-21\snippet.py", line 17, in <module>
    generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
  File "C:\text-generation-webui-snapshot-2024-04-21\installer_files\env\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\text-generation-webui-snapshot-2024-04-21\installer_files\env\Lib\site-packages\parler_tts\modeling_parler_tts.py", line 2608, in generate
    outputs = self.sample(
  File "C:\text-generation-webui-snapshot-2024-04-21\installer_files\env\Lib\site-packages\transformers\generation\utils.py", line 2584, in sample
    return self._sample(*args, **kwargs)
  File "C:\text-generation-webui-snapshot-2024-04-21\installer_files\env\Lib\site-packages\transformers\generation\utils.py", line 2730, in _sample
    logger.warning_once(
  File "C:\text-generation-webui-snapshot-2024-04-21\installer_files\env\Lib\site-packages\transformers\utils\logging.py", line 329, in warning_once
    self.warning(*args, **kwargs)
Message: '`eos_token_id` is deprecated in this function and will be removed in v4.41, use `stopping_criteria=StoppingCriteriaList([EosTokenCriteria(eos_token_id=eos_token_id)])` instead. Otherwise make sure to set `model.generation_config.eos_token_id`'
Arguments: (<class 'FutureWarning'>,)
```

It is super vague, and I don't know where to look next. 
My current versions: Python 3.11, torch 2.2.1+cu121, transformers 4.40.0.

Can anyone point me in the right direction? Thanks for your time!
ferOnti commented 1 week ago

I have the same issue on Ubuntu.

ylacombe commented 1 week ago

Hey there, could you send a proper traceback and more details on your dataset/environment/parameters? Thanks!