rajuaryan21 opened this issue 2 months ago
I am also experiencing the same problem. Any assistance would be greatly appreciated.
Hi @rajuaryan21 and @Asitjoshi45, The model exporter notebook is fixed now. I truly apologize for the inconvenience. Cheers.
Yes, it’s working! You’re a lifesaver. The open-source world would be nothing without people like you. I’m glad to be living in an era where people like you exist.
Hi @rmcpantoja, Thanks for the update! I’m glad to hear that the model exporter notebook issue has been resolved. Thank you for taking the time to fix this.
Not sure if something happened in the last few days, but the export is again not producing the .onnx file... I'm only getting the .onnx.json! Thanks in advance.
@SvenSvenson38 I got it working by setting torch==2.4.1 pytorch-lightning==2.2.0 torchaudio==2.4.1 torchtext==0.18.0
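For anyone wondering where those pins go: the idea is just to give the notebook's install commands exact versions instead of unpinned names. A rough illustration (not the notebook's actual cell, which installs more packages than this):
# illustrative only: pin the four packages mentioned above to exact versions
!pip install -q torch==2.4.1 pytorch-lightning==2.2.0 torchaudio==2.4.1 torchtext==0.18.0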
@SvenSvenson38 I got it working by setting torch==2.4.1 pytorch-lightning==2.2.0 torchaudio==2.4.1 torchtext==0.18.0
Finally that worked! Thanks!
@SvenSvenson38 I got it working by setting torch==2.4.1 pytorch-lightning==2.2.0 torchaudio==2.4.1 torchtext==0.18.0
Hey, where do I make this change in the Colab notebook? Any help is appreciated.
I tried changing this here, but it didn't work. See the attached screenshot.
For pytorch-lightning and torch, try using == instead of ~=.
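Background, in case it helps: ~= is pip's "compatible release" specifier, so each run can still resolve to a newer patch release, while == locks the exact version. Roughly:
# torch~=2.4.1 means ">=2.4.1, <2.5", so pip may still drift to a newer 2.4.x
!pip install "torch~=2.4.1"
# torch==2.4.1 installs exactly 2.4.1 and nothing newer
!pip install torch==2.4.1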
I changed everything as you said, but got this output. I'm still not able to generate the model.
Installing...
/content/piper/src/python
Collecting pip==24.0
Downloading pip-24.0-py3-none-any.whl.metadata (3.6 kB)
Downloading pip-24.0-py3-none-any.whl (2.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 57.5 MB/s eta 0:00:00
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 24.1.2
Uninstalling pip-24.1.2:
Successfully uninstalled pip-24.1.2
Successfully installed pip-24.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchaudio 2.5.0+cu121 requires torch==2.5.0, but you have torch 2.4.1 which is incompatible.
torchvision 0.20.0+cu121 requires torch==2.5.0, but you have torch 2.4.1 which is incompatible.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.0/16.0 MB 71.0 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 226.2/226.2 MB 5.0 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.0/46.0 kB 2.9 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 86.8/86.8 kB 6.9 MB/s eta 0:00:00
Compiling /content/piper/src/python/piper_train/vits/monotonic_align/core.pyx because it changed.
[1/1] Cythonizing /content/piper/src/python/piper_train/vits/monotonic_align/core.pyx
/usr/local/lib/python3.10/dist-packages/Cython/Compiler/Main.py:381: FutureWarning: Cython directive 'language_level' not set, using '3str' for now (Py3). This has changed from earlier releases! File: /content/piper/src/python/piper_train/vits/monotonic_align/core.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
performance hint: core.pyx:7:5: Exception check on 'maximum_path_each' will always require the GIL to be acquired.
Possible solutions:
Declare 'maximum_path_each' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
Use an 'int' return type on 'maximum_path_each' to allow an error code to be returned.
performance hint: core.pyx:38:6: Exception check on 'maximum_path_c' will always require the GIL to be acquired.
Possible solutions:
Declare 'maximum_path_c' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
Use an 'int' return type on 'maximum_path_c' to allow an error code to be returned.
performance hint: core.pyx:42:21: Exception check after calling 'maximum_path_each' will always require the GIL to be acquired.
Possible solutions:
Declare 'maximum_path_each' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
Use an 'int' return type on 'maximum_path_each' to allow an error code to be returned.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
espeak-ng-data libespeak-ng1 libpcaudio0 libsonic0
The following NEW packages will be installed:
espeak-ng espeak-ng-data libespeak-ng1 libpcaudio0 libsonic0
0 upgraded, 5 newly installed, 0 to remove and 49 not upgraded.
Need to get 4,526 kB of archives.
After this operation, 11.9 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 libpcaudio0 amd64 1.1-6build2 [8,956 B]
Get:2 http://archive.ubuntu.com/ubuntu jammy/main amd64 libsonic0 amd64 0.2.0-11build1 [10.3 kB]
Get:3 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 espeak-ng-data amd64 1.50+dfsg-10ubuntu0.1 [3,956 kB]
Get:4 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libespeak-ng1 amd64 1.50+dfsg-10ubuntu0.1 [207 kB]
Get:5 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 espeak-ng amd64 1.50+dfsg-10ubuntu0.1 [343 kB]
Fetched 4,526 kB in 1s (5,159 kB/s)
Selecting previously unselected package libpcaudio0:amd64.
(Reading database ... 123622 files and directories currently installed.)
Preparing to unpack .../libpcaudio0_1.1-6build2_amd64.deb ...
Unpacking libpcaudio0:amd64 (1.1-6build2) ...
Selecting previously unselected package libsonic0:amd64.
Preparing to unpack .../libsonic0_0.2.0-11build1_amd64.deb ...
Unpacking libsonic0:amd64 (0.2.0-11build1) ...
Selecting previously unselected package espeak-ng-data:amd64.
Preparing to unpack .../espeak-ng-data_1.50+dfsg-10ubuntu0.1_amd64.deb ...
Unpacking espeak-ng-data:amd64 (1.50+dfsg-10ubuntu0.1) ...
Selecting previously unselected package libespeak-ng1:amd64.
Preparing to unpack .../libespeak-ng1_1.50+dfsg-10ubuntu0.1_amd64.deb ...
Unpacking libespeak-ng1:amd64 (1.50+dfsg-10ubuntu0.1) ...
Selecting previously unselected package espeak-ng.
Preparing to unpack .../espeak-ng_1.50+dfsg-10ubuntu0.1_amd64.deb ...
Unpacking espeak-ng (1.50+dfsg-10ubuntu0.1) ...
Setting up libpcaudio0:amd64 (1.1-6build2) ...
Setting up libsonic0:amd64 (0.2.0-11build1) ...
Setting up espeak-ng-data:amd64 (1.50+dfsg-10ubuntu0.1) ...
Setting up libespeak-ng1:amd64 (1.50+dfsg-10ubuntu0.1) ...
Setting up espeak-ng (1.50+dfsg-10ubuntu0.1) ...
Processing triggers for man-db (2.10.2-1) ...
Processing triggers for libc-bin (2.35-0ubuntu3.4) ...
/sbin/ldconfig.real: /usr/local/lib/libtbb.so.12 is not a symbolic link
/sbin/ldconfig.real: /usr/local/lib/libtbbbind_2_5.so.3 is not a symbolic link
/sbin/ldconfig.real: /usr/local/lib/libtbbmalloc.so.2 is not a symbolic link
/sbin/ldconfig.real: /usr/local/lib/libtbbbind.so.3 is not a symbolic link
/sbin/ldconfig.real: /usr/local/lib/libur_adapter_level_zero.so.0 is not a symbolic link
/sbin/ldconfig.real: /usr/local/lib/libtbbmalloc_proxy.so.2 is not a symbolic link
/sbin/ldconfig.real: /usr/local/lib/libur_adapter_opencl.so.0 is not a symbolic link
/sbin/ldconfig.real: /usr/local/lib/libur_loader.so.0 is not a symbolic link
/sbin/ldconfig.real: /usr/local/lib/libtbbbind_2_0.so.3 is not a symbolic link
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 31.3 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.4/3.4 MB 49.4 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 519.2/519.2 kB 31.1 MB/s eta 0:00:00
Requirement already satisfied: gdown in /usr/local/lib/python3.10/dist-packages (5.2.0)
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.10/dist-packages (from gdown) (4.12.3)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from gdown) (3.16.1)
Requirement already satisfied: requests[socks] in /usr/local/lib/python3.10/dist-packages (from gdown) (2.32.3)
Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from gdown) (4.66.5)
Requirement already satisfied: soupsieve>1.2 in /usr/local/lib/python3.10/dist-packages (from beautifulsoup4->gdown) (2.6)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (3.4.0)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (2.2.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (2024.8.30)
Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (1.7.1)
Done!
I think I just managed to make it work with torch==2.5.0 and torchaudio==2.5.0, but I'm not 100% sure, because I first tried 2.4.1 (which didn't work, as you said) and then tried this. Let me check tomorrow and I'll comment again ASAP.
In the "Install software. 📦" code, change torch to torch==2.5.0, pytorch-lightning to pytorch-lightning==2.2.0, torchaudio to torchaudio==2.5.0 and finally torchtext to torchtext==0.18.0. For now (2024-10-31) doing this works but it may change with newer versions of torch.
If you're lazy to change everything manually, replace the code with this:
#@markdown # <font color="ffc800"> **Install software.** 📦
#@markdown ---
print("\033[93mInstalling...")
!git clone -q https://github.com/rhasspy/piper
%cd /content/piper/src/python
!pip install pip==24.0
!pip install -q "cython>=0.29.0" "librosa>=0.9.2" "numpy>=1.19.0" pytorch-lightning==2.2.0 torch==2.5.0
!pip install -q onnx onnxruntime-gpu
!bash build_monotonic_align.sh
!apt-get install espeak-ng
!pip install -q torchtext==0.18.0
# fix recent compatibility issues:
!pip install -q torchaudio==2.5.0 torchmetrics==0.11.4
!pip install --upgrade gdown
print("\033[93mDone!")
If there are other problems, let me know. Have a good day.
In the "Install software. 📦" code, change torch to torch==2.5.0, pytorch-lightning to pytorch-lightning==2.2.0, torchaudio to torchaudio==2.5.0 and finally torchtext to torchtext==0.18.0. For now (2024-10-31) doing this works but it may change with newer versions of torch.
If you're lazy to change everything manually, replace the code with this:
#@markdown # <font color="ffc800"> **Install software.** 📦 #@markdown --- print("\033[93mInstalling...") !git clone -q https://github.com/rhasspy/piper %cd /content/piper/src/python !pip install pip==24.0 !pip install -q cython>=0.29.0 librosa>=0.9.2 numpy>=1.19.0 pytorch-lightning==2.2.0 torch==2.5.0 !pip install -q onnx onnxruntime-gpu !bash build_monotonic_align.sh !apt-get install espeak-ng !pip install -q torchtext==0.18.0 # fixing recent compativility isswes: !pip install -q torchaudio==2.5.0 torchmetrics==0.11.4 !pip install --upgrade gdown print("\033[93mDone!")
If there are other problems let me know, have a good day.
This worked for a few days, but today I'm seeing the same error again: the .onnx file is not being exported, just the .json file. It seems there has been another Python library upgrade. Did you experience this too?
I just saw that the training notebook is giving the same error too. Here is the output:
2024-11-12 23:47:15.720399: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2024-11-12 23:47:15.737964: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-11-12 23:47:15.758368: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-11-12 23:47:15.764967: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-11-12 23:47:15.781292: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-11-12 23:47:17.005605: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
DEBUG:piper_train:Namespace(dataset_dir='/content/drive/MyDrive/colab/piper/Testvoice', checkpoint_epochs=5, quality='high', resume_from_single_speaker_checkpoint=None, logger=True, enable_checkpointing=True, default_root_dir=None, gradient_clip_val=None, gradient_clip_algorithm=None, num_nodes=1, num_processes=None, devices='1', gpus=None, auto_select_gpus=False, tpu_cores=None, ipus=None, enable_progress_bar=True, overfit_batches=0.0, track_grad_norm=-1, check_val_every_n_epoch=1, fast_dev_run=False, accumulate_grad_batches=None, max_epochs=10000, min_epochs=None, max_steps=-1, min_steps=None, max_time=None, limit_train_batches=None, limit_val_batches=None, limit_test_batches=None, limit_predict_batches=None, val_check_interval=None, log_every_n_steps=1000, accelerator='gpu', strategy=None, sync_batchnorm=False, precision=32, enable_model_summary=True, weights_save_path=None, num_sanity_val_steps=2, resume_from_checkpoint='/content/drive/MyDrive/colab/piper/Testvoice/lightning_logs/version_0/checkpoints/last.ckpt', profiler=None, benchmark=None, deterministic=None, reload_dataloaders_every_n_epochs=0, auto_lr_find=False, replace_sampler_ddp=True, detect_anomaly=False, auto_scale_batch_size=False, plugins=None, amp_backend='native', amp_level=None, move_metrics_to_cpu=False, multiple_trainloader_mode='max_size_cycle', batch_size=75, validation_split=0.0, num_test_examples=0, max_phoneme_ids=None, hidden_channels=192, inter_channels=192, filter_channels=768, n_layers=6, n_heads=2, seed=1234, num_ckpt=0, save_last=True)
/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py:52: LightningDeprecationWarning: Setting Trainer(resume_from_checkpoint=) is deprecated in v1.5 and will be removed in v1.7. Please pass Trainer.fit(ckpt_path=) directly instead.
rank_zero_deprecation(
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
DEBUG:piper_train:Checkpoints will be saved every 5 epoch(s)
DEBUG:piper_train:0 Checkpoints will be saved
DEBUG:vits.dataset:Loading dataset: /content/drive/MyDrive/colab/piper/Testvoice/dataset.jsonl
/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py:731: LightningDeprecationWarning: trainer.resume_from_checkpoint is deprecated in v1.5 and will be removed in v2.0. Specify the fit checkpoint path with trainer.fit(ckpt_path=) instead.
ckpt_path = ckpt_path or self.resume_from_checkpoint
Restoring states from the checkpoint path at /content/drive/MyDrive/colab/piper/GodZeal/lightning_logs/version_0/checkpoints/last.ckpt
DEBUG:fsspec.local:open file: /content/drive/MyDrive/colab/piper/Testvoice/lightning_logs/version_0/checkpoints/last.ckpt
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/content/piper/src/python/piper_train/main.py", line 173, in
This notebook is not working correctly: https://colab.research.google.com/github/rmcpantoja/piper/blob/master/notebooks/piper_model_exporter.ipynb
It is not generating any .onnx file, just the MODEL_CARD and the .onnx.json file.
Here is the output it shows:
/content/piper/src/python
Downloading model and his config...
Compressing...
./
./MODEL_CARD
./en_US-raj-high.onnx.json
Done!
A few weeks ago this notebook was working and correctly generating the .onnx file from the .ckpt file. Is something missing, or does something need to be done differently now?
Any help is greatly appreciated.
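For reference, the step this notebook wraps is the manual export described in Piper's TRAINING.md; running it directly (the paths below are placeholders) may surface the actual export error instead of just producing the .json:
# manual export, per rhasspy/piper TRAINING.md; replace the placeholder paths with your own
!python3 -m piper_train.export_onnx /path/to/your/checkpoint.ckpt /path/to/output/model.onnx
# the voice config must sit next to the model with the .onnx.json suffix
!cp /path/to/training_dir/config.json /path/to/output/model.onnx.json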