$ rave train --config v2 --db_path . --name J6-madness-1
...
train set: 46 examples
val set: 1 examples
selected gpu: []
Training on mac is not available yet. Use --gpu -1 to train on CPU (not recommended).
So I tried with:
$ rave train --config v2 --db_path . --name J6-madness-1 --gpu -1
...
train set: 46 examples
val set: 1 examples
selected gpu: 0
GPU available: True (mps), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/trainer/setup.py:200: UserWarning: MPS available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='mps', devices=1)`.
rank_zero_warn(
| Name | Type | Params
-------------------------------------------------------------------
0 | pqmf | CachedPQMF | 16.7 K
1 | encoder | VariationalEncoder | 16.1 M
2 | decoder | GeneratorV2 | 15.5 M
3 | discriminator | CombineDiscriminators | 27.1 M
4 | audio_distance | AudioDistanceV1 | 0
5 | multiband_audio_distance | AudioDistanceV1 | 0
-------------------------------------------------------------------
58.7 M Trainable params
0 Non-trainable params
58.7 M Total params
234.734 Total estimated model params size (MB)
Sanity Checking: 0it [00:00, ?it/s]/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:224: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
Traceback (most recent call last):
File "/opt/homebrew/bin/rave", line 8, in <module>
sys.exit(main())
^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/scripts/main_cli.py", line 28, in main
app.run(train.main)
File "/opt/homebrew/lib/python3.11/site-packages/absl/app.py", line 308, in run
_run_main(main, args)
File "/opt/homebrew/lib/python3.11/site-packages/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/scripts/train.py", line 160, in main
trainer.fit(model, train, val, ckpt_path=run)
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 608, in fit
call._call_and_handle_interrupt(
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _fit_impl
self._run(model, ckpt_path=self.ckpt_path)
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 1103, in _run
results = self._run_stage()
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 1182, in _run_stage
self._run_train()
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 1195, in _run_train
self._run_sanity_check()
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 1267, in _run_sanity_check
val_loop.run()
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 152, in advance
dl_outputs = self.epoch_loop.run(self._data_fetcher, dl_max_batches, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 121, in advance
batch = next(data_fetcher)
^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/utilities/fetching.py", line 184, in __next__
return self.fetching_function()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/utilities/fetching.py", line 265, in fetching_function
self._fetch_next_batch(self.dataloader_iter)
File "/opt/homebrew/lib/python3.11/site-packages/pytorch_lightning/utilities/fetching.py", line 280, in _fetch_next_batch
batch = next(iterator)
^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
~~~~~~~~~~~~^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/torch/utils/data/dataset.py", line 298, in __getitem__
return self.dataset[self.indices[idx]]
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/rave/dataset.py", line 64, in __getitem__
audio = audio.astype(np.float) / (2**15 - 1)
^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/numpy/__init__.py", line 305, in __getattr__
raise AttributeError(__former_attrs__[attr])
AttributeError: module 'numpy' has no attribute 'float'.
`np.float` was a deprecated alias for the builtin `float`. To avoid this error in existing code, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations. Did you mean: 'cfloat'?
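The root cause is the last frame of the traceback: `rave/dataset.py` uses the `np.float` alias, which NumPy deprecated in 1.20 and removed in 1.24 (the installed NumPy here is 1.24.4). A minimal sketch of the fix, assuming the surrounding code matches the traceback, is to substitute `np.float64` (or the builtin `float`) on that line:

```python
import numpy as np

# The failing line converts int16 PCM samples to floats in [-1, 1]:
#   audio = audio.astype(np.float) / (2**15 - 1)   # np.float removed in NumPy 1.24
# Drop-in replacement with identical behavior:
audio = np.array([0, 16384, -32767], dtype=np.int16)
audio = audio.astype(np.float64) / (2**15 - 1)
print(audio.dtype)  # float64
```

Applying that one-line change to the installed copy at /opt/homebrew/lib/python3.11/site-packages/rave/dataset.py should get past this crash; newer acids-rave releases may already include the fix.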
Some info on my system:
$ python3 --version
Python 3.11.4
$ python3 -m pip list | grep numpy
numpy 1.24.4
$ python3 -m pip list | grep acids-rave
acids-rave 2.1.16
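If editing site-packages is inconvenient, a hacky interim workaround (my suggestion, not an official RAVE option) is to restore the removed alias before the dataset module runs, e.g. at the top of a small wrapper script:

```python
import numpy as np

# Restore the alias NumPy 1.24 removed, so unpatched code using np.float
# keeps working. Plain module attribute assignment bypasses numpy's
# __getattr__ guard that raises the AttributeError.
np.float = float

# After this, the failing expression from rave/dataset.py works again:
sample = np.int16(16384).astype(np.float) / (2**15 - 1)
print(type(sample))  # <class 'numpy.float64'>
```

Alternatively, pinning the environment to `numpy<1.24` restores the deprecated alias without any code changes, assuming no other dependency requires a newer NumPy.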
All of this was while trying to train on a Mac M1.