Closed · juancopi81 closed this issue 1 year ago
Hello,
You can add any type of audio that torchaudio supports; I am not sure of the exhaustive list of supported formats.
There is no limit to the number of files you can use.
Audio is chunked and padded automatically based on the num_frames parameter, so there is no need to chunk it manually.
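For intuition, here is a toy sketch (not code from the repo) of what fixed-length chunking with zero-padding amounts to; `num_frames` mirrors the datamodule parameter, and a plain Python list stands in for a 1-D waveform:

```python
# Illustrative only: split a 1-D signal into fixed-size chunks,
# zero-padding the final chunk so every chunk has exactly num_frames samples.
def chunk_and_pad(samples, num_frames):
    chunks = []
    for start in range(0, len(samples), num_frames):
        chunk = samples[start:start + num_frames]
        chunk += [0.0] * (num_frames - len(chunk))  # pads only the tail chunk
        chunks.append(chunk)
    return chunks

print(chunk_and_pad([0.1] * 10, 4))  # 3 chunks, the last padded with two zeros
```

In the actual datamodule this happens on tensors rather than lists, but the shape logic is the same: every training example ends up with exactly `num_frames` samples.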
Hi @Kinyugo,
Thank you very much for your kind and quick response. Sorry for bothering you; today I was giving the Jupyter notebook you have in the repo a try. I keep getting the same error:
```
ContextualVersionConflict: (Pygments 2.6.1 (/usr/local/lib/python3.8/dist-packages), Requirement.parse('pygments<3.0.0,>=2.14.0'), {'rich'})
```
No matter whether I install the package from git or in editable mode. Would you know how one could deal with this? Thank you very much. Below is the complete traceback:
```
---------------------------------------------------------------------------
ContextualVersionConflict                 Traceback (most recent call last)
15 frames
/usr/local/lib/python3.8/dist-packages/msanii/scripts/__init__.py in <module>
/usr/local/lib/python3.8/dist-packages/msanii/scripts/inference.py in <module>
/usr/local/lib/python3.8/dist-packages/msanii/data/__init__.py in <module>
/usr/local/lib/python3.8/dist-packages/msanii/data/audio_datamodule.py in <module>
/usr/local/lib/python3.8/dist-packages/lightning/__init__.py in <module>
/usr/local/lib/python3.8/dist-packages/lightning/pytorch/__init__.py in <module>
/usr/local/lib/python3.8/dist-packages/lightning/pytorch/callbacks/__init__.py in <module>
/usr/local/lib/python3.8/dist-packages/lightning/pytorch/callbacks/batch_size_finder.py in <module>
/usr/local/lib/python3.8/dist-packages/lightning/pytorch/callbacks/callback.py in <module>
/usr/local/lib/python3.8/dist-packages/lightning/pytorch/utilities/__init__.py in <module>
/usr/local/lib/python3.8/dist-packages/lightning/pytorch/utilities/imports.py in <module>

/usr/local/lib/python3.8/dist-packages/lightning_utilities/core/imports.py in compare_version(package, op, version, use_base_version)
     75     else:
     76         # try pkg_resources to infer version
---> 77         pkg_version = Version(pkg_resources.get_distribution(package).version)
     78 except TypeError:
     79     # this is mocked by Sphinx, so it should return True to generate all summaries

/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py in get_distribution(dist)
    464         dist = Requirement.parse(dist)
    465     if isinstance(dist, Requirement):
--> 466         dist = get_provider(dist)
    467     if not isinstance(dist, Distribution):
    468         raise TypeError("Expected string, Requirement, or Distribution", dist)

/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py in get_provider(moduleOrReq)
    340     """Return an IResourceProvider for the named module or requirement"""
    341     if isinstance(moduleOrReq, Requirement):
--> 342         return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
    343     try:
    344         module = sys.modules[moduleOrReq]

/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py in require(self, *requirements)
    884         included, even if they were already activated in this working set.
    885         """
--> 886         needed = self.resolve(parse_requirements(requirements))
    887
    888         for dist in needed:

/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py in resolve(self, requirements, env, installer, replace_conflicting, extras)
    775                 # Oops, the "best" so far conflicts with a dependency
    776                 dependent_req = required_by[req]
--> 777                 raise VersionConflict(dist, req).with_context(dependent_req)
    778
    779     # push the new requirements onto the stack

ContextualVersionConflict: (Pygments 2.6.1 (/usr/local/lib/python3.8/dist-packages), Requirement.parse('pygments<3.0.0,>=2.14.0'), {'rich'})
```
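For context: this conflict means the preinstalled Pygments (2.6.1 on Colab at the time) is older than the `pygments<3.0.0,>=2.14.0` range that rich pins; the usual workaround is `pip install --upgrade "pygments>=2.14.0,<3.0.0"` followed by a runtime restart. A small stdlib-only sketch of the version check (the bounds are copied from the error message above; the helper name is mine):

```python
# Bounds copied from the error: rich pins pygments<3.0.0,>=2.14.0.
def satisfies(installed: str, lo: str = "2.14.0", hi: str = "3.0.0") -> bool:
    """Compare dotted versions numerically; enough for plain X.Y.Z tags."""
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(lo) <= as_tuple(installed) < as_tuple(hi)

print(satisfies("2.6.1"), satisfies("2.14.0"))  # False True: Colab's 2.6.1 fails the pin
```

To check your own environment, pass `importlib.metadata.version("pygments")` to the helper; note that a lexicographic string comparison would get `"2.6.1" < "2.14.0"` wrong, which is why the sketch compares integer tuples.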
And if I get past that, I receive the following error when running:

```python
default_config = OmegaConf.structured(TrainingConfig)
custom_config = OmegaConf.create(dict_config)
config = OmegaConf.merge(default_config, custom_config)
```

```
ConfigKeyError: Key 'transforms' not in 'DiffusionTrainingConfig'
    full_key: diffusion.transforms
    reference_type=DiffusionTrainingConfig
    object_type=DiffusionTrainingConfig
```
Hello Juan,
I have fixed the issue with the transforms: you have to delete the transforms key from the dictionary. However, I recommend that you reinstall from the new repo, as it fixes some other issues as well.
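A minimal sketch of that fix, assuming `dict_config` is the plain dict passed to `OmegaConf.create` (the nested values here are made up for illustration):

```python
# Drop the stale 'transforms' key before merging with the structured
# schema, since DiffusionTrainingConfig no longer defines it.
dict_config = {
    "diffusion": {
        "transforms": {"sample_rate": 44100},  # stale key, not in the schema
        "lr": 1e-4,
    }
}
dict_config["diffusion"].pop("transforms", None)  # safe even if already absent
print(dict_config)  # {'diffusion': {'lr': 0.0001}}
```

After this, `OmegaConf.merge(default_config, OmegaConf.create(dict_config))` should no longer raise the ConfigKeyError.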
Let me know if you have any other concerns.
PS: The model was originally trained using a custom implementation rather than the diffusers lib and was later ported over to diffusers, so there might be a few bugs here and there.
Regards, Kinyugo Maina
Hello @Kinyugo,
Great! Now it is working. Thank you very much. It is running and logging the results in W&B. Can I use this notebook to resume training from the last checkpoint saved in W&B? If so, how?
Thanks again :)
Oh BTW, I trained a model and downloaded a ckpt from W&B. Then I copied your HF Space and changed the checkpoint (I am unsure whether I am copying the right file: I am using the checkpoints in the artifacts).
But now it is getting this error:
```
Traceback (most recent call last):
  File "app.py", line 20, in <module>
    demo = run_demo(config)
  File "/home/user/app/src/msanii/msanii/demo/demo.py", line 51, in run_demo
    pipeline = Pipeline.from_pretrained(config.ckpt_path)
  File "/home/user/app/src/msanii/msanii/pipeline/pipeline.py", line 218, in from_pretrained
    transforms = Pipeline._load_from_checkpoint(
  File "/home/user/app/src/msanii/msanii/pipeline/pipeline.py", line 248, in _load_from_checkpoint
    target_instance = from_config(checkpoint[f"{prefix}_config"], target)
KeyError: 'transforms_config'
```
Hello,
I am not sure what could be causing the problem. One way to find out whether your checkpoint has the transforms_config key is to load it and inspect the keys:

```python
import torch

checkpoint = torch.load("path/to/ckpt", map_location=torch.device("cpu"))
print(checkpoint.keys())  # should have a transforms config key
```
Hello,
Thanks for your response. I just ran the code you suggested. It does not have a transforms config key. Maybe I am downloading the wrong ckpt file? This is what I get:

```python
import torch

checkpoint = torch.load("<path>/model.ckpt", map_location=torch.device("cpu"))
print(checkpoint.keys())  # should have a transforms config key
```

```
dict_keys(['epoch', 'global_step', 'pytorch-lightning_version', 'state_dict', 'loops', 'callbacks', 'optimizer_states', 'lr_schedulers', 'MixedPrecisionPlugin', 'hparams_name', 'hyper_parameters'])
```
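For what it's worth, those are the standard keys of a raw PyTorch Lightning Trainer checkpoint, whereas the traceback earlier shows `Pipeline.from_pretrained` looking for extra `*_config` entries such as `transforms_config`, which suggests this artifact is the training checkpoint rather than a pipeline export. A hypothetical helper (names are mine, not from the repo) for telling the two apart once a checkpoint dict is loaded:

```python
def classify_checkpoint(ckpt: dict) -> str:
    """Rough heuristic: pipeline exports carry *_config entries
    (e.g. transforms_config), while raw Trainer checkpoints carry
    state_dict plus pytorch-lightning_version instead."""
    if any(k.endswith("_config") for k in ckpt):
        return "pipeline"  # the shape Pipeline.from_pretrained expects
    if "state_dict" in ckpt and "pytorch-lightning_version" in ckpt:
        return "trainer"   # raw Lightning checkpoint, as in the dict_keys above
    return "unknown"

print(classify_checkpoint({"state_dict": {}, "pytorch-lightning_version": "1.9.0"}))  # trainer
```

In practice you would call it as `classify_checkpoint(torch.load(path, map_location="cpu"))` on each candidate artifact from W&B.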
Hi Kinyugo,
I find this really interesting, and I was hoping to train the model on some of my own audio files. I noticed you have a notebook for that. There's this `"data_dir": "",`
I wanted to ask:
Thanks!