thorstenMueller / Thorsten-Voice

Thorsten-Voice: a free-to-use, offline, high-quality German TTS voice that should be available to every project without any licensing hassle.
http://www.thorsten-voice.de
Creative Commons Zero v1.0 Universal
545 stars · 51 forks

TypeError: can't pickle weakref objects + EOFError: Ran out of input #12

Closed · ErfolgreichCharismatisch closed this issue 3 years ago

ErfolgreichCharismatisch commented 3 years ago

I have reached the last step:

CUDA_VISIBLE_DEVICES="0" python TTS/mozilla_voice_tts/bin/train_vocoder.py --config_path vocoder_config.json

However, I get the following error message:

 > Using CUDA:  False
 > Number of GPUs:  0
vocoder_config.json
 > Git Hash: 49fe63a
....
 > TRAINING (2020-11-29 12:11:09)
 ! Run is removed from E:/Python/tts/TTS_recipes/TTS/tests/data/TestAusgabe/melgan/pwgan-November-29-2020_12+11PM-49fe63a
Traceback (most recent call last):
  File "TTS/mozilla_voice_tts/bin/train_vocoder.py", line 654, in <module>
    main(args)
  File "TTS/mozilla_voice_tts/bin/train_vocoder.py", line 558, in main
    epoch)
  File "TTS/mozilla_voice_tts/bin/train_vocoder.py", line 104, in train
    for num_iter, data in enumerate(data_loader):
  File "E:\Anaconda\envs\umgebung\lib\site-packages\torch\utils\data\dataloader.py", line 352, in __iter__
    return self._get_iterator()
  File "E:\Anaconda\envs\umgebung\lib\site-packages\torch\utils\data\dataloader.py", line 294, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "E:\Anaconda\envs\umgebung\lib\site-packages\torch\utils\data\dataloader.py", line 801, in __init__
    w.start()
  File "E:\Anaconda\envs\umgebung\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "E:\Anaconda\envs\umgebung\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "E:\Anaconda\envs\umgebung\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "E:\Anaconda\envs\umgebung\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "E:\Anaconda\envs\umgebung\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle weakref objects

(umgebung) E:\Python\tts\TTS_recipes> > Using CUDA:  False
 > Number of GPUs:  0
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "E:\Anaconda\envs\umgebung\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "E:\Anaconda\envs\umgebung\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
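The two tracebacks above are two halves of the same failure: on Windows, PyTorch's DataLoader starts its worker processes with the `spawn` start method, which pickles the dataset object and sends it to each worker over a pipe. If the dataset (or anything it references, such as an internal cache) holds a weakref, pickling fails in the parent with the `TypeError`, and the half-started child then hits `EOFError` reading its empty input pipe. A minimal sketch of the pickling constraint (the `Cache`/`Target` names are illustrative, not taken from the codebase):

```python
import pickle
import weakref


class Target:
    """Any ordinary object; plain instances support weak references."""
    pass


class Cache:
    """Stands in for a dataset attribute that keeps a weak reference."""
    def __init__(self, target):
        self.ref = weakref.ref(target)  # weakref objects cannot be pickled


target = Target()
cache = Cache(target)

try:
    # This is effectively what multiprocessing does when spawning a worker.
    pickle.dumps(cache)
    picklable = True
except TypeError:
    # e.g. "cannot pickle 'weakref' object" (wording varies by Python version)
    picklable = False

print("picklable:", picklable)
```

Under this assumption, the usual workarounds are to set the loader worker counts to 0 (no subprocesses, so nothing is pickled) or to disable whatever feature introduces the weakref (here possibly `use_cache`).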

Any ideas?

thorstenMueller commented 3 years ago

Hello @ErfolgreichCharismatisch. I did not run into a problem like this in my tests. It may be due to Windows as the environment; I run my tests on Linux.

In that respect it is good that you asked the question in parallel on Mozilla Discourse. We (and the experts in the forum) can continue looking into it there. https://discourse.mozilla.org/t/typeerror-cant-pickle-weakref-objects-eoferror-ran-out-of-input/71345

ErfolgreichCharismatisch commented 3 years ago

Yes, it is almost certainly the environment. Perhaps not Windows itself, but at least Python.

ErfolgreichCharismatisch commented 3 years ago

Dataset size and properties:

10 hours of audio, with clips between 3 and 9 seconds long.

Configuration parameters:

{
"github_branch":"* dev",
"restore_path":"E:/Python/tts/TTS_recipes/TTS/tests/data/testAusgabe/dertest-November-30-2020_11+35AM-49fe63a/best_model.pth.tar",
    "run_name": "pwgan",
    "run_description": "parallel-wavegan for german",

       // AUDIO PARAMETERS
       "audio":{
        // stft parameters
        "fft_size": 1024,        // number of stft frequency bins. Size of the linear spectrogram frame.
        "win_length": 1024,      // stft window length in samples.
        "hop_length": 256,       // stft window hop length in samples.
        "frame_length_ms": null, // stft window length in ms. If null, 'win_length' is used.
        "frame_shift_ms": null,  // stft window hop length in ms. If null, 'hop_length' is used.

        // Audio processing parameters
        "sample_rate": 22050,   // DATASET-RELATED: wav sample-rate.
        "preemphasis": 0.0,     // pre-emphasis to reduce spec noise and make it more structured. If 0.0, no pre-emphasis.
        "ref_level_db": 20,     // reference level db, theoretically 20db is the sound of air.

        // Silence trimming
        "do_trim_silence": true,// enable trimming of silence from audio as you load it. LJspeech (true), TWEB (false), Nancy (true)
        "trim_db": 60,          // threshold for trimming silence. Set this according to your dataset.
        "do_sound_norm": true,

        // Griffin-Lim
        "power": 1.5,           // value to sharpen wav signals after GL algorithm.
        "griffin_lim_iters": 60,// number of griffin-lim iterations. 30-60 is a good range. The larger the value, the slower the generation.

        // MelSpectrogram parameters
        "num_mels": 80,         // size of the mel spec frame.
        "mel_fmin": 0.0,        // minimum freq level for mel-spec. ~50 for male and ~95 for female voices. Tune for dataset!!
        "mel_fmax": 8000.0,     // maximum freq level for mel-spec. Tune for dataset!!
        "spec_gain": 20.0,

        // Normalization parameters
        "signal_norm": true,    // normalize spec values. Mean-Var normalization if 'stats_path' is defined otherwise range normalization defined by the other params.
        "min_level_db": -100,   // lower bound for normalization
        "symmetric_norm": true, // move normalization to range [-1, 1]
        "max_norm": 1.0,        // scale normalization to range [-max_norm, max_norm] or [0, max_norm]
        "clip_norm": true,      // clip normalized values into the range.
        "stats_path": "E:/Python/tts/TTS_recipes/TTS/tests/inputs/scale_stats.npy"    // DO NOT USE WITH MULTI_SPEAKER MODEL. Scaler stats file computed by 'compute_statistics.py'. If it is defined, mean-std based normalization is used and the other normalization params are ignored.
       },
    // DISTRIBUTED TRAINING
    // "distributed":{
    //     "backend": "nccl",
    //     "url": "tcp://localhost:54321"
    // },

    // MODEL PARAMETERS
    "use_pqmf": false,

    // LOSS PARAMETERS
    "use_stft_loss": true,
    "use_subband_stft_loss": false,  // USE ONLY WITH MULTIBAND MODELS
    "use_mse_gan_loss": true,
    "use_hinge_gan_loss": false,
    "use_feat_match_loss": false,  // use only with melgan discriminators

    // loss weights
    "stft_loss_weight": 0.5,
    "subband_stft_loss_weight": 0.5,
    "mse_G_loss_weight": 2.5,
    "hinge_G_loss_weight": 2.5,
    "feat_match_loss_weight": 25,

    // multiscale stft loss parameters
    "stft_loss_params": {
        "n_ffts": [1024, 2048, 512],
        "hop_lengths": [120, 240, 50],
        "win_lengths": [600, 1200, 240]
    },

    // subband multiscale stft loss parameters
    "subband_stft_loss_params":{
        "n_ffts": [384, 683, 171],
        "hop_lengths": [30, 60, 10],
        "win_lengths": [150, 300, 60]
    },

    "target_loss": "avg_G_loss",  // loss value to pick the best model to save after each epoch

    // DISCRIMINATOR
    "discriminator_model": "parallel_wavegan_discriminator",
    "discriminator_model_params":{
        "num_layers": 10
    },
    "steps_to_start_discriminator": 200000,      // steps required to start GAN training.

    // GENERATOR
    "generator_model": "parallel_wavegan_generator",
    "generator_model_params": {
        "upsample_factors":[4, 4, 4, 4],
        "stacks": 3,
        "num_res_blocks": 30,
        "aux_context_window": 0

    },

    // DATASET
    "data_path": "E:/Python/tts/TensorFlowTTS/Dertest/wavs/",
    "feature_path": null,
    "seq_len": 25600,
    "pad_short": 2000,
    "conv_pad": 0,
    "use_noise_augment": false,
    "use_cache": true,

    "reinit_layers": [],    // give a list of layer names to restore from the given checkpoint. If not defined, it reloads all heuristically matching layers.

    // TRAINING
    "batch_size": 6,       // Batch size for training. Lower values than 32 might cause hard to learn attention. It is overwritten by 'gradual_training'.

    // VALIDATION
    "run_eval": true,
    "test_delay_epochs": 10,  // Until attention is aligned, testing only wastes computation time.
    "test_sentences_file": null,  // set a file to load sentences to be used for testing. If it is null then we use default english sentences.

    // OPTIMIZER
    "epochs": 10000,                // total number of epochs to train.
    "wd": 0.0,                // Weight decay weight.
    "gen_clip_grad": -1,      // Generator gradient clipping threshold. Apply gradient clipping if > 0
    "disc_clip_grad": -1,     // Discriminator gradient clipping threshold.
    "lr_scheduler_gen": "MultiStepLR",   // one of the schedulers from https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
    "lr_scheduler_gen_params": {
        "gamma": 0.5,
        "milestones": [100000, 200000, 300000, 400000, 500000, 600000]
    },
    "lr_scheduler_disc": "MultiStepLR",   // one of the schedulers from https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
    "lr_scheduler_disc_params": {
        "gamma": 0.5,
        "milestones": [100000, 200000, 300000, 400000, 500000, 600000]
    },
    "lr_gen": 1e-4,                  // Initial learning rate. If Noam decay is active, maximum learning rate.
    "lr_disc": 1e-4,

    // TENSORBOARD and LOGGING
    "print_step": 25,       // Number of steps to log training on console.
    "print_eval": false,     // If True, it prints loss values for each step in eval run.
    "save_step": 25000,      // Number of training steps expected to plot training stats on TB and save model checkpoints.
    "checkpoint": true,     // If true, it saves checkpoints per "save_step"
    "tb_model_param_stats": false,     // true, plots param stats per layer on tensorboard. Might be memory consuming, but good for debugging.

    // DATA LOADING
    "num_loader_workers": 4,        // number of training data loader processes. Don't set it too big. 4-8 are good values.
    "num_val_loader_workers": 4,    // number of evaluation data loader processes.
    "eval_split_size": 10,

    // PATHS
    "output_path": "E:/Python/tts/TTS_recipes/TTS/tests/data/testAusgabe/melgan/"
}
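Assuming the unpicklable object travels to the DataLoader workers, a common Windows workaround is to run data loading in the main process. That means changing the data-loading keys in the config above to zero workers (sketch of the relevant fragment only, not a complete config):

```
    // DATA LOADING (Windows workaround: 0 disables worker subprocesses,
    // so the dataset is never pickled for the spawn start method)
    "num_loader_workers": 0,
    "num_val_loader_workers": 0,
```

This trades loading throughput for compatibility; on Linux, where workers are forked rather than spawned, the higher worker counts can stay.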

Environment info (OS and Python version, etc.):

Windows 10, Python 3.6.12 |Anaconda, Inc.| (default, Sep 9 2020, 00:29:25) [MSC v.1916 64 bit (AMD64)] on win32

absl-py                  0.10.0
appdirs                  1.4.4
astroid                  2.4.2
astunparse               1.6.3
attrdict                 2.0.1
attrs                    20.3.0
audioread                2.1.8
bokeh                    1.4.0
cachetools               4.1.1
cardboardlint            1.3.0
certifi                  2020.11.8
cffi                     1.14.3
chardet                  3.0.4
click                    7.1.2
clldutils                3.5.4
colorama                 0.4.4
colorlog                 4.6.2
csvw                     1.8.1
cycler                   0.10.0
Cython                   0.29.21
dataclasses              0.7
decorator                4.4.2
Distance                 0.1.3
docopt                   0.6.2
filelock                 3.0.12
Flask                    1.1.2
fuzzywuzzy               0.18.0
g2p-en                   2.1.0
g2pM                     0.1.2.5
gast                     0.3.3
gdown                    3.12.2
german-transliterate     0.1.3               e:\python\tts\tts_recipes\german_transliterate
google-auth              1.22.1
google-auth-oauthlib     0.4.1
google-pasta             0.2.0
grpcio                   1.32.0
h5py                     2.10.0
idna                     2.10
importlib-metadata       2.0.0
inflect                  4.1.0
isodate                  0.6.0
isort                    4.3.21
itsdangerous             1.1.0
jamo                     0.4.1
Jinja2                   2.11.2
joblib                   0.17.0
Keras-Preprocessing      1.1.2
kiwisolver               1.2.0
lazy-object-proxy        1.4.3
librosa                  0.7.2
llvmlite                 0.31.0
Markdown                 3.3.1
MarkupSafe               1.1.1
matplotlib               3.3.2
mccabe                   0.6.1
mkl-fft                  1.2.0
mkl-random               1.1.1
mkl-service              2.3.0
mozilla-voice-tts        0.0.4+3424181       e:\python\tts\tts_recipes\tts
nltk                     3.5
nose                     1.3.7
num2words                0.5.10
numba                    0.48.0
numpy                    1.18.0
oauthlib                 3.1.0
olefile                  0.46
opt-einsum               3.3.0
packaging                20.4
phonemizer               2.2.1
Pillow                   7.2.0
pip                      20.2.4
pooch                    1.2.0
protobuf                 3.13.0
py-espeak-ng             0.1.8
pyasn1                   0.4.8
pyasn1-modules           0.2.8
pycparser                2.20
pylint                   2.5.3
pyparsing                2.4.7
pypinyin                 0.39.1
pysbd                    0.3.3
PySocks                  1.7.1
python-dateutil          2.8.1
pytz                     2020.1
pyworld                  0.2.11.post0
PyYAML                   5.3.1
regex                    2020.10.15
requests                 2.24.0
requests-oauthlib        1.3.0
resampy                  0.2.2
rfc3986                  1.4.0
rsa                      4.6
scikit-learn             0.23.2
scipy                    1.4.1
segments                 2.1.3
setuptools               50.3.0.post20201005
six                      1.15.0
SoundFile                0.10.3.post1
tabulate                 0.8.7
tensorboard              2.4.0
tensorboard-plugin-wit   1.7.0
tensorboardX             2.1
tensorflow               2.3.0
tensorflow-addons        0.11.2
tensorflow-estimator     2.3.0
tensorflow-gpu           2.3.1
tensorflow-gpu-estimator 2.3.0
TensorFlowTTS            0.0
termcolor                1.1.0
TextGrid                 1.5
threadpoolctl            2.1.0
toml                     0.10.2
torch                    1.7.0
torchvision              0.2.2
tornado                  6.1
tqdm                     4.50.2
typed-ast                1.4.1
typeguard                2.9.1
typing-extensions        3.7.4.3
Unidecode                0.4.20
uritemplate              3.0.1
urllib3                  1.25.10
Werkzeug                 1.0.1
wheel                    0.35.1
wincertstore             0.2
wrapt                    1.12.1
zipp                     3.3.1

thorstenMueller commented 3 years ago

To make communication easier for everyone involved, and to avoid publishing the same content in several places, I am closing this issue and suggest keeping the discussion in the Mozilla forum (or elsewhere).

ErfolgreichCharismatisch commented 3 years ago

I would rather leave it open; the discussion in the other forum is dead.

thorstenMueller commented 3 years ago

Hello @ErfolgreichCharismatisch.

A "discussion" did not take place in your forum thread. Since I have already worked very constructively, politely, and successfully with everyone involved in your Mozilla thread, I can assure you: it is certainly not them.

But if in the future you ask for help on Mozilla Discourse (or other forums) politely and with well-founded information, you will surely get support.

Since this "issue" is off-topic for my repo anyway, and you are using your own dataset, I am leaving it closed.

I wish you every success.

ErfolgreichCharismatisch commented 3 years ago

Hello @thorstenMueller,

a "discussion" did not take place in that forum thread; instead, your always constructive and polite friends were unconstructive and impolite.

Since all of my posts were constructive and respectful, I can assure you: it is certainly not me.

But if in the future you show up politely and with well-founded information, without hidden insults and insinuations, I look forward to working together.

The problem is not "off-topic", since you also use your own dataset, given that LJSpeech is English-only.

I wish you every success.