kan-bayashi / ParallelWaveGAN

Unofficial Parallel WaveGAN (+ MelGAN & Multi-band MelGAN & HiFi-GAN & StyleMelGAN) with PyTorch
https://kan-bayashi.github.io/ParallelWaveGAN/
MIT License

increase LJSpeech data rows #81

Closed: ohadianoush closed this issue 4 years ago

ohadianoush commented 4 years ago

Hi, I want to create my own dataset for the Persian language. Before building it, I want to shrink the LJSpeech data to test what happens when my dataset is smaller than LJSpeech. I kept the first 1000 rows, deleted everything after the 1000th row, and after running run.sh I get this error:
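For reference, a minimal sketch of that truncation step, assuming the stock LJSpeech-1.1 layout where metadata.csv holds one transcript row per utterance (back up the original first):

cd LJSpeech-1.1
cp metadata.csv metadata_full.csv             # keep the full 13,100-row transcript
head -n 1000 metadata_full.csv > metadata.csv # keep only the first 1000 rows
wc -l metadata.csv                            # should report 1000

Note that truncating metadata.csv alone still leaves all 13,100 wav files on disk; whether the recipe builds its file list from metadata.csv or by scanning the wav directory is an assumption to verify in run.sh and the data prep script.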

Normalization start. See the progress via dump/dev/norm/normalize.log.
Normalization start. See the progress via dump/eval/norm/normalize.log.
Successfully finished normalization of dev set.
Successfully finished normalization of eval set.
run.pl: job failed, log is in dump/train_nodev/norm/normalize.log
./run.sh: 1 background jobs are failed.

normalize.log:

 99%|█████████▉| 12484/12600 [00:21<00:00, 861.87it/s]
100%|█████████▉| 12571/12600 [00:21<00:00, 860.90it/s]
100%|██████████| 12600/12600 [00:21<00:00, 577.12it/s]
Traceback (most recent call last):
  File "/usr/local/bin/parallel-wavegan-normalize", line 11, in <module>
    load_entry_point('parallel-wavegan', 'console_scripts', 'parallel-wavegan-normalize')()
  File "/home/ai/ParallelWaveGAN/parallel_wavegan/bin/normalize.py", line 123, in main
    [delayed(_process_single_file)(data) for data in tqdm(dataset)])
  File "/usr/local/lib/python3.6/dist-packages/joblib/parallel.py", line 950, in __call__
    n_jobs = self._initialize_backend()
  File "/usr/local/lib/python3.6/dist-packages/joblib/parallel.py", line 711, in _initialize_backend
    **self._backend_args)
  File "/usr/local/lib/python3.6/dist-packages/joblib/_parallel_backends.py", line 517, in configure
    **memmappingexecutor_args)
  File "/usr/local/lib/python3.6/dist-packages/joblib/executor.py", line 42, in get_memmapping_executor
    initargs=initargs, env=env)
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/reusable_executor.py", line 116, in get_reusable_executor
    executor_id=executor_id, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/reusable_executor.py", line 153, in __init__
    initializer=initializer, initargs=initargs, env=env)
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/process_executor.py", line 915, in __init__
    self._processes_management_lock = self._context.Lock()
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/backend/context.py", line 225, in Lock
    return Lock()
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/backend/synchronize.py", line 174, in __init__
    super(Lock, self).__init__(SEMAPHORE, 1, 1)
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/backend/synchronize.py", line 90, in __init__
    resource_tracker.register(self._semlock.name, "semlock")
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/backend/resource_tracker.py", line 171, in register
    self.ensure_running()
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/backend/resource_tracker.py", line 143, in ensure_running
    pid = spawnv_passfds(exe, args, fds_to_pass)
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/backend/resource_tracker.py", line 301, in spawnv_passfds
    return fork_exec(args, _pass)
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/backend/fork_exec.py", line 43, in fork_exec
    pid = os.fork()

OSError: [Errno 12] Cannot allocate memory

Accounting: time=23 threads=1 Ended (code 1) at Sat Feb 22 09:50:33 +0330 2020, elapsed time 23 seconds

And my RAM and GPU memory are free (more than 80% free).

kan-bayashi commented 4 years ago

It might be that CPU memory is full during normalization. How much CPU memory does your machine have?
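One thing worth knowing here: OSError 12 from os.fork() can show up even when free RAM looks plentiful. The fork happens when joblib spawns its worker pool, so a low per-user process limit or strict kernel overcommit accounting can reject it regardless of what top reports as free. A few standard Linux checks, nothing repo-specific:

free -h                             # RAM and swap actually available
ulimit -u                           # per-user process limit; each forked worker counts against it
cat /proc/sys/vm/overcommit_memory  # 2 means strict accounting, so forking a large process can fail

If the recipe exposes a worker-count setting for the normalize step (it runs a joblib pool), lowering it reduces the number of simultaneous forks; check run.sh for the exact option name rather than assuming one.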

ohadianoush commented 4 years ago

16 GB. I first opened a terminal with the top command and then ran the bash script. From start to finish, the maximum memory used was 4 GB, so I had 12 GB free. Also, the LJSpeech bash script completed successfully with the full data (13,100 rows). How is it possible that it can't work with 1000 rows?!

kan-bayashi commented 4 years ago

Hmm. Let me confirm your procedure. You said that you used only 1000 rows, but the log shows 12,600 utterances (exactly the train_nodev split of the full 13,100-row dataset, i.e. everything minus the 500 dev/eval utterances):

100%|██████████| 12600/12600 [00:21<00:00, 577.12it/s]

Maybe you reused the dump directory, which still contains the previously calculated feature files. You can remove the previous files or change the dump directory, e.g. ./run.sh --dumpdir dump/debug.
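Concretely, either option looks like this (dump is the recipe's default dump directory; adjust the paths if you changed them):

rm -rf dump                      # option 1: drop the stale features computed from the full dataset
./run.sh --dumpdir dump/debug    # option 2: keep them and write the 1000-row run to a fresh directory

A quick sanity check for a stale cache: find dump/train_nodev -type f | wc -l should reflect your truncated dataset, not the 12,600 utterances from the full one.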

ohadianoush commented 4 years ago

I think it's fixed :) Thanks a lot.