Closed Harsha5524 closed 5 years ago
Not enough information; it just says "Killed", which is odd. Did you press CTRL+C? That would kill it. Alternatively, monitor the memory usage of the app: is it running out of memory relative to the amount of memory your system has?
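One way to do that memory monitoring is from a second shell; a minimal, Linux-specific sketch (assumes /proc is mounted, as it normally is on Ubuntu):

```shell
# Snapshot of how much RAM is still usable without swapping. Run it
# repeatedly (e.g. under `watch -n 1`) in a second shell while
# make_dataset.py is running; if MemAvailable falls toward zero, the
# kernel's OOM killer is the likely source of the "Killed" message.
grep MemAvailable /proc/meminfo
```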
You might be able to figure out what's happening by running tail -f /var/log/kern.log in a separate shell while make_dataset.py is running. Alternatively, just run less /var/log/kern.log after seeing a "Killed" message from the system.
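The same record that lands in kern.log can also be read from the kernel ring buffer after the fact; a hedged sketch (dmesg may be restricted for non-root users on some systems, hence the fallback):

```shell
# Search the kernel ring buffer for an OOM-killer record. This is the
# same information tail -f /var/log/kern.log would show live; the echo
# covers systems where dmesg is restricted or where no OOM occurred.
dmesg 2>/dev/null | grep -iE "out of memory|killed process" \
    || echo "no OOM-killer record found"
```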
These are my system specifications.
Yes, my suggestion should still work, at least as a confirmation that the python script is running out of memory.
Please let us know what the results are so that we can address the issue correctly.
Is my system sufficient for doing this?
Yeah, I see 3.7 GB in your window there; that's pretty low, especially since we are building an audio library of 2,563,557,026 bytes (and that's the disk size; the numpy in-memory size is probably larger), and the OS takes a chunk of memory too. So if you cannot increase the amount of RAM available to this machine, I would recommend lowering the max_files_per_directory you provide to make_training_list to something like 1000 or 500.
When I use max_files_per_directory 1000, my output looks like this:
~/tutorial/audio$ python make_dataset.py --list_file ~/tutorial/audio/training_list.txt --featurizer compiled_featurizer/mfcc --sample_rate 8000
Transforming 12 files from
Ah ha, this time we do see "MemoryError", which confirms our suspicion that this is a memory problem. So you will have to keep decreasing max_files_per_directory until you find a value that fits in the available RAM. Training neural networks is known to use lots of RAM; the tutorial should probably state that you need at least 16 GB of RAM or more.
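For a rough sense of why 3.7 GB is not enough: if the ~2.56 GB of 16-bit WAV data is loaded as 64-bit floats, it grows by a factor of 4 in memory (an assumption for illustration, not a measured number), which alone already exceeds this machine's RAM:

```shell
# Back-of-envelope only: 16-bit samples promoted to float64 grow 4x.
disk_bytes=2563557026                  # dataset size on disk, from above
ram_bytes=$((disk_bytes * 4))          # assumed float64 expansion factor
echo "approx RAM needed: $((ram_bytes / 1024 / 1024 / 1024)) GB"
# prints: approx RAM needed: 9 GB
```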
Now I get this for max_files_per_directory 500:
~/tutorial/audio$ python make_dataset.py --list_file ~/tutorial/audio/training_list.txt --featurizer compiled_featurizer/mfcc --sample_rate 8000
Transforming 12 files from
OK! How much can I decrease it?
I don't know; just keep subtracting 100 until it works. But I'm curious what kind of machine this is that has only 3.7 GB of RAM. Is there any way to get more RAM for this machine?
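The "keep subtracting 100" search can be scripted. In this sketch, try_build is a hypothetical stand-in for rerunning make_training_list and make_dataset with the smaller value; here it simply simulates success at 300 so the loop shape is visible:

```shell
# Placeholder: replace the function body with the real make_training_list
# and make_dataset commands; it must return 0 when the build succeeds.
try_build() { [ "$1" -le 300 ]; }      # simulated: "fits in RAM" at <= 300

n=1000
until try_build "$n"; do
    n=$((n - 100))                     # the "keep subtracting 100" step
done
echo "first size that fits: $n"
# prints: first size that fits: 300
```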
Now it works with 300.
Thanks for your support.
Closing this issue.
When I use this command in Ubuntu: "python make_dataset.py --list_file audio/training_list.txt --featurizer compiled_featurizer/mfcc --sample_rate 8000", my output is like this:
~/tutorial/audio$ python make_dataset.py --list_file /home/kharsha/tutorial/audio/training_list.txt --featurizer compiled_featurizer/mfcc --sample_rate 8000
Transforming 12 files from ...
ALSA lib pcm.c:2495:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2495:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2495:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
found 2420 rows
Transforming 1600 files from bed ... found 1600 rows
Transforming 1600 files from bird ... found 1600 rows
Transforming 1600 files from cat ... found 1600 rows
Transforming 1600 files from dog ... found 1600 rows
Transforming 1600 files from down ... found 1600 rows
Transforming 1600 files from eight ... found 1600 rows
Transforming 1600 files from five ... found 1600 rows
Transforming 1600 files from four ... found 1600 rows
Transforming 1600 files from go ... found 1600 rows
Transforming 1600 files from happy ... found 1600 rows
Transforming 1600 files from house ... found 1600 rows
Transforming 1600 files from left ... found 1600 rows
Transforming 1600 files from marvin ... found 1600 rows
Transforming 1600 files from nine ... found 1600 rows
Transforming 1600 files from no ... found 1600 rows
Transforming 1600 files from off ... found 1600 rows
Transforming 1600 files from on ... found 1600 rows
Transforming 1600 files from one ... found 1600 rows
Transforming 1600 files from right ... found 1600 rows
Transforming 1600 files from seven ... found 1600 rows
Transforming 1600 files from sheila ... found 1600 rows
Transforming 1600 files from six ... found 1600 rows
Transforming 1600 files from stop ... found 1600 rows
Transforming 1600 files from three ... found 1600 rows
Transforming 1600 files from tree ... found 1600 rows
Transforming 1600 files from two ... found 1600 rows
Transforming 1600 files from up ... found 1600 rows
Transforming 1600 files from wow ... found 1600 rows
Transforming 1600 files from yes ... found 1600 rows
Transforming 1600 files from zero ... found 1600 rows
Killed
But the validation.npz and testing.npz files were created successfully.