ml5js / training-charRNN

Training charRNN model for ml5js

Training on big files (25+ MB) gets killed #13

Open Yyyyaaaannnnoooo opened 5 years ago

Yyyyaaaannnnoooo commented 5 years ago

I'm training the LSTM on some 80 MB files with the following hyperparameters:

python train.py --data_dir=./data --rnn_size 2048 --num_layers 2 --seq_length 256 --batch_size 128 --output_keep_prob 0.25

but after a few minutes the job gets killed. Is the file too big?
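
(Aside: if the process dies without a Python traceback, a likely culprit on Linux is the kernel OOM killer rather than the file size itself. With rnn_size 2048, num_layers 2, seq_length 256, and batch_size 128, the weights, gradients, optimizer state, and per-step activations kept for backpropagation through time can roughly add up to several GB of RAM. A minimal way to check, assuming a Linux machine and that the kernel logged the kill (the grep pattern may need adjusting for your kernel's exact wording):

sudo dmesg -T | grep -iE 'out of memory|killed process'
journalctl -k | grep -i 'out of memory'

If a matching line names the python process, the job was killed for memory, not because the training data was malformed.)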

lucas-fine commented 4 years ago

I ran this while watching top, and after about a minute my computer froze with the processor at almost 100%. My guess is that your computer killed the job because it was too much to handle. Try an easier command or use a more powerful computer.
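
As a sketch of what an "easier command" might look like, reusing the same train.py flags from the original post (the values below are illustrative only and will affect model quality):

python train.py --data_dir=./data --rnn_size 512 --num_layers 2 --seq_length 128 --batch_size 64 --output_keep_prob 0.25

Reducing rnn_size has the biggest effect, since the LSTM weight matrices grow roughly with the square of the hidden size; smaller seq_length and batch_size also shrink the activations that must be held in memory during backpropagation.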