[Closed] Praying closed this issue 5 years ago
Hi, this is probably because there is not enough RAM on your machine; it needs roughly 8 GB. If you do not have that much RAM, you can instead load the data from the hard disk by setting the option `data_source` to `'npy'` or `'npz'` in `config.yaml`. In that case, there is no need to run `process_data.sh`.
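Concretely, judging from the keys that appear in the training log (`data_source` and `data_filename`), the change amounts to something like the following in `config.yaml`. This is a sketch, not the exact file; the assumption that the shared-memory source is named `'sa'` comes from the repo's sample config:

```yaml
# config.yaml -- load the training data straight from disk
# instead of from shared memory (the 'sa' source).
data_source: 'npz'
data_filename: 'data/train_x_lpd_5_phr.npz'
```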
I changed the config and still got a memory error:
```
(musegan) ran@nuc:~/SourceCode/musegan$ ./scripts/run_train.sh "./exp/my_experiment/" "0"
musegan.train INFO Using parameters: {'beat_resolution': 12, 'condition_track_idx': None, 'data_shape': [4, 48, 84, 5], 'is_accompaniment': False, 'is_conditional': False, 'latent_dim': 128, 'nets': {'discriminator': 'default', 'generator': 'default'}, 'use_binary_neurons': False}
musegan.train INFO Using configurations: {'adam': {'beta1': 0.5, 'beta2': 0.9}, 'batch_size': 64, 'colormap': [[1.0, 0.0, 0.0], [1.0, 0.5, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.5, 1.0]], 'config': './exp/my_experiment//config.yaml', 'data_filename': 'data/train_x_lpd_5_phr.npz', 'data_source': 'npz', 'eval_dir': '/home/ran/SourceCode/musegan/exp/my_experiment/eval', 'evaluate_steps': 100, 'exp_dir': '/home/ran/SourceCode/musegan/exp/my_experiment', 'gan_loss_type': 'wasserstein', 'gpu': '0', 'initial_learning_rate': 0.001, 'learning_rate_schedule': {'end': 50000, 'end_value': 0.0, 'start': 45000}, 'log_dir': '/home/ran/SourceCode/musegan/exp/my_experiment/logs/train', 'log_loss_steps': 100, 'midi': {'is_drums': [1, 0, 0, 0, 0], 'lowest_pitch': 24, 'programs': [0, 0, 25, 33, 48], 'tempo': 100}, 'model_dir': '/home/ran/SourceCode/musegan/exp/my_experiment/model', 'n_dis_updates_per_gen_update': 5, 'n_jobs': 20, 'params': './exp/my_experiment//params.yaml', 'sample_dir': '/home/ran/SourceCode/musegan/exp/my_experiment/samples', 'sample_grid': [8, 8], 'save_array_samples': True, 'save_checkpoint_steps': 10000, 'save_image_samples': True, 'save_pianoroll_samples': True, 'save_samples_steps': 100, 'save_summaries_steps': 0, 'slope_schedule': {'end': 50000, 'end_value': 5.0, 'start': 10000}, 'src_dir': '/home/ran/SourceCode/musegan/exp/my_experiment/src', 'steps': 50000, 'use_gradient_penalties': True, 'use_learning_rate_decay': True, 'use_random_transpose': False, 'use_slope_annealing': False, 'use_train_test_split': False}
musegan.train INFO Loading training data.
Traceback (most recent call last):
  File "/home/ran/SourceCode/musegan/scripts/../src/train.py", line 348, in <module>
    main()
  File "/home/ran/SourceCode/musegan/scripts/../src/train.py", line 172, in main
    train_x, _ = load_training_data(params, config)
  File "/home/ran/SourceCode/musegan/scripts/../src/train.py", line 92, in load_training_data
    data = load_data(config['data_source'], config['data_filename'])
  File "/home/ran/SourceCode/musegan/src/musegan/data.py", line 29, in load_data
    return load_data_from_npz(data_filename)
  File "/home/ran/SourceCode/musegan/src/musegan/data.py", line 17, in load_data_from_npz
    data = np.zeros(f['shape'], np.bool)
MemoryError
```
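For context, the `MemoryError` is raised while the loader materializes the full dense boolean tensor with `np.zeros(f['shape'], np.bool)`. A rough back-of-envelope sketch shows why this needs about 8 GB; the per-sample shape `(4, 48, 84, 5)` comes from the log above, while `n_phrases` is a hypothetical count (the real value is stored in the `.npz` file's `'shape'` entry):

```python
import numpy as np

# Each phrase is a (tracks=4?, time=48, pitch=84, ...=5) boolean block
# per the 'data_shape' shown in the training log; one bool = 1 byte.
n_phrases = 100_000  # hypothetical count, for illustration only
sample_shape = (4, 48, 84, 5)
n_bytes = n_phrases * int(np.prod(sample_shape)) * np.dtype(bool).itemsize
print(f"{n_bytes / 2**30:.1f} GiB")  # prints "7.5 GiB"
```

So even with `data_source: 'npz'`, the loader still builds the whole dense array in RAM; only the shared-memory preprocessing step is skipped.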
Oops. I forgot that you still need ~8 GB of RAM if you want to load the whole training set from the `.npz`/`.npy` file. A possible workaround is to manually drop part of the training data when loading it.
Emm... OK, thanks so much!
```
(musegan) ran@nuc:~/SourceCode/musegan$ ./scripts/process_data.sh
Loading data from '/home/ran/SourceCode/musegan/scripts/../data/train_x_lpd_5_phr.npz'.
Saving data to shared memory.
./scripts/process_data.sh: line 5:  1653 Bus error (core dumped) python3 "$DIR/../src/process_data.py" "$DIR/../data/train_x_lpd_5_phr.npz"
```
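A "Bus error" during "Saving data to shared memory" is commonly caused by `/dev/shm` being too small to hold the ~8 GB array that the preprocessing step writes there. A diagnostic sketch, assuming a Linux tmpfs-backed `/dev/shm` (the 12G size below is just an example):

```shell
# Check how big /dev/shm is; the processed array needs roughly 8 GB.
df -h /dev/shm

# If it is smaller than that, enlarge the tmpfs (example size; needs root):
#   sudo mount -o remount,size=12G /dev/shm
```

If enlarging `/dev/shm` is not an option, the disk-based `data_source: 'npy'`/`'npz'` route discussed above avoids the shared-memory step entirely.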