chuck-park opened this issue 7 years ago
@akadark I'm also experimenting with this and got the same error. From what I've read, control_flow_ops.While has been deprecated since TensorFlow 0.11. Here is a workaround using a virtual environment (assuming you have Anaconda installed):
$ conda create -n tf10_py2.7 python=2.7
$ source activate tf10_py2.7
If you have a Mac:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0-py2-none-any.whl
If you are using Linux:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0-cp27-none-linux_x86_64.whl
$ pip install --upgrade $TF_BINARY_URL
$ pip2 install --force-reinstall --upgrade protobuf
At this point you should be able to generate music.
@gc-cloud Thanks to your answer I got past that problem, but then I ran into the same issue as you, I think. This is my error; if you have any information that might help fix it, please comment again!
chuck@chuck-PC:~/Downloads/Music_RNN_RBM-master$ python rnn_rbm_generate.py parameter_checkpoints/pretrained.ckpt
Traceback (most recent call last):
File "rnn_rbm_generate.py", line 45, in <module>
main(sys.argv[1])
File "rnn_rbm_generate.py", line 32, in main
song_primer = midi_manipulation.get_song(primer_song)
File "/home/chuck/Downloads/Music_RNN_RBM-master/midi_manipulation.py", line 20, in get_song
song = song[:np.floor(song.shape[0]/num_timesteps)*num_timesteps]
TypeError: slice indices must be integers or None or have an __index__ method
@akadark change line 20 in midi_manipulation.py to this:
song = song[:int(np.floor(song.shape[0]/num_timesteps)*num_timesteps)]
With that, you will probably get another float vs integer error, which is what I'm dealing with right now
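The error above comes from a Python 2 vs Python 3 difference: np.floor returns a float, and Python 3 no longer accepts floats as slice indices. A minimal standalone sketch (toy shapes, not the real note-state matrix):

```python
import numpy as np

song = np.zeros((37, 78))   # toy stand-in for a note-state matrix
num_timesteps = 15

# np.floor returns a float, and under Python 3 floats are not valid slice indices.
end = np.floor(song.shape[0] / num_timesteps) * num_timesteps
try:
    song[:end]
except TypeError as e:
    print(e)  # slice indices must be integers or None or have an __index__ method

# Casting to int restores the Python 2 behaviour.
trimmed = song[:int(end)]
print(trimmed.shape)  # → (30, 78)
```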
@gc-cloud I changed a lot of things that I could. So I can't write that in here... I'm so sorry about that. But If you want, I can send you my files that were modified. Please give me your email address. my address is chuck.m.park@gmail.com
I think I managed to make this work with Tensorflow 1.1. Here are all the steps:
1) First, you need to upgrade your Python scripts using the TensorFlow upgrade utility: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/compatibility
2) In midi_manipulation.py change get_song to this:

def get_song(path):
    # Load the song and reshape it to place multiple timesteps next to each other
    song = np.array(midiToNoteStateMatrix(path))
    song = song[:int(np.floor(song.shape[0]/num_timesteps)*num_timesteps)]
    # use integer division so the new shape stays an int under Python 3
    song = np.reshape(song, [song.shape[0]//num_timesteps, song.shape[1]*num_timesteps])
    return song
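A toy demonstration of what this get_song does, with made-up sizes: trim the song to a whole number of num_timesteps blocks, then fold each block of timesteps into one wide row.

```python
import numpy as np

num_timesteps = 15
song = np.arange(37 * 78).reshape(37, 78).astype(np.float64)  # toy note-state matrix

# Trim to a whole number of num_timesteps blocks, then fold each block of
# timesteps into one wide row, exactly as get_song does.
song = song[:int(np.floor(song.shape[0] / num_timesteps) * num_timesteps)]
song = np.reshape(song, [song.shape[0] // num_timesteps, song.shape[1] * num_timesteps])
print(song.shape)  # → (2, 1170)
```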
3) In RBM.py replace the call to the deprecated control_flow_ops:

# [_, _, x_sample] = control_flow_ops.While(lambda count, num_iter, *args: count < num_iter,
#                                           gibbs_step, [ct, tf.constant(k), x], 1, False)
[_, _, x_sample] = tf.while_loop(lambda count, num_iter, *args: count < num_iter, gibbs_step,
                                 [ct, tf.constant(k), x], parallel_iterations=1, back_prop=False)
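For readers unfamiliar with tf.while_loop, here is a pure-Python sketch of the control flow it provides: the condition receives the current loop variables and the body returns their next values. The gibbs_step below is a hypothetical stand-in for the real Gibbs sampling step, not the repo's implementation.

```python
# Pure-Python sketch of the control flow tf.while_loop provides: the condition
# receives the current loop variables, the body returns their next values, and
# the loop stops as soon as the condition is False.
def while_loop(cond, body, loop_vars):
    while cond(*loop_vars):
        loop_vars = body(*loop_vars)
    return list(loop_vars)

def gibbs_step(count, num_iter, x):
    # hypothetical stand-in for one Gibbs sampling step on x
    return count + 1, num_iter, x * 0.5

ct, k, x = 0, 3, 8.0
_, _, x_sample = while_loop(lambda count, num_iter, *args: count < num_iter,
                            gibbs_step, [ct, k, x])
print(x_sample)  # → 1.0
```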
4) In rnn_rbm.py update generate to look like this:

def generate(num, x=x, size_bt=size_bt, u0=u0, n_visible=n_visible, prime_length=100):
    """
    This function handles generating music. This function is one of the outputs of the build_rnnrbm function
    Args:
        num (int): The number of time steps to generate
        x (tf.placeholder): The data vector. We can use feed_dict to set this to the music primer.
        size_bt (tf.float32): The batch size
        u0 (tf.Variable): The initial state of the RNN
        n_visible (int): The size of the data vectors
        prime_length (int): The number of time steps into the primer song that we use before beginning to generate music
    Returns:
        The generated music, as a tf.Tensor
    """
    Uarr = tf.scan(rnn_recurrence, x, initializer=u0)
    # U = Uarr[np.floor(prime_length/midi_manipulation.num_timesteps), :, :]
    U = Uarr[int(np.floor(prime_length / midi_manipulation.num_timesteps)), :, :]
    # [_, _, _, _, _, music] = control_flow_ops.While(lambda count, num_iter, *args: count < num_iter,
    #                                                 generate_recurrence, [tf.constant(1, tf.int32), tf.constant(num), U,
    #                                                                       tf.zeros([1, n_visible], tf.float32), x,
    #                                                                       tf.zeros([1, n_visible], tf.float32)])
    time_steps = tf.constant(1, tf.int32)
    iterations = tf.constant(num)
    u_t = tf.zeros([1, n_visible], tf.float32)
    music = tf.zeros([1, n_visible], tf.float32)
    loop_vars = [time_steps, iterations, U, u_t, x, music]
    [_, _, _, _, _, music] = tf.while_loop(lambda count, num_iter, *args: count < num_iter, generate_recurrence,
                                           loop_vars,
                                           shape_invariants=[time_steps.get_shape(), iterations.get_shape(),
                                                             U.get_shape(), u_t.get_shape(),
                                                             x.get_shape(), tf.TensorShape([None, 780])])
    return music
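The shape_invariants argument is needed because the music tensor grows by one timestep per iteration, so its first dimension is not fixed. A NumPy sketch of that growth pattern (sizes taken from the code above):

```python
import numpy as np

n_visible = 780
music = np.zeros((1, n_visible))   # same starting shape as the TF version

# Each generation step appends one new timestep, so the first dimension of
# `music` changes across iterations; tf.while_loop must be told via
# shape_invariants (tf.TensorShape([None, 780])) that this axis is not fixed.
for _ in range(5):
    new_timestep = np.random.rand(1, n_visible)
    music = np.concatenate([music, new_timestep], axis=0)

print(music.shape)  # → (6, 780)
```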
Thank you for the information! Unfortunately, my laptop broke... I will try it as soon as possible!
Thanks very much!
I got my new laptop! Do you have anything special to add?
I've got the same error, and I've partially solved it thanks to the modifications suggested by gc-cloud in midi_manipulation.py, RBM.py and rnn_rbm.py. Now my problem is the generation of polyphonies (rnn_rbm_generate.py). If I use main(sys.argv[1]), the answer is IndexError: list index out of range. If I use main(sys.argv[1:]), I obtain:
])
Maybe it is necessary to change this file as well? gc-cloud, could you please help me? I'm not an expert in Python and TensorFlow... Laura
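That IndexError usually just means the script was run without the checkpoint path argument. A minimal sketch with a simulated sys.argv (the real script expects the checkpoint path as its first argument):

```python
# sys.argv[0] is the script name; sys.argv[1] is the first command-line argument.
# Running `python rnn_rbm_generate.py` with no checkpoint path leaves argv with
# only one element, so argv[1] raises IndexError while argv[1:] is simply [].
argv = ["rnn_rbm_generate.py"]          # simulated sys.argv with no arguments

try:
    checkpoint = argv[1]
except IndexError:
    checkpoint = None
print(argv[1:])  # → []
```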
Same error as @marialaura73. @marialaura73, did you get it solved? Thanks in advance, Hrishikesh
I am stuck at the first part of your README.md. I have installed the dependencies, from pandas to python-midi, with pip. What am I missing?