igormq / ctc_tensorflow_example

CTC + Tensorflow Example for ASR
MIT License

Problems with training deep RNN and with multiple examples #6

Closed razor1179 closed 7 years ago

razor1179 commented 7 years ago

Hi,

I'm currently trying to train the RNN with multiple training inputs. What format do the 'train_targets' and 'train_inputs' variables have to be in to successfully train the network? I thought of concatenating all the inputs, but since train_seq_len and the number of targets per input vary, I cannot concatenate the entire database.

Also, if I increase the number of layers beyond one, the training cost does not decrease sufficiently (still using only one example). Is this because there is not enough data? If so, shouldn't the RNN resort to over-fitting but still predict the correct output?

Regards, Deepak

igormq commented 7 years ago

Hi, @razor1179. Sorry for the delay on answering your question.

If I understood this right, you want to train this model with more than one example, am I right?

To do this, you need to generate an input tensor of size [n_batch, max_timesteps, n_features], where max_timesteps is the maximum input length across the batch. train_seq_len is a one-dimensional vector of shape (n_batch,) holding the length of each input; this helps the network compute the cost function and unroll correctly. Finally, your targets must be a sparse tensor of shape (n_batch, max_utterance_size).
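A minimal numpy sketch of the batching described above (the helper name and variables are illustrative, not from this repo):

```python
import numpy as np

def pad_batch(inputs):
    """Pad a list of [timesteps, n_features] arrays into a single
    [n_batch, max_timesteps, n_features] tensor, and return the
    original lengths as a (n_batch,) vector for train_seq_len."""
    n_batch = len(inputs)
    max_timesteps = max(x.shape[0] for x in inputs)
    n_features = inputs[0].shape[1]
    padded = np.zeros((n_batch, max_timesteps, n_features), dtype=np.float32)
    seq_len = np.zeros(n_batch, dtype=np.int32)
    for i, x in enumerate(inputs):
        padded[i, :x.shape[0], :] = x   # zero-pad the tail of shorter inputs
        seq_len[i] = x.shape[0]         # keep the true length for ctc_loss
    return padded, seq_len

# Two utterances of different lengths, 3 features each
a = np.random.rand(50, 3)
b = np.random.rand(75, 3)
batch, seq_len = pad_batch([a, b])
print(batch.shape)      # (2, 75, 3)
print(seq_len.tolist()) # [50, 75]
```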

For your second question: did you initialize the weights in the proper way?

Best regards, IMQ

zergioguitar commented 7 years ago

Hi, I've been studying this code to try to build my own speech recognition for Latin American Spanish. I'm trying to use audiobook files to train my net, but I can't get a good result when I switch to another file. I can save and restore the model pretty well, but when I use a female voice to train and a male voice to recognize, I only get some pieces of words. What could be the problem? Should I apply some audio filters? How can I add more features to the inputs (e.g. MFCC, FFT, amplitude, etc.)?

Regards,

Sergio

razor1179 commented 7 years ago

Hi @igormq, I followed the instructions to generate an input tensor of size [n_batch, max_timesteps, n_features], but since multiple training samples do not have the same number of timesteps, I padded them with zeros to match the sample that has the max_timesteps. The train_seq_len, however, holds the actual timestep count of each input sample in a one-dimensional vector of shape (n_batch,). Is this correct? Also, in the example you chose the MomentumOptimizer; is there any reason you picked it out of the many optimizers provided by TensorFlow?

fangbiyi commented 7 years ago

Hi @igormq,

I've been trying to write a version of my own so as to use CTC loss to process languages. I came across one thing that I cannot figure out even after going through your code and the official TensorFlow documentation. It would be great if you could help me:

When we feed the labels (targets) into ctc_loss, how do we organize the tuple for the sparse tensor? My understanding is that each row of indices corresponds to [batch, time]; for a batch size of 2, it can be [0, 0], [0, 1], [1, 0], [1, 1], representing the labels of two sentences with two words (or letters) in each one?

Hi @razor1179 @zergioguitar, I noticed that you both have some experience playing with this; could you kindly help me with this one?

Thanks in advance! Biyi

razor1179 commented 7 years ago

@jackyfff ,

From my understanding, the targets are usually a 1D list per utterance; converting them to the appropriate sparse tensor can be done using the code suggested here: http://stackoverflow.com/questions/42127505/tensorflow-dense-to-sparse. Hope this helps.
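To make the (indices, values, shape) layout concrete, here is a numpy sketch of that conversion (a hypothetical helper, not code from this repo); each indices row is [batch, position-in-label-sequence], which matches the [0, 0], [0, 1], [1, 0], [1, 1] pattern asked about above:

```python
import numpy as np

def sparse_tuple_from(sequences):
    """Convert a list of 1D label sequences into the (indices, values,
    shape) triple that tf.SparseTensor / ctc_loss targets expect."""
    indices, values = [], []
    for batch_i, seq in enumerate(sequences):
        # One (batch, position) index per label in this utterance
        indices.extend([batch_i, pos] for pos in range(len(seq)))
        values.extend(seq)
    indices = np.asarray(indices, dtype=np.int64)
    values = np.asarray(values, dtype=np.int32)
    shape = np.asarray([len(sequences), max(len(s) for s in sequences)],
                       dtype=np.int64)
    return indices, values, shape

# Batch of two label sequences of different lengths
indices, values, shape = sparse_tuple_from([[1, 2, 3], [4, 5]])
print(indices.tolist())  # [[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]]
print(values.tolist())   # [1, 2, 3, 4, 5]
print(shape.tolist())    # [2, 3]
```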