anuragmishracse / caption_generator

A modular library built on top of Keras and TensorFlow to generate a caption in natural language for any input image.
MIT License

I couldn't get the weights-improvement-48.hdf5 #22

Open suikammd opened 6 years ago

suikammd commented 6 years ago

After training, I noticed that only the following weight files were generated: weights-improvement-01.hdf5, weights-improvement-02.hdf5, weights-improvement-03.hdf5, and weights-improvement-04.hdf5. No other .hdf5 files were produced. Can anyone tell me what the problem is? I have checked that the path to my training dataset is correct and that the number of epochs is 50.

suikammd commented 6 years ago

@anuragmishracse I'm sorry to disturb you. After testing, the generated captions seem to be composed of similar repeated fragments.

SrivalyaElluru commented 6 years ago

Can you please upload the weights-improvement-48.hdf5 file? The captions generated from weights-improvement-04.hdf5 are not good.

wrat commented 6 years ago

I am also facing the same problem. After training the network, only weights-improvement-01.hdf5, weights-improvement-02.hdf5, weights-improvement-03.hdf5, weights-improvement-04.hdf5, and weights-improvement-05.hdf5 are generated; there is no further improvement of the weights. Can anyone share a solution, or can you please upload the weights-improvement-48.hdf5 file?

suikammd commented 6 years ago

Sorry, I only got up to the weights-improvement-8.hdf5 file and I cannot figure out why.

wrat commented 6 years ago

@anuragmishracse Can you please see this issue ?

sudhakar-sah commented 6 years ago

@suikammd The reason may be that your loss is not decreasing, so Keras is not saving any further weight files (this behaviour can be configured in the Keras checkpoint callback during training).
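
For reference, a minimal sketch of the checkpoint behaviour being described. The monitored metric and filename pattern are assumptions based on the files named in this thread, not necessarily the exact settings used in this repo:

```python
from keras.callbacks import ModelCheckpoint

# With save_best_only=True, Keras writes a new weights-improvement-XX.hdf5
# only for epochs where the monitored metric improves, so a 50-epoch run can
# still end with just a handful of weight files if the loss plateaus early.
checkpoint = ModelCheckpoint(
    "weights-improvement-{epoch:02d}.hdf5",  # filename pattern seen in this thread
    monitor="loss",       # assumption: the repo may monitor val_loss instead
    save_best_only=True,  # skip epochs where the metric did not improve
    verbose=1,
)

# Illustrative training call; the repo's actual fit/fit_generator call may differ.
# model.fit(X, y, epochs=50, callbacks=[checkpoint])
```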

ShixiangWan commented 6 years ago

@sudhakar-sah After changing the batch size to 1024, the loss of the network is decreasing, but the captions from the best weights are still not good. For example: [image]

sudhakar-sah commented 6 years ago

@ShixiangWan I tried this repo a few days ago and my loss stopped decreasing after 2-3 epochs. I am planning to work on this repo. There are a few things we could do; for example, we could train a word2vec model on the captions and initialize the embedding layer with those weights (a sketch follows below). Are you still working on it? If you like, we can work on it together.
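
A rough sketch of that idea (not code from this repo): train a gensim word2vec model on the tokenized captions and copy its vectors into the Keras embedding layer. The toy captions, word-to-id mapping, and embedding size below are illustrative assumptions; gensim 4.x calls the dimension `vector_size` (older 3.x used `size`), and `weights=[...]` is the classic Keras 2-style way to seed an Embedding layer.

```python
import numpy as np
from gensim.models import Word2Vec
from keras.layers import Embedding

# Toy tokenized captions standing in for the real training captions.
captions = [["a", "dog", "runs", "on", "the", "grass"],
            ["a", "man", "rides", "a", "horse"]]

# Hypothetical word -> integer-id mapping (e.g. from a keras Tokenizer); id 0 is padding.
word_index = {w: i + 1 for i, w in enumerate(sorted({w for c in captions for w in c}))}

embedding_dim = 128  # assumed embedding size
w2v = Word2Vec(sentences=captions, vector_size=embedding_dim, window=5, min_count=1)

# Build an embedding matrix whose rows line up with the tokenizer's word ids.
embedding_matrix = np.zeros((len(word_index) + 1, embedding_dim))
for word, idx in word_index.items():
    if word in w2v.wv:
        embedding_matrix[idx] = w2v.wv[word]

# Seed the caption model's embedding layer with the word2vec vectors;
# trainable=True lets them keep adapting during caption training.
embedding_layer = Embedding(input_dim=len(word_index) + 1,
                            output_dim=embedding_dim,
                            weights=[embedding_matrix],
                            trainable=True)
```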

Kinghup commented 5 years ago

Did you solve this problem? I have the same problem: the loss stops decreasing and the model is only saved for the first 3 epochs. What should I do to reduce the loss?

Kinghup commented 5 years ago

@sudhakar-sah Are you still working on the repo now? I am running into many problems; please help me.