anuragmishracse / caption_generator

A modular library built on top of Keras and TensorFlow to generate a caption in natural language for any input image.
MIT License
265 stars 120 forks

Network does not converge, bad captions #9

Open PavlosMelissinos opened 6 years ago

PavlosMelissinos commented 6 years ago

Hello,

I've followed your instructions and started training the network. The loss reaches its minimum after about 5 epochs and then starts to rise again.

After 50 epochs, the generated captions of the best epoch (5th or 6th) look like this:

Predicting for image: 992
2351479551_e8820a1ff3.jpg : exercise lamb Fourth headphones facing pasta soft her soft her soft her soft her soft her dads college soft her dads college soft her her her her her soft her her her her her soft her her her her
Predicting for image: 993
3514179514_cbc3371b92.jpg : fist graffitti soft her soft her Hollywood Fourth Crowd soft her her soft her her her her her soft her her her her her her soft her her her her soft her her her her soft her her her
Predicting for image: 994
1119015538_e8e796281e.jpg : closeout security soft her soft her security fall soft her her her her her fall soft her her her her her her soft her her her her her soft her her her her soft her her her her her
Predicting for image: 995
3727752439_907795603b.jpg : roots college Fourth tree-filled o swing-set places soft her soft her her soft her her soft her her college soft her her her her her her her soft her her her her soft her her her her her her

Any idea what's wrong?
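
(Side note for anyone hitting this: one way to automatically keep the weights of that best epoch is with standard Keras callbacks. A minimal sketch; `train_generator` and `steps` are placeholders, not names from this repo.)

```python
from keras.callbacks import ModelCheckpoint, EarlyStopping

# Save only the weights of the epoch with the lowest training loss,
# and stop once the loss has stopped improving instead of running
# all 50 epochs while the model diverges.
callbacks = [
    ModelCheckpoint('best_weights.h5', monitor='loss',
                    save_best_only=True, verbose=1),
    EarlyStopping(monitor='loss', patience=5, verbose=1),
]

# model.fit_generator(train_generator, steps_per_epoch=steps,
#                     epochs=50, callbacks=callbacks)
```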

MikhailovSergei commented 6 years ago

Hi, I've also faced this problem. Let's work together to solve it. My email: msaburoj@gmail.com. Waiting for your answer.

anuragmishracse commented 6 years ago

It's been a while since I worked on this repo. I'll try to retrain it and reproduce this error sometime next week, and see if something needs to change.

Meanwhile, @PavlosMelissinos and @MikhailovSergei if you were able to debug this, feel free to update and send a pull request.

MikhailovSergei commented 6 years ago

OK), I'll try too.

MikhailovSergei commented 6 years ago

Hello, do you have the Flickr_30k.trainimages.txt and Flickr_30k.testimages.txt files? I can't find these files anywhere =( The official site doesn't let me download them. I have the images; I just need these files.

lopezlaura commented 6 years ago

Hello, I am facing the exact same problem; please let me know if you find a solution. @MikhailovSergei I have just sent you an email.

MikhailovSergei commented 6 years ago

Hi, I'm glad to receive your comment. I changed the batch size: I set it to 1500 instead of 32 in caption_generator.py and train_model.py. After 43-45 epochs it works a little better. Please let me know about your results, and if you find any better ways)))

anuragmishracse commented 6 years ago

@MikhailovSergei @lopezlaura It actually depends on the dataset. Different datasets will typically require us to tune the hyperparameters to get optimal captions; it's not a given that the same hyperparameters can be reused.

Things that you can try:

  1. Change the batch size; try keeping it at 1024 (see the sketch below for where it goes).
  2. Change the learning rate; it can help you reach an optimum.
  3. Change the optimization algorithm.

If it helps you improve your model, do post your results here for others to refer to.
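
For concreteness, here is a minimal sketch of where each of these knobs lives in a Keras script. The tiny model below is a placeholder, not this repo's actual architecture, and all values are illustrative:

```python
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.optimizers import Adam

vocab_size, max_len = 8256, 40  # placeholder values

# Placeholder network standing in for the caption model.
model = Sequential([
    Embedding(vocab_size, 256, input_length=max_len),
    LSTM(256),
    Dense(vocab_size, activation='softmax'),
])

# Knobs 2 and 3: the learning rate and the optimizer are set at compile time.
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=1e-4),
              metrics=['accuracy'])

# Knob 1: the batch size is passed at fit time, e.g.:
# model.fit(X, y, batch_size=1024, epochs=50)
```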

MikhailovSergei commented 6 years ago

So what batch_size is better for Flickr 8k?

aashimasingh commented 6 years ago

I am facing the same issue with Flickr8k: the captions don't make any sense, and particular words get repeated in every sentence. Somehow it works better on a subset of 100 images than on the entire dataset. I have tried changing the batch size, but it didn't help. Could you give any suggestions?

EriCongMa commented 6 years ago

After I trained the model, it gave me the following result:

yielding count: 599098
yielding count: 599099
...
yielding count: 599110
Epoch 00050: loss did not improve
 - 1177s - loss: 6.7838 - acc: 0.3085
Training complete...

You can see the loss is high and the accuracy is low. Meanwhile, when I run test_model, all of the output sentences are the same. I want to know where to change the learning rate, and which optimization algorithm would be better?

BTW, can you share your weight file with me? My email address is macong275262544@outlook.com. Thanks very much.

kashyap32 commented 6 years ago

Changing the batch size can improve accuracy; try it with 1024. And can you share the model.save file with me? Email: kashyap32raval@gmail.com. Thanks!

zbj6633 commented 6 years ago

I am a university student. Can you share the model.save file with me? I want to see the effect. Email: zbj6633@qq.com. Thanks!

MikhailovSergei commented 6 years ago

But if we use a batch size of 1024, it will overfit.

zbj6633 commented 6 years ago

@MikhailovSergei How much GPU memory does a batch size of 1024 need?

b10112157 commented 6 years ago

Can you share the model.save file with me? My network doesn't converge either. Email: b10112157@gmail.com @MikhailovSergei @kashyap32 @army3401 @aashimasingh @lopezlaura

Thanks

ShixiangWan commented 6 years ago

My network doesn't converge either. So maybe this is a bug. :(

b10112157 commented 6 years ago

Do you have any other project for image-to-caption? If you have a project that runs on Windows 10, can you give it to me?


ShixiangWan commented 6 years ago

@b10112157 Sorry, I have no other image caption projects, and no Windows 10 image caption projects. But here is the TensorBoard screenshot for this:

image

b10112157 commented 6 years ago

Can you share your best weight file?


ShixiangWan commented 6 years ago

@b10112157 Thanks for your kind help. This is my best weight and model file (epochs=50, batch_size=32): https://drive.google.com/open?id=1DlfecYfiPlViFCh1h9Op_6puaTAKwN0N

b10112157 commented 6 years ago

What's the accuracy? And which epoch had the best accuracy?


ShixiangWan commented 6 years ago

@b10112157 As shown in the TensorBoard screenshot above, the best loss is 5.502 (at step 5), and the accuracy at that point is 0.3267.

ShixiangWan commented 6 years ago

@army3401 A batch size of 1024 needs ~4.2 GB of GPU memory. This is my test on a single K80 GPU: image

b10112157 commented 6 years ago

My GPU is a GTX 1060 6 GB; training with batch size 1024 gives an error, but batch size 512 is OK.


ShixiangWan commented 6 years ago

@b10112157 Thanks. I am trying batch size 1024, and the loss curve now looks clearly better than with batch size 32. So maybe the small batch size of 32 causes the oscillation.

b10112157 commented 6 years ago

Can you share your batch-1024 weight file? With batch size 1024 I got an OOM error, so my batch-512 run around epoch 15x is my best, but the accuracy is only 0.6.


ShixiangWan commented 6 years ago

@b10112157 This is the whole model file for batch size 1024: https://drive.google.com/open?id=1rK5OkeCAb_kJLKR6EKlVqd_HzlZrjrYn

TensorBoard screenshot: image

But I sampled and tested some pictures just now, and the captions are bad. For example: image

b10112157 commented 6 years ago

it"s model okay? mymail :b10112157@gmail.com can u contact me?


cynthia0811 commented 6 years ago

@ShixiangWan Hey. Have you managed to fix the bad captioning performance and get higher accuracy?

zhenming33 commented 6 years ago

It's not about the model. Just replace 'unique = list(set(unique))' with 'unique = sorted(set(unique), key=unique.index)' in caption_generator.py, and the results start to make sense. With my batch size, my final loss is 2.23, and the results look like: 1536144644 1
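
For anyone wondering why this one-line change matters: Python's set has no guaranteed iteration order, so a vocabulary built with list(set(...)) can come out in a different order on different runs, and the word-to-index mapping used at training time then no longer matches the one rebuilt at test time. Sorting by first occurrence makes the order deterministic. A self-contained illustration (the variable names are placeholders, not the repo's exact code):

```python
# Words collected from all training captions, in encounter order.
words = ['a', 'dog', 'runs', 'a', 'dog', 'barks']

# Non-deterministic: set iteration order is not guaranteed to be
# stable across Python processes, so indices assigned here at
# training time can differ at prediction time.
unique = list(set(words))

# Deterministic: keep first-occurrence order.
unique = sorted(set(words), key=words.index)

word2idx = {w: i for i, w in enumerate(unique)}
idx2word = {i: w for i, w in enumerate(unique)}
print(word2idx)  # {'a': 0, 'dog': 1, 'runs': 2, 'barks': 3}
```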

Kinghup commented 5 years ago

(Quoting @ShixiangWan) This is the whole model file for batch size 1024: https://drive.google.com/open?id=1rK5OkeCAb_kJLKR6EKlVqd_HzlZrjrYn ... But I sampled and tested some pictures just now, and the captions are bad.

Wow, that's great! I have the same problem, and I added a BN layer to stabilize the loss, but the best model's loss is 4.7 and the accuracy is 0.37. Did you just adjust the batch size to 1024?
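
(For reference, one common way to add such a BN layer in Keras. This is a sketch of a placeholder image-feature branch, not this repo's exact architecture; 4096 is just the usual VGG16 fc-feature size.)

```python
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization, Activation

# Placeholder image-feature branch with batch normalization inserted
# between the linear projection and its activation.
image_model = Sequential([
    Dense(256, input_dim=4096),
    BatchNormalization(),  # normalizes activations; can smooth the loss curve
    Activation('relu'),
])
```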

Kinghup commented 5 years ago

(Quoting @zhenming33) It's not about the model, just replace 'unique = list(set(unique))' with 'unique = sorted(set(unique), key=unique.index)' in caption_generator.py ...

How did you solve the problem? I tried your solution but it doesn't work; the captions are rambling and make no sense. I don't know what's wrong, please help me.


a494456818 commented 5 years ago

I don't think setting batch_size to 32 will let the training converge. I made the following settings:

  1. batch_size = 512
  2. @zhenming33's vocabulary-ordering fix

With these, it converged at the 45th epoch with a loss of 2.4+. When I set batch_size to 1024 instead, it converged at epoch 49 with a loss of 1.5+. image If you need the weight files, please let me know your email address.