Cloud-CV / visual-chatbot

:cloud: :eyes: :speech_balloon: Visual Chatbot
http://visualchatbot.cloudcv.org

Cannot find encoders/decoders files #11

Closed tzs930 closed 5 years ago

tzs930 commented 5 years ago

Hi, I'm having some trouble running visual-chatbot. I changed the `load_path` string in chat/constants.py from 'models/hre-qih-g-10.t7' to 'models/hre-ques-im-hist-gen-vgg16-14.t7', because no hre-qih-g-10.t7 file appears after running the download_model.sh script.

VISDIAL_CONFIG = {
    'input_json': 'data/chat_processed_params_0.9.json',
    'load_path': 'models/hre-ques-im-hist-gen-vgg16-14.t7',
    'result_path': 'results',
    'gpuid': 0,
    'backend': 'cudnn',
    'proto_file': 'models/VGG_ILSVRC_16_layers_deploy.prototxt',
    'model_file': 'models/VGG_ILSVRC_16_layers.caffemodel',
    'beamSize': 5,
    'beamLen': 20,
    'sampleWords': 0,
    'temperature': 1.0,
    'maxThreads': 500,
    'encoder': 'hre-ques-im-hist',
    'decoder': 'disc'
}
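Judging from the traceback further below, worker.py resolves the `encoder` and `decoder` names from this config to Lua files under `encoders/` and `decoders/`. A minimal sketch (my own, not part of the repo) that checks whether those files and the checkpoint exist before launching the worker, assuming that `encoders/<encoder>.lua` / `decoders/<decoder>.lua` lookup convention:

```python
import os

# Mirrors the relevant VISDIAL_CONFIG entries above (values copied from the config)
config = {
    'encoder': 'hre-ques-im-hist',
    'decoder': 'disc',
    'load_path': 'models/hre-ques-im-hist-gen-vgg16-14.t7',
}

# Assumption based on the traceback: the Lua model definitions are looked up
# as encoders/<encoder>.lua and decoders/<decoder>.lua relative to the repo root.
paths = [
    os.path.join('encoders', config['encoder'] + '.lua'),
    os.path.join('decoders', config['decoder'] + '.lua'),
    config['load_path'],
]

for p in paths:
    status = 'ok' if os.path.isfile(p) else 'MISSING'
    print('%-50s %s' % (p, status))
```

Running this from the repo root would have flagged `encoders/hre-ques-im-hist.lua` as missing before the worker ever touched the Caffe model.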

When I run python worker.py, the output is as follows:

~/visual-chatbot$ python worker.py 
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 553432081
Successfully loaded models/VGG_ILSVRC_16_layers.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
fc6: 1 1 25088 4096
fc7: 1 1 4096 4096
fc8: 1 1 4096 1000
DataLoader loading h5 file:     data/chat_processed_params_0.9.json
Vocabulary size (with <START>,<END>): 8847

Setting up model..
Encoder:    hre-ques-im-hist
Decoder:    gen
Traceback (most recent call last):
  File "worker.py", line 38, in <module>
    constants.VISDIAL_CONFIG['decoder'],
  File "/home/xai/.local/lib/python2.7/site-packages/PyTorch-4.1.1_SNAPSHOT-py2.7-linux-x86_64.egg/PyTorchHelpers.py", line 20, in __init__
  File "/home/xai/.local/lib/python2.7/site-packages/PyTorch-4.1.1_SNAPSHOT-py2.7-linux-x86_64.egg/PyTorchAug.py", line 255, in __init__
Exception: cannot open encoders/hre-ques-im-hist.lua: No such file or directory

I assumed the encoder/decoder Lua files would be generated automatically when the model file is loaded, but they are not.

  1. Which of the recently downloaded files is correct for `load_path`? I think chat/constants.py should be updated accordingly.
  2. Are the encoder/decoder Lua files generated automatically, or do I need to download/install them myself?
abhshkdz commented 5 years ago

You could use the encoder/decoder definitions from here:
https://github.com/batra-mlp-lab/visdial/tree/master/encoders
https://github.com/batra-mlp-lab/visdial/tree/master/decoders
They're not auto-generated.
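One way to pull those definitions into the directories worker.py expects is to shallow-clone the visdial repo and copy the .lua files over. A sketch, assuming the repository layout at the URLs above; the /tmp clone location is an arbitrary choice:

```shell
# Fetch the Lua encoder/decoder definitions into the directories
# that worker.py expects (encoders/ and decoders/ in the repo root).
mkdir -p encoders decoders

# Shallow-clone the visdial repo and copy the definitions over.
if git clone --depth 1 https://github.com/batra-mlp-lab/visdial.git /tmp/visdial; then
    cp /tmp/visdial/encoders/*.lua encoders/
    cp /tmp/visdial/decoders/*.lua decoders/
else
    echo "clone failed; download the .lua files manually from the URLs above"
fi
```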

tzs930 commented 5 years ago

Thank you for your help!! :)