Closed · YananSunn closed this issue 2 years ago
Hi,

The `module` keyword appears when you use DataParallel for the network model. Our pretrained params are from a single GPU machine, so there was no GPU parallelism involved. You can do one of two things to get around this (both are sketched just below):

1. Load the pretrained weights into the underlying model rather than the DataParallel wrapper, or
2. Rename the state-dict keys to add `module.` at the front of the key names, like it expects.
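Not part of the original reply, but here is a minimal sketch of both options, using a toy `nn.Linear` in place of the actual s2ag generator and a synthetic single-GPU state dict (in this thread the real weights sit under `loaded_vars['gen_model_dict']` in the checkpoint):

```python
import torch.nn as nn

# Toy stand-ins: the real generator is wrapped in nn.DataParallel inside
# processor_v2.py, and the real weights come from the downloaded checkpoint.
model = nn.DataParallel(nn.Linear(4, 4))
single_gpu_state = nn.Linear(4, 4).state_dict()  # keys carry no "module." prefix

# Option 1: load into the wrapped module directly, bypassing DataParallel.
model.module.load_state_dict(single_gpu_state)

# Option 2: add the "module." prefix that DataParallel expects, then load.
prefixed = {'module.' + k: v for k, v in single_gpu_state.items()}
model.load_state_dict(prefixed)
```

In this repo's case, the equivalent change would go where the error is raised (the `load_state_dict` call inside `load_model_at_epoch` in processor_v2.py), e.g. calling it on `self.s2ag_generator.module` instead, or prefixing the keys of `loaded_vars['gen_model_dict']` before loading.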
Hi, thanks for the great work!
I met an error when I tried to run the code with the pretrained model.
Following the previous issues, I downloaded these files:

- lmdb_test_s2ag_v2_cache_mfcc_14
- lmdb_train_s2ag_v2_cache_mfcc_14
- lmdb_val_s2ag_v2_cache_mfcc_14
- vocab_models_s2ag
- vocab_models
- speaker_models
- trimodal_gen.pth.tar
- epoch_290loss-0.0048_model.pth.tar

Then I modified the base path and the config YAML, and ran the command `python main_v2.py --train-s2ag False --config ./config/multimodal_context_v2.yml`.
However, I met this error:

```
Traceback (most recent call last):
  File "main_v2.py", line 147, in <module>
    s2ag_epoch=290, make_video=True, save_pkl=True)
  File "/data/sunyn/speech2affective_gestures/processor_v2.py", line 1436, in generate_gestures_by_dataset
    s2ag_model_found = self.load_model_at_epoch(epoch=s2ag_epoch)
  File "/data/sunyn/speech2affective_gestures/processor_v2.py", line 362, in load_model_at_epoch
    self.s2ag_generator.load_state_dict(loaded_vars['gen_model_dict'])
  File "/data/sunyn/miniconda3/envs/s2ag/lib/python3.7/site-packages/torch/nn/modules/module.py", line 847, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DataParallel:
```
I noticed that the only difference between the missing keys and the unexpected keys is the "module." prefix. Could you help me fix this bug? Thanks so much!
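For anyone debugging the same message, one quick way to confirm that diagnosis is to diff the checkpoint keys against the keys the DataParallel-wrapped model expects. A hedged sketch with toy modules (the real generator class lives in processor_v2.py):

```python
import torch.nn as nn

# Toy stand-ins; the real generator and checkpoint come from the repo.
wrapped = nn.DataParallel(nn.Linear(4, 4))            # what load_state_dict is called on
checkpoint_keys = set(nn.Linear(4, 4).state_dict())   # single-GPU checkpoint keys

expected_keys = set(wrapped.state_dict())
print(expected_keys - checkpoint_keys)  # missing keys, e.g. {'module.weight', 'module.bias'}
print(checkpoint_keys - expected_keys)  # unexpected keys, e.g. {'weight', 'bias'}
```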