Closed 11WUYU closed 4 years ago
The error is:
File "main.py", line 65, in
You need to inflate the number of depth channels at the initial convolutional layer kernels.
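The suggestion above can be sketched in code. This is a hypothetical helper (not part of the Real-time-GesRec repo) that inflates a single-channel Depth kernel in a checkpoint's first conv layer to three RGB channels by repeating the weights and scaling by 1/3, so the layer's activations keep roughly the same magnitude; the key name "module.conv1.weight" is an assumption and may differ in the actual model.

```python
import torch

def inflate_first_conv(state_dict, key="module.conv1.weight", target_in=3):
    """Repeat single-channel (Depth) kernels across target_in input channels.

    Dividing by target_in keeps the summed response comparable to the
    original single-channel response. The key name is an assumption;
    inspect your checkpoint to find the real first-conv parameter name.
    """
    w = state_dict[key]                      # e.g. shape [64, 1, 7, 7, 7]
    if w.shape[1] == target_in:
        return state_dict                    # already inflated
    w = w.repeat(1, target_in, 1, 1, 1) / target_in
    state_dict[key] = w
    return state_dict
```

After this transformation the checkpoint's first layer matches an RGB model's input shape and the rest of the weights can be loaded unchanged.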
During training, I changed "if not opt.no_train" in the main function to "if opt.no_train", and I changed the default value of no_train in opt.py to True. I don't know whether these changes are correct.
@11WUYU, you can specify the pretrained model's modality with the following parameter: --pretrain_modality
#!/bin/bash
python3 main.py \
--root_path ~/ \
--video_path /dataset/EGO \
--annotation_path /Real-time-GesRec/annotation_EgoGestur/egogestureall_but_None.json \
--result_path Real-time-GesRec/results \
--pretrain_path /Real-time-GesRec/models/egogesture_resnext_101_Depth_32.pth \
--dataset egogesture \
--sample_duration 32 \
--learning_rate 0.01 \
--model resnext \
--model_depth 101 \
--resnet_shortcut B \
--batch_size 1 \
--n_classes 83 \
--n_finetune_classes 83 \
--n_threads 16 \
--checkpoint 1 \
--modality RGB \
--pretrain_modality Depth \
--train_crop random \
--n_val_samples 1 \
--test_subset test \
--n_epochs 100 \
--no_train \
--no_val \
--test
@ahmetgunduz But it said: unrecognized arguments: --pretrain_modality Depth
@ahmetgunduz Sorry, I was using your previous code, which has no pretrain_modality. I downloaded your current code and found that there are still some problems:
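The "unrecognized arguments" error simply means the older opt.py never declared that option. A minimal sketch of how such an option could be declared with argparse (the exact names and defaults in the repo's opt.py are assumptions here):

```python
import argparse

# Hypothetical sketch: if --pretrain_modality is not declared here,
# argparse raises "unrecognized arguments" when it is passed on the CLI.
parser = argparse.ArgumentParser()
parser.add_argument("--modality", default="RGB", type=str,
                    help="Modality used for training/testing")
parser.add_argument("--pretrain_modality", default="RGB", type=str,
                    help="Modality of the pretrained checkpoint (RGB, Depth, ...)")

opt = parser.parse_args(["--modality", "RGB", "--pretrain_modality", "Depth"])
```

So the fix is to use a version of the code whose opt.py contains this add_argument call, rather than passing the flag to the old code.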
size mismatch for module.layer1.0.conv1.weight: copying a param with shape torch.Size([128, 64, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 1, 1, 1]).
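A shape mismatch like the one above means some checkpoint tensors cannot be copied into the current model. One common workaround, sketched here as a hypothetical helper (not the repo's own loading code), is to copy only the parameters whose shapes match and report the rest, so you can see exactly which layers are incompatible:

```python
import torch

def load_matching(model_sd, ckpt_sd):
    """Copy checkpoint params into model_sd only where shapes match.

    Returns the updated state dict and the list of skipped keys.
    Skipped layers keep their (randomly initialized) model weights.
    """
    loaded, skipped = {}, []
    for k, v in ckpt_sd.items():
        if k in model_sd and model_sd[k].shape == v.shape:
            loaded[k] = v
        else:
            skipped.append(k)
    model_sd.update(loaded)
    return model_sd, skipped
```

If many layers are skipped, the architecture hyperparameters (e.g. model depth, cardinality, sample duration) likely differ between the checkpoint and the current model, and partial loading will not help much.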
Can you share your bash script? It is probably an issue with the sample duration.
@ahmetgunduz
I am now using Jester's RGB model to train egogesture's RGB model. Do you think this is correct?
python3 main.py \
--root_path ~/ \
--video_path /dataset/EGO \
--annotation_path Real-time-GesRec-master/annotation_EgoGesture/egogestureall_but_None.json \
--result_path Real-time-GesRec-master/results \
--pretrain_path /Real-time-GesRec-master/models/jester_resnext_101_RGB_32.pth \
--dataset egogesture \
--sample_duration 32 \
--learning_rate 0.01 \
--model resnext \
--model_depth 101 \
--resnet_shortcut B \
--batch_size 1 \
--n_classes 27 \
--n_finetune_classes 83 \
--n_threads 16 \
--checkpoint 1 \
--modality RGB \
--train_crop random \
--n_val_samples 1 \
--test_subset test \
--n_epochs 100 \
--no_train \
--no_val \
--test
The main thing is that I changed "if not opt.no_train" to "if opt.no_train" in main.py, and I changed the default value of no_train from False to True in opt.py. I'm not sure this is the right thing to do, but it gets the training running.
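Editing main.py and opt.py should not be necessary if no_train is a standard store_true flag, which is the usual argparse pattern (an assumption about the repo's opt.py): the flag defaults to False and only becomes True when --no_train is passed, so to run training you simply omit the flag from the command line instead of inverting the condition in the code.

```python
import argparse

# Sketch of the usual store_true pattern for a --no_train flag.
parser = argparse.ArgumentParser()
parser.add_argument("--no_train", action="store_true",
                    help="If set, training is skipped")

opt = parser.parse_args([])             # flag omitted -> no_train is False, training runs
opt_skip = parser.parse_args(["--no_train"])  # flag passed -> training skipped
```

Inverting "if not opt.no_train" while also flipping the default is a double negation that happens to work, but dropping --no_train (and --no_val) from the script is the cleaner fix.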
And I don't understand the difference between using Jester's RGB model to pretrain the RGB model of egogesture and using egogesture's Depth model to pretrain the RGB model of egogesture. Which is better?
I downloaded all the models you provided. There is an egogesture_resnext_1.0x_RGB_32_checkpoint.pth; is this the RGB classification model for egogesture?
Hello dear author. Because I want to test RGB images on the EgoGesture dataset, I want to use the RGB model. Since there is no RGB model available, I tried to use the Depth model to pretrain the RGB model, but I encountered an error that the number of channels does not match. How can I solve this?
And I would like to ask the following: can the detector also be trained from a Depth pretrained model? If not, how do I train the RGB model of the detector?
Here are the parameters I used in training. I changed no_train and no_val in opt.py to True; I don't know whether this modification is right.
#!/bin/bash
python3 main.py \
--root_path ~/ \
--video_path /dataset/EGO \
--annotation_path /Real-time-GesRec/annotation_EgoGestur/egogestureall_but_None.json \
--result_path Real-time-GesRec/results \
--pretrain_path /Real-time-GesRec/models/egogesture_resnext_101_Depth_32.pth \
--dataset egogesture \
--sample_duration 32 \
--learning_rate 0.01 \
--model resnext \
--model_depth 101 \
--resnet_shortcut B \
--batch_size 1 \
--n_classes 83 \
--n_finetune_classes 83 \
--n_threads 16 \
--checkpoint 1 \
--modality Color \
--train_crop random \
--n_val_samples 1 \
--test_subset test \
--n_epochs 100 \
--no_train \
--no_val \
--test