jbohnslav / deepethogram

Inaccurate Predictions? #34

Closed · toujames closed 3 years ago

toujames commented 3 years ago

Hi,

I have general questions about the workflow described in the Getting Started documents. We're evaluating how accurately the model infers behaviors compared to a manual coder/labeler.

We're using ICC to compare the manual coder against the model. For some behaviors we're getting good ICC, and for others not so great.
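(As a concrete sketch of this comparison, assuming per-video behavior totals from each "rater" and using pingouin's intraclass_corr; the numbers here are made up:)

import pandas as pd
import pingouin as pg

# Made-up example: total frames of one behavior per video, scored by each rater.
df = pd.DataFrame({
    "video": ["A", "A", "B", "B", "C", "C", "D", "D"],
    "rater": ["human", "model"] * 4,
    "frames": [120, 131, 45, 12, 300, 298, 80, 77],
})

icc = pg.intraclass_corr(data=df, targets="video", raters="rater", ratings="frames")
print(icc[["Type", "ICC", "CI95%"]])  # pick the ICC form that matches your rater design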

If we keep correcting the incorrect behaviors the model predicts, wouldn't this be the same as re-training on the dataset again?

Our method is: 1) labeling a video (with the correct behaviors), 2) training the flow generator / feature extractor / sequence models, 3) comparing the inference with the manual coder (this is where we are).
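(For reference, these steps map onto CLI entry points like the one visible in the inference log further down; the exact train module paths below are assumptions inferred from that log and the run-directory names, so check the docs:)

python -m deepethogram.flow_generator.train project.config_file=/path/to/project_config.yaml
python -m deepethogram.feature_extractor.train project.config_file=/path/to/project_config.yaml
python -m deepethogram.sequence.train project.config_file=/path/to/project_config.yaml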

Once we're at step 3 and the model isn't accurate enough, can we not just go back to step 2 instead of importing predictions as labels?

Thank you. James

jbohnslav commented 3 years ago

ICC == intraclass correlation?

If we keep correcting the incorrect behaviors the model predicts, wouldn't this be the same as re-training on the dataset again?

Yes, all you're doing is labeling more data and re-training the same models. The reason for the "import predictions as labels" function is that, per the paper, >90% of the elements of your ethogram should be correct, so importing predictions and editing them should be much faster than labeling the whole video from scratch.
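(A quick, hypothetical way to sanity-check that number on one of your own videos, with labels and preds as binary frames-by-behaviors arrays; the arrays below are simulated:)

import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=(1000, 15))   # stand-in for a manual ethogram
preds = labels.copy()
flip = rng.random(labels.shape) < 0.05         # pretend the model disagrees on 5% of elements
preds[flip] = 1 - preds[flip]

print(f"{(preds == labels).mean():.1%} of elements correct")  # >90% means editing beats relabeling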

Do you find that it's not faster to import predictions? Can you tell me a bit more about the performance issues?

toujames commented 3 years ago

Yeah, ICC = intraclass correlation.

I can import the predictions; there's no issue there. My issue isn't a performance issue (unless you mean how well the model is able to predict behaviors?), it's the method.

Let's say I have 5 videos I used for training, all manually labeled with my behaviors. I then train the Flow Generator, Feature Extractor, and sequence models.

I run inference after the sequence model on a new video (let's call it Video A). The problem is that the behaviors the model predicts for Video A aren't as accurate as those from someone who labeled Video A manually.

Without using Video A as another training video, how can I make the model better?

jbohnslav commented 3 years ago

My issue isn't a performance issue (unless you mean how well the model is able to predict behaviors?)

I do mean that!

The problem is that the behaviors the model predicts for Video A aren't as accurate as those from someone who labeled Video A manually. Without using Video A as another training video, how can I make the model better?

I should make a performance guide somewhere. The best way to improve the model would be to have someone label more videos. Everything after that depends a bit on your data and problem.

If you describe the model's performance issues in a bit more detail, I can give more customized feedback.

toujames commented 3 years ago

I should make a performance guide somewhere.

That would be awesome. I use your guides religiously. :)

If you find that the model has too many false-positives on rare behaviors, you can decrease the train.loss_weight_exp from 1.0 to 0.5

I'll try adjusting those parameters.
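(For intuition on what that parameter does: a hedged sketch, assuming per-class loss weights scale like the negative/positive ratio raised to loss_weight_exp; check the deepethogram source for the exact formula.)

import numpy as np

pos_frac = np.array([0.50, 0.10, 0.01])   # common, uncommon, and rare behaviors
ratio = (1 - pos_frac) / pos_frac         # raw class-imbalance ratio

for exp in (1.0, 0.5):
    print(exp, np.round(ratio ** exp, 2))
# 1.0 [ 1.    9.   99.  ]  -> rare classes heavily up-weighted (more false positives)
# 0.5 [ 1.    3.    9.95] -> gentler up-weighting (fewer false positives on rare classes)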

If you have plenty of data, try moving from deg_f to deg_m

How do I use deg_m instead? In the model list I only see deg_f and other models I have created.

jbohnslav commented 3 years ago

How do I use deg_m instead? In the model list I only see deg_f and other models I have created.

To change to deg_m, you would have to change the preset in the config file. However, this would entail retraining all 3 models, which is quite arduous.
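(A minimal sketch of that edit in project_config.yaml, with key names taken from the config dump later in this thread; the surrounding file layout may differ:)

preset: deg_m
train:
  loss_weight_exp: 0.5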

toujames commented 3 years ago

Thanks, I have changed the config file preset to 'deg_m'. I've also changed the loss_weight_exp value to 0.5.

I'll go ahead and retrain to see if I get better performance using those parameters.

toujames commented 3 years ago

So I used the deg_m preset, and I got an error trying to run inference after the feature extractor.

Here's the error output. I'm also seeing a lot of warnings, which might be an indication it didn't train correctly?

(deg) james@libr-rt2:~$ python -m deepethogram
[2021-02-03 14:56:29,221][deepethogram.gui.main][INFO] - CWD: /home/james/gui_logs/210203_145629_None
[2021-02-03 14:56:29,222][deepethogram.gui.main][INFO] - Configuration used: cmap: deepethogram
control_arrow_jump: 31
label_view_width: 31
notes: null
postprocessor:
  min_bout_length: 2
  type: min_bout
prediction_opacity: 0.2
run:
  type: gui
unlabeled_alpha: 0.1
vertical_arrow_jump: 3

[2021-02-03 14:56:40,486][deepethogram.gui.main][INFO] - loaded project configuration: {'augs': {'LR': 0.5, 'UD': 0.0, 'brightness': 0.25, 'contrast': 0.1, 'crop_size': None, 'degrees': 10, 'normalization': {'N': 2441084928, 'mean': [0.5027036828932421, 0.48513365068507, 0.4612504518190313], 'std': [0.2723013599876207, 0.2523336957666119, 0.23650593991303887]}, 'pad': None, 'random_resize': False, 'resize': [224, 224]}, 'preset': 'deg_m', 'compute': {'batch_size': 32, 'distributed': False, 'gpu_id': 0, 'num_workers': 8}, 'project': {'class_names': ['background', 'PL', 'PC', 'PZ', 'PN', 'HL', 'HC', 'HN', 'HZ', 'H999', 'TL', 'TC', 'TN', 'TZ', 'T999'], 'config_file': '/home/james/Documents/deepethogram_test3/project_config.yaml', 'data_path': 'DATA', 'labeler': 'j_m_k', 'model_path': 'models', 'name': 'deepethogram_test3', 'path': '/home/james/Documents/deepethogram_test3'}, 'sequence': {'filter_length': 15}, 'split': {'file': None, 'reload': True}, 'train': {'loss_weight_exp': 0.5}, 'flow_generator': {'weights': '/home/james/Documents/deepethogram_test3/models/210125_124934_flow_generator_train_None/checkpoint.pt'}}
[2021-02-03 14:56:40,591][deepethogram.gui.main][INFO] - Number finalized labels: 8
[2021-02-03 14:56:41,143][deepethogram.gui.main][INFO] - Record for loaded video: {'flow': None, 'label': '/home/james/Documents/deepethogram_test3/DATA/Max_XX318_B4_front/Max_XX318_B4_front_labels.csv', 'output': None, 'rgb': '/home/james/Documents/deepethogram_test3/DATA/Max_XX318_B4_front/Max_XX318_B4_front.mp4', 'key': 'Max_XX318_B4_front'}
[2021-02-03 14:56:56,028][deepethogram.gui.main][INFO] - inference running with args: ['python', '-m', 'deepethogram.feature_extractor.inference', 'project.config_file=/home/james/Documents/deepethogram_test3/project_config.yaml', 'inference.overwrite=True', 'feature_extractor.weights=/home/james/Documents/deepethogram_test3/models/200410_142156_hidden_two_stream_kinetics_degm/checkpoint.pt', 'flow_generator.weights=/home/james/Documents/deepethogram_test3/models/pretrained/200310_174416_MotionNet_kinetics/checkpoint.pt', 'inference.directory_list=[/home/james/Documents/deepethogram_test3/DATA/Max_XX318_B4_front]']
[2021-02-03 14:56:58,177][__main__][INFO] - configuration used in inference: 
[2021-02-03 14:56:58,183][__main__][INFO] - augs:
  LR: 0.5
  UD: 0.0
  brightness: 0.25
  contrast: 0.1
  crop_size: null
  dali: false
  degrees: 10
  normalization:
    N: 2441084928
    mean:
    - 0.5027036828932421
    - 0.48513365068507
    - 0.4612504518190313
    std:
    - 0.2723013599876207
    - 0.2523336957666119
    - 0.23650593991303887
  pad: null
  random_resize: false
  resize:
  - 224
  - 224
compute:
  batch_size: 32
  dali: false
  distributed: false
  fp16: false
  gpu_id: 0
  num_workers: 8
feature_extractor:
  arch: resnet18
  curriculum: true
  dropout_p: 0.9
  final_activation: sigmoid
  fusion: average
  inputs: both
  n_flows: 10
  n_rgb: 1
  sampler: null
  sampling_ratio: null
  weight_decay: 0
  weights: /home/james/Documents/deepethogram_test3/models/200410_142156_hidden_two_stream_kinetics_degm/checkpoint.pt
flow_generator:
  arch: TinyMotionNet
  flow_loss: MotionNet
  flow_max: 10
  flow_sparsity: false
  input_images: 11
  loss: MotionNet
  max: 10
  n_rgb: 11
  smooth_weight_multiplier: 1.0
  sparsity_weight: 0.0
  type: flow_generator
  weights: /home/james/Documents/deepethogram_test3/models/pretrained/200310_174416_MotionNet_kinetics/checkpoint.pt
inference:
  directory_list:
  - /home/james/Documents/deepethogram_test3/DATA/Max_XX318_B4_front
  ignore_error: true
  overwrite: true
notes: null
preset: deg_m
project:
  class_names:
  - background
  - PL
  - PC
  - PZ
  - PN
  - HL
  - HC
  - HN
  - HZ
  - H999
  - TL
  - TC
  - TN
  - TZ
  - T999
  config_file: /home/james/Documents/deepethogram_test3/project_config.yaml
  data_path: /home/james/Documents/deepethogram_test3/DATA
  labeler: j_m_k
  model_path: /home/james/Documents/deepethogram_test3/models
  name: deepethogram_test3
  path: /home/james/Documents/deepethogram_test3
reload:
  latest: false
  overwrite_cfg: false
  weights: null
run:
  model: feature_extractor
  type: inference
sequence:
  filter_length: 15
  latent_name: null
split:
  file: null
  reload: true
  train_val_test:
  - 0.8
  - 0.2
  - 0.0
train:
  loss_weight_exp: 0.5

[2021-02-03 14:56:58,184][deepethogram.dataloaders][INFO] -  ~~~ augmentations ~~~
[2021-02-03 14:56:58,185][deepethogram.dataloaders][INFO] - {'test': Compose(
    Resize(size=[224, 224], interpolation=bilinear)
    ToTensor()
    Normalize(mean=[0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313], std=[0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887])
),
 'train': Compose(
    Resize(size=[224, 224], interpolation=bilinear)
    RandomHorizontalFlip(p=0.5)
    RandomRotation(degrees=(-10, 10), resample=False, expand=False)
    ColorJitter(brightness=[0.75, 1.25], contrast=[0.9, 1.1], saturation=None, hue=None)
    ToTensor()
    Normalize(mean=[0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313], std=[0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887])
),
 'val': Compose(
    Resize(size=[224, 224], interpolation=bilinear)
    ToTensor()
    Normalize(mean=[0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313, 0.5027036828932421, 0.48513365068507, 0.4612504518190313], std=[0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887, 0.2723013599876207, 0.2523336957666119, 0.23650593991303887])
)}
[2021-02-03 14:56:58,295][deepethogram.projects][INFO] - loading specified weights
[2021-02-03 14:56:58,521][deepethogram.utils][INFO] - loading component spatial from file /home/james/Documents/deepethogram_test3/models/200410_142156_hidden_two_stream_kinetics_degm/checkpoint.pt
[2021-02-03 14:56:58,521][deepethogram.utils][INFO] - loading from checkpoint file /home/james/Documents/deepethogram_test3/models/200410_142156_hidden_two_stream_kinetics_degm/spatial/checkpoint.pt...
[2021-02-03 14:57:00,551][deepethogram.utils][WARNING] - 0.layer1.0.conv1.weight has different size: pretrained:torch.Size([64, 64, 1, 1]) model:torch.Size([64, 64, 3, 3])
[2021-02-03 14:57:00,551][deepethogram.utils][WARNING] - 0.layer1.0.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,551][deepethogram.utils][WARNING] - 0.layer1.0.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,551][deepethogram.utils][WARNING] - 0.layer1.0.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,551][deepethogram.utils][WARNING] - 0.layer1.0.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,551][deepethogram.utils][WARNING] - 0.layer1.0.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,551][deepethogram.utils][WARNING] - 0.layer1.0.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.0.downsample.0.weight not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.0.downsample.1.weight not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.0.downsample.1.bias not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.0.downsample.1.running_mean not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.0.downsample.1.running_var not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.0.downsample.1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.1.conv1.weight has different size: pretrained:torch.Size([64, 256, 1, 1]) model:torch.Size([64, 64, 3, 3])
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.1.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.1.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.1.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.1.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.1.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.1.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.2.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.2.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.2.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.2.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.2.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,552][deepethogram.utils][WARNING] - 0.layer1.2.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer1.2.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer1.2.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer1.2.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer1.2.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer1.2.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer1.2.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer1.2.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer1.2.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer1.2.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer1.2.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer1.2.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer1.2.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer2.0.conv1.weight has different size: pretrained:torch.Size([128, 256, 1, 1]) model:torch.Size([128, 64, 3, 3])
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer2.0.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer2.0.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer2.0.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer2.0.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer2.0.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer2.0.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,553][deepethogram.utils][WARNING] - 0.layer2.0.downsample.0.weight has different size: pretrained:torch.Size([512, 256, 1, 1]) model:torch.Size([128, 64, 1, 1])
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.0.downsample.1.weight has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.0.downsample.1.bias has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.0.downsample.1.running_mean has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.0.downsample.1.running_var has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.1.conv1.weight has different size: pretrained:torch.Size([128, 512, 1, 1]) model:torch.Size([128, 128, 3, 3])
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.1.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.1.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.1.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.1.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.1.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.1.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.2.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.2.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.2.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.2.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.2.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.2.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.2.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.2.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.2.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,554][deepethogram.utils][WARNING] - 0.layer2.2.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.2.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.2.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.2.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.2.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.2.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.2.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.2.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.2.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,555][deepethogram.utils][WARNING] - 0.layer2.3.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer2.3.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer2.3.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer2.3.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer2.3.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.0.conv1.weight has different size: pretrained:torch.Size([256, 512, 1, 1]) model:torch.Size([256, 128, 3, 3])
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.0.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.0.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.0.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.0.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.0.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.0.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.0.downsample.0.weight has different size: pretrained:torch.Size([1024, 512, 1, 1]) model:torch.Size([256, 128, 1, 1])
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.0.downsample.1.weight has different size: pretrained:torch.Size([1024]) model:torch.Size([256])
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.0.downsample.1.bias has different size: pretrained:torch.Size([1024]) model:torch.Size([256])
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.0.downsample.1.running_mean has different size: pretrained:torch.Size([1024]) model:torch.Size([256])
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.0.downsample.1.running_var has different size: pretrained:torch.Size([1024]) model:torch.Size([256])
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.1.conv1.weight has different size: pretrained:torch.Size([256, 1024, 1, 1]) model:torch.Size([256, 256, 3, 3])
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.1.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.1.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,556][deepethogram.utils][WARNING] - 0.layer3.1.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.1.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.1.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.1.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.2.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.3.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,557][deepethogram.utils][WARNING] - 0.layer3.3.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.3.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.4.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.4.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.4.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.4.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,558][deepethogram.utils][WARNING] - 0.layer3.4.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.4.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.5.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.5.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.5.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.5.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.5.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,559][deepethogram.utils][WARNING] - 0.layer3.5.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer3.5.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer3.5.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer3.5.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer3.5.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer3.5.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer3.5.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer3.5.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer3.5.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer3.5.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer3.5.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer3.5.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer3.5.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer4.0.conv1.weight has different size: pretrained:torch.Size([512, 1024, 1, 1]) model:torch.Size([512, 256, 3, 3])
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer4.0.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer4.0.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer4.0.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer4.0.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer4.0.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer4.0.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,560][deepethogram.utils][WARNING] - 0.layer4.0.downsample.0.weight has different size: pretrained:torch.Size([2048, 1024, 1, 1]) model:torch.Size([512, 256, 1, 1])
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.0.downsample.1.weight has different size: pretrained:torch.Size([2048]) model:torch.Size([512])
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.0.downsample.1.bias has different size: pretrained:torch.Size([2048]) model:torch.Size([512])
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.0.downsample.1.running_mean has different size: pretrained:torch.Size([2048]) model:torch.Size([512])
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.0.downsample.1.running_var has different size: pretrained:torch.Size([2048]) model:torch.Size([512])
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.1.conv1.weight has different size: pretrained:torch.Size([512, 2048, 1, 1]) model:torch.Size([512, 512, 3, 3])
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.1.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.1.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.1.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.1.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.1.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.1.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.2.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.2.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.2.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.2.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.2.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.2.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.2.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.2.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,561][deepethogram.utils][WARNING] - 0.layer4.2.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 0.layer4.2.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 0.layer4.2.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 0.layer4.2.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 0.layer4.2.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 0.layer4.2.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 0.layer4.2.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 0.layer4.2.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 0.layer4.2.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 0.layer4.2.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 0.compression_fc.0.weight not found in model dictionary
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 0.compression_fc.0.bias not found in model dictionary
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 1.weight has different size: pretrained:torch.Size([700, 512]) model:torch.Size([15, 512])
[2021-02-03 14:57:00,562][deepethogram.utils][WARNING] - 1.bias has different size: pretrained:torch.Size([700]) model:torch.Size([15])
[2021-02-03 14:57:00,907][deepethogram.utils][INFO] - loading component flow from file /home/james/Documents/deepethogram_test3/models/200410_142156_hidden_two_stream_kinetics_degm/checkpoint.pt
[2021-02-03 14:57:00,908][deepethogram.utils][INFO] - loading from checkpoint file /home/james/Documents/deepethogram_test3/models/200410_142156_hidden_two_stream_kinetics_degm/flow/checkpoint.pt...
[2021-02-03 14:57:00,965][deepethogram.utils][WARNING] - 0.layer1.0.conv1.weight has different size: pretrained:torch.Size([64, 64, 1, 1]) model:torch.Size([64, 64, 3, 3])
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.0.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.0.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.0.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.0.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.0.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.0.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.0.downsample.0.weight not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.0.downsample.1.weight not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.0.downsample.1.bias not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.0.downsample.1.running_mean not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.0.downsample.1.running_var not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.0.downsample.1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.1.conv1.weight has different size: pretrained:torch.Size([64, 256, 1, 1]) model:torch.Size([64, 64, 3, 3])
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.1.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.1.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.1.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.1.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.1.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.1.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,966][deepethogram.utils][WARNING] - 0.layer1.2.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer1.2.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer2.0.conv1.weight has different size: pretrained:torch.Size([128, 256, 1, 1]) model:torch.Size([128, 64, 3, 3])
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer2.0.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer2.0.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,967][deepethogram.utils][WARNING] - 0.layer2.0.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.0.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.0.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.0.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.0.downsample.0.weight has different size: pretrained:torch.Size([512, 256, 1, 1]) model:torch.Size([128, 64, 1, 1])
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.0.downsample.1.weight has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.0.downsample.1.bias has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.0.downsample.1.running_mean has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.0.downsample.1.running_var has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.1.conv1.weight has different size: pretrained:torch.Size([128, 512, 1, 1]) model:torch.Size([128, 128, 3, 3])
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.1.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.1.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.1.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.1.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.1.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.1.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.2.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.2.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.2.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,968][deepethogram.utils][WARNING] - 0.layer2.2.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.2.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.3.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.3.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.3.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.3.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.3.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.3.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.3.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.3.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,969][deepethogram.utils][WARNING] - 0.layer2.3.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer2.3.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer2.3.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer2.3.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer2.3.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer2.3.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer2.3.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer2.3.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer2.3.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer2.3.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer3.0.conv1.weight has different size: pretrained:torch.Size([256, 512, 1, 1]) model:torch.Size([256, 128, 3, 3])
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer3.0.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer3.0.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer3.0.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer3.0.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer3.0.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer3.0.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer3.0.downsample.0.weight has different size: pretrained:torch.Size([1024, 512, 1, 1]) model:torch.Size([256, 128, 1, 1])
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer3.0.downsample.1.weight has different size: pretrained:torch.Size([1024]) model:torch.Size([256])
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer3.0.downsample.1.bias has different size: pretrained:torch.Size([1024]) model:torch.Size([256])
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer3.0.downsample.1.running_mean has different size: pretrained:torch.Size([1024]) model:torch.Size([256])
[2021-02-03 14:57:00,970][deepethogram.utils][WARNING] - 0.layer3.0.downsample.1.running_var has different size: pretrained:torch.Size([1024]) model:torch.Size([256])
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.1.conv1.weight has different size: pretrained:torch.Size([256, 1024, 1, 1]) model:torch.Size([256, 256, 3, 3])
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.1.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.1.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.1.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.1.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.1.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.1.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,971][deepethogram.utils][WARNING] - 0.layer3.2.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.2.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.2.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.2.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.2.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,972][deepethogram.utils][WARNING] - 0.layer3.3.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.4.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.5.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.5.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.5.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.5.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,973][deepethogram.utils][WARNING] - 0.layer3.5.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer3.5.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer4.0.conv1.weight has different size: pretrained:torch.Size([512, 1024, 1, 1]) model:torch.Size([512, 256, 3, 3])
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer4.0.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer4.0.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer4.0.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer4.0.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer4.0.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer4.0.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer4.0.downsample.0.weight has different size: pretrained:torch.Size([2048, 1024, 1, 1]) model:torch.Size([512, 256, 1, 1])
[2021-02-03 14:57:00,974][deepethogram.utils][WARNING] - 0.layer4.0.downsample.1.weight has different size: pretrained:torch.Size([2048]) model:torch.Size([512])
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.0.downsample.1.bias has different size: pretrained:torch.Size([2048]) model:torch.Size([512])
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.0.downsample.1.running_mean has different size: pretrained:torch.Size([2048]) model:torch.Size([512])
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.0.downsample.1.running_var has different size: pretrained:torch.Size([2048]) model:torch.Size([512])
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.1.conv1.weight has different size: pretrained:torch.Size([512, 2048, 1, 1]) model:torch.Size([512, 512, 3, 3])
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.1.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.1.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.1.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.1.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.1.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.1.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.2.conv1.weight not found in model dictionary
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.2.bn1.weight not found in model dictionary
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.2.bn1.bias not found in model dictionary
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.2.bn1.running_mean not found in model dictionary
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.2.bn1.running_var not found in model dictionary
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.2.bn1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,975][deepethogram.utils][WARNING] - 0.layer4.2.conv2.weight not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.layer4.2.bn2.weight not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.layer4.2.bn2.bias not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.layer4.2.bn2.running_mean not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.layer4.2.bn2.running_var not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.layer4.2.bn2.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.layer4.2.conv3.weight not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.layer4.2.bn3.weight not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.layer4.2.bn3.bias not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.layer4.2.bn3.running_mean not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.layer4.2.bn3.running_var not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.layer4.2.bn3.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.compression_fc.0.weight not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 0.compression_fc.0.bias not found in model dictionary
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 1.weight has different size: pretrained:torch.Size([700, 512]) model:torch.Size([15, 512])
[2021-02-03 14:57:00,976][deepethogram.utils][WARNING] - 1.bias has different size: pretrained:torch.Size([700]) model:torch.Size([15])
/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/deepethogram-0.0.1.post1-py3.7.egg/deepethogram/flow_generator/models/TinyMotionNet.py:49: UserWarning: ignoring flow div value of 10: setting to 1 instead
  warnings.warn('ignoring flow div value of {}: setting to 1 instead'.format(flow_div))
[2021-02-03 14:57:01,218][deepethogram.projects][INFO] - loading specified weights
[2021-02-03 14:57:01,218][deepethogram.utils][INFO] - loading from checkpoint file /home/james/Documents/deepethogram_test3/models/pretrained/200310_174416_MotionNet_kinetics/checkpoint.pt...
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv1.0.weight has different size: pretrained:torch.Size([64, 33, 3, 3]) model:torch.Size([64, 33, 7, 7])
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv1_1.0.weight not found in model dictionary
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv1_1.0.bias not found in model dictionary
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv1_1.1.weight not found in model dictionary
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv1_1.1.bias not found in model dictionary
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv1_1.1.running_mean not found in model dictionary
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv1_1.1.running_var not found in model dictionary
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv1_1.1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv2.0.weight has different size: pretrained:torch.Size([128, 64, 3, 3]) model:torch.Size([128, 64, 5, 5])
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv2_1.0.weight not found in model dictionary
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv2_1.0.bias not found in model dictionary
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv2_1.1.weight not found in model dictionary
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv2_1.1.bias not found in model dictionary
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv2_1.1.running_mean not found in model dictionary
[2021-02-03 14:57:01,326][deepethogram.utils][WARNING] - conv2_1.1.running_var not found in model dictionary
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv2_1.1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv3_1.0.weight not found in model dictionary
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv3_1.0.bias not found in model dictionary
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv3_1.1.weight not found in model dictionary
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv3_1.1.bias not found in model dictionary
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv3_1.1.running_mean not found in model dictionary
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv3_1.1.running_var not found in model dictionary
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv3_1.1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv4.0.weight has different size: pretrained:torch.Size([512, 256, 3, 3]) model:torch.Size([128, 256, 3, 3])
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv4.0.bias has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv4.1.weight has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv4.1.bias has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv4.1.running_mean has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv4.1.running_var has different size: pretrained:torch.Size([512]) model:torch.Size([128])
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv4_1.0.weight not found in model dictionary
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv4_1.0.bias not found in model dictionary
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv4_1.1.weight not found in model dictionary
[2021-02-03 14:57:01,327][deepethogram.utils][WARNING] - conv4_1.1.bias not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv4_1.1.running_mean not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv4_1.1.running_var not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv4_1.1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5.0.weight not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5.0.bias not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5.1.weight not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5.1.bias not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5.1.running_mean not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5.1.running_var not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5.1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5_1.0.weight not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5_1.0.bias not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5_1.1.weight not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5_1.1.bias not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5_1.1.running_mean not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5_1.1.running_var not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv5_1.1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv6.0.weight not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv6.0.bias not found in model dictionary
[2021-02-03 14:57:01,328][deepethogram.utils][WARNING] - conv6.1.weight not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - conv6.1.bias not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - conv6.1.running_mean not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - conv6.1.running_var not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - conv6.1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - conv6_1.0.weight not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - conv6_1.0.bias not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - conv6_1.1.weight not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - conv6_1.1.bias not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - conv6_1.1.running_mean not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - conv6_1.1.running_var not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - conv6_1.1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - deconv5.0.weight not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - deconv5.0.bias not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - deconv4.0.weight not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - deconv4.0.bias not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - deconv3.0.weight has different size: pretrained:torch.Size([788, 128, 4, 4]) model:torch.Size([128, 128, 4, 4])
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - deconv2.0.weight has different size: pretrained:torch.Size([404, 64, 4, 4]) model:torch.Size([128, 64, 4, 4])
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - xconv5.0.weight not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - xconv5.0.bias not found in model dictionary
[2021-02-03 14:57:01,329][deepethogram.utils][WARNING] - xconv5.1.weight not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - xconv5.1.bias not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - xconv5.1.running_mean not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - xconv5.1.running_var not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - xconv5.1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - xconv4.0.weight not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - xconv4.0.bias not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - xconv4.1.weight not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - xconv4.1.bias not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - xconv4.1.running_mean not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - xconv4.1.running_var not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - xconv4.1.num_batches_tracked not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - predict_flow6.weight not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - predict_flow5.weight not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - predict_flow4.weight has different size: pretrained:torch.Size([20, 256, 3, 3]) model:torch.Size([20, 128, 3, 3])
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - upsampled_flow6_to_5.weight not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - upsampled_flow6_to_5.bias not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - upsampled_flow5_to_4.weight not found in model dictionary
[2021-02-03 14:57:01,330][deepethogram.utils][WARNING] - upsampled_flow5_to_4.bias not found in model dictionary
[2021-02-03 14:57:01,552][deepethogram.projects][INFO] - loading specified weights
Traceback (most recent call last):
  File "/home/james/anaconda3/envs/deg/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/james/anaconda3/envs/deg/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/deepethogram-0.0.1.post1-py3.7.egg/deepethogram/feature_extractor/inference.py", line 317, in <module>
    main()
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/hydra/main.py", line 24, in decorated_main
    strict=strict,
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/hydra/_internal/utils.py", line 174, in run_hydra
    overrides=args.overrides,
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 86, in run
    job_subdir_key=None,
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/hydra/plugins/common/utils.py", line 109, in run_job
    ret.return_value = task_function(task_cfg)
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/deepethogram-0.0.1.post1-py3.7.egg/deepethogram/feature_extractor/inference.py", line 292, in main
    thresholds = f['threshold_curves']['val']['optimum'][:]
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/h5py/_hl/group.py", line 264, in __getitem__
    oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: "Unable to open object (object 'threshold_curves' doesn't exist)"
[2021-02-03 14:57:02,061][deepethogram.gui.main][INFO] - Inference finished.

I didn't have any issues with the flow generator or feature extractor during training.

Here's the train.log output from feature extractor training: train.log
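The KeyError above means the HDF5 file that inference reads never had a threshold_curves group written into it. A quick way to see what the file actually contains (the path below is a placeholder; point it at whichever .h5 file the feature extractor wrote under DATA):

import h5py

# Placeholder path -- substitute one of the .h5 files inference produced.
path = "DATA/VideoA/VideoA_outputs.h5"

with h5py.File(path, "r") as f:
    # Print every group and dataset name, so it's obvious whether
    # 'threshold_curves' exists and what was actually saved.
    f.visit(print)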

jbohnslav commented 3 years ago

It looks like feature extractor inference didn't work-- it tried to load your DEG_M model weights, but into a DEG_F model. Does your project_config.yaml have preset: deg_m?
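As a toy PyTorch sketch of what those hundreds of size-mismatch warnings mean (the layer sizes below mirror the final classifier lines in the log; everything else is made up for illustration): loading a checkpoint into a model whose layers have different shapes forces the loader to skip the mismatched tensors, which then stay randomly initialized.

import torch.nn as nn

# "Pretrained" head sized for 700 Kinetics classes vs. a project head
# sized for 15 behaviors, as in the "1.weight has different size" warning.
pretrained_head = nn.Linear(512, 700)
project_head = nn.Linear(512, 15)

# Keep only tensors whose names and shapes both match; everything else is
# skipped and stays randomly initialized -- one WARNING line per skipped key.
target = project_head.state_dict()
usable = {k: v for k, v in pretrained_head.state_dict().items()
          if k in target and v.shape == target[k].shape}
result = project_head.load_state_dict(usable, strict=False)
print(f"loaded {len(usable)} tensors; missing: {result.missing_keys}")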

toujames commented 3 years ago

Do the single quotes matter? Here's the project_config.yaml file I used.

augs:
  LR: 0.5
  UD: 0.0
  brightness: 0.25
  contrast: 0.1
  crop_size: null
  degrees: 10
  normalization:
    N: 2441084928
    mean:
    - 0.5027036828932421
    - 0.48513365068507
    - 0.4612504518190313
    std:
    - 0.2723013599876207
    - 0.2523336957666119
    - 0.23650593991303887
  pad: null
  random_resize: false
  resize:
  - 224
  - 224
preset: 'deg_m'
compute:
  batch_size: 32
  distributed: false
  gpu_id: 0
  num_workers: 8
project:
  class_names:
  - background
  - PL
  - PC
  - PZ
  - PN
  - HL
  - HC
  - HN
  - HZ
  - H999
  - TL
  - TC
  - TN
  - TZ
  - T999
  config_file: /home/james/Documents/deepethogram_test3/project_config.yaml
  data_path: DATA
  labeler: j_m_k
  model_path: models
  name: deepethogram_test3
  path: /home/james/Documents/deepethogram_test3
sequence:
  filter_length: 15
split:
  file: null
  reload: true
train:
  loss_weight_exp: 0.5
flow_generator:
  weights: /home/james/Documents/deepethogram_test3/models/210125_124934_flow_generator_train_None/checkpoint.pt
jbohnslav commented 3 years ago

Hmm yes, please remove the quotation marks around the "preset" value. In YAML files, the fields after the colon are automatically parsed into strings, numbers, bools, etc. I'll write a check for this.
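To double-check what value a YAML loader actually produces for the preset field, you can load the file directly; a minimal sketch, assuming PyYAML is installed:

import yaml

with open("project_config.yaml") as f:
    cfg = yaml.safe_load(f)

# A standard YAML parser loads both  preset: deg_m  and  preset: 'deg_m'
# as the plain string "deg_m"; the repr shows exactly what downstream
# code will see.
print(repr(cfg["preset"]))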

toujames commented 3 years ago

Thanks, I removed the single quotes. I retrained the flow generator without them and am about to start feature extractor training. However, I still don't see the option to select my trained models under flow generator.

For instance, the pretrained MotionNet Kinetics model is the only one I can select.

See my screenshot Screenshot from 2021-02-08 11-47-07

jbohnslav commented 3 years ago

That's interesting. It looks like the software is not "detecting" your deg_m flow generator. Can you show me the contents of the flow generator run directory?
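One way to check what's actually on disk is to scan the models directory for trained checkpoints, a rough stand-in for whatever scan the GUI performs when populating the model list (the path is taken from the config above):

from pathlib import Path

model_path = Path("/home/james/Documents/deepethogram_test3/models")

# Every run directory containing a checkpoint is, roughly, what the GUI
# needs to find in order to list a model in its dropdown.
for ckpt in sorted(model_path.glob("**/checkpoint.pt")):
    print(ckpt.parent.name)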

toujames commented 3 years ago

Sure, here it is: Screenshot from 2021-02-08 15-30-56

jbohnslav commented 3 years ago

Hmm, there is a checkpoint.pt. Can you upload the config.yaml?

toujames commented 3 years ago

Sure, here it is. I had to change the extension to .txt to upload it on GitHub, but I guarantee it is named config.yaml.

config.yaml.txt

jbohnslav commented 3 years ago

It looks like it is a "TinyMotionNet", not the "MotionNet" we would expect if the model had been trained using deg_m. Can you upload the train.log?

toujames commented 3 years ago

Sure, here it is: train.log

jbohnslav commented 3 years ago

Hmm, one more thing: can you upload your project_config.yaml? If there's anything proprietary in it, feel free to redact it; I'm just looking for the flow_generator section, if it exists.

toujames commented 3 years ago

Not a problem. Same issue with the .yaml file; I changed it to .txt to upload to GitHub.

project_config.yaml.txt

toujames commented 3 years ago

Forgot to mention: I added the flow_generator section to the project_config.yaml afterwards. It wasn't there during training.

jbohnslav commented 3 years ago

Hmm. I'm not sure what's happening, because although you specified preset=deg_m, it is still using the TinyMotionNet architecture. Did you use the GUI button or the command line to train?

toujames commented 3 years ago

I used the GUI button to train. I didn't use the command line.

toujames commented 3 years ago

Do you think perhaps I should train from the command line?

jbohnslav commented 3 years ago

Hmm. Wherever you launched the GUI, there is a folder called gui logs with a bunch of subfolders in it; each one comes from a single GUI session. Could you find that folder? It should have a .log file in it.

toujames commented 3 years ago

There are a bunch of folders. Which instance do you want to check? Here's the most recent one: main.log

jbohnslav commented 3 years ago

If you open the .log file, can you find the one from the time that you used the GUI to train the flow generator? If there's nothing useful in that log file, I can just tell you the command line arguments to train the deg_m model. However, it seems like there might be a bug with the presets in the GUI, so it would be helpful to find the log file.

The log will have a line that looks like this in it: [2021-02-01 14:19:48,947][__main__][INFO] - flow_train called with args: ['python', '-m', 'deepethogram.flow_generator.train', 'project.config_file=/media/jim/DATA_SSD/woolf_revision_deepethogram/project_config.yaml']
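To find that line across all the GUI logs without opening each file by hand, something like the following works (the gui_logs folder name here is an assumption; adjust it to however the folder is actually named):

from pathlib import Path

# Search every GUI log for the line recording how training was launched;
# it shows exactly which overrides (e.g. preset) were passed through.
for log in Path("gui_logs").rglob("*.log"):
    for line in log.read_text(errors="ignore").splitlines():
        if "called with args" in line:
            print(f"{log}: {line}")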

toujames commented 3 years ago

OK, I'd rather help find the bug too. I think I found it. Here it is: main.log

jbohnslav commented 3 years ago

OK! All good there. Just closing the GUI and reopening does not show the MotionNet model you previously trained?

toujames commented 3 years ago

Yes, I've tried that numerous times. It still only shows the pretrained model, not the one I previously trained.

toujames commented 3 years ago

So if I were to do this on the command line, it would be (for the feature extractor):

$ python -m deepethogram.feature_extractor.train project.config_file=/home/james/Documents/deepethogram_degm_deepethogram/project_config.yaml reload.weights=latest

Is that right?

jbohnslav commented 3 years ago

There is something going wrong where the GUI assumes that the preset is deg_f. I'm looking into that.

For now, it looks like your flow generator trained fine, so let's instead do

python -m deepethogram.feature_extractor.train project.config_file=(insert here) reload.weights=latest. And can you send me the train.log once the training loop actually starts? I want to make sure the initialization looks good.

jbohnslav commented 3 years ago

Unfortunately, when I just tried locally to add a preset flag and change it to deg_m, the GUI found all the models correctly, so I can't reproduce the problem, and for now I can't fix that GUI loading bug for you. There will be a big update in a few weeks that should change the framework a lot, so hopefully it'll be fixed then.

toujames commented 3 years ago

It failed:

Here's the command I used: $ python -m deepethogram.feature_extractor.train project.config_file=/home/james/Documents/deepethogram_degm_deepethogram/project_config.yaml reload.weights=latest

And here's the log on terminal:

(deg) james@libr-rt2:~$ python -m deepethogram.feature_extractor.train project.config_file=/home/james/Documents/deepethogram_degm_deepethogram/project_config.yaml reload.weights=latest
[2021-02-08 16:29:34,634][__main__][INFO] - cwd: /home/james/Documents/deepethogram_degm_deepethogram/models/210208_162934_feature_extractor_train_None
Traceback (most recent call last):
  File "/home/james/anaconda3/envs/deg/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/james/anaconda3/envs/deg/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/deepethogram-0.0.1.post1-py3.7.egg/deepethogram/feature_extractor/train.py", line 675, in <module>
    main()
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/hydra/main.py", line 24, in decorated_main
    strict=strict,
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/hydra/_internal/utils.py", line 174, in run_hydra
    overrides=args.overrides,
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 86, in run
    job_subdir_key=None,
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/hydra/plugins/common/utils.py", line 109, in run_job
    ret.return_value = task_function(task_cfg)
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/deepethogram-0.0.1.post1-py3.7.egg/deepethogram/feature_extractor/train.py", line 48, in main
    cfg = utils.get_absolute_paths_from_cfg(cfg)
  File "/home/james/anaconda3/envs/deg/lib/python3.7/site-packages/deepethogram-0.0.1.post1-py3.7.egg/deepethogram/utils.py", line 85, in get_absolute_paths_from_cfg
    assert os.path.isfile(cfg.reload.weights)
AssertionError

I also attached the train.log file: train.log

toujames commented 3 years ago

Can you send a screenshot of the full error?

Sorry, I just updated the comment. I realized I didn't paste the whole thing.

toujames commented 3 years ago

However, if I specify the absolute path in reload.weights, it seems to be working.

Here's a screenshot: Screenshot from 2021-02-08 16-36-34

Should I wait for the update, try again with that, and then come back here with the results of this feature extractor run?

jbohnslav commented 3 years ago

Hmm wow I think I messed up my own command line arguments :sweat:. It should work instead if you do python -m deepethogram.feature_extractor.train project.config_file=/home/james/Documents/deepethogram_degm_deepethogram/project_config.yaml reload.latest=True

However, it should work fine with the absolute path for now.
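Until the reload flag is sorted out, one workaround is to resolve the newest checkpoint yourself and pass its absolute path on the command line; a sketch, using the models path from this project:

import os
from pathlib import Path

models = Path("/home/james/Documents/deepethogram_degm_deepethogram/models")

# Pick the most recently written checkpoint; pass the printed path as
# reload.weights=<path> on the command line.
latest = max(models.glob("**/checkpoint.pt"), key=os.path.getmtime)
print(latest.resolve())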

toujames commented 3 years ago

Ok, no worries. I'll post the results when it finishes. The specific computer I'm using now does have a CUDA GPU, but training takes a week or so.

Thanks for the help!

jbohnslav commented 3 years ago

What model of GPU is it?

toujames commented 3 years ago

GeForce GTX 1080. Here's the nvidia-smi output:

(base) james@libr-rt2:~$ nvidia-smi
Mon Feb  8 16:44:50 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.95.01    Driver Version: 440.95.01    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 00000000:02:00.0  On |                  N/A |
| 25%   54C    P2    53W / 198W |   1958MiB /  8111MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      7889      G   /usr/lib/xorg/Xorg                           149MiB |
|    0      8345      G   /opt/teamviewer/tv_bin/TeamViewer              1MiB |
|    0      8574      G   compiz                                       133MiB |
|    0     16389      C   python                                      1623MiB |
|    0     36864      G   ...AAAAAAAAAAAAAAgAAAAAAAAA --shared-files    44MiB |
+-----------------------------------------------------------------------------+
jbohnslav commented 3 years ago

Hmm. Ok. Thanks for sticking with it. If I get everything working right, it should not take a week on a 1080-- on my 1080 Ti, the latest version should take only ~12 hours or so to train a deg_m. Week-long models are only for deg_s.

Most of the improvements are in the update, which it looks like I need to push out...

toujames commented 3 years ago

Feature extractor training finished. Here's the train.log: train.log

But, same as before, the GUI isn't showing the correct models.

Can I also run inference from the CLI?

Screenshot from 2021-02-11 08-54-04

toujames commented 3 years ago

Never mind, I see in the GUI logs that it's possible to run inference through the CLI and specify particular weights.

toujames commented 3 years ago

Hey Jim,

We ran ICC comparing what deg_m inferred against a human coder, and we actually got worse results than with deg_f.

jbohnslav commented 3 years ago

Hmm, how much worse was it? Sorry for the red herring. How many examples of each class do you have? You'll find them in the logs of a recent feature extractor or sequence training run. The line will have class_counts in it, like this:

[2020-12-24 23:27:25,138][deepethogram.data.utils][INFO] - Class counts: [154690 11278 60959 1786 659]
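Those counts can also be reproduced directly from the label files, which makes it easy to spot rare classes. A sketch, assuming per-frame binary label CSVs with one column per behavior somewhere under the project's DATA directory; adjust the glob to the actual layout:

import pandas as pd
from pathlib import Path

# Sum each behavior column across every labeled video to get per-class
# frame counts, one entry per behavior.
total = None
for csv in Path("DATA").glob("**/*labels.csv"):
    labels = pd.read_csv(csv, index_col=0).to_numpy()
    total = labels.sum(axis=0) if total is None else total + labels.sum(axis=0)
print(total)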

toujames commented 3 years ago

Here are the class counts from the sequence train log file: [2021-02-11 16:30:19,265][deepethogram.dataloaders][INFO] - Class counts: [ 12 10462 13280 2752 14160 14342 12262 8603 3537 167 4630 8026 20338 597 11095]

I'm attaching the entire train.log file too: train.log

toujames commented 3 years ago

Sorry, I didn't answer your other question. Here's the ICC for deg_f:

## [1] "################## PL ##################"
## [1] "kappa: 0.784591949995287"
## [1] "icc: 0.798378305549772"
## [1] "################## PC ##################"
## [1] "kappa: 0.702582084406169"
## [1] "icc: 0.723244056382844"
## [1] "################## PZ ##################"
## [1] "kappa: 0.375761660394241"
## [1] "icc: 0.401090598881523"
## [1] "################## PN ##################"
## [1] "kappa: 0.485607917479788"
## [1] "icc: 0.361021784859402"
## [1] "################## HL ##################"
## [1] "kappa: 0.016319725776288"
## [1] "icc: 0.0204627002549735"
## [1] "################## HC ##################"
## [1] "kappa: 0.670046330964876"
## [1] "icc: 0.686682641521086"
## [1] "################## HZ ##################"
## [1] "kappa: 0.37162312153297"
## [1] "icc: 0.426455956410084"
## [1] "################## HN ##################"
## [1] "kappa: 0.00515024638820796"
## [1] "icc: 0.00352897203520227"
## [1] "################## H999 ##################"
## [1] "kappa: 0"
## [1] "icc: 0"
## [1] "################## TL ##################"
## [1] "kappa: 0.345398537665712"
## [1] "icc: 0.376046513426642"
## [1] "################## TC ##################"
## [1] "kappa: 0.572856525312433"
## [1] "icc: 0.589873850796547"
## [1] "################## TN ##################"
## [1] "kappa: 0.250044927325514"
## [1] "icc: 0.306198573779484"
## [1] "################## TZ ##################"
## [1] "kappa: 0.00550591218793434"
## [1] "icc: 0.00803691025450219"
## [1] "################## T999 ##################"
## [1] "kappa: -0.030370853500383"
## [1] "icc: -0.0107250856851369"

And here's the ICC for deg_m:

## [1] "################## PL ##################"
## [1] "kappa: 0.441447109407951"
## [1] "icc: 0.459781424982207"
## [1] "################## PC ##################"
## [1] "kappa: 0.665603504704408"
## [1] "icc: 0.687412288190295"
## [1] "################## PZ ##################"
## [1] "kappa: 0.397526919696372"
## [1] "icc: 0.423726281727099"
## [1] "################## PN ##################"
## [1] "kappa: 0.586131634344142"
## [1] "icc: 0.616017619172826"
## [1] "################## HL ##################"
## [1] "kappa: 0.458387158111717"
## [1] "icc: 0.48208536368865"
## [1] "################## HC ##################"
## [1] "kappa: 0.604557143612762"
## [1] "icc: 0.627364611637853"
## [1] "################## HZ ##################"
## [1] "kappa: 0.319451602807302"
## [1] "icc: 0.171631807563798"
## [1] "################## HN ##################"
## [1] "kappa: 0.187231948617621"
## [1] "icc: 0.0733191702584886"
## [1] "################## H999 ##################"
## [1] "kappa: 0"
## [1] "icc: 0"
## [1] "################## TL ##################"
## [1] "kappa: 0.533098419427365"
## [1] "icc: 0.549598328225265"
## [1] "################## TC ##################"
## [1] "kappa: 0.407081962375928"
## [1] "icc: 0.443037881438647"
## [1] "################## TN ##################"
## [1] "kappa: 0.0587178246193621"
## [1] "icc: 0.0807381347178966"
## [1] "################## TZ ##################"
## [1] "kappa: 0.0945495067368679"
## [1] "icc: 0.100209559403888"
## [1] "################## T999 ##################"
## [1] "kappa: -0.0235210695069929"
## [1] "icc: -0.016162940217305"

Each of PL, PC, PZ, etc. is a behavior.
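For reference, the agreement numbers above come from R; the per-behavior kappa half can be reproduced in Python along these lines (the arrays here are random stand-ins; substitute the real human-coded and model-predicted ethograms, aligned frame by frame):

import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
behaviors = ["PL", "PC", "PZ", "PN"]  # one entry per behavior
n_frames = 10_000

# Random stand-ins for per-frame binary labels.
human = rng.integers(0, 2, size=(n_frames, len(behaviors)))
model = rng.integers(0, 2, size=(n_frames, len(behaviors)))

for i, name in enumerate(behaviors):
    kappa = cohen_kappa_score(human[:, i], model[:, i])
    print(f"{name}: kappa = {kappa:.3f}")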

jbohnslav commented 3 years ago

Hmm, weird-- that should be plenty of data. Also, deg_f vs deg_m seems like a mixed bag-- HL gets much better but TN gets much worse.

jbohnslav commented 3 years ago

Is the split.yaml file the same for deg_f and deg_m?
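A quick way to verify that, assuming each training run saves the split it used as a split.yaml inside its run directory (that file layout is an assumption):

import yaml
from pathlib import Path

# Load every saved split and check that all runs used the identical
# train/val/test assignment of videos.
splits = {p.parent.name: yaml.safe_load(p.read_text())
          for p in Path("models").glob("**/split.yaml")}
runs = list(splits)
print(runs)
print(all(splits[r] == splits[runs[0]] for r in runs))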

toujames commented 3 years ago

True, but it's odd that it's getting worse. But yeah, the split.yaml file is the same for deg_f and deg_m. Do you think it will perform better if we add more videos?

jbohnslav commented 3 years ago

It definitely will-- that's about the only sure method of improving the model. You can see how performance increases with scale in Figure 5. Try out the "import predictions as labels" feature, so that you only have to fix the errors that deepethogram makes, and let me know how that interface works for you.