jbohnslav / deepethogram

Preventing multiple behaviors per frame #61

Open bwee opened 3 years ago

bwee commented 3 years ago

Is there any way to prevent Deepethogram from labeling a single frame with multiple behaviors?

Thanks!

jbohnslav commented 3 years ago

When I first started DeepEthogram, all the datasets available to me were multi-label (1+ behaviors at a time) rather than multiclass (exactly 1 at a time). We don't describe this in the paper at all, but I built most of what's needed for multiclass, just not everything. In the feature extractor and sequence configurations, there's a field called final_activation. If you change sigmoid to softmax, it will change the final activation as described, and also change the loss weighting scheme appropriately. However, I haven't finished this, so I'm marking it as a feature request.
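To illustrate what the activation change does to predictions, here's a minimal numpy sketch of the difference between a multi-label head (independent sigmoids plus a threshold) and a multiclass head (softmax plus argmax). This is not DeepEthogram's actual head code; the class count and logit values are invented.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# One frame's logits over 3 hypothetical behavior classes.
logits = np.array([2.0, 1.5, -1.0])

# Multi-label head: independent sigmoids + 0.5 threshold
# can mark several behaviors at once for a single frame.
multi_label = sigmoid(logits) > 0.5

# Multiclass head: softmax + argmax yields exactly one behavior.
single_label = np.zeros_like(logits, dtype=bool)
single_label[np.argmax(softmax(logits))] = True
```

With these logits, the sigmoid head marks two behaviors active, while the softmax head picks exactly one.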

Remaining things to implement:

bwee commented 3 years ago

Changing sigmoid to softmax helps a lot!

edgarbc commented 3 years ago

Hi there,

I also want to do this (change the last activation to softmax). I assume we need to do this in the config of every model (feature extractor, sequence, etc.), and that we don't need to re-train. Correct?

Thanks!

Edgar

edgarbc commented 3 years ago

I changed the config file in both the feature extractor and sequence train model specs. However, when I press predict, the output still has multiple labels per frame. What am I doing wrong? Thanks!

jbohnslav commented 3 years ago

You definitely do need to re-train if you use the softmax activation.

edgarbc commented 2 years ago

Hi,

Just to confirm: the way to do this is to change the config.yaml files (setting final_activation: softmax) in the models\XXXXX-YYYYYY_feature_extractor_train\ and models\XXXXX-YYYYYY_sequence_train\ directories, and then press train for each in the GUI?

I got confused because once you press train, a new directory (with updated date-time) is created but the new config.yaml there has the default final_activation (sigmoid).

Thanks again!

edgarbc commented 2 years ago

Update: to have more control over settings, I am now running everything in Jupyter (using the Colab example as a guide). I can change cfg.feature_extractor.final_activation = 'softmax'. Now re-training all models :)

edgarbc commented 2 years ago

As I mentioned, I have now changed feature_extractor.final_activation='softmax' and sequence.final_activation='softmax'. However, in the last step, when I produce the predictions with postprocessing.postprocess_and_save(cfg), I get a csv file with '1's in more than one behaviour for a particular time (row). How can I set this so only one behaviour has a '1'?

I know I can just take the max of the probabilities but I am wondering if there is already an option to do this.
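In case it helps, taking the max of the probabilities only takes a few lines once the outputs are loaded. A hedged numpy sketch (the probability values are invented, and this replaces rather than uses DeepEthogram's built-in postprocessing):

```python
import numpy as np

# Hypothetical per-frame probabilities: rows = frames, cols = behaviors,
# e.g. as loaded from the saved inference outputs.
probs = np.array([
    [0.70, 0.60, 0.10],   # two classes pass a 0.5 threshold here
    [0.20, 0.90, 0.55],
    [0.05, 0.30, 0.80],
])

# Independent thresholding can mark several behaviors per frame...
multi_hot = (probs > 0.5).astype(int)

# ...whereas argmax keeps exactly one '1' per row.
one_hot = np.zeros_like(probs, dtype=int)
one_hot[np.arange(len(probs)), probs.argmax(axis=1)] = 1
```

Writing one_hot back out in place of the thresholded predictions would give a csv with a single behaviour per row.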

Thanks!

FKiffer commented 2 years ago

Changing to softmax helped my project a lot, but it did not fully solve the multiple-label issue. For me this is only a problem when behaviors are similar (even when training metrics point to low false positives/negatives and generally high accuracy).

One issue I did observe: after running the feature extractor with final_activation = 'softmax', and also running inference with 'softmax', I prepared the sequence training config like this:

```python
cfg = configuration.make_sequence_train_cfg(project_path=project_path)
cfg.sequence.final_activation = 'softmax'
sequence_model = sequence_train(cfg)
print(OmegaConf.to_yaml(cfg))
```

I noticed that the feature_extractor field of the config still contained final_activation: sigmoid, despite the sequence field having softmax:

```yaml
feature_extractor:
  arch: resnet18
  curriculum: false
  dropout_p: 0.25
  final_activation: sigmoid
  final_bn: false
  fusion: average
  inputs: both
  n_flows: 10
  n_rgb: 1
  sampler: null
  sampling_ratio: null
  weights: pretrained
```

Would this have an effect on training/inference? Is it calling a previously trained sigmoidal feature extractor?

I tried again, specifying cfg.feature_extractor.weights = 'latest', but this resulted in the following error (softmax-softmax_error.txt, in brief): RuntimeError: Expected target size [16, 180], got [16, 4, 180]

Lastly, is there a way in which we can change the thresholds for predicting a specific behavior?
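On the thresholds question: one simple approach is to apply a per-class threshold vector to the predicted probabilities, so that confusable behaviors can be made stricter individually. A minimal sketch (the probabilities and threshold values are invented, and this is hand-rolled postprocessing rather than a DeepEthogram option):

```python
import numpy as np

# Hypothetical per-frame probabilities: rows = frames, cols = behaviors.
probs = np.array([
    [0.70, 0.60, 0.10],
    [0.20, 0.55, 0.80],
])

# One threshold per class; class 1 is made stricter so a similar,
# frequently confused behavior needs higher confidence to fire.
thresholds = np.array([0.5, 0.75, 0.5])

# Broadcasting compares each column against its own threshold.
preds = (probs > thresholds).astype(int)
```

Raising a class's threshold trades false positives for false negatives on that behavior only, leaving the others untouched.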

ktyssowski commented 1 year ago

Did you ever figure out where exactly softmax must be specified? I tried switching to softmax, but I am still getting a lot of places where multiple behaviors are labeled.