Open lucasjinreal opened 3 years ago
Yes, negative loss values can happen when using the --auto-tune-mtl option; this is not an issue. It is due to the formulation of the loss function.
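To illustrate why such a formulation can go negative: auto-tuned multi-task losses are commonly based on learned homoscedastic uncertainty (Kendall et al., 2018), where each task loss is scaled by a learned log-variance and a regularizing log term is added. This is a minimal sketch of that general idea, not the exact OpenPifPaf implementation; the function name and values are hypothetical.

```python
import math

def mtl_loss(task_losses, log_sigmas):
    # Uncertainty-weighted multi-task loss (Kendall et al., 2018):
    # total = sum_i( exp(-s_i) * L_i + s_i ), with s_i = log(sigma_i^2).
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_sigmas))

# When the learned log-variances s_i become negative, the additive s_i
# terms can dominate and push the total below zero even though every
# individual task loss L_i is non-negative:
print(mtl_loss([0.01, 0.02], [-2.0, -3.0]))  # negative total
```

So a loss of -31 does not mean the optimization is broken; the additive log-variance terms simply shift the total below zero as the per-task losses shrink.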
Note that if you train on JAAD with initialization from the OpenPifPaf checkpoint, you should need far fewer epochs to converge.
@taylormordan I set 100 epochs, but the loss came to -31. What is a normal loss value? Should it get close to 0?
If you train for 5-10 epochs, the loss should be around 0 on average over an epoch, but loss values for individual batches can easily swing higher or lower. I observe this behavior as well.
@taylormordan Will the results become worse if I train for more epochs?
It will start to overfit at some point. The optimal number of epochs may depend on your hyper-parameter choices, though (learning rate, batch size, ...).
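One common way to pick the number of epochs in this situation is to monitor the validation loss and stop once it has not improved for a few epochs. A minimal sketch of that rule, with a hypothetical helper name and made-up loss values:

```python
def best_epoch(val_losses, patience=3):
    # Simple early-stopping rule: stop once the validation loss has not
    # improved for `patience` consecutive epochs; return the best epoch index.
    best, best_i, wait = float("inf"), 0, 0
    for i, v in enumerate(val_losses):
        if v < best:
            best, best_i, wait = v, i, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_i

# Validation loss improves for 3 epochs, then degrades: epoch 2 is best.
print(best_epoch([1.0, 0.8, 0.7, 0.75, 0.9, 1.1]))  # -> 2
```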
Hi, can you please help me with this issue?
python3 -m openpifpaf.train: error: unrecognized arguments: --datasets jaad --jaad-root-dir /content/drive/MyDrive/jaad/JAAD_clips/ --jaad-subset default --jaad-training-set train --jaad-validation-set val --pifpaf-pretraining --detection-bias-prior 0.01 --jaad-head-upsample 2 --jaad-pedestrian-attributes all --fork-normalization-operation power --fork-normalization-duplicates 35 --attribute-regression-loss l1 --attribute-focal-gamma 2
This is what I get after running the script. How have you downloaded the dataset, and what have you done with the annotations?
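For context on the error above: "unrecognized arguments" from argparse usually means those flags were never registered with the parser, which here typically suggests the JAAD dataset plugin was not picked up by openpifpaf.train (e.g. it is not installed or not on the Python path). A minimal argparse reproduction of the mechanism, with hypothetical flags:

```python
import argparse

# A parser that only knows core options; dataset plugins normally
# register their own flags (--datasets, --jaad-root-dir, ...) on top.
parser = argparse.ArgumentParser(prog="openpifpaf.train")
parser.add_argument("--lr", type=float, default=1e-3)

# Passing a plugin flag that was never registered leaves it in the
# "unknown" list; parse_args() would instead raise the
# "unrecognized arguments" error seen above.
args, unknown = parser.parse_known_args(["--lr", "0.001", "--datasets", "jaad"])
print(unknown)
```

So the fix is usually to make sure the package providing the JAAD options is installed in the same environment before invoking the training command.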
is this normal?