DaBihy opened 3 weeks ago
Hi,
I have the same problem, although I resumed from the latest V-JEPA checkpoint at epoch 300 (plot of the JEPA loss)
However, looking at the regression regularization loss, it seems to be continually optimized over time.
@icekang thank you for your comment, I can confirm that I see the same thing for the reg loss:
The model is learning even though the JEPA loss is increasing. It's counterintuitive, but I think it's normal behavior for frameworks like this, as I observe the same thing when training BYOL.
Sorry, it was not a regression loss; it is a regularization loss on the variance of the predicted vectors.
Anyway, I think they should all be decreasing, especially the JEPA loss, which indicates how close the predicted feature vectors are to the target feature vectors.
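For anyone following along, here is a minimal numpy sketch of the two quantities being discussed, under my assumptions: an L1 prediction term between predicted and target features (as in V-JEPA), plus a VICReg-style variance hinge on the predictions as the "reg loss" (the exact regularizer in the repo may differ). The function name `jepa_losses` is just illustrative:

```python
import numpy as np

def jepa_losses(pred, target, eps=1e-4):
    """Sketch of the two losses discussed above (assumed forms).

    pred, target: (batch, dim) arrays of predicted / target features.
    Returns (jepa_loss, variance_reg_loss).
    """
    # Prediction (JEPA) loss: mean L1 distance between the predicted
    # features and the target-encoder features.
    jepa_loss = np.abs(pred - target).mean()

    # Variance regularization (VICReg-style hinge, an assumption):
    # penalize each feature dimension whose std across the batch
    # falls below 1, which discourages representation collapse.
    std = np.sqrt(pred.var(axis=0) + eps)
    reg_loss = np.maximum(0.0, 1.0 - std).mean()
    return jepa_loss, reg_loss
```

Under this reading, the reg loss can keep decreasing (the predictions stay well spread out) even while the prediction loss drifts upward, which matches the curves people are reporting.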
Hello Everyone,
I've been working with the V-JEPA model for a self-supervised learning project on a custom video dataset. Initially, the training loss decreases as expected, but it then starts to increase significantly after reaching a minimum. This behavior persists across multiple training sessions with different hyperparameters.
Configuration:
Data Setup
Data Augmentation
Loss Configuration
Mask Settings
Meta Configuration
Model Configuration
Optimization
Questions:
Any insights or suggestions would be greatly appreciated. Thank you for your support!
Best regards,
@MidoAssran