dgcnz / relaxed-equivariance-dynamics

Code for "Effect of equivariance on training dynamics"
2 stars · 0 forks

[Meta Issue] (Wang, 2022) Table 1 Reproduction #62

Closed dgcnz closed 5 months ago

dgcnz commented 5 months ago

Description

This issue concerns ONLY the reproduction of Table 1 from Wang (2022), without manually imposing equivariance (i.e., varying alpha); for that, see meta issue #61.

Configurations

The models and the SmokePlume datasets have been ported. We still need to add per-symmetry experiment files, because each symmetry uses different hyperparameters (decay_rate, out_length, etc.; see: https://github.com/Rose-STL-Lab/Approximately-Equivariant-Nets/blob/master/run.sh)

SmokePlume configs:

Model default configs (DO NOT MODIFY THESE FILES, other experiment files rely on the defaults set here):

Experiment configs:
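As a rough sketch of what one such experiment file might look like (the names `decay_rate` and `out_length` come from the upstream run.sh; the layout, group names, and all numeric values below are hypothetical placeholders, not this repo's actual schema):

```yaml
# @package _global_
# Hypothetical Hydra experiment config for one symmetry group (sketch only).
defaults:
  - override /data: smoke_plume   # assumed dataset config name
  - override /model: rgroup       # assumed model config name

model:
  decay_rate: 0.95   # symmetry-specific; placeholder, take the real value from run.sh
  out_length: 6      # symmetry-specific; placeholder, take the real value from run.sh
```

The point is that only the symmetry-specific overrides live in the experiment file, while the shared defaults stay untouched in the model configs above.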

These configs have to be tested at least with trainer.fast_dev_run to ensure that the model processes the data correctly. Note that fast_dev_run does not exercise model checkpointing or early stopping, so we'll have to add tests for those separately. Examples can be found in the Makefile's test_wang2022_table_1 target, which you can run with make test_wang2022_table_1.

Example testing command:

python -m src.train experiment=wang2022/rotation/rgroup +trainer.fast_dev_run=True data.batch_size=8

Tasks

Questions

Legend:

Nesta-gitU commented 5 months ago

Actually, Alejandro said he didn't necessarily want to see the ConvNet results for the reproduction.

Nesta-gitU commented 5 months ago

Issue: /usr/local/lib/python3.10/dist-packages/lightning/pytorch/trainer/setup.py:187: GPU available but not used. You can set it by doing Trainer(accelerator='gpu').
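A possible fix (a sketch only, assuming the project wires Lightning's Trainer options through a Hydra `trainer` config group, as the `+trainer.fast_dev_run=True` override above suggests; the exact key names depend on the repo's config layout):

```shell
# Hypothetical override: select the GPU accelerator from the command line.
python -m src.train experiment=wang2022/rotation/rgroup \
    trainer.accelerator=gpu trainer.devices=1
```

Equivalently, `Trainer(accelerator="gpu", devices=1)` in code, as the Lightning warning itself hints.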

dgcnz commented 5 months ago

> Issue: /usr/local/lib/python3.10/dist-packages/lightning/pytorch/trainer/setup.py:187: GPU available but not used. You can set it by doing Trainer(accelerator='gpu').

is this on your laptop? @Nesta-gitU

Nesta-gitU commented 5 months ago

colab

dgcnz commented 5 months ago

done