sidak / otfusion

Model Fusion via Optimal Transport, NeurIPS 2020
130 stars 26 forks source link

How to run resnet example? #6

Open rahimentezari opened 2 years ago

rahimentezari commented 2 years ago

Dear Sidak, Thanks again for your code. I tried to run an example using your resnet checkpoints. To do this, cifar.zip and resnet_models.zip were extracted and the following command was run (it seems the provided checkpoints have no BN):

python main.py --gpu-id 0 --model-name resnet18_nobias_nobn --n-epochs 300 --save-result-file sample.csv --sweep-name exp_sample --exact --correction --ground-metric euclidean --weight-stats --activation-histograms --activation-mode raw --geom-ensemble-type acts --sweep-id 21 --act-num-samples 200 --ground-metric-normalize none --activation-seed 21 --prelu-acts --recheck-acc --load-models ./resnet_models/ --ckpt-type best --past-correction --not-squared --dataset Cifar10

However, the code exited with the following error (it seems the shortcut connection is causing the trouble):

--------------- At layer index 7 ------------- 

Previous layer shape is  torch.Size([128, 128, 3, 3])
let's see the difference in layer names layer2.0.shortcut.0 layer2.0.shortcut.0
torch.Size([200, 1, 128, 16, 16]) shape of activations generally
reorder_dim is  [1, 2, 3, 0]
In layer layer2.0.shortcut.0.weight: getting activation distance statistics
Statistics of the distance from neurons of layer 1 (averaged across nodes of layer 0):

Max : 8.675606727600098, Mean : 3.544717311859131, Min : 1.0014023780822754, Std: 1.3794620037078857
shape of layer: model 0 torch.Size([128, 64, 1])
shape of layer: model 1 torch.Size([128, 64, 1])
shape of activations: model 0 torch.Size([128, 16, 16, 200])
shape of activations: model 1 torch.Size([128, 16, 16, 200])
shape of previous transport map torch.Size([128, 128])
Traceback (most recent call last):
  File "main.py", line 159, in <module>
    geometric_acc, geometric_model = wasserstein_ensemble.geometric_ensembling_modularized(args, models, train_loader, test_loader, activations)
  File "/home/rahim/NIPS2021/otfusion/wasserstein_ensemble.py", line 893, in geometric_ensembling_modularized
    avg_aligned_layers = get_acts_wassersteinized_layers_modularized(args, networks, activations, train_loader=train_loader, test_loader=test_loader)
  File "/home/rahim/NIPS2021/otfusion/wasserstein_ensemble.py", line 688, in get_acts_wassersteinized_layers_modularized
    aligned_wt = torch.bmm(fc_layer0_weight_data.permute(2, 0, 1), T_var_conv).permute(1, 2, 0)
RuntimeError: batch1 dim 2 must match batch2 dim 1
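
For reference, the failing matmul can be reproduced in isolation with the shapes from the log (a standalone sketch with random values, not the otfusion code itself): the 1x1 shortcut conv of layer2.0 has 64 input channels, while the transport map carried over from the preceding 3x3 conv is 128x128, so the batched matmul cannot line up.

```python
import torch

# Shapes copied from the log: shortcut conv weight and previous transport map.
w = torch.randn(128, 64, 1)     # shortcut weight: [out, in, k*k] with k=1
T = torch.randn(1, 128, 128)    # previous transport map, batched over kernel positions
try:
    torch.bmm(w.permute(2, 0, 1), T)   # (1, 128, 64) @ (1, 128, 128)
    ok = True
except RuntimeError as err:
    ok = False
    print(err)   # the dim-mismatch error quoted above
```

The shortcut branches off the block's input (64 channels here), so it presumably needs the transport map computed where the branch starts, not the one from the immediately preceding 3x3 conv.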

Secondly, the code does not seem to work with BatchNorm; is that right?

sidak commented 2 years ago

Dear Rahim,

Hmm, are you using the flag to handle skip connections: --handle-skips? If not, could you please rerun with it?

Yes, BatchNorm is currently not supported, for simplicity. But it should not be too hard to handle: one simple idea is to 'hit' the BatchNorm parameters (which are per-neuron anyway) with the resulting permutation matrix of that layer. I will also give it a try one of these days. Feel free to share your experience here or over email if you try it.
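
That idea could look roughly like this (a minimal sketch, not code from this repo; `permute_batchnorm` and the hard permutation below are hypothetical): since all BatchNorm parameters and running statistics are per-channel vectors, the same transport/permutation matrix used for the layer's weights can be applied to them directly.

```python
import torch
import torch.nn as nn

def permute_batchnorm(bn: nn.BatchNorm2d, T: torch.Tensor) -> None:
    # Apply a transport/permutation matrix T (shape [C, C]) to all
    # per-channel BatchNorm parameters and running statistics in place.
    with torch.no_grad():
        for name in ("weight", "bias", "running_mean", "running_var"):
            getattr(bn, name).copy_(T @ getattr(bn, name))

# toy usage: permute a 4-channel BN layer with a hard permutation
bn = nn.BatchNorm2d(4)
with torch.no_grad():
    bn.running_mean.copy_(torch.arange(4.0))   # make the effect visible
perm = torch.eye(4)[[2, 0, 3, 1]]              # channel i takes the value of channel perm[i]
permute_batchnorm(bn, perm)
print(bn.running_mean)  # channels reordered to [2., 0., 3., 1.]
```

For OT fusion the matrix would be the (possibly soft) transport map of that layer rather than a hard permutation, but the per-channel update is the same.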

Cheers, Sidak

thegeekymuggle commented 2 years ago

Hi Sidak, I have tried running the code with --handle-skips, but I still run into this error.