gpleiss / efficient_densenet_pytorch

A memory-efficient implementation of DenseNets
MIT License

Test failed on PyTorch 0.3.1 with CUDA 9.0 #33

Closed · DesertsP closed this issue 6 years ago

DesertsP commented 6 years ago

AssertionError in test_forward_training_true_computes_forward_pass:
assert almost_equal(layer.norm.running_mean, layer_efficient.norm_running_mean)
assert almost_equal(layer.norm.running_var, layer_efficient.norm_running_var)

layer.norm.running_mean = 0.2516 0.0036 -0.6237 0.2686 -1.1193 1.2112 -0.0139 0.0237 [torch.FloatTensor of size 8]

layer_efficient.norm_running_mean = 0.2840 -0.0010 -0.7056 0.3032 -1.2588 1.3351 -0.0184 0.0538 [torch.FloatTensor of size 8]

layer.norm.running_var = 0.7604 0.6536 1.3444 0.1388 1.1254 0.1573 1.3377 0.9247 [torch.FloatTensor of size 8]

layer_efficient.norm_running_var = 0.7321 0.6162 1.3844 0.0518 1.1621 0.0664 1.3355 0.9229 [torch.FloatTensor of size 8]
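
For context, here is a minimal sketch of the kind of running-statistics check the failing test performs. The two `nn.BatchNorm2d` modules stand in for `layer.norm` and the efficient layer's batch norm (they are illustrative, not the repository's actual classes), the `almost_equal` tolerance is assumed, and the code targets a recent PyTorch API rather than the 0.3.1 release reported above:

```python
import torch
import torch.nn as nn

def almost_equal(a, b, eps=1e-5):
    # Element-wise comparison of two tensors within a tolerance,
    # mirroring the almost_equal check in the failing assertion.
    return (a - b).abs().max().item() < eps

# Two independently constructed BatchNorm2d layers stand in for the
# naive layer's norm and the efficient layer's norm.
torch.manual_seed(0)
norm_naive = nn.BatchNorm2d(8)
norm_efficient = nn.BatchNorm2d(8)

# After training-mode forward passes over the same inputs, both layers
# should hold identical running statistics.
for _ in range(3):
    x = torch.randn(4, 8, 16, 16)
    norm_naive(x)
    norm_efficient(x)

assert almost_equal(norm_naive.running_mean, norm_efficient.running_mean)
assert almost_equal(norm_naive.running_var, norm_efficient.running_var)
```

In the issue above, the analogous assertions fail because the efficient implementation accumulates running means and variances that drift away from those of the reference batch-norm layer.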

gpleiss commented 6 years ago

Hmmm it's probably a CUDA 9 bug. I'll investigate.

DesertsP commented 6 years ago

I tested it on CPU and it failed as well.

gpleiss commented 6 years ago

I think this broke when we tried to make things more efficient in #29. Should be fixed now!