Thanks to PyCharm's unused-variable highlighting, I noticed that tensor2tensor's revnet model has a pretty serious problem in the `bottleneck = False` code path.
Notice that the value of `net` on line 115 is unused. (Line 117 stomps on the prior value of `net`.)
This is an insidious bug: it's subtle enough that no one has noticed it for ~3.5 years. The code correctly created all of the variables, but then incorrectly used `x` as the input in the middle of the block.
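For reference, here's a paraphrase of the affected branch (the function name, signature, and identifiers are simplified for illustration, so treat this as a sketch of the shape of the bug rather than the exact source):

```python
import tensorflow.compat.v1 as tf

def f_no_bottleneck(x, depth, stride=1, padding='SAME', training=True):
  # Hypothetical paraphrase of the bottleneck=False branch of f() in
  # tensor2tensor's revnet model.
  net = tf.layers.batch_normalization(x, training=training)
  net = tf.nn.relu(net)
  net = tf.layers.conv2d(net, depth, 3, strides=stride, padding=padding,
                         activation=None)  # line 115: this result is never read
  net = tf.layers.batch_normalization(x, training=training)  # line 117
  # ^ BUG: the input here should be `net`, not `x`. The first conv's output
  #   is silently dropped, even though its variables are still created.
  net = tf.nn.relu(net)
  net = tf.layers.conv2d(net, depth, 3, strides=1, padding=padding,
                         activation=None)
  return net
```

The fix is a one-token change: pass `net` instead of `x` into the second `batch_normalization` call, so the output of the first conv actually feeds the rest of the block.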
tensor2tensor's `revnet_38_cifar` and `revnet_110_cifar` models use `hparams.bottleneck = False`, which means they've been affected by this bug. If tensor2tensor has released official versions of those models, you'll need to retrain them from scratch or remove them: merging this PR will break them, since they were trained with the incorrect code (and thus rely on the incorrect behavior at inference time).