shuida closed this issue 5 years ago
Hi,
Is there any particular reason to avoid using slim? In my experience, reimplementing batch_norm can be tricky and buggy.
Because I have to modify the convolution, it was necessary to break it down into conv, bias, relu, batchnorm, pooling, etc. I have solved the problem, so I can close the issue. Thank you very much!
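For anyone landing here later, here is a minimal sketch of what such a breakdown can look like (TF 1.x graph mode; `conv_block` and all hyperparameters are illustrative choices, not the questioner's actual code, and the manual batch norm covers training mode only):

```python
import tensorflow as tf  # TF 1.x

def conv_block(x, out_channels, kernel_size, stride, scope):
    """Hypothetical stand-in for slim.conv2d, split into primitive ops."""
    in_channels = x.get_shape().as_list()[-1]
    with tf.variable_scope(scope):
        w = tf.get_variable('weights',
                            [kernel_size, kernel_size, in_channels, out_channels],
                            initializer=tf.truncated_normal_initializer(stddev=0.01))
        b = tf.get_variable('biases', [out_channels],
                            initializer=tf.zeros_initializer())
        x = tf.nn.conv2d(x, w, strides=[1, stride, stride, 1], padding='SAME')
        x = tf.nn.bias_add(x, b)
        # Manual batch norm via tf.nn.moments -- the part the thread flags
        # as easy to get wrong when leaving slim behind.
        with tf.variable_scope('BatchNorm'):
            mean, variance = tf.nn.moments(x, axes=[0, 1, 2])
            beta = tf.get_variable('beta', [out_channels],
                                   initializer=tf.zeros_initializer())
            gamma = tf.get_variable('gamma', [out_channels],
                                    initializer=tf.ones_initializer())
            x = tf.nn.batch_normalization(x, mean, variance, beta, gamma, 1e-5)
        x = tf.nn.relu(x)
        x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1],
                           strides=[1, 2, 2, 1], padding='VALID')
    return x
```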
Your code is very elegant and well worth learning from. Because of project requirements, I made some modifications to it: when implementing the convolutional layers I did not use tf.contrib.slim, but called tf.nn.conv2d directly. I added a batch_norm layer to the convolutional network, and when executing ema.apply() I expected the shadow variable name to be <tf.Variable 'my_convolutional_alexnet/conv1/BatchNorm/moments/Squeeze/ExponentialMovingAverage:0'>,
but what I actually got was <tf.Variable 'my_convolutional_alexnet/conv1/BatchNorm/train/my_convolutional_alexnet/conv1/BatchNorm/moments/Squeeze/ExponentialMovingAverage:0'>. In other words, the name contains an extra 'train/my_convolutional_alexnet/conv1/BatchNorm/' segment.
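A minimal sketch of how this doubled prefix can arise (TF 1.x graph mode; the scope nesting below is my own guess mirroring the names in the reported variable, not the questioner's real call graph): tf.train.ExponentialMovingAverage.apply() appears to create shadow variables for plain Tensors, such as the outputs of tf.nn.moments, under the variable scope that is active at the call site, so any scope opened around apply() gets prepended to the tensor's already fully qualified op name.

```python
import tensorflow as tf  # TF 1.x graph mode

# Hypothetical nesting chosen to match the name reported above.
with tf.variable_scope('my_convolutional_alexnet'):
    with tf.variable_scope('conv1'):
        with tf.variable_scope('BatchNorm'):
            x = tf.random_normal([32, 8, 8, 16])
            # mean/variance are plain Tensors whose op names end in
            # '.../moments/Squeeze' and '.../moments/Squeeze_1'.
            mean, variance = tf.nn.moments(x, axes=[0, 1, 2])

            ema = tf.train.ExponentialMovingAverage(decay=0.99)
            with tf.variable_scope('train'):
                # For plain Tensors, apply() creates the shadow variable
                # under the currently active variable scope, so
                # '.../BatchNorm/train/' is prepended to the tensor's
                # already fully qualified op name.
                update_op = ema.apply([mean, variance])

print(ema.average(mean).name)
# my_convolutional_alexnet/conv1/BatchNorm/train/
#   my_convolutional_alexnet/conv1/BatchNorm/moments/Squeeze/ExponentialMovingAverage:0
```

If this reading of the slot-creation behavior is right, one remedy is to wrap the apply() call in tf.name_scope('train') rather than tf.variable_scope('train'), since name scopes do not prefix variables created through tf.get_variable.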
The code spreads several variable scopes across multiple functions, and the function calls nest them as follows: