When running inference on AttentionNet, I get two tensors back.
The first is the tensor I am interested in (1 x 512); the second (7 x 7 x ?) I have no use for.
Performance should improve if this second tensor is not returned in inference mode. I suggest changing:
return out, conv_out
to:
if train:
    return out, conv_out
else:
    return out
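For reference, here is a minimal, self-contained sketch of the pattern. Since the real AttentionNet forward is not shown here, the class and tensor stand-ins below are purely illustrative; in actual PyTorch code one would typically branch on the module's built-in `self.training` flag (toggled by `.train()`/`.eval()`) rather than passing a separate `train` argument:

```python
class AttentionNetLike:
    """Illustrative stand-in; not the real AttentionNet implementation."""

    def __init__(self):
        # PyTorch modules carry a .training flag set by .train() / .eval()
        self.training = True

    def forward(self, x):
        out = ("embedding", x)      # stands in for the 1 x 512 tensor
        conv_out = ("features", x)  # stands in for the 7 x 7 x ? tensor
        if self.training:
            return out, conv_out    # training: both outputs
        return out                  # inference: embedding only


net = AttentionNetLike()
net.training = False  # what model.eval() would do in PyTorch
result = net.forward(0)
print(result)  # only the embedding is returned in inference mode
```

One caveat with either variant: any caller that currently unpacks two return values (`out, conv_out = model(x)`) would break in inference mode, so call sites need to be adjusted accordingly.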
By the way, I'm very impressed with the performance of this AttentionNet-56!
CFP_FP Acc: 0.9822857142857142, AgeDB Acc: 0.9819999999999999, VGG2_FP Acc: 0.9554
CFP_FP Acc: 0.9831428571428571, AgeDB Acc: 0.9814999999999999, VGG2_FP Acc: 0.9538