Open shamanez opened 7 years ago
This question is better asked on StackOverflow since it is not a bug or feature request. There is also a larger community that reads questions there. Thanks!
@skye It would be great if the Object Detection API maintainers gave a description of this.
Fair enough, I'll reopen this as a docs request.
@jch1 @tombstone @derekjchow @jesu9 @dreamdragon FYI
This can be accomplished by using the freeze_variables field in the train proto.
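For illustration, a minimal train_config stanza using it could look like the sketch below (paths and values are placeholders, not recommendations):

```
train_config: {
  batch_size: 24
  fine_tune_checkpoint: "path/to/model.ckpt"
  from_detection_checkpoint: true
  # Repeated field of regexes; any variable whose name matches is
  # excluded from the set of trainable variables.
  freeze_variables: ".*FeatureExtractor.*"
}
```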
That being said, I'm hesitant about adding this to the docs. From our experience freezing the base feature extractors doesn't train faster nor does it give a better end accuracy. I would not recommend users use this field.
Hi @derekjchow is it possible to explain further what kind of input could be used with freeze_variables?
@derekjchow I would also be interested to see some examples of how to use freeze_variables field.
By the way, the link is broken, here is a new one: https://github.com/tensorflow/models/blob/538f89c4121803645fe72f41ebfd7069c706d954/research/object_detection/protos/train.proto#L51
While we are waiting for a developer answer, I can share what I figured out. You can freeze variables by adding the freeze_variables option under "train_config" in the pipeline config file. For example, freeze_variables: ".*FeatureExtractor.*" will freeze the feature extractor. You can find out all variable names that can be frozen by adding a print("var.op.name: " + var.op.name) statement into the for loop after line 46 in object_detection/utils/variables_helper.py.
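For reference, that edit lands roughly here (a sketch of filter_variables reconstructed around the linked line, not a verbatim copy; it assumes import re at the top of the file, as in the original):

```python
def filter_variables(variables, filter_regex_list, invert=False):
  kept_vars = []
  variables_to_ignore_patterns = list(filter(None, filter_regex_list))
  for var in variables:
    print("var.op.name: " + var.op.name)  # added: prints every freezable name
    add = True
    for pattern in variables_to_ignore_patterns:
      if re.match(pattern, var.op.name):
        add = False
        break
    if add != invert:
      kept_vars.append(var)
  return kept_vars
```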
@uziela thanks for sharing that. I also have another question: how are the weights initialized when no checkpoint is provided?
@uziela I want to train SSD Inception from scratch. I don't want to use transfer learning at all. Is there anything I should look into? If I have understood correctly, freeze_variables means we will not update the weights of those layers; correct me if I am wrong.
@viralbthakar you understood freeze_variables correctly. If you don't want to use any transfer learning, just delete the fine_tune_checkpoint and from_detection_checkpoint fields from the config file. However, note that I am not a TensorFlow developer; this is just what I understood from the user guide: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/configuring_jobs.md
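Concretely, training from scratch would mean a train_config with no checkpoint fields at all (a sketch; values are placeholders):

```
train_config: {
  batch_size: 24
  num_steps: 200000
  # No fine_tune_checkpoint / from_detection_checkpoint here:
  # every weight starts from its random initializer.
}
```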
@derekjchow, since you say "from our experience freezing the base feature extractors doesn't train faster nor does it give a better end accuracy": what does this mean? From what I have learned, fine-tuning seems to be the standard way to transfer-learn between different tasks in the same field (detection, classification, ...), reaching close to the original model's precision with FAR fewer samples. This seems to be common knowledge in DL, so your conclusion confuses me a little.
1. If your conclusion is right, does it mean we can only train a model from scratch with huge data (both the meta arch (feature extractor) and the detection arch (SSD / Faster R-CNN / ...))?
2. If I want to use freeze_variables for more fine-tuning experiments, can any samples be provided?
Anyone who shares their TF fine-tuning experience is very much appreciated!
Thanks in advance!!
@beyondli
Hello @uziela
Following your suggestion, "You can freeze variables by adding the freeze_variables option under "train_config" in the pipeline config file. For example, freeze_variables: ".*FeatureExtractor.*" will freeze the feature extractor. You can find out all variable names that can be frozen by adding a print("var.op.name: " + var.op.name) statement into the for loop after line 46 in object_detection/utils/variables_helper.py",
I got the following output from SSD InceptionV2 coco model:
```
var.op.name: FeatureExtractor/InceptionV2/Conv2d_1a_7x7/depthwise_weights
var.op.name: FeatureExtractor/InceptionV2/Conv2d_1a_7x7/pointwise_weights
var.op.name: FeatureExtractor/InceptionV2/Conv2d_1a_7x7/BatchNorm/gamma
var.op.name: FeatureExtractor/InceptionV2/Conv2d_1a_7x7/BatchNorm/beta
[... several hundred more FeatureExtractor/InceptionV2/* entries (weights,
 BatchNorm/gamma, BatchNorm/beta) for Conv2d_2b through Mixed_5c and the
 extra Mixed_5c_*_Conv2d_* SSD feature-map layers ...]
var.op.name: BoxPredictor_0/BoxEncodingPredictor/weights
var.op.name: BoxPredictor_0/BoxEncodingPredictor/biases
var.op.name: BoxPredictor_0/ClassPredictor/weights
var.op.name: BoxPredictor_0/ClassPredictor/biases
[... the same four entries for BoxPredictor_1 through BoxPredictor_5 ...]
```
I thought BoxPredictor_* belongs to the predictor, not the feature extractor.
Am I missing anything?
Here is where I add freeze_variables in the config file:
```
train_config: {
  batch_size: 2
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.004
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "/media/garmin/Data/TensorFlowModels/research/object_detection/pre-trained-models/SSD_InceptionV2/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 500000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
  freeze_variables: ".FeatureExtractor."
}
```
Thank you for your precious time on my questions.
Hi,
You are right, BoxPredictor is the predictor, not the feature extractor. That's why you need to write a regex that captures only the feature extractor variables. You did everything correctly, but you are missing the star (*) symbols in the regex. So change your freeze_variables line in the config file to:
freeze_variables: ".*FeatureExtractor.*"
Hello @uziela ,
Thank you so much for the quick response.
I do have the * in my config, but I forgot to include it when I retyped the regex in my last post.
So in my config, the regex is
freeze_variables: ".*FeatureExtractor.*"
After digging into the code: to show which layers are actually frozen, you need to add print("var.op.name: " + var.op.name) after if re.match(pattern, var.op.name): (line 53). The line you suggested prints out all variables.
Another question I have is how to initialize the predictor layers' weights randomly, instead of loading them from the pre-trained model.
In summary, I want to:
- load the feature extractor layers' weights but freeze them during training;
- initialize the predictor layers' weights randomly but train them.
I am having a similar issue to https://github.com/tensorflow/models/issues/3384 and wonder whether my transfer learning setup can improve it.
Please let me know if you have any questions or suggestions.
Thank you for your precious time on my questions.
I borrowed everything from the TF model zoo (the config files and the model files as-is) and started training, which is what I have been doing as explained above. What am I training then? If there is no detail in the config file like freeze_variables: ".*FeatureExtractor.*", does that mean everything is getting retrained? When I look at TensorBoard, it looks like the weights are not changing but the biases are. Does that mean I am not doing transfer learning?
What if I want to retrain only the last 2 layers? Is that for both the feature extractor and the CNN? Do I need to list all the layers (that huge list @willSapgreen posted), or is there a briefer way to indicate that?
Also, what are the box_predictors for? Are they the regressor part of the object detection?
I am new to object detection. I have been using the Medium repos for starters and began tuning the learning_rate etc., but some of these terms are not clearly explained anywhere.
Thank you very much in advance; even partial answers are appreciated, as I do not have anyone to ask these questions. 😃 😁
Excuse me, if I want to use the initial weights of VGG or ZF for the feature extractor, with from_detection_checkpoint set to false, how can I do this?
Hi all,
I am trying to detect 90+2 (custom) classes using the Object Detection API. I know we can detect the 90 default classes, and that we can detect 2 custom classes by passing new data and fine-tuning from a model checkpoint. I want to detect all 92 classes at once. Will freeze_variables: ".*FeatureExtractor.*" work for my problem? I am new to the Object Detection API and computer vision. Please help!
@willSapgreen I think you can change your load_all_detection_checkpoint_vars line in the config file to: load_all_detection_checkpoint_vars: false
This way, the model will only load the feature extractor weights.
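Combining this with freeze_variables, the relevant train_config fields might look like this (a sketch based on the fields in train.proto; values are placeholders):

```
train_config: {
  fine_tune_checkpoint: "path/to/model.ckpt"
  from_detection_checkpoint: true
  # Restore only the feature extractor, so the predictor heads get
  # fresh, randomly initialized weights...
  load_all_detection_checkpoint_vars: false
  # ...and keep the restored feature extractor fixed during training.
  freeze_variables: ".*FeatureExtractor.*"
}
```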
I confirm.
@shamanez SArham
@beyondli
- Freezing the lower layers of the net restricts you to pure transfer learning. As @derekjchow said, freezing them does not produce satisfactory results, so fine-tuning the model on your dataset is the better method: in that type of retraining we keep all the layers trainable, which is the default setting in the config file.
That is a very important message. I frequently found online tutorials about the Object Detection API claiming that only the predictor layers are trained, which confused me for quite some time. I think it would be better to include this message in the documentation of the .config files.
Does freezing some weights reduce GPU memory usage, since fewer gradients are computed and stored? If so, we could feed larger images without resizing to train a network that originally could not fit in GPU memory, and probably increase performance. Am I right?
> @derekjchow, since "from our experience freezing the base feature extractors doesn't train faster nor does it give a better end accuracy": what does this mean? [...] (quoting @beyondli's earlier question)
As I understand it, "fine-tune" does not mean "freeze some layers". You can fine-tune by initializing training from a trained model (the checkpoint just gives training a good starting point) without freezing any nodes.
However, I still don't know why freezing is not recommended. I actually tested fine-tuning with the feature extraction layers frozen, and training went very badly. Can someone explain why we should not freeze?
> You can freeze variables by adding the freeze_variables option under "train_config" in the pipeline config file. [...] (quoting @uziela's comment above)
For anyone who is trying to do this: I found that this no longer works, as the function in variables_helper.py has been replaced by tf.contrib.framework.filter_variables.
Add a print(trainable_variables) line here instead to print out the variables: https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/research/object_detection/model_lib.py#L362
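For context, the freeze handling around that line looks roughly like the sketch below (train_config comes from the surrounding function; exact arguments may differ between commits):

```python
# Sketch of the TF1 model_lib.py freeze logic, not a verbatim excerpt.
trainable_variables = None
if train_config.freeze_variables:
    # Keep every trainable variable except those matching a freeze regex.
    trainable_variables = tf.contrib.framework.filter_variables(
        tf.trainable_variables(),
        include_patterns=None,
        exclude_patterns=train_config.freeze_variables)
print(trainable_variables)  # the suggested debugging line
```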
> You can freeze variables by adding the freeze_variables option under "train_config" in the pipeline config file. [...] (quoting @uziela's comment above)
In other words, if we do not specify any freeze_variables in the configuration file, the pre-trained model weights will be updated during training by default?
Right now, I'm using a pre-trained model (SSD_MobileNet_V1_COCO) to train on my own dataset (quite different from COCO) via transfer learning. Fine-tuning is therefore required, and I applied the regex freeze_variables: "FeatureExtractor/MobilenetV1/Conv2d_([0-9]|10)([_|a-z])?/.*" to freeze the layers from conv2d_1 to conv2d_10, while training the remaining layers from conv2d_11 to conv2d_13, followed by the box_predictors.
Could some kind soul please tell me whether my thought process in implementing it this way is correct?
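One way to sanity-check a pattern like that before training is to run it offline against a few variable names. The sample names below follow the MobilenetV1 naming seen in printed variable lists but are illustrative:

```python
import re

pattern = r"FeatureExtractor/MobilenetV1/Conv2d_([0-9]|10)([_|a-z])?/.*"

samples = [
    "FeatureExtractor/MobilenetV1/Conv2d_0/weights",
    "FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights",
    "FeatureExtractor/MobilenetV1/Conv2d_11_pointwise/weights",
    "BoxPredictor_0/ClassPredictor/weights",
]
for name in samples:
    frozen = re.match(pattern, name) is not None
    print(("FROZEN  " if frozen else "TRAINED ") + name)

# Note: the optional single character cannot span the "_depthwise" /
# "_pointwise" suffixes, so those variables slip through this pattern
# and stay trainable; worth checking before a long training run.
```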
> You can freeze variables by adding the freeze_variables option under "train_config" in the pipeline config file. [...]
> In other words, if we do not specify any freeze_variables in the configuration file, the pre-trained model weights will be updated during training by default? [...] (quoting the exchange above)
I guess this is how freezing is done, but I'm not sure the line above freezes the layers you intend. I am training on a large custom dataset that is different from COCO. I couldn't find out how to train the entire model from scratch, so instead I want to freeze the initial layers of ResNet50, since they extract the low-level features that matter for any CNN, and train the remaining ResNet50 layers along with the Faster R-CNN predictors. If you find anything, please let me know.
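For the Faster R-CNN ResNet50 case, the analogous config sketch would target the first-stage scopes. The scope names below are assumptions based on commonly printed variable lists, so print your own variable names first to confirm them:

```
train_config: {
  # Freeze only the early, low-level ResNet50 blocks (assumed scopes):
  freeze_variables: "FirstStageFeatureExtractor/resnet_v1_50/conv1.*"
  freeze_variables: "FirstStageFeatureExtractor/resnet_v1_50/block1.*"
}
```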
> I guess this is how freezing is done, but I'm not sure the line above freezes the layers you intend. [...] If you find anything, please let me know. (quoting the comment above)
Yes, I managed to train the last few convolution layers of MobileNet (conv2d_11 to conv2d_13), followed by BoxPredictor_[0-4] (the classifier). In TensorBoard, under the Histograms tab, I could see that the conv2d_11-13 weights were changing while the others stayed fixed.
> Yes, I managed to train the last few convolution layers of MobileNet (conv2d_11 to conv2d_13), followed by BoxPredictor_[0-4]. [...] (quoting the answer above)
Thanks for the reply. So is this method working? If yes, how do I decide how many ResNet50 layers extract low-level features, so that I can freeze them and train the remaining layers along with the predictor? Could you please share your freezing code? And do you have any idea how to train from scratch, without transfer learning or fine-tuning?
> You can freeze variables by adding the freeze_variables option under "train_config" in the pipeline config file. [...] (quoting @uziela's comment above)
Can anyone tell me how to get "all variable names that can be frozen" by adding print("var.op.name: " + var.op.name)? I added that line to object_detection/utils/variables_helper.py, but how does it get printed? The line sits inside a function that takes parameters, so I don't know how to call it directly. When I run
python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
it doesn't print anything about var.op.name, which means that function is never called. Please tell me how to print all variable names that can be frozen.
> I added print("var.op.name: " + var.op.name) to object_detection/utils/variables_helper.py, but when I run train.py it doesn't print anything about var.op.name. [...] Please tell me how to print all variable names that can be frozen. (quoting the question above)
do this: https://github.com/tensorflow/models/issues/2203#issuecomment-469967344
> @willSapgreen I think you can change your load_all_detection_checkpoint_vars line in the config file to: load_all_detection_checkpoint_vars: false. This way, the model will only load the feature extractor weights.
May I ask a clarifying question, please? When load_all_detection_checkpoint_vars: true, does it mean that all the weights are loaded from the detection checkpoint but the variables are NOT frozen, i.e., they are still trainable? Thanks.
> Please tell me how to print all variable names that can be frozen.
> do this: #2203 (comment)
(quoting the exchange above)
I added the code at that line, but how can I run it? (python model_lib.py, or something like that?) The code sits inside a function, and I don't know how to call that function.
> You can freeze variables by adding the freeze_variables option under "train_config" in the pipeline config file. [...] (quoting @uziela's comment above)
Hello all, I have the same question. I took a pre-trained model (ssd_inception_coco) from the model zoo and have custom data with 2 classes to detect that are completely different from the classes in the COCO dataset. I configured the file as described in the pipeline guide (https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/configuring_jobs.md).
My questions are:
I am confused between transfer learning, training from scratch, and fine-tuning; can someone please elaborate with an example?
TIA
> I added the code at that line, but how can I run it? [...]
@dangthanhhao Just run the training script as you would normally. You can also exit the script right after printing, to see the layers more easily:

```python
for layer_aux in trainable_layers:
    print(layer_aux)
quit()
```
And for @dpbnasika:

> Does the above process count as transfer learning?

From the moment you start using any weights obtained from previous training tasks (a pre-trained model), it is called transfer learning. If you freeze layers, it is still called transfer learning; the only difference is that the weights of the frozen layers won't be updated for your task.

> And if I do not specify any freeze_variables regex and keep everything default, what will happen?

You will update all the weights of all the layers in the model.

> If it updates everything, does it count as transfer learning or training from scratch?

See my answer to the first question.

> How long might this process take when the data is small, say 200 training images and 50 evaluation images, training on an 8 GB Nvidia 1080 graphics card?

It will depend on the resolution of your images, among a lot of other factors that I won't elaborate on here.

- Transfer learning: applying any weights previously obtained from other training tasks.
- Fine-tuning: training your model using transfer learning.
- Training from scratch: all the weights are randomized by a defined initializer function.
> I am trying to detect 90+2 (custom) classes using the Object Detection API. [...] I want to detect all 92 classes at once. Will freeze_variables: ".*FeatureExtractor.*" work for my problem? (quoting @anvesh2953's earlier question)
Hi @anvesh2953, did you find a solution for the above?
> I want to detect all 92 classes at once. Will freeze_variables: ".*FeatureExtractor.*" work for my problem? [...]
I think this should work for you, but you should also change the number of output classes to 92 in the config file.
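That is, in the model section of the pipeline config (a sketch for an SSD model; faster_rcnn has the same field):

```
model {
  ssd {
    # 90 COCO classes + 2 custom classes
    num_classes: 92
  }
}
```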
Hey all, I am calling model_main_tf2.py for training, and in my case model_lib.py is not called; model_lib_v2.py is called instead. Since there is no code handling the freeze_variables configuration in model_lib_v2, I am afraid my layers are not being frozen. Can anyone please help?
Is it still possible to freeze layers in TF2, using model_lib_v2? I am trying very hard to do this in order to reproduce a method from a paper, but I have had no success so far. Apparently the freeze_variables value is not read anymore, so the only way to freeze some layers is to dive into the API code and modify it. Is that right? Can anyone help?
I found one option to do that: go to object_detection/models/keras_models and choose your model. I was working with ResNet50, so I chose resnet_v1.py, went to the resnet_v1_50 method, and updated it as:

```python
print("I am ResNet50")
resnetbaseModel = tf.keras.applications.resnet.ResNet50(
    layers=layers_override, **kwargs)
for layer in resnetbaseModel.layers:
    layer.trainable = False
    print("i am freeze")  # just to confirm each layer is being frozen
return resnetbaseModel
```

You can adapt this to your fine-tuning layers, and you can remove the print statement; it's just there to notify me that I am freezing some layers.
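If you want to freeze only part of the backbone instead of all of it, a variation on the same idea could look like this (the boundary layer name is hypothetical; print resnetbaseModel.layers to pick a real one from your model):

```python
# Freeze every layer up to (but not including) a chosen boundary layer.
FREEZE_UNTIL = "conv3_block1_1_conv"  # hypothetical boundary layer name
trainable = False
for layer in resnetbaseModel.layers:
    if layer.name == FREEZE_UNTIL:
        trainable = True  # from here on, layers remain trainable
    layer.trainable = trainable
```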
For EfficientDet-B2 I get the following layers (output truncated). But how do I know which layers to freeze? If I start training without freezing any layers, it performs worse on the same type of object that it should already be pre-trained on. I don't want it to "forget" what it already knows while still being able to learn new features.
```
[MirroredVariable:{
  0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/depthwise_kernel:0' shape=(3, 3, 112, 1) dtype=float32>
}, MirroredVariable:{
  0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/pointwise_kernel:0' shape=(1, 1, 112, 36) dtype=float32>
}, MirroredVariable:{
  0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/bias:0' shape=(36,) dtype=float32>
}, MirroredVariable:{
  0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/depthwise_kernel:0' shape=(3, 3, 112, 1) dtype=float32>
}, MirroredVariable:{
  0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/pointwise_kernel:0' shape=(1, 1, 112, 9) dtype=float32>
}, MirroredVariable:{
  0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/bias:0' shape=(9,) dtype=float32>
},
[... the dump continues with the same depthwise_kernel / pointwise_kernel /
 bias and BatchNorm gamma/beta entries for conv2d_0 .. conv2d_2 of the
 BoxPredictionTower and ClassPredictionTower, repeated for BatchNorm
 feature_0 .. feature_4; output truncated ...]
```
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/depthwise_kernel:0' shape=(3, 3, 112, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/pointwise_kernel:0' shape=(1, 1, 112, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stem_conv2d/kernel:0' shape=(3, 3, 3, 32) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stem_bn/gamma:0' shape=(32,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stem_bn/beta:0' shape=(32,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_0/depthwise_conv2d/depthwise_kernel:0' shape=(3, 3, 32, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_0/depthwise_bn/gamma:0' shape=(32,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_0/depthwise_bn/beta:0' shape=(32,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_0/se_reduce_conv2d/kernel:0' shape=(1, 1, 32, 8) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_0/se_reduce_conv2d/bias:0' shape=(8,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_0/se_expand_conv2d/kernel:0' shape=(1, 1, 8, 32) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_0/se_expand_conv2d/bias:0' shape=(32,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_0/project_conv2d/kernel:0' shape=(1, 1, 32, 16) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_0/project_bn/gamma:0' shape=(16,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_0/project_bn/beta:0' shape=(16,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_1/depthwise_conv2d/depthwise_kernel:0' shape=(3, 3, 16, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_1/depthwise_bn/gamma:0' shape=(16,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_1/depthwise_bn/beta:0' shape=(16,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_1/se_reduce_conv2d/kernel:0' shape=(1, 1, 16, 4) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_1/se_reduce_conv2d/bias:0' shape=(4,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_1/se_expand_conv2d/kernel:0' shape=(1, 1, 4, 16) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_1/se_expand_conv2d/bias:0' shape=(16,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_1/project_conv2d/kernel:0' shape=(1, 1, 16, 16) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_1/project_bn/gamma:0' shape=(16,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_0/block_1/project_bn/beta:0' shape=(16,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/expand_conv2d/kernel:0' shape=(1, 1, 16, 96) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/expand_bn/gamma:0' shape=(96,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/expand_bn/beta:0' shape=(96,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/depthwise_conv2d/depthwise_kernel:0' shape=(3, 3, 96, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/depthwise_bn/gamma:0' shape=(96,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/depthwise_bn/beta:0' shape=(96,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/se_reduce_conv2d/kernel:0' shape=(1, 1, 96, 4) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/se_reduce_conv2d/bias:0' shape=(4,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/se_expand_conv2d/kernel:0' shape=(1, 1, 4, 96) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/se_expand_conv2d/bias:0' shape=(96,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/project_conv2d/kernel:0' shape=(1, 1, 96, 24) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/project_bn/gamma:0' shape=(24,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_0/project_bn/beta:0' shape=(24,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/expand_conv2d/kernel:0' shape=(1, 1, 24, 144) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/expand_bn/gamma:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/expand_bn/beta:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/depthwise_conv2d/depthwise_kernel:0' shape=(3, 3, 144, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/depthwise_bn/gamma:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/depthwise_bn/beta:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/se_reduce_conv2d/kernel:0' shape=(1, 1, 144, 6) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/se_reduce_conv2d/bias:0' shape=(6,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/se_expand_conv2d/kernel:0' shape=(1, 1, 6, 144) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/se_expand_conv2d/bias:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/project_conv2d/kernel:0' shape=(1, 1, 144, 24) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/project_bn/gamma:0' shape=(24,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_1/project_bn/beta:0' shape=(24,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/expand_conv2d/kernel:0' shape=(1, 1, 24, 144) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/expand_bn/gamma:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/expand_bn/beta:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/depthwise_conv2d/depthwise_kernel:0' shape=(3, 3, 144, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/depthwise_bn/gamma:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/depthwise_bn/beta:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/se_reduce_conv2d/kernel:0' shape=(1, 1, 144, 6) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/se_reduce_conv2d/bias:0' shape=(6,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/se_expand_conv2d/kernel:0' shape=(1, 1, 6, 144) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/se_expand_conv2d/bias:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/project_conv2d/kernel:0' shape=(1, 1, 144, 24) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/project_bn/gamma:0' shape=(24,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_1/block_2/project_bn/beta:0' shape=(24,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/expand_conv2d/kernel:0' shape=(1, 1, 24, 144) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/expand_bn/gamma:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/expand_bn/beta:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/depthwise_conv2d/depthwise_kernel:0' shape=(5, 5, 144, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/depthwise_bn/gamma:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/depthwise_bn/beta:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/se_reduce_conv2d/kernel:0' shape=(1, 1, 144, 6) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/se_reduce_conv2d/bias:0' shape=(6,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/se_expand_conv2d/kernel:0' shape=(1, 1, 6, 144) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/se_expand_conv2d/bias:0' shape=(144,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/project_conv2d/kernel:0' shape=(1, 1, 144, 48) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/project_bn/gamma:0' shape=(48,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_0/project_bn/beta:0' shape=(48,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/expand_conv2d/kernel:0' shape=(1, 1, 48, 288) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/expand_bn/gamma:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/expand_bn/beta:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/depthwise_conv2d/depthwise_kernel:0' shape=(5, 5, 288, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/depthwise_bn/gamma:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/depthwise_bn/beta:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/se_reduce_conv2d/kernel:0' shape=(1, 1, 288, 12) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/se_reduce_conv2d/bias:0' shape=(12,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/se_expand_conv2d/kernel:0' shape=(1, 1, 12, 288) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/se_expand_conv2d/bias:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/project_conv2d/kernel:0' shape=(1, 1, 288, 48) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/project_bn/gamma:0' shape=(48,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_1/project_bn/beta:0' shape=(48,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/expand_conv2d/kernel:0' shape=(1, 1, 48, 288) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/expand_bn/gamma:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/expand_bn/beta:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/depthwise_conv2d/depthwise_kernel:0' shape=(5, 5, 288, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/depthwise_bn/gamma:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/depthwise_bn/beta:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/se_reduce_conv2d/kernel:0' shape=(1, 1, 288, 12) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/se_reduce_conv2d/bias:0' shape=(12,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/se_expand_conv2d/kernel:0' shape=(1, 1, 12, 288) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/se_expand_conv2d/bias:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/project_conv2d/kernel:0' shape=(1, 1, 288, 48) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/project_bn/gamma:0' shape=(48,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_2/block_2/project_bn/beta:0' shape=(48,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/expand_conv2d/kernel:0' shape=(1, 1, 48, 288) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/expand_bn/gamma:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/expand_bn/beta:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/depthwise_conv2d/depthwise_kernel:0' shape=(3, 3, 288, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/depthwise_bn/gamma:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/depthwise_bn/beta:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/se_reduce_conv2d/kernel:0' shape=(1, 1, 288, 12) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/se_reduce_conv2d/bias:0' shape=(12,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/se_expand_conv2d/kernel:0' shape=(1, 1, 12, 288) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/se_expand_conv2d/bias:0' shape=(288,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/project_conv2d/kernel:0' shape=(1, 1, 288, 88) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/project_bn/gamma:0' shape=(88,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_0/project_bn/beta:0' shape=(88,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/expand_conv2d/kernel:0' shape=(1, 1, 88, 528) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/expand_bn/gamma:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/expand_bn/beta:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/depthwise_conv2d/depthwise_kernel:0' shape=(3, 3, 528, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/depthwise_bn/gamma:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/depthwise_bn/beta:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/se_reduce_conv2d/kernel:0' shape=(1, 1, 528, 22) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/se_reduce_conv2d/bias:0' shape=(22,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/se_expand_conv2d/kernel:0' shape=(1, 1, 22, 528) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/se_expand_conv2d/bias:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/project_conv2d/kernel:0' shape=(1, 1, 528, 88) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/project_bn/gamma:0' shape=(88,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_1/project_bn/beta:0' shape=(88,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/expand_conv2d/kernel:0' shape=(1, 1, 88, 528) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/expand_bn/gamma:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/expand_bn/beta:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/depthwise_conv2d/depthwise_kernel:0' shape=(3, 3, 528, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/depthwise_bn/gamma:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/depthwise_bn/beta:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/se_reduce_conv2d/kernel:0' shape=(1, 1, 528, 22) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/se_reduce_conv2d/bias:0' shape=(22,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/se_expand_conv2d/kernel:0' shape=(1, 1, 22, 528) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/se_expand_conv2d/bias:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/project_conv2d/kernel:0' shape=(1, 1, 528, 88) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/project_bn/gamma:0' shape=(88,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_2/project_bn/beta:0' shape=(88,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/expand_conv2d/kernel:0' shape=(1, 1, 88, 528) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/expand_bn/gamma:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/expand_bn/beta:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/depthwise_conv2d/depthwise_kernel:0' shape=(3, 3, 528, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/depthwise_bn/gamma:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/depthwise_bn/beta:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/se_reduce_conv2d/kernel:0' shape=(1, 1, 528, 22) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/se_reduce_conv2d/bias:0' shape=(22,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/se_expand_conv2d/kernel:0' shape=(1, 1, 22, 528) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/se_expand_conv2d/bias:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/project_conv2d/kernel:0' shape=(1, 1, 528, 88) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/project_bn/gamma:0' shape=(88,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_3/block_3/project_bn/beta:0' shape=(88,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/expand_conv2d/kernel:0' shape=(1, 1, 88, 528) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/expand_bn/gamma:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/expand_bn/beta:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/depthwise_conv2d/depthwise_kernel:0' shape=(5, 5, 528, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/depthwise_bn/gamma:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/depthwise_bn/beta:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/se_reduce_conv2d/kernel:0' shape=(1, 1, 528, 22) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/se_reduce_conv2d/bias:0' shape=(22,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/se_expand_conv2d/kernel:0' shape=(1, 1, 22, 528) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/se_expand_conv2d/bias:0' shape=(528,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/project_conv2d/kernel:0' shape=(1, 1, 528, 120) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/project_bn/gamma:0' shape=(120,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_0/project_bn/beta:0' shape=(120,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/expand_conv2d/kernel:0' shape=(1, 1, 120, 720) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/expand_bn/gamma:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/expand_bn/beta:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/depthwise_conv2d/depthwise_kernel:0' shape=(5, 5, 720, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/depthwise_bn/gamma:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/depthwise_bn/beta:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/se_reduce_conv2d/kernel:0' shape=(1, 1, 720, 30) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/se_reduce_conv2d/bias:0' shape=(30,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/se_expand_conv2d/kernel:0' shape=(1, 1, 30, 720) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/se_expand_conv2d/bias:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/project_conv2d/kernel:0' shape=(1, 1, 720, 120) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/project_bn/gamma:0' shape=(120,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_1/project_bn/beta:0' shape=(120,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/expand_conv2d/kernel:0' shape=(1, 1, 120, 720) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/expand_bn/gamma:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/expand_bn/beta:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/depthwise_conv2d/depthwise_kernel:0' shape=(5, 5, 720, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/depthwise_bn/gamma:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/depthwise_bn/beta:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/se_reduce_conv2d/kernel:0' shape=(1, 1, 720, 30) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/se_reduce_conv2d/bias:0' shape=(30,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/se_expand_conv2d/kernel:0' shape=(1, 1, 30, 720) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/se_expand_conv2d/bias:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/project_conv2d/kernel:0' shape=(1, 1, 720, 120) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/project_bn/gamma:0' shape=(120,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_2/project_bn/beta:0' shape=(120,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/expand_conv2d/kernel:0' shape=(1, 1, 120, 720) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/expand_bn/gamma:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/expand_bn/beta:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/depthwise_conv2d/depthwise_kernel:0' shape=(5, 5, 720, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/depthwise_bn/gamma:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/depthwise_bn/beta:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/se_reduce_conv2d/kernel:0' shape=(1, 1, 720, 30) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/se_reduce_conv2d/bias:0' shape=(30,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/se_expand_conv2d/kernel:0' shape=(1, 1, 30, 720) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/se_expand_conv2d/bias:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/project_conv2d/kernel:0' shape=(1, 1, 720, 120) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/project_bn/gamma:0' shape=(120,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_4/block_3/project_bn/beta:0' shape=(120,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/expand_conv2d/kernel:0' shape=(1, 1, 120, 720) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/expand_bn/gamma:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/expand_bn/beta:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/depthwise_conv2d/depthwise_kernel:0' shape=(5, 5, 720, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/depthwise_bn/gamma:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/depthwise_bn/beta:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/se_reduce_conv2d/kernel:0' shape=(1, 1, 720, 30) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/se_reduce_conv2d/bias:0' shape=(30,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/se_expand_conv2d/kernel:0' shape=(1, 1, 30, 720) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/se_expand_conv2d/bias:0' shape=(720,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/project_conv2d/kernel:0' shape=(1, 1, 720, 208) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/project_bn/gamma:0' shape=(208,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_0/project_bn/beta:0' shape=(208,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/expand_conv2d/kernel:0' shape=(1, 1, 208, 1248) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/expand_bn/gamma:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/expand_bn/beta:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/depthwise_conv2d/depthwise_kernel:0' shape=(5, 5, 1248, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/depthwise_bn/gamma:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/depthwise_bn/beta:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/se_reduce_conv2d/kernel:0' shape=(1, 1, 1248, 52) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/se_reduce_conv2d/bias:0' shape=(52,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/se_expand_conv2d/kernel:0' shape=(1, 1, 52, 1248) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/se_expand_conv2d/bias:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/project_conv2d/kernel:0' shape=(1, 1, 1248, 208) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/project_bn/gamma:0' shape=(208,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_1/project_bn/beta:0' shape=(208,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/expand_conv2d/kernel:0' shape=(1, 1, 208, 1248) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/expand_bn/gamma:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/expand_bn/beta:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/depthwise_conv2d/depthwise_kernel:0' shape=(5, 5, 1248, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/depthwise_bn/gamma:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/depthwise_bn/beta:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/se_reduce_conv2d/kernel:0' shape=(1, 1, 1248, 52) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/se_reduce_conv2d/bias:0' shape=(52,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/se_expand_conv2d/kernel:0' shape=(1, 1, 52, 1248) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/se_expand_conv2d/bias:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/project_conv2d/kernel:0' shape=(1, 1, 1248, 208) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/project_bn/gamma:0' shape=(208,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_2/project_bn/beta:0' shape=(208,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/expand_conv2d/kernel:0' shape=(1, 1, 208, 1248) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/expand_bn/gamma:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/expand_bn/beta:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/depthwise_conv2d/depthwise_kernel:0' shape=(5, 5, 1248, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/depthwise_bn/gamma:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/depthwise_bn/beta:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/se_reduce_conv2d/kernel:0' shape=(1, 1, 1248, 52) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/se_reduce_conv2d/bias:0' shape=(52,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/se_expand_conv2d/kernel:0' shape=(1, 1, 52, 1248) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/se_expand_conv2d/bias:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/project_conv2d/kernel:0' shape=(1, 1, 1248, 208) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/project_bn/gamma:0' shape=(208,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_3/project_bn/beta:0' shape=(208,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/expand_conv2d/kernel:0' shape=(1, 1, 208, 1248) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/expand_bn/gamma:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/expand_bn/beta:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/depthwise_conv2d/depthwise_kernel:0' shape=(5, 5, 1248, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/depthwise_bn/gamma:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/depthwise_bn/beta:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/se_reduce_conv2d/kernel:0' shape=(1, 1, 1248, 52) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/se_reduce_conv2d/bias:0' shape=(52,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/se_expand_conv2d/kernel:0' shape=(1, 1, 52, 1248) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/se_expand_conv2d/bias:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/project_conv2d/kernel:0' shape=(1, 1, 1248, 208) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/project_bn/gamma:0' shape=(208,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_5/block_4/project_bn/beta:0' shape=(208,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/expand_conv2d/kernel:0' shape=(1, 1, 208, 1248) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/expand_bn/gamma:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/expand_bn/beta:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/depthwise_conv2d/depthwise_kernel:0' shape=(3, 3, 1248, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/depthwise_bn/gamma:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/depthwise_bn/beta:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/se_reduce_conv2d/kernel:0' shape=(1, 1, 1248, 52) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/se_reduce_conv2d/bias:0' shape=(52,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/se_expand_conv2d/kernel:0' shape=(1, 1, 52, 1248) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/se_expand_conv2d/bias:0' shape=(1248,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/project_conv2d/kernel:0' shape=(1, 1, 1248, 352) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/project_bn/gamma:0' shape=(352,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_0/project_bn/beta:0' shape=(352,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/expand_conv2d/kernel:0' shape=(1, 1, 352, 2112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/expand_bn/gamma:0' shape=(2112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/expand_bn/beta:0' shape=(2112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/depthwise_conv2d/depthwise_kernel:0' shape=(3, 3, 2112, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/depthwise_bn/gamma:0' shape=(2112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/depthwise_bn/beta:0' shape=(2112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/se_reduce_conv2d/kernel:0' shape=(1, 1, 2112, 88) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/se_reduce_conv2d/bias:0' shape=(88,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/se_expand_conv2d/kernel:0' shape=(1, 1, 88, 2112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/se_expand_conv2d/bias:0' shape=(2112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/project_conv2d/kernel:0' shape=(1, 1, 2112, 352) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/project_bn/gamma:0' shape=(352,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'stack_6/block_1/project_bn/beta:0' shape=(352,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'top_conv2d/kernel:0' shape=(1, 1, 352, 1408) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'top_bn/gamma:0' shape=(1408,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'top_bn/beta:0' shape=(1408,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'logits/kernel:0' shape=(1408, 1000) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'logits/bias:0' shape=(1000,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/kernel:0' shape=(1, 1, 352, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/kernel:0' shape=(1, 1, 352, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/kernel:0' shape=(1, 1, 120, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/kernel:0' shape=(1, 1, 48, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/kernel:0' shape=(1, 1, 120, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/kernel:0' shape=(1, 1, 352, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/beta:0' shape=(112,) dtype=float32>
},
...
MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_35/5_dn_lvl_5/post_combine/separable_conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_35/5_dn_lvl_5/post_combine/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_35/5_dn_lvl_5/post_combine/batchnorm/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_36/5_dn_lvl_4/post_combine/separable_conv/depthwise_kernel:0' shape=(3, 3, 112, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_36/5_dn_lvl_4/post_combine/separable_conv/pointwise_kernel:0' shape=(1, 1, 112, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_36/5_dn_lvl_4/post_combine/separable_conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_36/5_dn_lvl_4/post_combine/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_36/5_dn_lvl_4/post_combine/batchnorm/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_37/5_dn_lvl_3/post_combine/separable_conv/depthwise_kernel:0' shape=(3, 3, 112, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_37/5_dn_lvl_3/post_combine/separable_conv/pointwise_kernel:0' shape=(1, 1, 112, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_37/5_dn_lvl_3/post_combine/separable_conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_37/5_dn_lvl_3/post_combine/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_37/5_dn_lvl_3/post_combine/batchnorm/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_38/5_up_lvl_4/post_combine/separable_conv/depthwise_kernel:0' shape=(3, 3, 112, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_38/5_up_lvl_4/post_combine/separable_conv/pointwise_kernel:0' shape=(1, 1, 112, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_38/5_up_lvl_4/post_combine/separable_conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_38/5_up_lvl_4/post_combine/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_38/5_up_lvl_4/post_combine/batchnorm/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_39/5_up_lvl_5/post_combine/separable_conv/depthwise_kernel:0' shape=(3, 3, 112, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_39/5_up_lvl_5/post_combine/separable_conv/pointwise_kernel:0' shape=(1, 1, 112, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_39/5_up_lvl_5/post_combine/separable_conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_39/5_up_lvl_5/post_combine/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_39/5_up_lvl_5/post_combine/batchnorm/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_40/5_up_lvl_6/post_combine/separable_conv/depthwise_kernel:0' shape=(3, 3, 112, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_40/5_up_lvl_6/post_combine/separable_conv/pointwise_kernel:0' shape=(1, 1, 112, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_40/5_up_lvl_6/post_combine/separable_conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_40/5_up_lvl_6/post_combine/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_40/5_up_lvl_6/post_combine/batchnorm/beta:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_41/5_up_lvl_7/post_combine/separable_conv/depthwise_kernel:0' shape=(3, 3, 112, 1) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_41/5_up_lvl_7/post_combine/separable_conv/pointwise_kernel:0' shape=(1, 1, 112, 112) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_41/5_up_lvl_7/post_combine/separable_conv/bias:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_41/5_up_lvl_7/post_combine/batchnorm/gamma:0' shape=(112,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'EfficientDet-D2/bifpn/node_41/5_up_lvl_7/post_combine/batchnorm/beta:0' shape=(112,) dtype=float32>
}]
FYI, for training from a general config file (as in the tutorial) I had to go into the eager_train_step() method in model_lib_v2.py and add print(trainable_variables) just under its definition on line 309. For SSD ResNet50 v1 FPN, this gave the following output:
[MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/kernel:0' shape=(3, 3, 256, 24) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/bias:0' shape=(24,) dtype=float32>
},
...
MirroredVariable:{
0: <tf.Variable 'ResNet50V1_FPN/FeatureMaps/top_down/smoothing_1_batchnorm/gamma:0' shape=(256,) dtype=float32>
}, MirroredVariable:{
0: <tf.Variable 'ResNet50V1_FPN/FeatureMaps/top_down/smoothing_1_batchnorm/beta:0' shape=(256,) dtype=float32>
}]
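For reference, the change amounts to a single added line (a minimal sketch: detection_model.trainable_variables is where that function collected the variables in the version I used, and the exact line number will drift across versions of model_lib_v2.py):

# object_detection/model_lib_v2.py, inside eager_train_step():
trainable_variables = detection_model.trainable_variables  # already present in the function
print(trainable_variables)  # added: dumps every variable the optimizer will update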
Is it still possible to freeze layers in TF2 using model_lib_v2? I am trying very hard to do this in order to reproduce a method from a paper, but I have had no success so far. Apparently the freeze_variables value is no longer being read, so the only way to freeze some layers is to dive into the API code and modify it. Is that right? Can anyone help?
I also think freeze_variables (along with other parameters) is no longer being accessed. I opened a new issue here. Trying to find a hacky way to implement it for the time being.
Edit: found a temporary solution and wrote it here
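Until a proper fix lands, here is a rough sketch of the kind of hack I mean (the patterns below are hypothetical examples taken from the variable dump above, and the insertion point inside eager_train_step() is my assumption, not an official API mechanism):

import re

# Hypothetical patterns: freeze the EfficientNet backbone blocks listed above.
FREEZE_PATTERNS = [r"^stem_", r"^stack_"]

def remove_frozen(variables, patterns=FREEZE_PATTERNS):
    # Drop any variable whose name matches a freeze pattern, so that
    # tape.gradient() never computes or applies gradients for it.
    return [v for v in variables
            if not any(re.search(p, v.name) for p in patterns)]

# In model_lib_v2.py's eager_train_step(), filter before the gradient step:
#   trainable_variables = remove_frozen(detection_model.trainable_variables)
#   gradients = tape.gradient(total_loss, trainable_variables)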
Are you sure just deleting the fine_tune_checkpoint and from_detection_checkpoint fields is the way to do it? I found two helpful sources, and my understanding is that you can freeze the weights of specific layers by giving a string with the layer name there. With this hint: https://github.com/tensorflow/models/issues/2203#issuecomment-882113439 you can see which layer weights you can use, for example:
train_config: {
  batch_size: 1
  freeze_variables: ["resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/weights"]
}
So for whichever model you choose, you have to find the layers you want to freeze.
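If it helps, my reading of filter_variables in object_detection/utils/variables_helper.py (treat this as an assumption, not documented behavior) is that each freeze_variables entry is applied as a regular expression against the variable's op name, roughly like this:

import re

patterns = ["resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/weights"]

def is_frozen(var_name):
    # A variable is frozen if any freeze_variables pattern matches its name.
    return any(re.search(p, var_name) for p in patterns)

print(is_frozen("resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/weights"))  # True
print(is_frozen("resnet_v1_50/block2/unit_1/bottleneck_v1/conv1/weights"))  # False

So a broader pattern like "resnet_v1_50/block1" would freeze every variable under that block.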
There are a few types of transfer learning. When using the TF Object Detection API with models already trained on the COCO dataset, do we fine-tune the weights of the feature extractor? Sometimes we can just keep the feature extractor's weights fixed while training only the predictor layers.