xingyizhou / CenterNet

Object detection, 3D detection, and pose estimation using center point detection:

How to get the network structure? #43

Open wangshankun opened 5 years ago

wangshankun commented 5 years ago

This is not an issue with the project itself, just a problem I ran into while studying it; I don't know whether anyone else has hit it. When I open a model saved with torch.save in Netron, no structured graph is displayed. ctdet_coco_dla_1x.pth itself contains only weights, so its structure cannot be viewed at all. Is there a good way to see the network structure?
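For context: a .pth written with torch.save(model.state_dict()) contains only tensors, so there is no graph for Netron to draw. One workaround is to rebuild the model in code and export it to ONNX, which records the traced structure. A minimal illustrative sketch (the stand-in model and file names are placeholders, not CenterNet itself):

```python
import torch
import torch.nn as nn

# Stand-in model; for CenterNet you would instantiate the real network
# (see src/lib/models) and load the checkpoint weights into it first.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 16, 3))
model.eval()

# Tracing the forward pass records the graph, which Netron can render.
dummy = torch.randn(1, 3, 512, 512)
torch.onnx.export(model, dummy, "model.onnx")
```

Note that backbones using deformable convolutions (e.g. the DLA models) may not export to ONNX out of the box.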

YU-FAITH commented 5 years ago

Can you speak English?

Sundrops commented 5 years ago

Here is the Caffe prototxt of Centernet_dlav0_34. If you want Caffe prototxts of CenterNet with other backbones, you can convert the PyTorch model to a Caffe model via PytorchToCaffe.
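For reference, the converter is driven roughly like this, following the example script in the PytorchToCaffe repo (example/resnet_pytorch_2_caffe.py; the function names are from that repo's example and should be double-checked there):

```python
import torch
from torchvision.models import resnet18
import pytorch_to_caffe  # module from the PytorchToCaffe repo

name = 'resnet18'
net = resnet18()
net.eval()

# Trace a forward pass and translate each op into a Caffe layer.
dummy = torch.ones([1, 3, 224, 224])
pytorch_to_caffe.trans_net(net, dummy, name)
pytorch_to_caffe.save_prototxt('{}.prototxt'.format(name))
pytorch_to_caffe.save_caffemodel('{}.caffemodel'.format(name))
```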

wangshankun commented 5 years ago

> Can you speak English?

How do I show the model structure? Same as https://github.com/xingyizhou/CenterNet/issues/40.

wangshankun commented 5 years ago

> Here is the Caffe prototxt of Centernet_dlav0_34. If you want Caffe prototxts of CenterNet with other backbones, you can convert the PyTorch model to a Caffe model via PytorchToCaffe.

Thanks for your effort!

Sundrops commented 5 years ago

It may be more convenient for you to study with the Caffe prototxt of Centernet_res_18.

Tim5Tang commented 5 years ago

@Sundrops I used PytorchToCaffe to convert CenterNet but it failed. Did you run into the following problem?

```
Traceback (most recent call last):
  File "example/resnet_pytorch_2_caffe.py", line 13, in <module>
    resnet18.load_state_dict(checkpoint)
  File "/home/*/anaconda3/envs/CenterNet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 719, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ResNet:
  Missing key(s) in state_dict: "conv1.weight", "bn1.weight", "bn1.bias", "bn1.running_mean", "bn1.running_var",
  "layer1.0.conv1.weight", "layer1.0.bn1.weight", "layer1.0.bn1.bias", "layer1.0.bn1.running_mean", "layer1.0.bn1.running_var", "layer1.0.conv2.weight", "layer1.0.bn2.weight", "layer1.0.bn2.bias", "layer1.0.bn2.running_mean", "layer1.0.bn2.running_var",
  "layer1.1.conv1.weight", "layer1.1.bn1.weight", "layer1.1.bn1.bias", "layer1.1.bn1.running_mean", "layer1.1.bn1.running_var", "layer1.1.conv2.weight", "layer1.1.bn2.weight", "layer1.1.bn2.bias", "layer1.1.bn2.running_mean", "layer1.1.bn2.running_var",
  "layer2.0.conv1.weight", "layer2.0.bn1.weight", "layer2.0.bn1.bias", "layer2.0.bn1.running_mean", "layer2.0.bn1.running_var", "layer2.0.conv2.weight", "layer2.0.bn2.weight", "layer2.0.bn2.bias", "layer2.0.bn2.running_mean", "layer2.0.bn2.running_var", "layer2.0.downsample.0.weight", "layer2.0.downsample.1.weight", "layer2.0.downsample.1.bias", "layer2.0.downsample.1.running_mean", "layer2.0.downsample.1.running_var",
  "layer2.1.conv1.weight", "layer2.1.bn1.weight", "layer2.1.bn1.bias", "layer2.1.bn1.running_mean", "layer2.1.bn1.running_var", "layer2.1.conv2.weight", "layer2.1.bn2.weight", "layer2.1.bn2.bias", "layer2.1.bn2.running_mean", "layer2.1.bn2.running_var",
  "layer3.0.conv1.weight", "layer3.0.bn1.weight", "layer3.0.bn1.bias", "layer3.0.bn1.running_mean", "layer3.0.bn1.running_var", "layer3.0.conv2.weight", "layer3.0.bn2.weight", "layer3.0.bn2.bias", "layer3.0.bn2.running_mean", "layer3.0.bn2.running_var", "layer3.0.downsample.0.weight", "layer3.0.downsample.1.weight", "layer3.0.downsample.1.bias", "layer3.0.downsample.1.running_mean", "layer3.0.downsample.1.running_var",
  "layer3.1.conv1.weight", "layer3.1.bn1.weight", "layer3.1.bn1.bias", "layer3.1.bn1.running_mean", "layer3.1.bn1.running_var", "layer3.1.conv2.weight", "layer3.1.bn2.weight", "layer3.1.bn2.bias", "layer3.1.bn2.running_mean", "layer3.1.bn2.running_var",
  "layer4.0.conv1.weight", "layer4.0.bn1.weight", "layer4.0.bn1.bias", "layer4.0.bn1.running_mean", "layer4.0.bn1.running_var", "layer4.0.conv2.weight", "layer4.0.bn2.weight", "layer4.0.bn2.bias", "layer4.0.bn2.running_mean", "layer4.0.bn2.running_var", "layer4.0.downsample.0.weight", "layer4.0.downsample.1.weight", "layer4.0.downsample.1.bias", "layer4.0.downsample.1.running_mean", "layer4.0.downsample.1.running_var",
  "layer4.1.conv1.weight", "layer4.1.bn1.weight", "layer4.1.bn1.bias", "layer4.1.bn1.running_mean", "layer4.1.bn1.running_var", "layer4.1.conv2.weight", "layer4.1.bn2.weight", "layer4.1.bn2.bias", "layer4.1.bn2.running_mean", "layer4.1.bn2.running_var",
  "fc.weight", "fc.bias".
  Unexpected key(s) in state_dict: "epoch", "state_dict".
```

Sundrops commented 5 years ago

No. Maybe you can try `resnet18.load_state_dict(checkpoint["state_dict"])`.
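The error message itself shows what is wrong: CenterNet training checkpoints wrap the weights in a dict together with metadata, hence the unexpected keys "epoch" and "state_dict". A minimal sketch of the fix (the checkpoint path is illustrative):

```python
import torch
from torchvision.models import resnet18

# The checkpoint is a wrapper dict, not a bare state_dict.
checkpoint = torch.load("ctdet_coco_resdcn18.pth", map_location="cpu")
print(checkpoint.keys())  # typically dict_keys(['epoch', 'state_dict'])

model = resnet18()
# Unwrap the nested weights; strict=False tolerates CenterNet-specific
# layers (deconv/heads) that a plain torchvision ResNet does not have.
model.load_state_dict(checkpoint["state_dict"], strict=False)
```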

monoloxo commented 5 years ago

There is a tool on GitHub called Netron that can view the network structure of models from mainstream frameworks online.
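Netron also ships as a Python package; a quick sketch of launching it locally (file path illustrative, and note that, as mentioned below, a weights-only .pth will still show little structure):

```python
import netron

# Starts a local web server and opens the model in the browser.
netron.start("model.onnx")
```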

XzlXcc commented 5 years ago

> There is a tool on GitHub called Netron that can view the network structure of models from mainstream frameworks online.

Have you visualized it with that tool?

wangshankun commented 5 years ago

> There is a tool on GitHub called Netron that can view the network structure of models from mainstream frameworks online.

Netron was the first thing I tried. ctdet_coco_dla_1x.pth cannot be displayed properly by torchsummary, tensorboardX, make_dot, or Netron; at best only some of the nodes are shown.

djshen commented 5 years ago

I visualized ctdet_coco_dla_2x by directly modifying the PyTorch and CenterNet code.

XzlXcc commented 5 years ago

> I visualized ctdet_coco_dla_2x by directly modifying the PyTorch and CenterNet code.

Hi, which tool did you use to visualize it?

djshen commented 5 years ago

I modified torch/nn/modules/module.py. Specifically, I added some code to the `Module.__call__` function to trace the inputs and outputs of the modules (storing each tensor or module and giving it a unique name). Then I print the connections between inputs and modules in DOT format and visualize the result with Graphviz.
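djshen patched `Module.__call__` inside PyTorch itself; a similar trace can be collected without touching the PyTorch source by using forward hooks. A rough, illustrative sketch (naming tensors by `id()` is simplistic but enough to demonstrate the idea):

```python
import torch
import torch.nn as nn

edges, names = [], {}

def tensor_name(t):
    # Give each tensor a unique name keyed by object identity.
    return names.setdefault(id(t), "T_{}".format(len(names)))

def make_hook(mod_name):
    def hook(module, inputs, output):
        # Record edges: input tensors -> module -> output tensor.
        for inp in inputs:
            if torch.is_tensor(inp):
                edges.append((tensor_name(inp), mod_name))
        if torch.is_tensor(output):
            edges.append((mod_name, tensor_name(output)))
    return hook

# Stand-in model; the same hooks work on the real CenterNet network.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
for name, module in model.named_modules():
    if name:  # skip the root container
        module.register_forward_hook(make_hook(name))

model(torch.randn(1, 3, 32, 32))

# Emit the connections in DOT format; render with `dot -Tpng`.
print("digraph G {")
for src, dst in edges:
    print('  "{}" -> "{}";'.format(src, dst))
print("}")
```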

2018newyangyu commented 5 years ago

> I modified torch/nn/modules/module.py. Specifically, I added some code to the `Module.__call__` function to trace the inputs and outputs of the modules (storing each tensor or module and giving it a unique name). Then I print the connections between inputs and modules in DOT format and visualize the result with Graphviz.

@djshen Can you visualize the hourglass?

djshen commented 5 years ago

@2018newyangyu https://i.imgur.com/0GVHoDU.png

JacobYuan7 commented 5 years ago

@djshen Sorry to interrupt, but I can't open this link even through a VPN. Could you offer a new link?

djshen commented 5 years ago

@2018newyangyu

Click ![](https://i.imgur.com/0GVHoDU.png)

monoloxo commented 4 years ago

@lsccccc Sorry, I can't provide more help; I only used it to view the network graph of YOLO.

liukaigua commented 4 years ago

> @lsccccc Sorry, I can't provide more help; I only used it to view the network graph of YOLO.

Thank you anyway.

zjp99 commented 4 years ago

> I modified torch/nn/modules/module.py. Specifically, I added some code to the `Module.__call__` function to trace the inputs and outputs of the modules (storing each tensor or module and giving it a unique name). Then I print the connections between inputs and modules in DOT format and visualize the result with Graphviz.

Can you show your code?

clemente0731 commented 4 years ago

> Here is the Caffe prototxt of Centernet_dlav0_34. If you want Caffe prototxts of CenterNet with other backbones, you can convert the PyTorch model to a Caffe model via PytorchToCaffe.

THANKS FOR YOUR WORK!!! :)

Vankeee commented 4 years ago

@djshen Thanks for sharing the pics! I have a question about the nodes: what does P_weight_1 mean? (Sorry, I am a newbie.)

djshen commented 4 years ago

> @djshen Thanks for sharing the pics! I have a question about the nodes: what does P_weight_1 mean? (Sorry, I am a newbie.)

In this graph, a node whose name starts with "P_" is a parameter and is passed to the corresponding operation, such as a convolution.

Vankeee commented 4 years ago

@djshen Thanks, I see. So P_weight_1 is the weight parameter of the first convolution layer.