lgarithm / crystalnet

crystalnet -- a mini core AI library (being refactored, see https://github.com/lgarithm/stdnn-ops)
MIT License

load pre-trained vgg16 model #31

Closed lgarithm closed 6 years ago

lgarithm commented 6 years ago
1        conv4_3_W                float32                  2359296          (3, 3, 512, 512)
2        conv5_1_b                float32                  512              (512,)
3        conv1_2_b                float32                  64               (64,)
4        conv5_2_b                float32                  512              (512,)
5        conv1_1_W                float32                  1728             (3, 3, 3, 64)
6        conv5_3_b                float32                  512              (512,)
7        conv5_2_W                float32                  2359296          (3, 3, 512, 512)
8        conv5_3_W                float32                  2359296          (3, 3, 512, 512)
9        conv1_1_b                float32                  64               (64,)
10       fc7_b                    float32                  4096             (4096,)
11       conv5_1_W                float32                  2359296          (3, 3, 512, 512)
12       conv1_2_W                float32                  36864            (3, 3, 64, 64)
13       conv3_2_W                float32                  589824           (3, 3, 256, 256)
14       conv4_2_b                float32                  512              (512,)
15       conv4_1_b                float32                  512              (512,)
16       conv3_3_W                float32                  589824           (3, 3, 256, 256)
17       conv2_1_b                float32                  128              (128,)
18       conv3_1_b                float32                  256              (256,)
19       conv2_2_W                float32                  147456           (3, 3, 128, 128)
20       fc6_b                    float32                  4096             (4096,)
21       fc8_b                    float32                  1000             (1000,)
22       conv4_3_b                float32                  512              (512,)
23       conv2_2_b                float32                  128              (128,)
24       fc6_W                    float32                  102760448        (25088, 4096)
25       fc8_W                    float32                  4096000          (4096, 1000)
26       fc7_W                    float32                  16777216         (4096, 4096)
27       conv3_2_b                float32                  256              (256,)
28       conv4_2_W                float32                  2359296          (3, 3, 512, 512)
29       conv3_3_b                float32                  256              (256,)
30       conv3_1_W                float32                  294912           (3, 3, 128, 256)
31       conv2_1_W                float32                  73728            (3, 3, 64, 128)
32       conv4_1_W                float32                  1179648          (3, 3, 256, 512)
total dims: 138357544
#!/usr/bin/env python3
#
# inspect vgg16_weights.npz
#
from functools import reduce
import operator as op

import numpy as np

ws = np.load('vgg16_weights.npz')
tot_dim = 0
for idx, name in enumerate(ws.files):
    w = ws[name]
    dim = reduce(op.mul, w.shape, 1)  # number of scalar parameters in this tensor
    tot_dim += dim
    print('%-8d %-24s %-24s %-16d %s' % (idx + 1, name, w.dtype, dim, w.shape))

print('total dims: %d' % tot_dim)
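To actually load the model, the flat npz entries can be regrouped into per-layer (W, b) pairs using the '<layer>_W' / '<layer>_b' naming convention visible in the listing above. A minimal sketch; the grouping helper is illustrative, not part of crystalnet:

```python
import numpy as np

def group_layers(ws, names):
    """Group flat '<layer>_W' / '<layer>_b' entries into {layer: {'W': ..., 'b': ...}}."""
    layers = {}
    for name in names:
        layer, kind = name.rsplit('_', 1)  # e.g. 'conv1_1_W' -> ('conv1_1', 'W')
        layers.setdefault(layer, {})[kind] = ws[name]
    return layers

if __name__ == '__main__':
    ws = np.load('vgg16_weights.npz')
    for layer, p in sorted(group_layers(ws, ws.files).items()):
        print('%-10s W%s b%s' % (layer, p['W'].shape, p['b'].shape))
```

Sorting the layer names also puts the tensors back into network order (conv1_1 … fc8), which the npz file itself does not guarantee.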
lgarithm commented 6 years ago

Note: the original implementation of VGG16 doesn't scale the input to [0, 1]; it takes raw [0, 255] pixel values with the per-channel mean subtracted.
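That preprocessing can be sketched as below; the mean values are the widely used per-channel ImageNet means from the original VGG16 release (given here in RGB order), and the helper name is illustrative:

```python
import numpy as np

# Per-channel ImageNet mean of the original VGG16 model (RGB order).
# Inputs stay in [0, 255] -- no scaling to [0, 1].
VGG_MEAN = np.array([123.68, 116.779, 103.939], dtype=np.float32)

def preprocess(img):
    """img: HxWx3 RGB image with values in [0, 255]; returns mean-centred float32."""
    return img.astype(np.float32) - VGG_MEAN
```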

lgarithm commented 6 years ago

Single-image inference time: tensorflow 0.9s, crystalnet 1.44s.
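A wall-clock comparison like this can be reproduced with a small timing loop; a sketch where `infer` stands in for either framework's single-image forward pass (a hypothetical callable, not a crystalnet API):

```python
import time

def time_inference(infer, img, warmup=1, runs=5):
    """Return the mean wall-clock seconds per call of infer(img)."""
    for _ in range(warmup):  # warm-up calls are excluded from the measurement
        infer(img)
    t0 = time.perf_counter()
    for _ in range(runs):
        infer(img)
    return (time.perf_counter() - t0) / runs
```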