Closed: Hiroaki-K4 closed this issue 3 years ago.

Hi. I want to convert a darknet model trained on 4 channels to an rt file. I modified darknet so that the model trains on 4 channels (RGB plus depth). When I tried exporting the model with this repository, a segmentation fault occurred: https://git.hipert.unimore.it/fgatti/darknet Where should I change the code? The terminal message is below. Thanks!
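(For context: the usual darknet way to get a 4-channel input is the channels field in the [net] section of the .cfg. Below is a minimal sketch consistent with the 512 x 512, 4-channel input echoed in the log; only the field values that appear in the log are from this thread, the rest is an assumption.)

[net]
# channels=4 makes layer 0 take a 4-channel input (RGB + depth)
channels=4
width=512
height=512
batch=64
subdivisions=64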
I solved this problem by commenting out network_predict(net, X) in darknet.c. However, an error then occurred while running ./test_yolo4 in tkDNN. The error message is as follows.
Not supported field: batch=64
Not supported field: subdivisions=64
Not supported field: momentum=0.949
Not supported field: decay=0.0005
Not supported field: angle=0
Not supported field: saturation = 1.5
Not supported field: exposure = 1.5
Not supported field: hue=.1
Not supported field: learning_rate=0.001
Not supported field: burn_in=1000
Not supported field: max_batches = 500500
Not supported field: policy=steps
Not supported field: steps=400000,450000
Not supported field: scales=.1,.1
Not supported field: mosaic=1
New NETWORK (tkDNN v0.5, CUDNN v7.603)
Reading weights: I=4 O=32 KERNEL=3x3x1
Reading weights: I=32 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=64 O=32 KERNEL=1x1x1
Reading weights: I=32 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=128 O=64 KERNEL=1x1x1
Reading weights: I=64 O=128 KERNEL=3x3x1
Reading weights: I=128 O=64 KERNEL=1x1x1
Reading weights: I=128 O=64 KERNEL=1x1x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=64 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=64 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=1x1x1
Reading weights: I=512 O=1024 KERNEL=3x3x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=3x3x1
Reading weights: I=512 O=512 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=3x3x1
Reading weights: I=512 O=512 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=3x3x1
Reading weights: I=512 O=512 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=3x3x1
Reading weights: I=512 O=512 KERNEL=1x1x1
Not supported field: stopbackward=800
Reading weights: I=1024 O=1024 KERNEL=1x1x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=512 O=1024 KERNEL=3x3x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=2048 O=512 KERNEL=1x1x1
Reading weights: I=512 O=1024 KERNEL=3x3x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=256 O=54 KERNEL=1x1x1
Not supported field: anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
Not supported field: jitter=.3
Not supported field: ignore_thresh = .7
Not supported field: truth_thresh = 1
Not supported field: iou_thresh=0.213
Not supported field: cls_normalizer=1.0
Not supported field: iou_normalizer=0.07
Not supported field: iou_loss=ciou
Not supported field: max_delta=5
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=54 KERNEL=1x1x1
Not supported field: anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
Not supported field: jitter=.3
Not supported field: ignore_thresh = .7
Not supported field: truth_thresh = 1
Not supported field: iou_thresh=0.213
Not supported field: cls_normalizer=1.0
Not supported field: iou_normalizer=0.07
Not supported field: iou_loss=ciou
Not supported field: max_delta=5
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=512 O=1024 KERNEL=3x3x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=512 O=1024 KERNEL=3x3x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=512 O=1024 KERNEL=3x3x1
Reading weights: I=1024 O=54 KERNEL=1x1x1
Not supported field: anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
Not supported field: jitter=.3
Not supported field: ignore_thresh = .7
Not supported field: truth_thresh = 1
Not supported field: random=1
Not supported field: iou_thresh=0.213
Not supported field: cls_normalizer=1.0
Not supported field: iou_normalizer=0.07
Not supported field: iou_loss=ciou
Not supported field: max_delta=5
====================== NETWORK MODEL ======================
N. Layer type input (H*W,CH) output (H*W,CH)
0 Conv2d 512 x 512, 4 -> 512 x 512, 32
1 ActivationMish 512 x 512, 32 -> 512 x 512, 32
2 Conv2d 512 x 512, 32 -> 256 x 256, 64
3 ActivationMish 256 x 256, 64 -> 256 x 256, 64
4 Conv2d 256 x 256, 64 -> 256 x 256, 64
5 ActivationMish 256 x 256, 64 -> 256 x 256, 64
6 Route 256 x 256, 64 -> 256 x 256, 64
7 Conv2d 256 x 256, 64 -> 256 x 256, 64
8 ActivationMish 256 x 256, 64 -> 256 x 256, 64
9 Conv2d 256 x 256, 64 -> 256 x 256, 32
10 ActivationMish 256 x 256, 32 -> 256 x 256, 32
11 Conv2d 256 x 256, 32 -> 256 x 256, 64
12 ActivationMish 256 x 256, 64 -> 256 x 256, 64
13 Shortcut 256 x 256, 64 -> 256 x 256, 64
14 Conv2d 256 x 256, 64 -> 256 x 256, 64
15 ActivationMish 256 x 256, 64 -> 256 x 256, 64
16 Route 256 x 256, 128 -> 256 x 256, 128
17 Conv2d 256 x 256, 128 -> 256 x 256, 64
18 ActivationMish 256 x 256, 64 -> 256 x 256, 64
19 Conv2d 256 x 256, 64 -> 128 x 128, 128
20 ActivationMish 128 x 128, 128 -> 128 x 128, 128
21 Conv2d 128 x 128, 128 -> 128 x 128, 64
22 ActivationMish 128 x 128, 64 -> 128 x 128, 64
23 Route 128 x 128, 128 -> 128 x 128, 128
24 Conv2d 128 x 128, 128 -> 128 x 128, 64
25 ActivationMish 128 x 128, 64 -> 128 x 128, 64
26 Conv2d 128 x 128, 64 -> 128 x 128, 64
27 ActivationMish 128 x 128, 64 -> 128 x 128, 64
28 Conv2d 128 x 128, 64 -> 128 x 128, 64
29 ActivationMish 128 x 128, 64 -> 128 x 128, 64
30 Shortcut 128 x 128, 64 -> 128 x 128, 64
31 Conv2d 128 x 128, 64 -> 128 x 128, 64
32 ActivationMish 128 x 128, 64 -> 128 x 128, 64
33 Conv2d 128 x 128, 64 -> 128 x 128, 64
34 ActivationMish 128 x 128, 64 -> 128 x 128, 64
35 Shortcut 128 x 128, 64 -> 128 x 128, 64
36 Conv2d 128 x 128, 64 -> 128 x 128, 64
37 ActivationMish 128 x 128, 64 -> 128 x 128, 64
38 Route 128 x 128, 128 -> 128 x 128, 128
39 Conv2d 128 x 128, 128 -> 128 x 128, 128
40 ActivationMish 128 x 128, 128 -> 128 x 128, 128
41 Conv2d 128 x 128, 128 -> 64 x 64, 256
42 ActivationMish 64 x 64, 256 -> 64 x 64, 256
43 Conv2d 64 x 64, 256 -> 64 x 64, 128
44 ActivationMish 64 x 64, 128 -> 64 x 64, 128
45 Route 64 x 64, 256 -> 64 x 64, 256
46 Conv2d 64 x 64, 256 -> 64 x 64, 128
47 ActivationMish 64 x 64, 128 -> 64 x 64, 128
48 Conv2d 64 x 64, 128 -> 64 x 64, 128
49 ActivationMish 64 x 64, 128 -> 64 x 64, 128
50 Conv2d 64 x 64, 128 -> 64 x 64, 128
51 ActivationMish 64 x 64, 128 -> 64 x 64, 128
52 Shortcut 64 x 64, 128 -> 64 x 64, 128
53 Conv2d 64 x 64, 128 -> 64 x 64, 128
54 ActivationMish 64 x 64, 128 -> 64 x 64, 128
55 Conv2d 64 x 64, 128 -> 64 x 64, 128
56 ActivationMish 64 x 64, 128 -> 64 x 64, 128
57 Shortcut 64 x 64, 128 -> 64 x 64, 128
58 Conv2d 64 x 64, 128 -> 64 x 64, 128
59 ActivationMish 64 x 64, 128 -> 64 x 64, 128
60 Conv2d 64 x 64, 128 -> 64 x 64, 128
61 ActivationMish 64 x 64, 128 -> 64 x 64, 128
62 Shortcut 64 x 64, 128 -> 64 x 64, 128
63 Conv2d 64 x 64, 128 -> 64 x 64, 128
64 ActivationMish 64 x 64, 128 -> 64 x 64, 128
65 Conv2d 64 x 64, 128 -> 64 x 64, 128
66 ActivationMish 64 x 64, 128 -> 64 x 64, 128
67 Shortcut 64 x 64, 128 -> 64 x 64, 128
68 Conv2d 64 x 64, 128 -> 64 x 64, 128
69 ActivationMish 64 x 64, 128 -> 64 x 64, 128
70 Conv2d 64 x 64, 128 -> 64 x 64, 128
71 ActivationMish 64 x 64, 128 -> 64 x 64, 128
72 Shortcut 64 x 64, 128 -> 64 x 64, 128
73 Conv2d 64 x 64, 128 -> 64 x 64, 128
74 ActivationMish 64 x 64, 128 -> 64 x 64, 128
75 Conv2d 64 x 64, 128 -> 64 x 64, 128
76 ActivationMish 64 x 64, 128 -> 64 x 64, 128
77 Shortcut 64 x 64, 128 -> 64 x 64, 128
78 Conv2d 64 x 64, 128 -> 64 x 64, 128
79 ActivationMish 64 x 64, 128 -> 64 x 64, 128
80 Conv2d 64 x 64, 128 -> 64 x 64, 128
81 ActivationMish 64 x 64, 128 -> 64 x 64, 128
82 Shortcut 64 x 64, 128 -> 64 x 64, 128
83 Conv2d 64 x 64, 128 -> 64 x 64, 128
84 ActivationMish 64 x 64, 128 -> 64 x 64, 128
85 Conv2d 64 x 64, 128 -> 64 x 64, 128
86 ActivationMish 64 x 64, 128 -> 64 x 64, 128
87 Shortcut 64 x 64, 128 -> 64 x 64, 128
88 Conv2d 64 x 64, 128 -> 64 x 64, 128
89 ActivationMish 64 x 64, 128 -> 64 x 64, 128
90 Route 64 x 64, 256 -> 64 x 64, 256
91 Conv2d 64 x 64, 256 -> 64 x 64, 256
92 ActivationMish 64 x 64, 256 -> 64 x 64, 256
93 Conv2d 64 x 64, 256 -> 32 x 32, 512
94 ActivationMish 32 x 32, 512 -> 32 x 32, 512
95 Conv2d 32 x 32, 512 -> 32 x 32, 256
96 ActivationMish 32 x 32, 256 -> 32 x 32, 256
97 Route 32 x 32, 512 -> 32 x 32, 512
98 Conv2d 32 x 32, 512 -> 32 x 32, 256
99 ActivationMish 32 x 32, 256 -> 32 x 32, 256
100 Conv2d 32 x 32, 256 -> 32 x 32, 256
101 ActivationMish 32 x 32, 256 -> 32 x 32, 256
102 Conv2d 32 x 32, 256 -> 32 x 32, 256
103 ActivationMish 32 x 32, 256 -> 32 x 32, 256
104 Shortcut 32 x 32, 256 -> 32 x 32, 256
105 Conv2d 32 x 32, 256 -> 32 x 32, 256
106 ActivationMish 32 x 32, 256 -> 32 x 32, 256
107 Conv2d 32 x 32, 256 -> 32 x 32, 256
108 ActivationMish 32 x 32, 256 -> 32 x 32, 256
109 Shortcut 32 x 32, 256 -> 32 x 32, 256
110 Conv2d 32 x 32, 256 -> 32 x 32, 256
111 ActivationMish 32 x 32, 256 -> 32 x 32, 256
112 Conv2d 32 x 32, 256 -> 32 x 32, 256
113 ActivationMish 32 x 32, 256 -> 32 x 32, 256
114 Shortcut 32 x 32, 256 -> 32 x 32, 256
115 Conv2d 32 x 32, 256 -> 32 x 32, 256
116 ActivationMish 32 x 32, 256 -> 32 x 32, 256
117 Conv2d 32 x 32, 256 -> 32 x 32, 256
118 ActivationMish 32 x 32, 256 -> 32 x 32, 256
119 Shortcut 32 x 32, 256 -> 32 x 32, 256
120 Conv2d 32 x 32, 256 -> 32 x 32, 256
121 ActivationMish 32 x 32, 256 -> 32 x 32, 256
122 Conv2d 32 x 32, 256 -> 32 x 32, 256
123 ActivationMish 32 x 32, 256 -> 32 x 32, 256
124 Shortcut 32 x 32, 256 -> 32 x 32, 256
125 Conv2d 32 x 32, 256 -> 32 x 32, 256
126 ActivationMish 32 x 32, 256 -> 32 x 32, 256
127 Conv2d 32 x 32, 256 -> 32 x 32, 256
128 ActivationMish 32 x 32, 256 -> 32 x 32, 256
129 Shortcut 32 x 32, 256 -> 32 x 32, 256
130 Conv2d 32 x 32, 256 -> 32 x 32, 256
131 ActivationMish 32 x 32, 256 -> 32 x 32, 256
132 Conv2d 32 x 32, 256 -> 32 x 32, 256
133 ActivationMish 32 x 32, 256 -> 32 x 32, 256
134 Shortcut 32 x 32, 256 -> 32 x 32, 256
135 Conv2d 32 x 32, 256 -> 32 x 32, 256
136 ActivationMish 32 x 32, 256 -> 32 x 32, 256
137 Conv2d 32 x 32, 256 -> 32 x 32, 256
138 ActivationMish 32 x 32, 256 -> 32 x 32, 256
139 Shortcut 32 x 32, 256 -> 32 x 32, 256
140 Conv2d 32 x 32, 256 -> 32 x 32, 256
141 ActivationMish 32 x 32, 256 -> 32 x 32, 256
142 Route 32 x 32, 512 -> 32 x 32, 512
143 Conv2d 32 x 32, 512 -> 32 x 32, 512
144 ActivationMish 32 x 32, 512 -> 32 x 32, 512
145 Conv2d 32 x 32, 512 -> 16 x 16, 1024
146 ActivationMish 16 x 16, 1024 -> 16 x 16, 1024
147 Conv2d 16 x 16, 1024 -> 16 x 16, 512
148 ActivationMish 16 x 16, 512 -> 16 x 16, 512
149 Route 16 x 16, 1024 -> 16 x 16, 1024
150 Conv2d 16 x 16, 1024 -> 16 x 16, 512
151 ActivationMish 16 x 16, 512 -> 16 x 16, 512
152 Conv2d 16 x 16, 512 -> 16 x 16, 512
153 ActivationMish 16 x 16, 512 -> 16 x 16, 512
154 Conv2d 16 x 16, 512 -> 16 x 16, 512
155 ActivationMish 16 x 16, 512 -> 16 x 16, 512
156 Shortcut 16 x 16, 512 -> 16 x 16, 512
157 Conv2d 16 x 16, 512 -> 16 x 16, 512
158 ActivationMish 16 x 16, 512 -> 16 x 16, 512
159 Conv2d 16 x 16, 512 -> 16 x 16, 512
160 ActivationMish 16 x 16, 512 -> 16 x 16, 512
161 Shortcut 16 x 16, 512 -> 16 x 16, 512
162 Conv2d 16 x 16, 512 -> 16 x 16, 512
163 ActivationMish 16 x 16, 512 -> 16 x 16, 512
164 Conv2d 16 x 16, 512 -> 16 x 16, 512
165 ActivationMish 16 x 16, 512 -> 16 x 16, 512
166 Shortcut 16 x 16, 512 -> 16 x 16, 512
167 Conv2d 16 x 16, 512 -> 16 x 16, 512
168 ActivationMish 16 x 16, 512 -> 16 x 16, 512
169 Conv2d 16 x 16, 512 -> 16 x 16, 512
170 ActivationMish 16 x 16, 512 -> 16 x 16, 512
171 Shortcut 16 x 16, 512 -> 16 x 16, 512
172 Conv2d 16 x 16, 512 -> 16 x 16, 512
173 ActivationMish 16 x 16, 512 -> 16 x 16, 512
174 Route 16 x 16, 1024 -> 16 x 16, 1024
175 Conv2d 16 x 16, 1024 -> 16 x 16, 1024
176 ActivationMish 16 x 16, 1024 -> 16 x 16, 1024
177 Conv2d 16 x 16, 1024 -> 16 x 16, 512
178 ActivationLeaky 16 x 16, 512 -> 16 x 16, 512
179 Conv2d 16 x 16, 512 -> 16 x 16, 1024
180 ActivationLeaky 16 x 16, 1024 -> 16 x 16, 1024
181 Conv2d 16 x 16, 1024 -> 16 x 16, 512
182 ActivationLeaky 16 x 16, 512 -> 16 x 16, 512
183 Pooling 16 x 16, 512 -> 16 x 16, 512
184 Route 16 x 16, 512 -> 16 x 16, 512
185 Pooling 16 x 16, 512 -> 16 x 16, 512
186 Route 16 x 16, 512 -> 16 x 16, 512
187 Pooling 16 x 16, 512 -> 16 x 16, 512
188 Route 16 x 16, 2048 -> 16 x 16, 2048
189 Conv2d 16 x 16, 2048 -> 16 x 16, 512
190 ActivationLeaky 16 x 16, 512 -> 16 x 16, 512
191 Conv2d 16 x 16, 512 -> 16 x 16, 1024
192 ActivationLeaky 16 x 16, 1024 -> 16 x 16, 1024
193 Conv2d 16 x 16, 1024 -> 16 x 16, 512
194 ActivationLeaky 16 x 16, 512 -> 16 x 16, 512
195 Conv2d 16 x 16, 512 -> 16 x 16, 256
196 ActivationLeaky 16 x 16, 256 -> 16 x 16, 256
197 Upsample 16 x 16, 256 -> 32 x 32, 256
198 Route 32 x 32, 512 -> 32 x 32, 512
199 Conv2d 32 x 32, 512 -> 32 x 32, 256
200 ActivationLeaky 32 x 32, 256 -> 32 x 32, 256
201 Route 32 x 32, 512 -> 32 x 32, 512
202 Conv2d 32 x 32, 512 -> 32 x 32, 256
203 ActivationLeaky 32 x 32, 256 -> 32 x 32, 256
204 Conv2d 32 x 32, 256 -> 32 x 32, 512
205 ActivationLeaky 32 x 32, 512 -> 32 x 32, 512
206 Conv2d 32 x 32, 512 -> 32 x 32, 256
207 ActivationLeaky 32 x 32, 256 -> 32 x 32, 256
208 Conv2d 32 x 32, 256 -> 32 x 32, 512
209 ActivationLeaky 32 x 32, 512 -> 32 x 32, 512
210 Conv2d 32 x 32, 512 -> 32 x 32, 256
211 ActivationLeaky 32 x 32, 256 -> 32 x 32, 256
212 Conv2d 32 x 32, 256 -> 32 x 32, 128
213 ActivationLeaky 32 x 32, 128 -> 32 x 32, 128
214 Upsample 32 x 32, 128 -> 64 x 64, 128
215 Route 64 x 64, 256 -> 64 x 64, 256
216 Conv2d 64 x 64, 256 -> 64 x 64, 128
217 ActivationLeaky 64 x 64, 128 -> 64 x 64, 128
218 Route 64 x 64, 256 -> 64 x 64, 256
219 Conv2d 64 x 64, 256 -> 64 x 64, 128
220 ActivationLeaky 64 x 64, 128 -> 64 x 64, 128
221 Conv2d 64 x 64, 128 -> 64 x 64, 256
222 ActivationLeaky 64 x 64, 256 -> 64 x 64, 256
223 Conv2d 64 x 64, 256 -> 64 x 64, 128
224 ActivationLeaky 64 x 64, 128 -> 64 x 64, 128
225 Conv2d 64 x 64, 128 -> 64 x 64, 256
226 ActivationLeaky 64 x 64, 256 -> 64 x 64, 256
227 Conv2d 64 x 64, 256 -> 64 x 64, 128
228 ActivationLeaky 64 x 64, 128 -> 64 x 64, 128
229 Conv2d 64 x 64, 128 -> 64 x 64, 256
230 ActivationLeaky 64 x 64, 256 -> 64 x 64, 256
231 Conv2d 64 x 64, 256 -> 64 x 64, 54
232 Yolo 64 x 64, 54 -> 64 x 64, 54
233 Route 64 x 64, 128 -> 64 x 64, 128
234 Conv2d 64 x 64, 128 -> 32 x 32, 256
235 ActivationLeaky 32 x 32, 256 -> 32 x 32, 256
236 Route 32 x 32, 512 -> 32 x 32, 512
237 Conv2d 32 x 32, 512 -> 32 x 32, 256
238 ActivationLeaky 32 x 32, 256 -> 32 x 32, 256
239 Conv2d 32 x 32, 256 -> 32 x 32, 512
240 ActivationLeaky 32 x 32, 512 -> 32 x 32, 512
241 Conv2d 32 x 32, 512 -> 32 x 32, 256
242 ActivationLeaky 32 x 32, 256 -> 32 x 32, 256
243 Conv2d 32 x 32, 256 -> 32 x 32, 512
244 ActivationLeaky 32 x 32, 512 -> 32 x 32, 512
245 Conv2d 32 x 32, 512 -> 32 x 32, 256
246 ActivationLeaky 32 x 32, 256 -> 32 x 32, 256
247 Conv2d 32 x 32, 256 -> 32 x 32, 512
248 ActivationLeaky 32 x 32, 512 -> 32 x 32, 512
249 Conv2d 32 x 32, 512 -> 32 x 32, 54
250 Yolo 32 x 32, 54 -> 32 x 32, 54
251 Route 32 x 32, 256 -> 32 x 32, 256
252 Conv2d 32 x 32, 256 -> 16 x 16, 512
253 ActivationLeaky 16 x 16, 512 -> 16 x 16, 512
254 Route 16 x 16, 1024 -> 16 x 16, 1024
255 Conv2d 16 x 16, 1024 -> 16 x 16, 512
256 ActivationLeaky 16 x 16, 512 -> 16 x 16, 512
257 Conv2d 16 x 16, 512 -> 16 x 16, 1024
258 ActivationLeaky 16 x 16, 1024 -> 16 x 16, 1024
259 Conv2d 16 x 16, 1024 -> 16 x 16, 512
260 ActivationLeaky 16 x 16, 512 -> 16 x 16, 512
261 Conv2d 16 x 16, 512 -> 16 x 16, 1024
262 ActivationLeaky 16 x 16, 1024 -> 16 x 16, 1024
263 Conv2d 16 x 16, 1024 -> 16 x 16, 512
264 ActivationLeaky 16 x 16, 512 -> 16 x 16, 512
265 Conv2d 16 x 16, 512 -> 16 x 16, 1024
266 ActivationLeaky 16 x 16, 1024 -> 16 x 16, 1024
267 Conv2d 16 x 16, 1024 -> 16 x 16, 54
268 Yolo 16 x 16, 54 -> 16 x 16, 54
===========================================================
GPU free memory: 4444.99 mb.
New NetworkRT (TensorRT v6.01)
Float16 support: 1
Int8 support: 1
DLAs: 2
Selected maxBatchSize: 1
GPU free memory: 4136.89 mb.
Building tensorRT cuda engine...
serialize net
create execution context
Input/outputs numbers: 4
input index = 0 -> output index = 3
Data dim: 1 4 512 512 1
Data dim: 1 54 16 16 1
RtBuffer 0 dim: Data dim: 1 4 512 512 1
RtBuffer 1 dim: Data dim: 1 54 64 64 1
RtBuffer 2 dim: Data dim: 1 54 32 32 1
RtBuffer 3 dim: Data dim: 1 54 16 16 1
Error reading file yolo4/layers/input.bin with n of float: 1048576 seek: 0 size: 4194304
/home/nvidia/tkDNN/src/utils.cpp:58
Aborting...
Thanks!
I solved it by commenting out the testInference call.
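For anyone hitting the same segfault, the first change amounts to disabling the forward pass in the darknet export code. A minimal sketch of that change; only network_predict(net, X) is named in this thread, and the surrounding lines and names are illustrative assumptions about the export routine:

/* darknet.c, inside the weight/debug export routine (sketch) */
float *X = im.data;             /* illustrative: input image loaded for the debug dumps   */
/* network_predict(net, X); */  /* commented out: segfaults with a 4-channel network, and */
                                /* is only needed to fill the yolo4/layers/*.bin dumps    */
save_weights(net, outfile);     /* illustrative: the actual weight export still runs      */

With the forward pass skipped, the layer debug dumps are missing or sized for a different input, which appears to be why testInference later aborts on yolo4/layers/input.bin (the "Error reading file" message above) and has to be skipped as well.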
Hi. I could convert the weights file to an rt file, but my 4-channel model can't detect anything. Does the conversion method support 4 channels? Thanks!
Hi @Hiroaki-K4, do you still have the problem? It should support that.
Hi @mive93, yes, I still have the problem. Do I need to change the source code? Please help me!
Hi @Hiroaki-K4 How did you solve the issue?
@poornimajd No, it hasn't been resolved yet.
@Hiroaki-K4 I think it is a dependency issue. I was able to solve it when I tried it on an RTX 2080 Ti with CUDA 10.
@poornimajd Oh, really? Did you change anything in tkDNN or in the darknet code that exports the weights?
@poornimajd OK. Were you able to detect the expected objects with the created rt file?
Yes, it is possible.
@poornimajd Oh, great! Did you comment out the lines I mentioned earlier in this issue?
No.
@poornimajd OK, thank you. I will try again.
Have you solved the problem when the input has 4 channels?
@najingligong1111 Yes, I have solved it. You need to comment out network_predict(net, X) in darknet.c when you export the darknet weights file. You also have to comment out the line below in yolo4.cpp when you run ./test_yolo4.
int ret = testInference(input_bins, output_bins, net, netRT);
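In context, the end of yolo4.cpp would then look roughly like this; only the testInference line above is from this thread, and the surrounding calls are a sketch of the tkDNN test harness, not verified line for line:

// yolo4.cpp (sketch): build the network from the exported cfg/weights,
// create the TensorRT engine, then skip the dump-replay check
tk::dnn::Network *net = tk::dnn::darknetParser(cfg_path, wgs_path, names_path); // assumed names
tk::dnn::NetworkRT *netRT = new tk::dnn::NetworkRT(net, "yolo4_fp32.rt");       // assumed engine file
// testInference replays yolo4/layers/*.bin through the engine and compares the
// outputs against darknet's dumps; comment it out to skip that check:
// int ret = testInference(input_bins, output_bins, net, netRT);
int ret = 0;

Note that this only skips the darknet-vs-TensorRT consistency check; it does not by itself explain the 4-channel detection failure discussed earlier in this thread.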