chullhwan-song / Reading-Paper


Selective Kernel Networks #232

Open chullhwan-song opened 4 years ago


https://arxiv.org/abs/1903.06586


## Abstract

## Introduction

## Related Work

## Methods

### Selective Kernel Convolution

```python
import tensorflow as tf
import tensorflow.contrib.slim as slim


def SKConv(input, M, r, L=32, stride=1, is_training=True):
    """Selective Kernel convolution (TF1 / slim).

    M conv branches with growing kernel size (3x3, 5x5, ...) and dilation,
    fused by channel-wise soft attention computed across the branches.
    """
    input_feature = input.get_shape().as_list()[3]
    d = max(int(input_feature / r), L)  # bottleneck width for the fuse FC

    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu):
        # Split: each branch convolves the same input (not the previous branch).
        branches = []
        for i in range(M):
            net = slim.conv2d(input, input_feature, [3 + i * 2, 3 + i * 2],
                              rate=1 + i, stride=stride, activation_fn=None)
            net = slim.batch_norm(net, decay=0.9, center=True, scale=True,
                                  epsilon=1e-5,
                                  updates_collections=tf.GraphKeys.UPDATE_OPS,
                                  is_training=is_training)
            branches.append(tf.nn.relu(net))

        # Fuse: element-wise sum, global average pooling, bottleneck FC.
        fea_U = tf.add_n(branches)
        gap = tf.reduce_mean(fea_U, axis=[1, 2])  # (N, C)
        fc = slim.fully_connected(gap, d)         # (N, d), ReLU from arg_scope

        # Select: one FC per branch, softmax ACROSS branches per channel.
        att_logits = [slim.fully_connected(fc, input_feature,
                                           activation_fn=None)
                      for _ in range(M)]
        att = tf.nn.softmax(tf.stack(att_logits, axis=0), axis=0)  # (M, N, C)
        att = tf.reshape(att, [M, -1, 1, 1, input_feature])

        # Weighted sum of the branch feature maps.
        fea_v = tf.reduce_sum(att * tf.stack(branches, axis=0), axis=0)
    return fea_v
```
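As a framework-independent check of the SK logic, here is a minimal NumPy sketch of the fuse/select steps over M pre-computed branch outputs. The function name `sk_select` and the random matrices (stand-ins for the learned FC weights) are my own, not from the paper:

```python
import numpy as np

def sk_select(branch_feats, r=16, L=32, seed=0):
    """NumPy sketch of SK fuse + select over M branch outputs.

    branch_feats: (M, H, W, C) array, one feature map per conv branch.
    Random weight matrices stand in for the learned FC layers.
    """
    rng = np.random.default_rng(seed)
    M, H, W, C = branch_feats.shape
    d = max(C // r, L)                           # bottleneck width

    U = branch_feats.sum(axis=0)                 # fuse: element-wise sum
    s = U.mean(axis=(0, 1))                      # global average pool -> (C,)
    z = np.maximum(s @ (rng.standard_normal((C, d)) * 0.1), 0.0)  # FC + ReLU

    # One FC per branch back to C dims, softmax across branches per channel.
    Wm = rng.standard_normal((M, d, C)) * 0.1
    logits = np.einsum('d,mdc->mc', z, Wm)       # (M, C)
    a = np.exp(logits - logits.max(axis=0))
    a /= a.sum(axis=0)                           # sums to 1 over the M branches

    V = np.einsum('mc,mhwc->hwc', a, branch_feats)  # weighted branch sum
    return V, a
```

The key detail is that the softmax is taken over the branch axis, so for every channel the attention weights of the M kernels compete and sum to 1.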


### Network Architecture
![image](https://user-images.githubusercontent.com/40360823/67671723-d89dd880-f9b9-11e9-9e41-f3a0d8507dc1.png)

## Experiments
### ImageNet Classification
![image](https://user-images.githubusercontent.com/40360823/67672732-e7858a80-f9bb-11e9-8048-221216641ee3.png)
![image](https://user-images.githubusercontent.com/40360823/67672772-fec47800-f9bb-11e9-9c2a-bf88cbd08ad6.png)

## Conclusion / Review
* Personally, I think **Selective Kernel** (SK) is a rather fancy(?) name; from the name alone it is easy to mistake it for a scheme that dynamically picks a kernel size on the fly (at first I even wondered whether that would be possible).
* In the end, my take is that it is conceptually closer to a multi-kernel SENet.
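To make the "multi-kernel SENet" reading concrete: SE gates the channels of a single feature map with a sigmoid, while SK reuses the same squeeze step but normalizes with a softmax across the M branches. A rough NumPy sketch of the contrast (function names and random weights are mine, not from either paper):

```python
import numpy as np

def se_gate(x, w):
    """SE: per-channel sigmoid gate on ONE feature map x of shape (H, W, C)."""
    s = x.mean(axis=(0, 1))                      # squeeze: (C,)
    g = 1.0 / (1.0 + np.exp(-(s @ w)))           # excite: sigmoid gate, (C,)
    return x * g                                 # rescale channels

def sk_fuse(branches, w):
    """SK: same squeeze, but softmax over M branches, shape (M, H, W, C)."""
    M, H, W, C = branches.shape
    s = branches.sum(axis=0).mean(axis=(0, 1))   # squeeze the fused map
    logits = (s @ w).reshape(M, C)
    a = np.exp(logits - logits.max(axis=0))
    a /= a.sum(axis=0)                           # branches compete per channel
    return np.einsum('mc,mhwc->hwc', a, branches)
```

With M=1 the softmax collapses to all-ones, so the branch-competition across kernel sizes is exactly what SK adds on top of the SE-style squeeze.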