junqiangchen / HC18-Automated-measurement-of-fetal-head-circumference

HC18—Automated measurement of fetal head circumference
https://hc18.grand-challenge.org/

Keras version of DenseVnet #6

Closed: 975150313 closed this 1 year ago

975150313 commented 4 years ago

Referring to your network, I wrote a Keras version. Some parts are heavily simplified and I'm not sure it's correct, but I ran it and the model performs reasonably well. I'm not sure what your `output_map` does; I only used the last layer.

```python
from __future__ import division

from keras.layers import (Input, BatchNormalization, Conv2D, Dropout,
                          Conv2DTranspose, Add, Concatenate)
from keras.models import Model


def Vnet(pretrained_weights=None, shape=(512, 512, 1), num_class=1,
         is_training=True, stage_num=5, thresh=0.5):
    """Build the DenseVnet network.

    :param pretrained_weights: whether to load pretrained weights
    :param shape: input image size (w, h, c), where c is the number of channels
    :param num_class: total number of classes in the dataset
    :param is_training: whether the model is in training mode
    :param stage_num: depth of the Vnet, i.e. the total number of stages (5 in the paper)
    :return: the Vnet model
    """
    keep_prob = 0.5 if is_training else 1.0  # dropout keep probability
    rate = 1 - keep_prob                     # dropout rate
    inputs = Input(shape)

    layer0 = Conv2D(16, 3, padding='same', activation='relu')(inputs)
    layer1 = Dropout(rate)(BatchNormalization()(Conv2D(16, 3, padding='same', activation='relu')(layer0)))
    layer1 = Dropout(rate)(BatchNormalization()(Conv2D(16, 3, padding='same', activation='relu')(layer1)))
    layer1 = Add()([layer0, layer1])
    down1 = Dropout(rate)(BatchNormalization()(Conv2D(32, 3, strides=(2, 2), padding='same', activation='relu')(layer1)))

    layer2 = Dropout(rate)(BatchNormalization()(Conv2D(32, 3, padding='same', activation='relu')(down1)))

    layer2 = Dropout(rate)(BatchNormalization()(Conv2D(32, 3, padding='same', activation='relu')(layer2)))
    layer2 = Add()([down1, layer2])
    down2 = Dropout(rate)(BatchNormalization()(Conv2D(64, 3, strides=(2, 2), padding='same', activation='relu')(layer2)))

    layer3 = Dropout(rate)(BatchNormalization()(Conv2D(64, 3, padding='same', activation='relu')(down2)))
    layer3 = Dropout(rate)(BatchNormalization()(Conv2D(64, 3, padding='same', activation='relu')(layer3)))
    layer3 = Add()([down2, layer3])
    down3 = Dropout(rate)(BatchNormalization()(Conv2D(128, 3, strides=(2, 2), padding='same', activation='relu')(layer3)))

    layer4 = Dropout(rate)(BatchNormalization()(Conv2D(128, 3, padding='same', activation='relu')(down3)))
    layer4 = Dropout(rate)(BatchNormalization()(Conv2D(128, 3, padding='same', activation='relu')(layer4)))
    layer4 = Add()([down3, layer4])
    down4 = Dropout(rate)(BatchNormalization()(Conv2D(256, 3, strides=(2, 2), padding='same', activation='relu')(layer4)))

    layer5 = Dropout(rate)(BatchNormalization()(Conv2D(256, 3, padding='same', activation='relu')(down4)))
    layer5 = Dropout(rate)(BatchNormalization()(Conv2D(256, 3, padding='same', activation='relu')(layer5)))
    layer5 = Add()([down4, layer5])
    down5 = Dropout(rate)(BatchNormalization()(Conv2D(512, 3, strides=(2, 2), padding='same', activation='relu')(layer5)))

    layer6 = Dropout(rate)(BatchNormalization()(Conv2D(512, 3, padding='same', activation='relu')(down5)))
    layer6 = Dropout(rate)(BatchNormalization()(Conv2D(512, 3, padding='same', activation='relu')(layer6)))
    layer6 = Add()([down5, layer6])

    deconv1 = Conv2DTranspose(256, 3, strides=(2, 2), padding='same', activation='relu')(layer6)

    layer7 = Concatenate(axis=3)([layer5, deconv1])
    layer7 = Dropout(rate)(BatchNormalization()(Conv2D(256, 3, padding='same', activation='relu')(layer7)))
    layer7 = Dropout(rate)(BatchNormalization()(Conv2D(256, 3, padding='same', activation='relu')(layer7)))
    layer7 = Dropout(rate)(BatchNormalization()(Conv2D(256, 3, padding='same', activation='relu')(layer7)))
    layer7 = Add()([deconv1, layer7])

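    # Deep-supervision heads: outputs1-outputs4 upsample intermediate decoder
    # features back to input resolution. They are built but not connected to
    # the model returned below, which uses only the final output.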
    outputs1 = Conv2D(num_class, 1, activation="sigmoid", padding='same')(
        Conv2DTranspose(512, 1, strides=(16, 16), padding='same')(layer7))

    deconv2 = Conv2DTranspose(128, 3, strides=(2, 2), padding='same', activation='relu')(layer7)
    layer8 = Concatenate(axis=3)([layer4, deconv2])
    layer8 = Dropout(rate)(BatchNormalization()(Conv2D(128, 3, padding='same', activation='relu')(layer8)))
    layer8 = Dropout(rate)(BatchNormalization()(Conv2D(128, 3, padding='same', activation='relu')(layer8)))
    layer8 = Dropout(rate)(BatchNormalization()(Conv2D(128, 3, padding='same', activation='relu')(layer8)))
    layer8 = Add()([deconv2, layer8])

    outputs2 = Conv2D(num_class, 1, activation="sigmoid", padding='same')(
        Conv2DTranspose(512, 1, strides=(8, 8), padding='same')(layer8))

    deconv3 = Conv2DTranspose(64, 3, strides=(2, 2), padding='same', activation='relu')(layer8)
    layer9 = Concatenate(axis=3)([layer3, deconv3])
    layer9 = Dropout(rate)(BatchNormalization()(Conv2D(64, 3, padding='same', activation='relu')(layer9)))
    layer9 = Dropout(rate)(BatchNormalization()(Conv2D(64, 3, padding='same', activation='relu')(layer9)))
    layer9 = Dropout(rate)(BatchNormalization()(Conv2D(64, 3, padding='same', activation='relu')(layer9)))
    layer9 = Add()([deconv3, layer9])

    outputs3 = Conv2D(num_class, 1, activation="sigmoid", padding='same')(
        Conv2DTranspose(64, 1, strides=(4, 4), padding='same')(layer9))

    deconv4 = Conv2DTranspose(32, 3, strides=(2, 2), padding='same', activation='relu')(layer9)

    layer10 = Concatenate(axis=3)([layer2, deconv4])
    layer10 = Dropout(rate)(BatchNormalization()(Conv2D(32, 3, padding='same', activation='relu')(layer10)))
    layer10 = Dropout(rate)(BatchNormalization()(Conv2D(32, 3, padding='same', activation='relu')(layer10)))
    layer10 = Add()([deconv4, layer10])

    outputs4 = Conv2D(num_class, 1, activation="sigmoid", padding='same')(
        Conv2DTranspose(32, 1, strides=(2, 2), padding='same')(layer10))

    deconv5 = Conv2DTranspose(16, 3, strides=(2, 2), padding='same', activation='relu')(layer10)
    layer11 = Concatenate(axis=3)([layer1, deconv5])
    layer11 = Dropout(rate)(BatchNormalization()(Conv2D(16, 3, padding='same', activation='relu')(layer11)))
    layer11 = Dropout(rate)(BatchNormalization()(Conv2D(16, 3, padding='same', activation='relu')(layer11)))
    layer11 = Add()([deconv5, layer11])

    outputs = Conv2D(num_class, 1, activation="sigmoid", padding='same')(layer11)
    model = Model(inputs=inputs, outputs=outputs)
    print(model.output_shape)
    return model

if __name__ == '__main__':
    model = Vnet(shape=(512, 512, 1), num_class=1, stage_num=5, thresh=0.5)
    model.summary()

    model.save("D:/a.h5")

    # model.save_weights("D:/b.h5")

```
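On the `output_map` question: one plausible reading is deep supervision, where the auxiliary heads are trained alongside the final one. A minimal sketch of that wiring, assuming the `outputs1` through `outputs4` heads from the code above (this is my guess, not necessarily what the original network does), would replace the final `Model(...)` call with something like:

```python
# Guessed deep-supervision wiring: expose all heads as model outputs and
# down-weight the auxiliary losses. All four extra heads are already upsampled
# to input resolution, so the same mask can supervise each one. The loss and
# weights below are illustrative, not the original author's settings.
model = Model(inputs=inputs,
              outputs=[outputs, outputs1, outputs2, outputs3, outputs4])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              loss_weights=[1.0, 0.25, 0.25, 0.25, 0.25])
```

Training then needs the mask repeated once per head, e.g. `model.fit(x, [y] * 5, ...)`.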

975150313 commented 4 years ago

I found a Keras version of Unet (https://github.com/qubvel/segmentation_models) that also integrates Unet, PSPNet, Linknet, and FPN, with freely switchable backbones such as VGG, ResNet, InceptionV3, and DenseNet121. I also referred to Zongwei Zhou's UNet++ (https://github.com/MrGiovanni/UNetPlusPlus) and, by modifying the network, integrated UNet++ into that library; I haven't shared it on GitHub yet. I suggest you also integrate VNET into Keras, so that more people can use your network.
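For reference, switching backbones in segmentation_models looks roughly like this (a minimal sketch; the backbone name, loss, and metric are illustrative choices, and ImageNet encoder weights expect 3-channel input, so grayscale HC18 images would need channel stacking or `encoder_weights=None`):

```python
import segmentation_models as sm

# Any registered backbone name works here, e.g. 'vgg16', 'resnet34',
# 'inceptionv3', 'densenet121'.
model = sm.Unet(backbone_name='resnet34',
                input_shape=(512, 512, 3),
                classes=1,
                activation='sigmoid',
                encoder_weights='imagenet')
model.compile('Adam',
              loss=sm.losses.bce_jaccard_loss,
              metrics=[sm.metrics.iou_score])
```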

junqiangchen commented 3 years ago

> I found a Keras version of Unet (https://github.com/qubvel/segmentation_models) that also integrates Unet, PSPNet, Linknet, and FPN, with freely switchable backbones such as VGG, ResNet, InceptionV3, and DenseNet121. I also referred to Zongwei Zhou's UNet++ (https://github.com/MrGiovanni/UNetPlusPlus) and, by modifying the network, integrated UNet++ into that library; I haven't shared it on GitHub yet. I suggest you also integrate VNET into Keras, so that more people can use your network.

OK, thank you very much for the suggestion. I will find time to integrate the network into a Keras version. Thanks!