tkone2018 opened this issue 4 years ago
Hello. Assuming you have your environment configured according to this file, everything else is provided in this repo.
To use our model on your own data, run evaluate.py. The input arguments are self-explanatory.
Trained models are provided for both under- and over-exposed images. The models are very small (<5 MB).
The models were 'curriculum' trained on 2 simulated datasets and 2 real datasets. These models are provided as "one size fits all". There is no need to perform any retraining on your own data if you intend to use them for comparison on ill-exposure problems. Also, the initial weights are random, so retraining would yield slightly different (inconsistent) results. Furthermore, we believe it would be unfair to most classic methods if the models were trained for each dataset individually.
If you intend to use the same model/training/loss function on another problem (say, dehazing or denoising), please let us know and we will provide you with the training code.
Ah, I want to train your model on my own dataset. Do you mean you will not release the training and model files right now? Thank you anyway.
At present I am working on image enhancement algorithms, haha, such as dehazing, deblurring, and contrast enhancement.
I would really appreciate it if you could give me the model and training code, haha.
Ok, so it is pretty straightforward then.
The network structure is already available in the HDF5 files. The loss function is already provided here. Fine-tuning should be no problem. All you have to do is build your own data loaders (it's your data, after all) and train.
Just keep in mind that this network was intended for exposure adjustment. It is a VERY small model and probably won't be as good as others for general image-to-image transformation tasks.
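To make the "build your own data loaders" step concrete, here is a rough sketch of a paired-data loader in NumPy. It is only an illustration, not the authors' code: the pair structure (ill-exposed input, well-exposed reference), crop size, and batch size are all assumptions you would adapt to your own dataset.

```python
import numpy as np

def random_crop_pair(ill, ref, size=128, rng=None):
    """Take the same random crop from an ill-exposed image and its reference."""
    rng = rng or np.random.default_rng()
    h, w = ill.shape[:2]
    top = int(rng.integers(0, h - size + 1))
    left = int(rng.integers(0, w - size + 1))
    return (ill[top:top + size, left:left + size],
            ref[top:top + size, left:left + size])

def batch_generator(pairs, batch_size=8, size=128, seed=0):
    """Yield (input, target) batches indefinitely, as Keras fit expects.

    `pairs` is a list of (ill_image, ref_image) arrays of shape (H, W, 3).
    """
    rng = np.random.default_rng(seed)
    while True:
        idx = rng.choice(len(pairs), batch_size)
        crops = [random_crop_pair(*pairs[i], size=size, rng=rng) for i in idx]
        x = np.stack([c[0] for c in crops]).astype('float32')
        y = np.stack([c[1] for c in crops]).astype('float32')
        yield x, y
```

Such a generator can be handed to `model.fit` together with the loss function already provided in this repo.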
I will try to implement it, thank you. I might need your help later.
Hello, I ran into many problems writing the training code. Can you share train.py, model.py, and the dataset? I want to train on the dataset you used. Really, thank you.
Ok, and I would like to know the number of filters per layer. Can you share that? Thanks.
Sorry for the delay; I am locked out of our local repositories during the pandemic.
I puzzled the model definition together from unorganized scripts I had lying around, and I have not actually tested it. Once again, you would be better off using the models we already provide in this repo, but anyway, have a go!
# Keras 2 functional-API imports the snippet relies on:
from keras.layers import Input, Conv2D, Concatenate, BatchNormalization
from keras.models import Model

def dilated_cell_module(units, x, activ='elu', conv_module_type=1):
    if conv_module_type == 1:
        # as in steffens - lars 2018
        dc0 = Conv2D(units//4, 1, strides=1, dilation_rate=1, activation=activ, padding='same')(x)
        dc1 = Conv2D(units//4, 3, strides=1, dilation_rate=1, activation=activ, padding='same')(x)
        dc2 = Conv2D(units//4, 3, strides=1, dilation_rate=2, activation=activ, padding='same')(x)
        dc4 = Conv2D(units//4, 3, strides=1, dilation_rate=4, activation=activ, padding='same')(x)
        m1 = Concatenate(axis=-1)([dc0, dc1, dc2, dc4])
        return m1
    elif conv_module_type == 2:
        dc1 = Conv2D(units//4, 3, strides=1, dilation_rate=1, activation=activ, padding='same')(x)
        dc2 = Conv2D(units//4, 3, strides=1, dilation_rate=2, activation=activ, padding='same')(x)
        dc4 = Conv2D(units//4, 3, strides=1, dilation_rate=4, activation=activ, padding='same')(x)
        # note: the original used border_mode='same' here, the Keras 1 keyword;
        # padding='same' is the Keras 2 equivalent
        dc8 = Conv2D(units//4, 3, strides=1, dilation_rate=8, activation=activ, padding='same')(x)
        m1 = Concatenate(axis=-1)([dc8, dc1, dc2, dc4])
        dc0 = Conv2D(units, 1, strides=1, dilation_rate=1, activation=activ, padding='same')(m1)
        return dc0
def build_dense_u_model31(img_rows=None, img_cols=None, activ='tanh', conv_module_type=2, img_channels=3):
    first_input = Input(shape=(img_rows, img_cols, img_channels))
    dcm1 = dilated_cell_module(32, first_input, activ, conv_module_type)
    l1 = Conv2D(32, 3, strides=2, dilation_rate=1, activation=activ, padding='same')(dcm1)  # 128
    l1 = dilated_cell_module(64, l1, activ, conv_module_type)
    l2 = Conv2D(32, 3, strides=2, dilation_rate=1, activation=activ, padding='same')(l1)  # 64
    l2 = dilated_cell_module(128, l2, activ, conv_module_type)
    l3 = Conv2D(32, 3, strides=2, dilation_rate=1, activation=activ, padding='same')(l2)  # 32
    l3 = dilated_cell_module(256, l3, activ, conv_module_type)
    l3 = upsampling_cell_module(32, l3, 2)  # 64
    l4 = Concatenate(axis=-1)([l3, l2])
    l4 = dilated_cell_module(32, l4, activ, conv_module_type)
    l5 = upsampling_cell_module(32, l4, 2)  # 128
    l5 = dilated_cell_module(32, l5, activ, conv_module_type)
    l6 = Concatenate(axis=-1)([l1, l5])  # 128
    l6 = upsampling_cell_module(32, l6, 2)
    l6 = dilated_cell_module(32, l6, activ, conv_module_type)
    r1 = Conv2D(32, 3, strides=1, dilation_rate=1, activation=activ, padding='same')(dcm1)
    r2 = Conv2D(32, 3, strides=1, dilation_rate=1, activation=activ, padding='same')(r1)
    m1 = Concatenate(axis=-1)([l6, r2])
    m1 = BatchNormalization()(m1)
    c1 = Conv2D(32, 3, strides=1, dilation_rate=1, activation=activ, padding='same')(m1)
    c2 = Conv2D(32, 1, strides=1, dilation_rate=1, activation=activ, padding='same')(c1)
    o1 = Conv2D(3, 1, strides=1, name='out', activation=activ, padding='same')(c2)
    model = Model(inputs=first_input, outputs=o1)
    model.summary()
    # plot_model(model, to_file='phi_net.png', show_shapes=True)
    return model
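One caveat: the snippet above calls `upsampling_cell_module`, which the author did not include. A plausible stand-in (my guess from the call sites `upsampling_cell_module(units, tensor, factor)`, not the authors' actual code) is nearest-neighbour upsampling followed by a convolution:

```python
# Guessed reconstruction of the missing upsampling_cell_module; the authors'
# real module may differ (e.g. it could use Conv2DTranspose instead).
from tensorflow.keras.layers import UpSampling2D, Conv2D

def upsampling_cell_module(units, x, factor=2, activ='elu'):
    # Double the spatial resolution (factor=2 at every call site above),
    # then smooth with a 3x3 convolution to the requested channel count.
    u = UpSampling2D(size=(factor, factor))(x)
    return Conv2D(units, 3, strides=1, activation=activ, padding='same')(u)
```

With this definition in place, `build_dense_u_model31` builds end to end; whether it reproduces the published weights' architecture exactly is not guaranteed.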
Anyway, really, thank you. Is the train.py file hard to implement?
Haha, I am really sorry for asking so many questions. I would like to know how your dataset is constructed and partitioned. Thank you.
We usually use a 70% train, 10% validation, and 20% test split. We start from simulated data and then, once the model fits that data, we continue fine-tuning on real ill-exposed images.
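The 70/10/20 split described above can be sketched in a few lines. This is a generic illustration, not the authors' script; the fixed seed is an arbitrary choice so the split is reproducible.

```python
import random

def split_dataset(files, train=0.7, val=0.1, seed=42):
    """Shuffle a file list and split it 70/10/20 into train/val/test."""
    files = list(files)
    random.Random(seed).shuffle(files)
    n_train = int(len(files) * train)
    n_val = int(len(files) * val)
    return (files[:n_train],                      # 70% train
            files[n_train:n_train + n_val],       # 10% validation
            files[n_train + n_val:])              # remaining ~20% test
```

For the curriculum scheme, one would apply such a split separately to the simulated and the real datasets, training on the simulated split first and then fine-tuning on the real one.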
I believe back when we published this paper we used simulated data based on
We also used real data from
If you are redoing the experiments in 2020, you could also include other datasets listed here.
Thank you very much; you have helped me a lot.
Can you find the train.py file? I downloaded the SICE dataset, but I do not know how to build a data loader for it. I need your help.
My question is: do I need to train the under-exposure and over-exposure models separately?
Yes. Sorry, I forgot about that. We trained separate models for under- and over-exposure.
So, can you share the full files: model.py, train.py, and data_loader.py? I really need your help. I only want to reproduce your code, not improve your model. Can you share them?
My friend, can you help me?
Hello, thanks for your code. Have you released the training and model files? Thanks.