Work in progress: results can't be replicated yet with the models here (see `model-tiramasu-67-func-api.py`).

Feel free to open issues for suggestions :)
What is The One Hundred Layers Tiramisu?
An extension of DenseNets to deal with the problem of semantic segmentation.

Fully Convolutional DenseNet = (Dense Blocks + Transition Down Blocks) + (Bottleneck Blocks) + (Dense Blocks + Transition Up Blocks) + Pixel-Wise Classification Layer
Install the dependencies with `pip install -r requirements.txt`.
* DenseBlock: BatchNormalization + Activation [ ReLU ] + Convolution2D + Dropout
* TransitionDown: BatchNormalization + Activation [ ReLU ] + Convolution2D + Dropout + MaxPooling2D
* TransitionUp: Deconvolution2D (transposed convolution); a Keras sketch of all three blocks follows below.
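The sketch below is not the repo's code; it is a minimal Keras interpretation of the three blocks and of the overall layout described above, using the modern layer names (`Conv2D` for `Convolution2D`, `Conv2DTranspose` for `Deconvolution2D`). The growth rate, dropout rate, number of blocks, and per-block layer counts are illustrative assumptions, not the FC-DenseNet 56/67/103 configurations.

```python
# Sketch only: FC-DenseNet-style blocks and assembly, not the repo's exact scripts.
from tensorflow.keras import Model, layers

def dense_block(x, num_layers, growth_rate=16, dropout=0.2):
    """DenseBlock: each layer is BN -> ReLU -> Conv2D -> Dropout, and its
    output is concatenated onto the running feature map (dense connectivity)."""
    for _ in range(num_layers):
        y = layers.BatchNormalization()(x)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        y = layers.Dropout(dropout)(y)
        x = layers.Concatenate()([x, y])
    return x

def transition_down(x, dropout=0.2):
    """TransitionDown: BN -> ReLU -> 1x1 Conv2D -> Dropout -> 2x2 MaxPooling."""
    y = layers.BatchNormalization()(x)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(x.shape[-1], 1, padding="same")(y)
    y = layers.Dropout(dropout)(y)
    return layers.MaxPooling2D(2)(y)

def transition_up(x, filters):
    """TransitionUp: a strided transposed convolution that doubles the spatial size."""
    return layers.Conv2DTranspose(filters, 3, strides=2, padding="same")(x)

def fc_densenet(input_shape=(224, 224, 3), n_classes=12,
                down_blocks=(4, 5, 7), bottleneck_layers=10):
    """(DenseBlock + TransitionDown) x N, a bottleneck DenseBlock,
    then (TransitionUp + DenseBlock) x N with skip connections,
    finished by a 1x1 convolution + softmax for pixel-wise classification."""
    inputs = layers.Input(input_shape)
    x = layers.Conv2D(48, 3, padding="same")(inputs)

    skips = []
    for n in down_blocks:                      # downsampling path
        x = dense_block(x, n)
        skips.append(x)
        x = transition_down(x)

    x = dense_block(x, bottleneck_layers)      # bottleneck

    for n, skip in zip(reversed(down_blocks), reversed(skips)):  # upsampling path
        x = transition_up(x, filters=x.shape[-1])
        x = layers.Concatenate()([x, skip])
        x = dense_block(x, n)

    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return Model(inputs, outputs)

model = fc_densenet()
model.summary()
```

In the paper, the upsampling path concatenates only the new feature maps produced inside each dense block (to keep feature growth in check); the sketch above skips that detail for brevity.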
* Use `data_loader.py` to crop images to 224 x 224, as in the paper implementation (a minimal cropping sketch is shown after this list).
* Run `python model-tiramasu-67-func-api.py` or `python model-tirmasu-56.py` for now to generate each model's file.
* Run `python train-tirmasu.py` to start training; a `data_loader` is included for the CamVid dataset.
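For the 224 x 224 crop, the snippet below is a hypothetical stand-in for whatever `data_loader.py` actually does: a plain centre crop on a NumPy image array. The function name `center_crop` is invented for illustration.

```python
import numpy as np

def center_crop(img, size=224):
    """Crop an (H, W, C) image array to (size, size, C) around its centre.
    Assumes the image is at least `size` pixels tall and wide."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

# CamVid frames are 360 x 480, so a centre crop to 224 x 224 fits comfortably.
frame = np.zeros((360, 480, 3), dtype=np.uint8)
print(center_crop(frame).shape)  # (224, 224, 3)
```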
`helper.py` contains two methods, `normalized` and `one_hot_it`, currently for the CamVid task (a sketch of `one_hot_it` is shown below).
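The sketch below shows what a CamVid-style `one_hot_it` plausibly does, assuming 12 classes and a label image whose pixels hold integer class indices; it is not copied from `helper.py`, and `normalized` (presumably input image normalisation) is not reproduced here.

```python
import numpy as np

def one_hot_it(labels, n_classes=12):
    """Turn an (H, W) array of integer class indices into an
    (H, W, n_classes) one-hot array for a pixel-wise softmax target."""
    h, w = labels.shape
    one_hot = np.zeros((h, w, n_classes), dtype=np.float32)
    one_hot[np.arange(h)[:, None], np.arange(w)[None, :], labels] = 1.0
    return one_hot

# Example: a 2 x 2 label patch with classes 0, 1, 2, 11.
patch = np.array([[0, 1], [2, 11]])
print(one_hot_it(patch).shape)  # (2, 2, 12)
```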
* Run `git clone git@github.com:alexgkendall/SegNet-Tutorial.git`, then copy its `/CamVid` folder to here, or change the `DataPath` in `data_loader.py` to the above directory.
* Then run `python data_loader.py` to generate these two files: `/data/train_data.npz` and `/data/train_label.npz` (a loading sketch follows this list).
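A quick way to check the generated files, assuming `data_loader.py` writes them with `np.savez` (so the arrays come back under NumPy's default `arr_0` key; adjust the key, or use plain `np.load` on `.npy` files, if the script saves them differently):

```python
import numpy as np

train_data = np.load("data/train_data.npz")["arr_0"]
train_label = np.load("data/train_label.npz")["arr_0"]

# Shapes depend on how data_loader.py stacks the crops,
# e.g. (N, 224, 224, 3) images and (N, 224, 224, 12) one-hot labels.
print(train_data.shape, train_label.shape)
```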
| Models | Acc | Loss | Notes |
| --- | --- | --- | --- |
| FC-DenseNet 67 | | | 150 Epochs, RMSPROP |
- [x] FC-DenseNet 103
- [x] FC-DenseNet 56
- [x] FC-DenseNet 67
- [ ] Replicate Test Accuracy CamVid Task
- [ ] Replicate Test Accuracy GaTech Dataset Task
- [ ] Requirements
Original Results Table: