Thanks @ahundt, it's great! Just please remove the slurm.sh file. And may I ask, what result did you get for the densenet models?
I'm currently verifying that training the atrous ResNet-50 you supply on the extended Berkeley dataset with the Adam optimizer reproduces the results in https://github.com/aurora95/Keras-FCN/issues/4, but with 11k images it has been taking several days and is only on epoch 137. Is something like 2485s per epoch expected?
Unfortunately, at this point both Atrous DenseNet and DenseNetFCN are performing noticeably worse than the ResNet models. One factor may be the benefit of ImageNet based pre-trained weights, which are not yet integrated here; the other may be the Adam optimizer configuration, which I'm testing as mentioned above.
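For reference, the kind of optimizer comparison in question looks roughly like the sketch below in Keras. The model and all hyperparameter values here are placeholders, not the exact settings under test:

```python
from keras.models import Sequential
from keras.layers import Conv2D
from keras.optimizers import Adam

# Tiny stand-in model just to show the compile step; the real networks are
# the atrous ResNet / DenseNet variants provided by Keras-FCN.
model = Sequential([Conv2D(21, (1, 1), activation='softmax',
                           input_shape=(320, 320, 3))])

# Hypothetical Adam settings being compared; these values are placeholders,
# not necessarily the ones used in train.py.
model.compile(optimizer=Adam(lr=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-8),
              loss='categorical_crossentropy')
```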
Despite the DenseNet performance issues, this pull request still brings an initial implementation of COCO training support (i.e. the pipeline runs), a couple of useful functionality extensions, and a fair amount of cleanup.
Great! Thanks @ahundt !
About your experiments: you don't need to run so many epochs on the augmented dataset. Basically you just need to keep the total iteration count the same, which works out to about 25 or 30 epochs. And for DenseNet, I think pre-trained weights are an important factor; using pre-trained weights should improve the results substantially.
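(Roughly, the iteration-equivalence arithmetic looks like the sketch below; the dataset sizes and epoch count in it are illustrative placeholders, not the exact numbers for this repo.)

```python
# Rough iteration-equivalence arithmetic. The numbers below are hypothetical
# placeholders, not the exact dataset sizes or schedule used in Keras-FCN.
base_images = 1464        # e.g. a small non-augmented training set
base_epochs = 200         # epochs in the original schedule
augmented_images = 11000  # roughly the size of the extended Berkeley set

# Keeping total iterations fixed (batch size cancels out of the ratio):
total_iterations = base_images * base_epochs
equivalent_epochs = total_iterations / float(augmented_images)
print(equivalent_epochs)  # ~26.6, i.e. in the 25-30 epoch ballpark
```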
Anyway, thanks for your contribution!
I've made a number of improvements and have an initial integration of ms_coco.
The paths are now set to utilize the results of the automated download and setup at:
https://github.com/ahundt/tf-image-segmentation/tree/ahundt-keras/
Keras-FCN also has a new `train_coco.py`, which is configured to train the new DenseNet based networks on COCO. The original `train.py` has some new functionality utilized by `train_coco.py`, but the current defaults of `train.py` should do something essentially as good as or better than the original atrous resnet training script before this pull request.
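For anyone wanting a mental model of what these scripts do, here is a minimal sketch of the fit_generator style training loop they implement. All names, shapes, and hyperparameters below are illustrative placeholders; the actual logic lives in `train.py` and `train_coco.py`:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D
from keras.optimizers import Adam

NUM_CLASSES = 81  # hypothetical: 80 COCO categories plus background

def dummy_coco_generator(batch_size=2, size=320):
    """Stand-in for the real COCO segmentation data generator."""
    while True:
        images = np.random.rand(batch_size, size, size, 3)
        labels = np.random.randint(0, NUM_CLASSES, (batch_size, size, size))
        yield images, np.eye(NUM_CLASSES)[labels]  # one-hot label maps

# Tiny placeholder network; the real models are the DenseNet / atrous ResNet
# variants provided by Keras-FCN.
model = Sequential([Conv2D(NUM_CLASSES, (1, 1), activation='softmax',
                           input_shape=(320, 320, 3))])
model.compile(optimizer=Adam(lr=1e-4), loss='categorical_crossentropy')

model.fit_generator(dummy_coco_generator(), steps_per_epoch=10, epochs=1)
```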