MarvinTeichmann / ConvCRF

This repository contains the reference implementation for our proposed Convolutional CRFs.

About speed comparison? #8

Open IzouGend opened 6 years ago

IzouGend commented 6 years ago

Thanks for your excellent work! I noticed ConvCRF is run on the GPU, while FullCRF is run on the CPU. Isn't it unfair to compare the two algorithms on different platforms?

MarvinTeichmann commented 6 years ago

The reason for this comparison is that there is simply no working FullCRF implementation available on GPU. CRFasRNN, DeepLab, and other segmentation systems utilizing CRFs use the very same CPU implementation. So it is a fair comparison, since it shows an improvement over the state of the art.

suhangpro commented 6 years ago

Thanks for the excellent work too! I have another question regarding the speed comparison:

You use a 4x4 average pooling before doing message passing, effectively reducing the computation 16-fold. But there isn't such a step in densecrf, right? In your arXiv draft, the only thing that seems relevant is the "Gaussian blur" described in Sec. 4.2. Is that referring to this pooling operation?

MarvinTeichmann commented 6 years ago

Yes, densecrf does bilinear downsampling internally.

suhangpro commented 6 years ago

Thanks for the prompt reply!

I understand that there is some interpolation happening when mapping the pixels onto a permutohedral lattice. Are you referring to that? There, the downsampling is done in the high-dimensional lattice space. In ConvCRF, the downsampling is done in image space (XY) instead.
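For illustration, here is a minimal sketch (my own, not the repository code) of the image-space (XY) downsampling being discussed: the input is 4x4 average-pooled before message passing and bilinearly upsampled afterwards. `message_passing` is a placeholder for one ConvCRF update step.

```python
import torch
import torch.nn.functional as F

def downsampled_message_passing(unary, message_passing, factor=4):
    """Pool in XY space, run message passing, upsample back."""
    _, _, h, w = unary.shape
    # 4x4 average pooling reduces the per-iteration cost roughly factor**2-fold.
    small = F.avg_pool2d(unary, kernel_size=factor, stride=factor)
    small = message_passing(small)  # placeholder for the actual CRF update
    # Bilinear upsampling back to the original resolution.
    return F.interpolate(small, size=(h, w),
                         mode='bilinear', align_corners=False)

# Example: 21-class unaries on a 512x512 image with an identity "update".
unary = torch.randn(1, 21, 512, 512)
out = downsampled_message_passing(unary, message_passing=lambda x: x)
print(out.shape)  # torch.Size([1, 21, 512, 512])
```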

hrtan commented 5 years ago

Hi, Marvin:

I notice that in your paper you report the speed of ConvCRF for different receptive field sizes from 3 to 13. Here is my question: why is there no conv size larger than 13? And does the speed reported in Table 1 indicate the time cost for just one iteration?

MarvinTeichmann commented 5 years ago

Hi Alex,

the performance does not improve past 13. Also, if you go bigger than 21 (I think it was 21), I ran into GPU memory issues (at 11 GB). The memory consumption also increases quadratically with filter size. So there is no reason why you would want to go past 11.
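As a rough back-of-the-envelope sketch (my own estimate, not from the paper or the repository), the quadratic memory growth is easy to see if you assume the message-passing step materialises a k x k neighbourhood for every pixel and class, e.g. via an im2col-style unfold:

```python
# Hypothetical estimate of the unfolded tensor size used for one message pass.
def message_tensor_bytes(batch, classes, height, width, filter_size,
                         bytes_per_float=4):
    """Bytes needed to hold a k*k neighbourhood per pixel and class."""
    return batch * classes * filter_size ** 2 * height * width * bytes_per_float

# Example: 21 classes on a 512x512 image (assumed dimensions).
for k in (3, 7, 13, 21):
    gb = message_tensor_bytes(1, 21, 512, 512, k) / 1024 ** 3
    print(f"filter size {k:2d}: ~{gb:.1f} GB for the unfolded tensor")
```

Under these assumed dimensions the unfolded tensor alone approaches the 11 GB mark around filter size 21, which is consistent with the memory issues mentioned above.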