ThomasLengeling opened this issue 7 years ago
Yeah, it will be available no later than the end of May, possibly after the NIPS deadline. Thanks.
Hi, I am confused about the tricks you used to make ICNet achieve 30 FPS relative to PSPNet50. In your paper you introduce the details of the compression method you chose, but in Section 6.1 you say: "When directly compressing PSPNet50 following our previously described procedure, 170ms inference time is yielded with mIoU reducing to 67.9% as shown in Table 3. They indicate that only model compression has almost no chance to achieve realtime performance under the condition of keeping decent segmentation quality. In what follows, we take the model compressed PSPNet50, which is reasonably accelerated, as our baseline system for comparison." So I am confused: does the speedup of ICNet come from the compressed PSPNet50 model you take as a baseline, or from the compression method you describe?
My understanding is that the speedup comes largely from balancing model depth against image resolution. Inference is run in parallel over three branches: 1) the full-resolution image through a 4-layer path, 2) the 1/2-resolution image through an 8-layer path, and 3) the 1/4-resolution image through a 16-layer path. The branch outputs are then combined to produce the final prediction in ~30 ms.
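If it helps, the multi-branch idea above can be sketched roughly like this. Note this is just an illustrative toy, not the actual ICNet code: the `branch` function is a hypothetical stand-in for a sub-network of the given depth, and the fusion here is a plain average rather than ICNet's cascade feature fusion:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool by an integer factor (H and W must divide evenly)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbour upsample by an integer factor."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def branch(img, depth):
    """Hypothetical stand-in for a `depth`-layer path: repeated smoothing.
    Deeper branches do more work per pixel, but on fewer pixels."""
    out = img
    for _ in range(depth):
        out = (out + np.roll(out, 1, axis=0) + np.roll(out, 1, axis=1)) / 3.0
    return out

def multi_branch_fuse(img):
    # Branch 1: full resolution through a shallow (4-layer) path.
    b1 = branch(img, depth=4)
    # Branch 2: 1/2 resolution through a medium (8-layer) path.
    b2 = upsample(branch(downsample(img, 2), depth=8), 2)
    # Branch 3: 1/4 resolution through a deep (16-layer) path.
    b3 = upsample(branch(downsample(img, 4), depth=16), 4)
    # Fuse all three predictions back at full resolution.
    return (b1 + b2 + b3) / 3.0

img = np.random.rand(64, 64)
out = multi_branch_fuse(img)
print(out.shape)
```

The cost saving comes from the quadratic scaling of per-layer work with resolution: the 16-layer branch runs on 1/16 of the pixels, so most of the depth is spent where pixels are cheap.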
mark
mark, looking forward to it.
mark
still waiting for u lol
mark
any update?
Dear all, thanks for the patience, the evaluation code is ready now.
Hi, can you please let me know how to implement the same feature in an iOS/Android mobile application for background subtraction?
Hi Hengshuang,
I am wondering if, by any chance, your code is available for testing? Thank you.