zhou13 / lcnn

LCNN: End-to-End Wireframe Parsing
MIT License

infer too slow #43

Closed M-crazy closed 3 years ago

M-crazy commented 3 years ago

The results are very good, but it is extremely slow: about 3 s per image on GPU, which makes industrial deployment difficult.

zhou13 commented 3 years ago

The inference speed of LCNN is 12 images per second on a 1080Ti. There are follow-up works that address the efficiency issue.

lucy3589 commented 1 year ago

> The results are very good, but it is extremely slow: about 3 s per image on GPU, which makes industrial deployment difficult.

Regarding this problem, how did you solve it in the end?

lucy3589 commented 1 year ago

> The inference speed of LCNN is 12 images per second on a 1080Ti. There are follow-up works that address the efficiency issue.

Does the inference speed include the pre-processing and post-processing time? And did you speed up the wireframe detection algorithm?

guker commented 3 months ago

> The inference speed of LCNN is 12 images per second on a 1080Ti. There are follow-up works that address the efficiency issue.

> Does the inference speed include the pre-processing and post-processing time? And did you speed up the wireframe detection algorithm?

same question for you @zhou13

zhou13 commented 3 months ago

It has been a long time since I worked on this project, so I might be wrong, but I think the 12 images/sec is the network time, excluding pre- and post-processing.
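For anyone profiling this themselves, a minimal sketch of separating the three timings discussed in this thread. The stage functions below are hypothetical stand-ins, not LCNN's actual API; substitute the real pre-processing, forward pass, and post-processing calls:

```python
import time

# Hypothetical stand-ins for the three stages; replace with the real
# pre-processing, network forward pass, and post-processing.
def preprocess(image):
    return [p / 255.0 for p in image]

def network_forward(tensor):
    # On a real GPU run, call torch.cuda.synchronize() before reading the
    # clock; CUDA kernel launches are asynchronous, so without a sync the
    # forward pass appears nearly free and the cost shows up in the next
    # timed stage instead.
    return sum(tensor)

def postprocess(raw):
    return {"score": raw}

def timed(fn, *args):
    """Return (result, elapsed_seconds) for one call of fn."""
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0

image = list(range(256))
tensor, t_pre = timed(preprocess, image)
raw, t_net = timed(network_forward, tensor)
result, t_post = timed(postprocess, raw)

print(f"pre={t_pre:.4f}s  net={t_net:.4f}s  post={t_post:.4f}s  "
      f"end-to-end={t_pre + t_net + t_post:.4f}s")
```

Measured this way, a "12 images/sec" figure that counts only the `net` column can coexist with a much slower end-to-end number once pre- and post-processing are included.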