Yes, the original paper promised 5s, but the HF demo says 10s, and the actual output takes about 20s. Is this due to video rendering lag? Why not also expose the 3D model for download? https://github.com/3DTopia/OpenLRM/issues/14
Hi @zhaoyang-lv, thanks for your suggestions! We haven't had time to implement this logic yet, but quantitative evaluation is indeed necessary for benchmarking, and we will consider releasing it together with the training code.
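In the meantime, a minimal sketch of the per-view metrics the original LRM paper reports (PSNR, SSIM, LPIPS), assuming rendered and ground-truth views are already available as `(N, 3, H, W)` tensors in `[0, 1]`; `torchmetrics` is used here purely for convenience and is not a dependency of this repo:

```python
import torch
from torchmetrics.functional import (
    peak_signal_noise_ratio,
    structural_similarity_index_measure,
)
from torchmetrics.image import LearnedPerceptualImagePatchSimilarity

# LPIPS with a VGG backbone; normalize=True accepts inputs in [0, 1].
lpips = LearnedPerceptualImagePatchSimilarity(net_type="vgg", normalize=True)

def evaluate_views(pred: torch.Tensor, gt: torch.Tensor) -> dict:
    """Per-batch novel-view metrics; pred/gt are (N, 3, H, W) in [0, 1]."""
    return {
        "psnr": peak_signal_noise_ratio(pred, gt, data_range=1.0).item(),
        "ssim": structural_similarity_index_measure(pred, gt, data_range=1.0).item(),
        "lpips": lpips(pred, gt).item(),
    }
```

Whether the numbers are comparable to the paper still depends on matching its camera sampling and evaluation set, which is exactly what a released evaluation script would pin down.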
Hi @yosun, thank you for your interest.
First of all, OpenLRM is an open-source implementation of the amazing LRM paper. The performance of OpenLRM does NOT represent the original LRM.
For inference speed, we observe around 5-6s for OpenLRM on an A100 GPU locally. On the HF demo, the whole pipeline, including video rendering, takes about 8s on an A10G, so 10s is still a conservative estimate. The time displayed when you hit the "Generate" button is the expected total time based on history, which includes queueing and background-removal operations.
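If you want to verify the split between reconstruction and rendering on your own GPU, here is a minimal timing sketch; `reconstruct` and `render_video` are hypothetical stand-ins for the actual OpenLRM calls (image to triplane, then triplane to video frames):

```python
import time
import torch

def timed(fn, *args):
    """Run fn and return (output, elapsed seconds), GPU-synchronized."""
    torch.cuda.synchronize()   # flush any queued GPU work before timing
    start = time.perf_counter()
    out = fn(*args)
    torch.cuda.synchronize()   # wait until the GPU actually finishes
    return out, time.perf_counter() - start

# `reconstruct` / `render_video` / `image` are placeholders, not repo APIs.
with torch.no_grad():
    planes, t_infer = timed(reconstruct, image)
    frames, t_render = timed(render_video, planes)

print(f"inference: {t_infer:.2f}s  rendering: {t_render:.2f}s")
```

The `torch.cuda.synchronize()` calls matter because CUDA kernels launch asynchronously; without them the wall-clock numbers mostly measure launch overhead rather than actual compute time.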
For 3D model export, please refer to this issue: https://github.com/3DTopia/OpenLRM/issues/14.
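Until an official exporter lands, here is a generic sketch of how a mesh can be extracted from a NeRF-style density field with marching cubes; the `query_density` hook is hypothetical and stands in for OpenLRM's triplane density query, and the `skimage`/`trimesh` dependencies are assumptions, not repo requirements:

```python
import torch
from skimage import measure
import trimesh

def extract_mesh(query_density, resolution=128, bound=1.0, threshold=10.0):
    """Sample density on a grid and run marching cubes to get a mesh.

    query_density: hypothetical hook mapping (N, 3) world coords -> (N,) densities.
    For large resolutions, chunk the query to avoid running out of memory.
    """
    axis = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
    with torch.no_grad():
        density = query_density(grid.reshape(-1, 3)).reshape(
            resolution, resolution, resolution
        )
    verts, faces, normals, _ = measure.marching_cubes(
        density.cpu().numpy(), level=threshold
    )
    # Marching cubes returns voxel-index coordinates; map back to world space.
    verts = verts / (resolution - 1) * 2 * bound - bound
    return trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
```

The resulting `trimesh.Trimesh` can then be saved with `mesh.export("model.obj")` for download.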
It would help to document that inference indeed takes ~5s on an A100 (I'm also curious about results for the A6000, H100, etc.). Would excluding video rendering and simply outputting the mesh reduce the total to about 5s?
I did not find information about the evaluation of the released models. Are there any evaluations of the released models that can be compared to the original paper under the same settings?