hyn2028 / llm-cxr

Official code for "LLM-CXR: Instruction-Finetuned LLM for CXR Image Understanding and Generation"
https://arxiv.org/abs/2305.11490
Apache License 2.0

Speed up the inference time #4

Closed o0t1ng0o closed 1 year ago

o0t1ng0o commented 1 year ago

Hi @hyn2028,

I found that evaluating each sample with "generate_llmcxr.py" takes about 5 seconds. The MIMIC-CXR test set contains roughly 3k\~7k images, so running inference on the whole test set takes about 4\~9 hours.

Is there any way to speed up the inference process?
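For context, the 4\~9 hour figure follows directly from the numbers quoted above (~5 s per sample, 3k\~7k test images); a quick back-of-envelope check:

```python
# Back-of-envelope estimate using the figures from the comment above:
# ~5 seconds per sample, test set of roughly 3k to 7k images.
SECONDS_PER_SAMPLE = 5

for n_samples in (3_000, 7_000):
    hours = n_samples * SECONDS_PER_SAMPLE / 3600
    print(f"{n_samples} samples -> {hours:.1f} h")
```

This reproduces the lower and upper bounds of the estimate (about 4.2 and 9.7 hours).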

hyn2028 commented 1 year ago

Hello. The generation process also took that long in our testing. This is because the model's architecture (an autoregressive LLM) is not very efficient for image generation. I'm sorry I can't help; it's a limitation of the model.
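The inefficiency pointed out here is inherent to autoregressive decoding: every generated image token requires a full forward pass through the LLM, and the passes are sequential. A minimal sketch of why this is slow (the `step_fn` dummy model and the 256-token image size are illustrative assumptions, not actual LLM-CXR values):

```python
def autoregressive_generate(step_fn, prompt, n_new_tokens):
    """Generate tokens one at a time; each token costs one forward pass."""
    tokens = list(prompt)
    forward_passes = 0
    for _ in range(n_new_tokens):
        next_tok = step_fn(tokens)  # one full model forward per new token
        tokens.append(next_tok)
        forward_passes += 1
    return tokens, forward_passes

# Dummy "model" that always predicts token 0. If one image is encoded as
# a 16x16 grid of VQ tokens (256 tokens, an assumed size for illustration),
# generating a single image requires 256 sequential forward passes --
# unlike a GAN or diffusion decoder, none of them can be parallelized.
tokens, n_fwd = autoregressive_generate(lambda t: 0, [1, 2, 3], 256)
print(n_fwd)  # 256
```

Because the cost scales with the number of image tokens and each pass must wait for the previous one, per-sample latency cannot easily be reduced without changing the model itself; the remaining option is running multiple samples in parallel.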

o0t1ng0o commented 1 year ago

Oh, I see. Thank you for your reply.