UCSC-VLAA / MedTrinity-25M

This is the official repository of our paper "MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular Annotations for Medicine".

How to run an inference? #2

Closed satheeshKOLA532 closed 2 weeks ago

satheeshKOLA532 commented 3 weeks ago
Can anyone share inference code that loads the checkpoints provided in the model zoo and uses them to build a chat interface for medical images? Please include the ability to set a system prompt, ask a question with or without an image, and pass chat history (session data) so context carries through the conversation.

Originally posted by @satheeshKOLA532 in https://github.com/UCSC-VLAA/MedTrinity-25M/issues/1#issuecomment-2308014155

yunfeixie233 commented 3 weeks ago

Hi,

The inference code is already included in the README of our repository. Currently, we do not support a chat interface. However, we might consider adding this feature in the future.
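Until a chat interface ships, the session-state side of the request (system prompt, image or text-only questions, history carried across turns) can be handled outside the model. Below is a minimal sketch of that bookkeeping; the `USER:`/`ASSISTANT:` template and the `<image>` placeholder are assumptions borrowed from the LLaVA convention, not the repository's documented format, so check the README for the prompt format the checkpoints actually expect.

```python
# Minimal sketch of conversation-state handling for a chat wrapper.
# Assumption: a LLaVA-style prompt template with "USER:/ASSISTANT:" turns
# and an "<image>" placeholder; the model's real template may differ.

class ChatSession:
    """Accumulates turns so context carries through the conversation."""

    def __init__(self, system_prompt=""):
        self.system_prompt = system_prompt
        self.history = []  # list of (question, answer) pairs

    def build_prompt(self, question, with_image=False):
        """Assemble the full prompt: system prompt, past turns, new question."""
        parts = []
        if self.system_prompt:
            parts.append(self.system_prompt)
        for q, a in self.history:
            parts.append(f"USER: {q}\nASSISTANT: {a}")
        # "<image>" marks where image features are spliced in (LLaVA
        # convention); omit it for text-only questions.
        tag = "<image>\n" if with_image else ""
        parts.append(f"USER: {tag}{question}\nASSISTANT:")
        return "\n".join(parts)

    def record(self, question, answer):
        """Store a completed turn so later prompts include it."""
        self.history.append((question, answer))


# Usage: first turn with an image, then a text-only follow-up that
# automatically includes the earlier exchange.
session = ChatSession(system_prompt="You are a helpful medical assistant.")
prompt = session.build_prompt("What does this scan show?", with_image=True)
session.record("What does this scan show?", "A chest X-ray, no acute findings.")
followup = session.build_prompt("Is any follow-up imaging needed?")
```

The string returned by `build_prompt` would then be fed to whatever generation call the README's inference code uses, and each model reply stored via `record` before the next turn.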