Can anyone share the inference code to load the checkpoints (provided in the model zoo) and use it to build a chat interface for medical images? Please include provisions for setting a system prompt, passing an image plus a question, passing a question only, and passing chat history (session data) to carry context throughout the conversation.
The inference code is already included in the README of our repository. We do not currently support a chat interface, but we may consider adding one in the future.
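In the meantime, one way to carry context across turns is to keep the system prompt and turn history in a small session object and rebuild the full prompt before each model call. The sketch below is purely illustrative: the `ChatSession` class, the `USER`/`ASSISTANT` role labels, and the `<image>` placeholder token are assumptions, not MedTrinity-25M's actual API, and the resulting prompt string would still need to be fed to the model's own generation code from the README.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ChatSession:
    """Hypothetical session wrapper: stores the system prompt and the
    (role, text) turn history so context persists across the conversation."""
    system_prompt: str = "You are a helpful medical imaging assistant."
    history: List[Tuple[str, str]] = field(default_factory=list)

    def add_user_turn(self, question: str, image_path: Optional[str] = None) -> None:
        # An <image> placeholder marks where vision features would be injected;
        # the real model's image handling will differ.
        text = f"<image>\n{question}" if image_path else question
        self.history.append(("USER", text))

    def add_assistant_turn(self, answer: str) -> None:
        self.history.append(("ASSISTANT", answer))

    def build_prompt(self) -> str:
        # Flatten system prompt + all prior turns into one prompt string,
        # ending with an open ASSISTANT turn for the model to complete.
        lines = [self.system_prompt]
        for role, text in self.history:
            lines.append(f"{role}: {text}")
        lines.append("ASSISTANT:")
        return "\n".join(lines)

# Example usage: image+question first turn, question-only follow-up.
session = ChatSession()
session.add_user_turn("What does this scan show?", image_path="scan.png")
session.add_assistant_turn("This appears to be a chest X-ray.")
session.add_user_turn("Are there any abnormalities?")
prompt = session.build_prompt()
```

Each call to `build_prompt` replays the whole history, so the model sees the earlier image question and its answer when responding to the follow-up.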
Originally posted by @satheeshKOLA532 in https://github.com/UCSC-VLAA/MedTrinity-25M/issues/1#issuecomment-2308014155