Open xp1632 opened 1 week ago
microscopy field, so that they can now also use ImageJ in Chaldene and Jupyter Notebook.

Expected Input and Output:
1. Hybrid Input - Text:
   - This part is easy; we already have many existing LLM models that handle:
     - text --> text
     - text --> code
     - text --> user manual
     - text --> documentation
   - These mappings only require different datasets for fine-tuning
2. Hybrid Input - Marked Image
   - A text description may not be specific enough for certain tasks
   - It is more intuitive if the end user can send the image they want to process, with certain marks on it
3. Output
   - The output could be code, a user manual, or documentation
   - We can also provide a visual workflow so the user can understand it further
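To make the text --> code case concrete, here is a minimal sketch of what a fine-tuning dataset for that mapping could look like, serialized as JSON Lines (a common fine-tuning format). The example requests and the PyImageJ-style code snippets in the `output` fields are purely illustrative assumptions, not part of the proposal above:

```python
import json

# Hypothetical (text -> code) fine-tuning pairs; the ImageJ snippets are
# illustrative strings, not verified against a specific ImageJ version.
examples = [
    {
        "input": "Apply a Gaussian blur with sigma 2 to the open image.",
        "output": 'ij.IJ.run(imp, "Gaussian Blur...", "sigma=2")',
    },
    {
        "input": "Convert the current image to 8-bit grayscale.",
        "output": 'ij.IJ.run(imp, "8-bit", "")',
    },
]

def to_jsonl(records):
    """Serialize the pairs as JSON Lines: one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
print(jsonl)
```

The same JSONL layout would work for the text --> user manual and text --> documentation cases by swapping the `output` field, which is why only the dataset needs to change between fine-tuning runs.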
Technical direction in #75: Multimodal Large Language Model