-
# Bug Report
```json
{
  "model": "llava-llama3:8b",
  "prompt": "tell me a story!",
  "stream": false
}
```
This is the POST body …
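For reference, a body like the one above can be built and sent with only the Python standard library. This is a minimal sketch, assuming a local ollama server on its default port 11434 and the standard `/api/generate` endpoint; with `"stream": false` the server returns a single JSON object rather than a stream of chunks.

```python
import json
import urllib.request

# The POST body from the report above; "stream": False asks for one
# complete JSON response instead of streamed chunks.
payload = {
    "model": "llava-llama3:8b",
    "prompt": "tell me a story!",
    "stream": False,
}

def build_request(url="http://localhost:11434/api/generate"):
    """Build the HTTP request; the URL assumes a local ollama server."""
    data = json.dumps(payload).encode("utf-8")
    # urllib infers the POST method whenever a data body is supplied.
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )

req = build_request()
# Sending it would be: urllib.request.urlopen(req) — requires a running server.
```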
-
Hi:
I want to upload more than one picture during training/inference, so I wonder: does MiniCPM-Llama3-V 2.5 support multi-picture uploading? If so, how can I achieve that?
-
ANIMINA users add photos to their stories, but we don't use those photos as input. Example: if somebody uploads a couple of photos with dogs, then we can use that information to assume that this person …
-
**Describe the bug**
When selecting the GPT-4 multimodal model, it is not possible to upload images in the input box.
**Screenshots**
![image](https://github.com/janhq/jan/assets/236159/170e8d5b-…
-
## MENTOR
- Matthew Artz
## BRIEF DESCRIPTION
Mural maps can be very complex and vast, containing a lot of important information that is forgotten over time. A multimodal LLM assistant could be useful…
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related iss…
-
### What is the issue?
Downloading of models on slow networks stops too frequently
```
(base) igor@macigor ~ % ollama run llava:7b
pulling manifest
pulling 170370233dd5... 23% ▕███          ▏ 9…
```
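Until pulls on slow links are more robust, one common workaround is simply re-running the pull, since ollama can reportedly resume a partially downloaded layer. The helper below is an illustrative retry-with-backoff sketch, not part of ollama; the `runner` parameter is injectable so the retry logic can be exercised without the CLI installed.

```python
import subprocess
import time

def backoff_delays(attempts, base_delay):
    """Exponential backoff schedule: base, 2*base, 4*base, ..."""
    return [base_delay * (2 ** a) for a in range(attempts)]

def pull_with_retries(model, attempts=5, base_delay=2.0, runner=subprocess.run):
    """Re-invoke `ollama pull` until it exits 0 or attempts run out.

    Each failed attempt waits progressively longer before retrying.
    """
    for delay in backoff_delays(attempts, base_delay):
        result = runner(["ollama", "pull", model])
        if result.returncode == 0:
            return True
        time.sleep(delay)
    return False

# Usage (requires the ollama CLI): pull_with_retries("llava:7b")
```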
-
`st.experimental_rerun` will be removed after 2024-04-01.
Debug: Handling user request for session state: {'discussion': '', 'rephrased_request': '', 'api_key': '', 'agents': [], 'whiteboard': '', '…
-
I thought I had installed the node correctly and downloaded the needed models, but the Omost tool doesn't generate the JSON code. Please help!
```
You are using ollama as the engine, no API key is re…
-
I started using this one and really like it:
https://github.com/InternLM/xtuner/blob/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336/README.md
If you decide to add it I really l…