-
I use the following code in the preprocessor script of a Mirth Connect channel:
// initialize a first_iteration flag.
// we don't want to add a carriage return
// before the MSH segment
var first_ite…
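The script is truncated above, but the flag-based approach the comments describe can be sketched in plain JavaScript (Mirth preprocessor scripts are JavaScript; `normalizeSegments` is an illustrative helper, not the original code):

```javascript
// Illustrative sketch of the first_iteration pattern described above:
// rebuild an HL7 message with \r between segments, adding no carriage
// return before the first (MSH) segment.
function normalizeSegments(message) {
  var segments = message.split(/\r\n|\r|\n/);
  var result = '';
  var first_iteration = true;
  for (var i = 0; i < segments.length; i++) {
    if (segments[i].length === 0) continue; // skip blank lines
    if (first_iteration) {
      first_iteration = false; // no carriage return before MSH
    } else {
      result += '\r';
    }
    result += segments[i];
  }
  return result;
}
```

In a real channel, the value the preprocessor script returns replaces the inbound message.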
-
I want to change the tokenizer so that it can handle Korean.
I would appreciate it if you could change LLM_PATH, and also let me know which parts of the code need to be modified.
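The repository's code isn't shown here, but in a typical HuggingFace-style setup the tokenizer is loaded from the same checkpoint that LLM_PATH points at, so switching to a Korean-capable tokenizer is often just a matter of changing that path. A hedged config sketch; the checkpoint name is a placeholder, not a recommendation:

```python
# Hypothetical sketch: point LLM_PATH at any checkpoint whose tokenizer was
# trained on Korean text. The path below is a placeholder.
LLM_PATH = "path/to/korean-capable-checkpoint"

# In transformers-based code, the tokenizer then picks up the Korean
# vocabulary from the checkpoint itself, e.g.:
#   tokenizer = AutoTokenizer.from_pretrained(LLM_PATH)
# Further changes are only needed if the code hard-codes a vocab size
# or special-token IDs from the old tokenizer.
```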
-
### Feature request
https://github.com/OpenBMB/MiniCPM is among the smallest multimodal models available. The latest version, https://huggingface.co/openbmb/MiniCPM-V-2, appears to be able to understand G…
-
### What happened?
I was trying to use the Fireworks https://fireworks.ai/models/fireworks/phi-3-vision-128k-instruct model with litellm, but `litellm.completion_cost` fails because the model is missi…
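A hedged sketch of the failure mode, not litellm's actual implementation: `completion_cost` multiplies token counts by per-token prices looked up in a static model cost map, so a model absent from that map has nothing to multiply by and raises. The map and prices below are made up for illustration:

```python
# Illustrative stand-in for litellm's model cost map; prices are made up.
MODEL_COST_MAP = {
    "gpt-4o-mini": {"input_cost_per_token": 0.15e-6, "output_cost_per_token": 0.6e-6},
}

def completion_cost(model, prompt_tokens, completion_tokens):
    """Toy cost lookup: fails exactly when the model is not in the map."""
    try:
        entry = MODEL_COST_MAP[model]
    except KeyError:
        raise ValueError(f"Model {model!r} is not in the cost map")
    return (prompt_tokens * entry["input_cost_per_token"]
            + completion_tokens * entry["output_cost_per_token"])
```

The usual workarounds are adding the missing model to the map (litellm documents a `litellm.register_model` helper for registering entries at runtime) or catching the error and skipping cost tracking for that model.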
-
Hi,
I got the following error during fine-tuning:
Traceback (most recent call last):
File "/home/kiosk1/.cache/pypoetry/virtualenvs/phi3-ivOQmoER-py3.9/lib/python3.9/site-packages/peft/peft_model.p…
-
Hi 👋🏻 Do you have any inference examples that I could use?
-
### 🚀 The feature, motivation and pitch
i.e. instead of this:
https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/openai/serving_chat.py#L138-L140
allow multiple images per request.
The idea is …
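The linked lines assume at most one image per prompt, but the OpenAI chat format the endpoint accepts already expresses multiple images naturally: one user message can carry several `image_url` content parts. A sketch of such a payload (URLs are placeholders):

```python
# OpenAI-style chat message carrying two images plus text; URLs are
# placeholders. Supporting this server-side means iterating over every
# image part instead of taking only the first one.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Compare these two images."},
            {"type": "image_url", "image_url": {"url": "https://example.com/a.png"}},
            {"type": "image_url", "image_url": {"url": "https://example.com/b.png"}},
        ],
    }
]

# What a multi-image-aware server would do: collect all image parts.
image_parts = [p for p in messages[0]["content"] if p["type"] == "image_url"]
```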