-
Running this script:
```python
import mlx.core as mx
from mlx_vlm import load, generate
import os
from pathlib import Path
# model_path = "mlx-community/llava-1.5-7b-4bit"
#model_path = "…
```
-
How can I use a planner with Phi-3 Vision? Could you give a code example in C#?
Sorry to bother you, I am new to this arena.
-
### Describe the bug
```
14:38:32-701185 INFO Loading "microsoft_Phi-3-medium-128k-instruct"
14:38:32-710507 INFO TRANSFORMERS_PARAMS=
{ 'low_cpu_mem_usage': True,
  'torch_dtype': torch.b…
```
-
It seems that microsoft/Phi-3.5-vision-instruct does not work with the config below:
```
torchrun --nproc_per_node=1 \
src/training/train.py \
--lora_enable True \
--vision_lora True \
-…
```
-
Thanks so much for your work on this!
How can I deploy this fine-tuned model (expose it via an API endpoint)? Can I use vLLM, or a library like this: https://github.com/EricLBuehler/mistral.rs, which sup…
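For context on the deployment question: vLLM exposes an OpenAI-compatible REST API (started with `vllm serve <model>`), so a fine-tuned vision model can be queried with a standard chat-completions request. Below is a minimal sketch of building such a request payload; the endpoint URL and model name are placeholders, not values from this thread.

```python
import json

# Hypothetical values -- adjust to your own deployment.
BASE_URL = "http://localhost:8000/v1"   # vLLM's default OpenAI-compatible route
MODEL = "my-finetuned-phi-3.5-vision"   # placeholder model name

def build_chat_request(prompt, image_url=None):
    """Build an OpenAI-style chat-completions payload.

    For vision models, images are passed as `image_url` content parts
    alongside the text prompt in a single user message.
    """
    content = [{"type": "text", "text": prompt}]
    if image_url is not None:
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": content}],
        "max_tokens": 256,
    }

payload = build_chat_request("Describe this image.", "https://example.com/cat.png")
body = json.dumps(payload)  # POST this body to f"{BASE_URL}/chat/completions"
```

The same payload shape works against any OpenAI-compatible server, so switching between vLLM and another backend only changes `BASE_URL`.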
-
### Describe the bug
```
Viewing image...
Traceback (most recent call last):
  File "D:\Python\~OpenInterpreter\lib\site-packages\interpreter\core\respond.py", line 79, in respond
    for chunk in …
```
-
Do you have any plans to support multimodal LLMs, such as MiniGPT-4/MiniGPT v2 (https://github.com/Vision-CAIR/MiniGPT-4/) and LLaVA (https://github.com/haotian-liu/LLaVA/)? That would be a significan…
-
Hi, I am trying to add LLM functionality to Android-compatible devices. Can anyone tell me how to build for Android? Also, is any help available on the front of multimodal LLM deployment on mobile w…
-
This is an issue to collect requests for model abliterations.
No one is required to abliterate your request, but it does make for a good place to check if someone else has used this process on the…
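For readers unfamiliar with the process being requested: "abliteration" usually refers to directional ablation, i.e. removing the component of a model's activations (or weights) along a learned "refusal direction". The sketch below shows only that core projection step on plain Python lists; real pipelines compute the direction from activation differences and apply it to tensors with torch.

```python
def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def ablate_direction(h, r):
    """Project the component along direction r out of vector h:
    h' = h - (h . r_hat) * r_hat, where r_hat is r normalised.
    After this, h' is orthogonal to r."""
    norm = dot(r, r) ** 0.5
    r_hat = [x / norm for x in r]
    coeff = dot(h, r_hat)
    return [hx - coeff * rx for hx, rx in zip(h, r_hat)]
```

Applying this to every hidden state (or folding it into the weight matrices) is what suppresses the behaviour associated with the ablated direction.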
-
Environment: debian 11, gcc 10.2, cuda 12.0, cudnn 8.8
### Run output:
```shell
[INFO] fastdeploy/runtime/runtime.cc(264)::CreatePaddleBackend Runtime initialized with Backend::PDINFER in Device::GPU.
be…
```