-
mlx 0.13.1
mlx-lm 0.13.1
mlx-vlm 0.0.5
```python
import mlx.core as mx
from mlx_vlm import load, generate
model_path = "ml…
-
I really appreciate the authors open-sourcing this great project, but I have a small question about the time consumed on the IntentQA and NExT-QA datasets (which are what I'm working on).
The lo…
-
[Here](https://github.com/opendatalab/HA-DPO/blob/42f72c536984c6ded016e89b70266f29f2428f33/ha_dpo/models/llava-v1_5/train_dpo.py#L218), why does this variable need to be multiplied by two?
Be…
-
One of the reviewers commented that this work addresses "yes or no" hallucination rather than the general hallucination problem, e.g., hallucination in captions.
I'm not very clear about their comment,…
-
```bash
[2024-05-14T03:15:53Z ERROR winit::platform_impl::platform] X11 error: XError {
description: "BadAlloc (insufficient resources for operation)",
error_code: 11,
…
-
I use this server config:
```json
{
  "host": "0.0.0.0",
  "port": 8085,
  "api_key": "api_key",
  "models": [
    {
      "model": "models/phi3_mini_model/phi3_mini_model.gguf",
      …
-
### Your current environment
```text
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC ve…
-
Thanks for your work.
I would like to know which would be more effective: continual fine-tuning, or fine-tuning on multiple instructions at once?
-
Hi team,
I have been successfully running llava-next video, but it suddenly stopped working after I pulled the latest changes from the repo two days ago. I have been trying to resolve the issue but…
-
Hi, the Image to Prompt node doesn't work correctly for generating images from the output prompt; it loops without outputting anything.
I'm using your workflow, Ollama version 1.30, Win 10, ComfyUI: 2092[96b4…