-
I tried to run the `evaluation` step following the code in the `readme`.
First, I downloaded the checkpoints from Hugging Face, and then set `--model-path /data2/TinyMed/pretrained_weights/Tinymed-ph…
-
### Software Environment
```Markdown
- paddlepaddle-gpu: 3.0.0.dev20240728
- paddlenlp: 3.0.0b0.post0
```
### Duplicate Check
- [X] I have searched the existing issues
### Bug Description
```Markdown
When running the example code with `block_attn` enabled, dynamic graph mode raises the following error:
T…
-
I'm trying to follow the instructions [here](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/phi3_mini_4k_instruct_clip_vit_large_p14_336)
but I'm stuck at the first step:
xtuner convert pth_to_h…
-
There is a new version of the amazing LLaVA model that uses Llama 3 or Phi-3:
https://huggingface.co/collections/MBZUAI/llava-llama-3-and-phi-3-mini-662b38b972e3e3e4d8f821bb
https://github.com/m…
-
Hi, I'm trying to use the phi-3-vision model following the documentation. I am able to run the following demo code with most of the models, such as `internvl` and `llava`, but I failed with `phi-3-vis…
-
Please integrate Phi-3 with LLaVA, as it performs comparably to Llama 3 on benchmarks.
-
For every model I've downloaded, the speed saturates my bandwidth (~13 MB/s) until it hits 98–99%. Then the download slows to a few tens of KB/s and takes hours to finish.
I've tried multipl…
-
I have an Intel CPU that supports a number of AVX features, but most of them are not picked up when using ollama. Below is the llama.log file:
system info: AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_…
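One way to cross-check what that "system info" line reports is to read the CPU feature flags the kernel itself advertises. The sketch below (an illustration, not part of ollama or llama.cpp; it assumes a Linux host with `/proc/cpuinfo`) compares the kernel-reported flags against the AVX family:

```python
# Read the CPU feature flags from /proc/cpuinfo (Linux only) to see
# which AVX-family instruction sets the hardware actually advertises,
# independently of what a prebuilt llama.cpp/ollama binary detects.
def cpu_flags():
    """Return the set of flags from the first 'flags' line in /proc/cpuinfo."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

def avx_support():
    """Map AVX / AVX2 / AVX512 to whether the CPU advertises them."""
    flags = cpu_flags()
    # /proc/cpuinfo reports the AVX-512 foundation subset as "avx512f"
    probes = [("AVX", "avx"), ("AVX2", "avx2"), ("AVX512", "avx512f")]
    return {name: flag in flags for name, flag in probes}

if __name__ == "__main__":
    for name, supported in avx_support().items():
        print(f"{name} = {int(supported)}")
```

If the hardware advertises a flag here but the log shows `0`, the binary was likely built without that instruction set rather than the CPU lacking it.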
-
Tested it, and it has very good quality for its size:
https://huggingface.co/openbmb/MiniCPM-V-2
-
Hi, I've been exploring this repo for the past couple of days and I find your work here really amazing. I'm curious if there are any plans to add support for the Phi-3-vision-128k-instruct model to th…