-
### Your current environment
llm = VLLMOpenAI(
    openai_api_key="EMPTY",
    openai_api_base=api_base,
    model_name="microsoft/Phi-3-vision-128k-instruct",
    model_kwargs={"stop": ["."]…
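For context, a minimal self-contained sketch of the OpenAI-style completion payload such a client sends to a vLLM server (field names follow the OpenAI completions API; the prompt text and `max_tokens` value here are illustrative assumptions, not taken from the report above):

```python
import json

# Sketch of the JSON body an OpenAI-compatible client (such as
# LangChain's VLLMOpenAI) posts to a vLLM server's /v1/completions
# endpoint. The model name mirrors the snippet above; "stop" carries
# the stop sequences passed via model_kwargs.
payload = {
    "model": "microsoft/Phi-3-vision-128k-instruct",
    "prompt": "Describe the image.",   # illustrative prompt
    "stop": ["."],                     # generation halts at the first period
    "max_tokens": 64,                  # illustrative limit
}

body = json.dumps(payload)
print(body)
```

The `stop` list is simply forwarded in the request body, which is why passing it through `model_kwargs` works for any OpenAI-compatible backend.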
-
*********************************************
Friendly reminder: according to informal community statistics, asking questions using the template speeds up replies and problem resolution.
*********************************************
## Environment
**-[Environment information]**
nvidia@nvidia-desktop:~/FastDeploy/python$ jet…
-
**Describe the bug**
The official vLLM wiki lists support for Phi-3-Vision (`microsoft/Phi-3-vision-128k-instruct`, `Phi3VForCausalLM`), but when I try to run it I get the following error:
`[rank0]: Tr…
-
[MiniCPM-V](https://github.com/OpenBMB/MiniCPM-V)
> [2024.05.24] We release the [MiniCPM-Llama3-V 2.5 gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf), which supports [llama.cpp](htt…
ycyy updated
5 months ago
-
One feature that would be super useful in Prompty is support for a `base_url` parameter in the "openai" type in the model configuration. One of the cool features of the AI Toolkit for VS Code extension is to…
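A hedged sketch of what such a configuration might look like in a Prompty file's frontmatter (the `base_url` key and its placement are the proposed addition being requested here, not confirmed current Prompty syntax; the model name and port are placeholders):

```yaml
model:
  api: chat
  configuration:
    type: openai
    name: gpt-4o            # placeholder model name
    # Proposed: point the openai client at a local or alternative
    # OpenAI-compatible endpoint (hypothetical key, not yet supported)
    base_url: http://localhost:8000/v1
```

This would let the same prompty file target local OpenAI-compatible servers instead of only the hosted API.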
-
# Compatibility Report
- Name of the game with compatibility issues: Halo: The Master Chief Collection
- Steam AppID of the game: 976730
## System Information
- GPU:
- Driver/LLVM version:
- …
-
```python
from PIL import Image
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
import torch

MODEL_NAME = "openbmb/MiniCPM-V-2_6"
image = Image.open("dubu.png").con…
```
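As a self-contained aside on the image-loading step above: converting with PIL guarantees a 3-channel RGB input, which vision models generally expect. A small in-memory image stands in for `dubu.png`, which is not available here:

```python
from PIL import Image

# Stand-in for Image.open("dubu.png"): a small in-memory RGBA image.
img = Image.new("RGBA", (32, 32), (255, 0, 0, 255))

# .convert returns a new Image; converting to RGB drops the alpha
# channel and normalizes palette/grayscale inputs to 3 channels.
rgb = img.convert("RGB")
print(rgb.mode, rgb.size)
```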
-
The following error occurred while running the script `finetune_moe.sh`:
`The model has moe layers, but None of the param groups are marked as MoE. Create a param group with 'moe' key set to True before…`
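That message comes from a check that expects expert parameters to sit in their own optimizer param group flagged with `'moe': True`. A minimal sketch of the expected shape, using plain dicts with placeholder strings standing in for real model parameters (the group names are hypothetical):

```python
# Placeholder lists stand in for the model's actual torch parameters.
dense_params = ["dense.weight", "dense.bias"]
expert_params = ["experts.0.weight", "experts.1.weight"]

param_groups = [
    {"params": dense_params, "name": "dense"},
    # The 'moe' key is what the failing check looks for.
    {"params": expert_params, "name": "expert_group", "moe": True},
]

# At least one group must be marked as MoE before building the optimizer.
has_moe_group = any(g.get("moe", False) for g in param_groups)
print(has_moe_group)
```

Building the optimizer from such a list, rather than from a single flat parameter group, is what satisfies the check.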
-
While the basic structure for multimodality integration is there in the code, I cannot find a suitable model to run it with. Most models' projectors are either too low resolution (~400px), or the unde…
-
I'm wondering if there's functionality in beartype for doing something like this:
```python
#!/usr/bin/env python3
""" This code does not work because these features are not implemented. """
impor…