-
I am on OS Debian 12.
I run the command `./scripts/llamafile/serve.py` from `/home/nick/AutoGPTDiet/autogpt/`
and get the error below:
```
type(llamafile) =
Traceback (most recent call last):
File "/home/nic…
```
-
If I want to change the default model `dolphin-2.2.1-mistral-7b.Q5_K_M.gguf` to another model, such as `Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf` or a custom-defined model, how should I modify it?
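I don't know exactly how serve.py wires up its default, but serve-style scripts typically expose the model either as a module-level constant or as an argparse default. A minimal sketch of that pattern (the names `DEFAULT_MODEL` and `--model` are my assumptions, not the actual serve.py API):

```python
# Hypothetical sketch of how a serve-style script usually exposes its default
# model. Changing the constant (or passing --model) swaps the served GGUF file.
# DEFAULT_MODEL and --model are illustrative names, not the real serve.py API.
import argparse

# Was: "dolphin-2.2.1-mistral-7b.Q5_K_M.gguf"
DEFAULT_MODEL = "Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf"


def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Serve a GGUF model")
    parser.add_argument(
        "--model",
        default=DEFAULT_MODEL,
        help="Path or filename of the .gguf model to serve",
    )
    return parser.parse_args(argv)


if __name__ == "__main__":
    args = parse_args()
    print(f"Serving model: {args.model}")
```

If the script follows this pattern, a custom model only needs `--model my-custom.gguf` on the command line rather than an edit to the source.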
-
[https://huggingface.co/upstage/solar-pro-preview-instruct](https://huggingface.co/upstage/solar-pro-preview-instruct)
Solar released a new 22B model, and this thing is crazy powerful. I was just won…
-
### Your current environment
vllm version: 0.5.4
gpu 24GB memory
### 🐛 Describe the bug
```bash
CUDA_VISIBLE_DEVICES=0 vllm serve mistralai/Mistral-7B-Instruct-v0.3 --api-key yyy --port 1…
```
-
Hi, I am trying to run the `Llama-3.1 8b + Unsloth 2x faster finetuning.ipynb` notebook you provided in the README. However, when I run the second cell in Google Colab, I get this error:
```bash
------…
```
-
I have thoroughly tested MoA (with one layer) on some objective benchmarks (less subjective than MT-Bench), such as GSM8K and HotpotQA.
It seems that when the LLMs are 7B-level, it does not work …
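For context on what "objective" means here: GSM8K-style benchmarks are usually scored by exact match on the extracted final answer, with no judge model involved. A minimal sketch of that scoring, assuming the GSM8K convention that reference answers end with `#### <number>` (the helper names are mine):

```python
import re


def extract_answer(text: str):
    """Extract the final numeric answer from a GSM8K-style completion.

    GSM8K references end with '#### <number>'; model outputs often just
    state the number last, so we fall back to the final number in the text.
    """
    m = re.search(r"####\s*(-?[\d,]+(?:\.\d+)?)", text)
    if m is not None:
        value = m.group(1)
    else:
        nums = re.findall(r"-?\d[\d,]*(?:\.\d+)?", text)
        value = nums[-1] if nums else None
    return value.replace(",", "") if value else None


def exact_match_accuracy(predictions, references):
    """Fraction of items where the extracted answers match exactly."""
    hits = sum(
        extract_answer(p) == extract_answer(r)
        for p, r in zip(predictions, references)
    )
    return hits / len(references)
```

Because scoring is a deterministic string match, any gap between MoA and a single model on these benchmarks reflects answer quality rather than judge preference.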
-
Hi, I am running a fresh install of H2O-GPT.
The running script is:
`python generate.py --inference_server="vllm:0.0.0.0:5001" --guest_name="" --enable_tts=False --enable_stt=False --base_model=mistral…`
-
### Your current environment
The output of `python collect_env.py`
```text
Your output of `python collect_env.py` here
```
### 🐛 Describe the bug
Some models like mistralai/Mistr…
-
### Python -VV
```shell
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In[29], line …
```
-
**Title:** Evaluation Code Produces Identical Results with Different Caching Methods
**Description:**
It seems the evaluation code produces identical results across different caching methods. I used…