-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
(MindSpore) [root@fd428729b7cb46b089e3705e66eecb16-task0-0 LLaMA-Factory]# llamafactory-cli train example…
-
After running the default flow on Mistral in vLLM, a large (>100 MB) report JSON is left in the directory where I ran the commands. This seems quite heavyweight, especially for a JSON file.
Instead, I …
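The excerpt cuts off before the suggestion, so purely as an illustration of how compressible such reports are: JSON of this kind is highly repetitive, and stock gzip typically shrinks it by an order of magnitude. The file name below is hypothetical, not taken from the issue.

```python
# Hypothetical sketch: gzip-compress a bulky JSON report after a run.
# "report.json" stands in for the actual file name, which isn't shown above.
import gzip
import shutil
from pathlib import Path

src = Path("report.json")
dst = Path("report.json.gz")
with src.open("rb") as f_in, gzip.open(dst, "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)
print(f"{src.stat().st_size:,} bytes -> {dst.stat().st_size:,} bytes")
```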
-
See https://github.com/premAI-io/prem-app/issues/514
-
### Your current environment
```
# reproduction code:
from vllm import LLM, SamplingParams
import datasets
raw_datasets = datasets.load_dataset("truthful_qa", "generation")
questions = [i['qu…
```
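The reproduction is truncated above; for context, a self-contained variant might look like the sketch below. The model name and sampling values are assumptions, not the reporter's exact settings.

```python
from datasets import load_dataset
from vllm import LLM, SamplingParams

# truthful_qa's "generation" config ships only a validation split
raw_datasets = load_dataset("truthful_qa", "generation")
questions = [row["question"] for row in raw_datasets["validation"]]

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.1")  # assumed model
params = SamplingParams(temperature=0.0, max_tokens=64)
for output in llm.generate(questions[:4], params):
    print(output.outputs[0].text.strip())
```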
-
### Environment
Conda environment:
python=3.10
mergekit commit f086664c983ad8b5f126d40ce2e4385f9e65f32c (latest as of yesterday)
transformers from git @ git+https://github.com/huggingface/transfo…
-
**Describe the bug**
I want to use local LLMs to evaluate my RAG app. I have tried Ollama and Hugging Face models, but neither of them works.
Ragas version: 0.1.11
Python version: 3.11.3
**…
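For reference, a minimal Ragas 0.1.x evaluation against a local Ollama model usually looks something like the sketch below. The model name and metric choice are assumptions, and it presumes a local Ollama server with the model already pulled; it is not the reporter's code.

```python
from datasets import Dataset
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

# one toy RAG sample in the column layout Ragas expects
dataset = Dataset.from_dict({
    "question": ["What is the capital of France?"],
    "answer": ["The capital of France is Paris."],
    "contexts": [["Paris is the capital and largest city of France."]],
})

result = evaluate(
    dataset,
    metrics=[faithfulness, answer_relevancy],
    llm=ChatOllama(model="llama3"),              # local judge model (assumed name)
    embeddings=OllamaEmbeddings(model="llama3"), # used by answer_relevancy
)
print(result)
```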
-
So that `llama_model_loader: loaded meta data with 20 key-value pairs and 291 tensors from .models/mistral-7b-instruct-v0.1.Q4_K_M.gguf...` is only shown once
```python
from funcchain import chain…
```
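Outside funcchain itself, the loader banner comes from llama-cpp-python; a hedged sketch of silencing it is to construct the model once with `verbose=False` and reuse that instance instead of re-loading per call:

```python
from llama_cpp import Llama

# Load once; verbose=False suppresses the llama_model_loader metadata dump.
llm = Llama(
    model_path=".models/mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    verbose=False,
)
out = llm("Q: What is 2+2? A:", max_tokens=8)
print(out["choices"][0]["text"])
```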
-
The current version requires an Internet connection to download the models when it is first used after deployment.
Will a future version add a way to deploy without an Internet connection? This would make LibrePhoto…
-
Looks promising:
```
[INST]
create a DOT graph to decide a mortgage loan. if credit score is greater than 700 then check years employed. else reject.
if years employed is greater than 3 then …
```
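For comparison, the rules that survive the excerpt map onto a small decision graph. A sketch using the graphviz Python package (node names are mine, and the branch after the second condition is cut off above):

```python
from graphviz import Digraph

g = Digraph("mortgage")
g.node("credit", "credit score > 700?", shape="diamond")
g.node("years", "years employed > 3?", shape="diamond")
g.node("reject", "reject", shape="box")
g.edge("credit", "years", label="yes")  # credit > 700 -> check years employed
g.edge("credit", "reject", label="no")  # else reject
# the outcome of the "years employed" test is truncated in the excerpt
print(g.source)  # emits the DOT text the prompt asks for
```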
-
* https://github.com/TheR1D/shell_gpt/wiki/Ollama
* config
`CHAT_CACHE_PATH=C:\Users\Y00655~1\AppData\Local\Temp\chat_cache`
`CACHE_PATH=C:\Users\Y00655~1\AppData\Local\Temp\cache`
`CHAT_CACHE_LENGTH=…`