-
### Your current environment
NUMA node(s): 2
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Vulnerability Gather data sampling…
-
```
cat .env
LLM_NAME="Ollama"
OLLAMA_MODEL_NAME="qwen:7b"
OLLAMA_BASE_URL="http://192.168.2.205:11434"
MIN_RELEVANCE_SCORE=0.3
BOT_TOPIC="OpenIM"
URL_PREFIX="http://192.168.2.205:11434"
USE_PREPRO…
```
-
```
Unsloth: Offloading input_embeddings to disk to save VRAM
Traceback (most recent call last):
File "/data/llmodel/Tools/software_inst…
```
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/eosphoros-ai/DB-GPT/issues?q=is%3Aissue) and found no similar issues.
### Operating system information
Linux
### P…
-
![image](https://github.com/mudler/LocalAI/assets/150896511/07a3e040-66d7-4a89-ae44-871343733ae2)
The AI reply does not make sense.
-
Hello, I have two questions:
1. Why doesn't the prompt in https://github.com/QwenLM/Qwen-Agent/blob/main/qwen_agent/llm/function_calling.py#L271 use the ReAct format used during training? This is a bit confusing.
2. According to Qwen's finetune_example_demo, during training the tool results appear to be unified into the observat…
-
### Describe the issue
Issue:
How to run inference for llava-next-72b/llava-next-110b?
There are too many versions of your LLaVA, the code does not seem to be compatible between them, and there are mul…
-
Detected CUDA files, patching ldflags
Emitting ninja build file /home/zzg-cx/.cache/torch_extensions/py38_cu116/inference_core_ops/build.ninja...
Building extension module inference_core_ops...
All…
-
### System Info
Ubuntu 24.04
Transformers 4.46.2
Accelerate 1.1.1
Safetensors 0.4.5
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own …
-
### System Info
- `transformers` version: 4.46.0
- Platform: Linux-5.15.0-97-generic-x86_64-with-glibc2.35
- Python version: 3.12.3
- Huggingface_hub version: 0.26.1
- Safetensors version: 0.4.…