-
I tried to merge adapter_model.safetensors and unsloth.Q8_0.gguf using your tool. Both were taken from here: https://huggingface.co/klei1/bleta-8b Got this error:
![image](https://github.com/user-…
-
What happened? Here is my pip list:
```
aiofiles 23.2.1
aiohttp 3.8.5
aiosignal 1.3.1
altair 5.1.1
annotated-types 0.5.0
…
```
-
With GPU support, running inference with even larger models becomes more relevant. Attempting to import ONNX models that are split across multiple files currently fails.
-
```
[2019-12-06 00:18:07] Using single-device training
[2019-12-06 00:18:07] [data] Loading vocabulary from JSON/Yaml file /191206/source_vocab.yml
[2019-12-06 00:18:08] [data] Setting vocabulary size…
```
-
```
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/n…
```
-
Run Qwen-7B-Chat model get the error: **ModuleNotFoundError: No module named 'transformers_modules.Qwen-7B-Chat'**
![image](https://github.com/intel-analytics/BigDL/assets/30886638/ad07bd87-8fbc-45…
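When a model ships custom code (`trust_remote_code=True`), transformers copies the repo's Python files into a dynamically created `transformers_modules.<name>` package, so this error generally means that cached package was missing or created under a different name. As an illustration of why a model directory name may not map to an importable module, here is a stdlib sketch of turning a path component into a valid Python identifier (a hypothetical helper, not transformers' actual implementation):

```python
import re

def to_module_name(repo_name: str) -> str:
    """Replace characters that are illegal in a Python module name.

    Illustrates why 'Qwen-7B-Chat' cannot be imported as-is:
    '-' is not valid inside a Python identifier.
    """
    name = re.sub(r"[^0-9a-zA-Z_]", "_", repo_name)
    return name if name.isidentifier() else f"_{name}"
```

A common workaround reported for this class of error is clearing the `~/.cache/huggingface/modules/transformers_modules` cache and reloading the model.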
-
I followed the training steps to train the llama2 model, but encountered the following error. I searched a lot, but still couldn't solve it.
```
UndefinedError File "/home/hs/anaconda3/envs/onebit/…
```
-
## Current issue
- [x] Infeasible to merge multiple views
- [x] Cannot support multiple views (e.g. [A, B] -> View -> shape cannot be inferred)
- [x] Parallelize Attention QKV Projection
…
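On the "Parallelize Attention QKV Projection" item: the three projections can be fused into a single matmul against a concatenated weight and split afterwards, which is the usual way to parallelize them. A minimal NumPy sketch (shapes and names are illustrative, not this repo's code):

```python
import numpy as np

def fused_qkv(x: np.ndarray, w_q: np.ndarray, w_k: np.ndarray, w_v: np.ndarray):
    """Compute Q, K, V with one matmul instead of three.

    x: (seq, d_model); each w_*: (d_model, d_head).
    """
    w_qkv = np.concatenate([w_q, w_k, w_v], axis=1)  # (d_model, 3*d_head)
    qkv = x @ w_qkv                                  # single fused projection
    return np.split(qkv, 3, axis=1)                  # -> Q, K, V
```

The fused form does the same arithmetic as three separate projections but exposes one large GEMM, which is easier to parallelize or shard.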
-
```
Trying to load custom node /home/musclez/ComfyUI/custom_nodes/ComfyUI-Open-Sora
Loading bitsandbytes native library from: /home/musclez/ComfyUI/.venv/lib/python3.11/site-packages/bitsandby…
```
-
### .env
```
# Generic
TEXT_EMBEDDINGS_MODEL=sentence-transformers/all-MiniLM-L6-v2
TEXT_EMBEDDINGS_MODEL_TYPE=HF # LlamaCpp or HF
USE_MLOCK=false
# Ingestion
PERSIST_DIRECTORY=db
DOCUMENTS_DIRE…
```