-
It is quite strange.
I have deployed the Ollama container, and I can access the bash shell, load models, and chat with them. But when I install Ollama on the local system (the same that is ru…
-
### Bug description
Using a StreamingDataloader with `num_workers=0` works, but resuming the state does not. There is an explicit length check for the state that fails.
Using `num_workers=0` is …
-
When using GGUF and llama.cpp, is there a specific vocab file I should use, or can I use "ggml-vocab-llama.gguf"?
The number of KV groups is different in TinyLlama, so I suspect that I need to use a…
-
Hi, great work!
I have been conducting passkey tests on several models. The TinyLlama-1.1B-Chat-v1.0 (2k) model successfully passed the 20k test and, after fine-tuning, the 125k tests with a 60% accuracy…
-
Thanks for this amazing project. Using the experimental Mixtral branch of mergekit, I was able to MoE-ify the chat version of TinyLlama to make [TinyMix-8x1b-chat](https://huggingface.co/eastwind/tiny…
-
**Describe the bug**
Unable to run a model on my M1 Pro 16GB. Tried both Mistral and TinyLlama on both 0.4.3 and the nightly version.
**Steps to reproduce**
Steps to reproduce the behavior:
1…
-
I wanted to know when the TinyLlama 1.5T checkpoint will be released. The `README.md` says 2023-10-31, and today (November 4, 2023) is 4 days after October 31, 2023.
-
![image](https://github.com/jmorganca/ollama/assets/45925152/368ba9e2-8113-46e7-9192-43f27ff91fb9)
I do have CUDA drivers installed:
![image](https://github.com/jmorganca/ollama/assets/45925152/b…
-
## 🐛 Bug
Experimenting with [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on Android. I was able to quantize and compile the model but `prepare_libs.sh` fail…
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…