-
llama.cpp has had LLaVA support for a while now
-
### ⚠️ Please check that this feature request hasn't been suggested before.
- [X] I searched previous [Ideas in Discussions](https://github.com/homanp/superagent/discussions/categories/ideas) didn'…
-
Hi, thanks for the Ollama integration for local models. I was wondering if it's possible to use local models in the Colab environment? The reason being not everyone has access to good processing compu…
-
Trying with mistralai/Mixtral-8x7B-Instruct-v0.1, the chat seems to work fine. However, when I add a document to my workspace, the responses are not related to the document I added at all.
-
> The P4 card is visible in Device Manager and I have installed the newest [vulkan-drivers](https://www.intel.com/content/www/us/en/download/19344/intel-graphics-windows-dch-drivers.html?v=t) and c…
-
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
- have Windows 10
- have PyCharm
- have LM Studio running a server
-…
-
-
Llama 3 has been released.
Would be happy to use it with llama.cpp:
https://huggingface.co/collections/meta-llama/meta-llama-3-66214712577ca38149ebb2b6
https://github.com/meta-llama/llama3
-
Hi,
When I leave the "Agg to empty" option unchecked, API requests occur, but each time there are three requests, with the first and last requests being empty.
However, when I select the "Agg to…
-
Support for Phi-2: https://huggingface.co/microsoft/phi-2
If possible; it's a small but very capable LLM, with all the advantages that brings.