-
The `h2ogpt` Linux installation method, as [given here](https://github.com/h2oai/h2ogpt?tab=readme-ov-file#get-started), is as follows:
### A. Variable export instructions:
`export PIP_EXTRA_INDEX_…
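The actual index URL from the h2ogpt README is truncated above, so as a hedged illustration only: `PIP_EXTRA_INDEX_URL` is pip's environment-variable equivalent of `--extra-index-url`, adding a second package index that pip searches alongside PyPI. The URL below is a placeholder, not h2ogpt's real value.

```shell
# Hypothetical index URL for illustration; h2ogpt's real value is truncated above.
export PIP_EXTRA_INDEX_URL="https://example.com/simple"

# pip now consults both PyPI and the extra index when resolving packages:
# pip install h2ogpt
```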
-
Could you please add local LLM support?
Ollama support would be nice too.
Thank you.
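As a minimal sketch of what Ollama support could involve: a local Ollama server exposes a REST endpoint at `/api/generate` that accepts a model name, a prompt, and a `stream` flag. The model name below is just an example; this is a sketch against Ollama's documented API, not the project's implementation.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot (non-streaming) generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Requires a running `ollama serve` with the model pulled first,
    # e.g. `ollama pull llama3` (example model name).
    req = build_request("llama3", "Why is the sky blue?")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```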
-
### Describe the issue as clearly as possible:
When using `models.llamacpp` and creating JSON using a Pydantic model I get an error when generating the first result (see code to reproduce below). I h…
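The reproduction above is truncated, so as a hedged diagnostic sketch (stdlib only, not the library's own API): one way to narrow down an error like this is to check whether the raw model output is even valid JSON before Pydantic validation runs, which separates a generation problem from a schema problem.

```python
import json

def try_parse_json(raw: str):
    """Return (parsed, None) if `raw` is valid JSON, else (None, error message).

    Useful for telling a truncated/malformed generation apart from a
    Pydantic validation failure on structurally valid JSON.
    """
    try:
        return json.loads(raw), None
    except json.JSONDecodeError as exc:
        return None, f"invalid JSON at pos {exc.pos}: {exc.msg}"
```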
-
Kinda self-explanatory from the title: right now each Python version for a given target builds llama.cpp independently. This artificially limits how many platforms we can support by blowing up CI buil…
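A hedged sketch of the alternative: build the llama.cpp shared library once per target and let each per-Python-version job reuse it instead of recompiling. The exact CMake options depend on the project's build setup; the flags below are standard CMake, and the environment variable is the one llama-cpp-python documents for pointing at an existing library.

```shell
# Build the shared library once per target platform.
git clone https://github.com/ggerganov/llama.cpp
cmake -S llama.cpp -B build -DBUILD_SHARED_LIBS=ON
cmake --build build --config Release

# Each per-Python-version CI job then reuses the prebuilt library,
# e.g. via llama-cpp-python's LLAMA_CPP_LIB environment variable:
# export LLAMA_CPP_LIB="$PWD/build/bin/libllama.so"
```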
-
# Expected Behavior
I tried to install llama via Poetry and it didn't work.
# Current Behavior
It just printed some information that I don't understand; I tried checking, asked for help, and it …
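Since the error output is truncated, here is a hedged sketch of how to surface the underlying failure: `llama-cpp-python` compiles llama.cpp from source at install time, so a C/C++ toolchain and CMake must be present, and Poetry's verbosity flags expose the full build log.

```shell
# Re-run the install with maximum verbosity to see the real build error.
poetry add llama-cpp-python -vvv
```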
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
Very interested in this project and I am grateful for your development.
It installs fine but doesn't run.
Another user gets the same error here:
https://gist.github.com/mberman84/9b3c281ae5e3e92b7e…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as…
-
I've tried using llama.cpp in both the Docker and native versions, following the provided guides:
https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md
https://g…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of…