-
The Llama3 shared codebase demo currently handles prefill input prep, looped prefill, decode input prep, decode trace capture, and decode trace execution.
The Llama3 demo should be refactored to use …
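One possible shape for that refactor, as a hedged Python sketch (the class, method, and model-call names below are hypothetical illustrations, not the actual tt-metal API): each of the five stages listed above becomes its own method on a generator object, so prefill and decode logic can be reused rather than inlined in the demo script.

```python
# Hedged refactoring sketch. All names here (LlamaGenerator, prefill_forward,
# capture_trace, execute_trace, ...) are hypothetical, not the real codebase.
class LlamaGenerator:
    def __init__(self, model, tokenizer):
        self.model = model
        self.tokenizer = tokenizer
        self.decode_trace = None  # captured decode trace, replayed each step

    def prepare_prefill_inputs(self, prompts):
        """Stage 1: tokenize the prompts for prefill."""
        return [self.tokenizer.encode(p) for p in prompts]

    def prefill(self, token_ids):
        """Stage 2: looped prefill, one user at a time."""
        return [self.model.prefill_forward(ids) for ids in token_ids]

    def prepare_decode_inputs(self, last_tokens, positions):
        """Stage 3: shape the per-step decode inputs."""
        return {"tokens": last_tokens, "positions": positions}

    def decode_step(self, inputs):
        """Stages 4 and 5: capture the decode trace once, then execute it."""
        if self.decode_trace is None:
            self.decode_trace = self.model.capture_trace(**inputs)
        return self.model.execute_trace(self.decode_trace, **inputs)
```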
-
I followed the step-by-step instructions, but upon running `make run` to check that everything worked, I got the following error message:
File "/home/ubuntu/.cache/pypoetry/virtualenvs/private-gpt-Wtvj2B-w-…
-
to @davisagli:
@ericof and I discussed with others at the Plone Beethoven Sprint 2024 options for enhancing the user story and experience using cookiecutter and the Plone Distributions chooser forms.
…
-
We’re so happy to have you on board with the LADy project, Calder! We use the issue pages for many purposes, but we really enjoy noting good articles and our findings on every aspect of the project.
…
-
### What behavior of the library made you think about the improvement?
Generation cannot be interleaved with function calls and return values.
### How would you like it to behave?
We should a…
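One way such interleaving could work, sketched under assumptions (this is not the outlines API; `generate_until`, the `<call>`/`<result>` markers, and the tool registry are invented for illustration): generate until the model emits a call marker, execute the function, append its return value, and resume generation.

```python
# Hypothetical interleaving loop; none of these names come from outlines.
import json

TOOLS = {"add": lambda a, b: a + b}  # toy tool registry

def run_interleaved(model, prompt, max_rounds=5):
    transcript = prompt
    for _ in range(max_rounds):
        # Generate until the model either finishes or requests a call.
        chunk = model.generate_until(transcript, stop=["</call>"])
        transcript += chunk
        if "<call>" not in chunk:
            break  # no function call requested: generation is done
        # Parse e.g. <call>{"name": "add", "args": [1, 2]}
        payload = chunk.split("<call>", 1)[1]
        call = json.loads(payload)
        result = TOOLS[call["name"]](*call["args"])
        # Feed the return value back so generation continues from it.
        transcript += f"</call><result>{result}</result>"
    return transcript
```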
-
Argilla integration, dataset integration, etc.
Details to follow.
-
Get sessionId from the frontend in `index.tsx` and pass it through `sendLLMRequest` in `api.ts`. Add metadata to llm endpoint in `main.py` and pass to `llm.py` - you might have to edit `models.py` acc…
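For the backend half of that change, a hedged sketch (FastAPI and pydantic are assumed from the `main.py`/`models.py` setup; the field and function names are hypothetical): `models.py` gains a `session_id` field, the llm endpoint in `main.py` accepts it, and it is threaded through to the `llm.py` call.

```python
# Hedged sketch of the backend side; field and function names are guesses.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class LLMRequest(BaseModel):          # would live in models.py
    prompt: str
    session_id: str | None = None     # new metadata forwarded from api.ts

def call_llm(prompt: str, session_id: str | None) -> str:  # llm.py side
    # The session id travels with the request so llm.py can use it for
    # logging, caching, or per-session state.
    return f"[session={session_id}] echo: {prompt}"

@app.post("/llm")                     # the llm endpoint in main.py
def llm_endpoint(req: LLMRequest):
    return {"response": call_llm(req.prompt, req.session_id)}
```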
-
Hi @langchain4j
**Is your feature request related to a problem? Please describe.**
Related issues:
- [888](https://github.com/langchain4j/langchain4j/issues/888)
- [752](https://github.com/la…
-
### Pre-check
- [X] I have searched the existing issues and none cover this bug.
### Description
When running the docker instance of privategpt with Ollama, I get an error saying: TypeError: missin…
-
## Why RAG
Retrieval-Augmented Generation (RAG) is a technique that enhances the capabilities of LLMs by incorporating a retrieval mechanism into the generative process. This approach allows the model…
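As a concrete illustration of the retrieve-then-generate loop, here is a minimal, self-contained Python sketch, with a toy keyword-overlap retriever standing in for a real vector store and a stubbed model call in place of an LLM:

```python
# Minimal RAG sketch: toy retriever + stubbed LLM, for illustration only.
DOCS = [
    "RAG augments an LLM prompt with retrieved passages.",
    "Vector stores index document embeddings for similarity search.",
    "Prompt templates combine the question with retrieved context.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Score each document by word overlap with the query (a toy stand-in
    # for embedding similarity) and keep the top-k.
    q = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def generate(prompt: str) -> str:
    return f"(model output for: {prompt[:60]}...)"  # stub LLM call

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
    return generate(prompt)

print(rag_answer("How does RAG use retrieved passages?"))
```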