-
Run command: `make -C docker release_build`
Output:
```
Building docker image: tensorrt_llm/release:latest
DOCKER_BUILDKIT=1 docker build --pull \
--progress auto \
--build-ar…
```
-
When we now want to make LLMs available in TIRA (such as Alpaca or other instruction-tuned models), we should ensure that software can be executed against different (especially future, simula…
-
I tried this code:
```python
!rm -r config
!pip install -q -U google-generativeai
import pathlib
import textwrap
import google.generativeai as genai
from IPython.display import display
from IPyt…
```
-
Thanks for this wonderful tool. I updated CUDA to version 12+ and am on Windows 10 with an RTX 3060, which means (I think) that I need to rebuild for the sm_86 arch. What do I need to do here?
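For reference, the RTX 3060 is an Ampere part with compute capability 8.6, so sm_86 is indeed the right target. A quick sketch of the generation-to-target mapping (the table and helper below are illustrative only, not part of any build tool):

```python
# Rough reference (assumed, consumer parts only): NVIDIA GPU generations
# and the CUDA sm_* build target they correspond to.
ARCH_TO_SM = {
    "Pascal (GTX 10xx)": "sm_61",
    "Turing (RTX 20xx)": "sm_75",
    "Ampere (RTX 30xx, e.g. RTX 3060)": "sm_86",
    "Ada Lovelace (RTX 40xx)": "sm_89",
}

def sm_target(generation: str) -> str:
    """Look up the sm_* compile target for a GPU generation (hypothetical helper)."""
    return ARCH_TO_SM[generation]

print(sm_target("Ampere (RTX 30xx, e.g. RTX 3060)"))  # sm_86
```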
```bash
…
```
-
You're expanding on the concept in a fascinating way! Let's break down how each element contributes to the bigger picture:
**Resource Contribution to HTML:**
- You propose considering all resour…
-
This is a feature request to deploy Small Language Models (SLMs) (3B or 1B). SLMs are improving quickly and are becoming a good choice for narrow-scope use cases.
Examples include TinyLlama, Minichat…
-
# Context:
We would like to enable users to update their LLM API keys in agenta and continue using their apps with the new API keys. Right now, after an app is created, it is impossible to update the A…
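The requested behavior can be sketched as follows (a hypothetical in-memory store, not agenta's actual implementation): keys are stored per app and provider, and setting a key again simply overwrites the old one.

```python
# Hypothetical sketch of the requested behavior (not agenta's actual code):
# API keys are stored per (app, provider) so they can be rotated after creation.
class AppKeyStore:
    def __init__(self) -> None:
        self._keys: dict[tuple[str, str], str] = {}

    def set_key(self, app_id: str, provider: str, key: str) -> None:
        # Overwrites any existing key, which is exactly what "update" requires.
        self._keys[(app_id, provider)] = key

    def get_key(self, app_id: str, provider: str) -> str:
        return self._keys[(app_id, provider)]

store = AppKeyStore()
store.set_key("my-app", "openai", "sk-old")
store.set_key("my-app", "openai", "sk-new")  # rotate after app creation
print(store.get_key("my-app", "openai"))  # sk-new
```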
-
- [ ] [mufeedvh/code2prompt: A CLI tool to convert your codebase into a single LLM prompt with source tree, prompt templating, and token counting.](https://github.com/mufeedvh/code2prompt)
# code2pro…
-
### Description of the bug:
The following code:
```typescript
var test = new KubeNamespace(this, "test", {
metadata: {
name: "test"
}
})
test.addJsonP…
-
## Objective: Develop an LLM-powered Assistant within Captain that provides users with conversational access to core apps, the vector store, and AI tools, operating within a scoped access or sandboxed…