-
### System Info
- CPU architecture: x86_64
- GPU properties
  - GPU name: 4x L4 setup
  - GPU memory size: 96GB
- Libraries
  - TensorRT-LLM branch or tag: main
  - TensorRT version: 0.16…
-
### Relevant Page(s)
https://help.kagi.com/kagi/ai/llm-benchmark.html
### Description
It would be nice if the table on the LLM benchmark page were sortable.
Not sure if this helps: https://t…
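As a rough illustration only (not the actual page code, and the selectors are assumptions), a small client-side sort over a plain HTML table could look like this:

```ts
// Minimal sketch of client-side sorting for an HTML table; column handling
// and usage are illustrative, not tied to the benchmark page's real markup.
function sortTableByColumn(table: HTMLTableElement, columnIndex: number): void {
  const body = table.tBodies[0];
  const rows = Array.from(body.rows);
  rows.sort((a, b) => {
    const aText = a.cells[columnIndex].textContent ?? "";
    const bText = b.cells[columnIndex].textContent ?? "";
    const aNum = parseFloat(aText);
    const bNum = parseFloat(bText);
    // Compare numerically when both cells parse as numbers, otherwise lexically.
    return Number.isNaN(aNum) || Number.isNaN(bNum)
      ? aText.localeCompare(bText)
      : aNum - bNum;
  });
  // Re-append in sorted order to reorder the visible rows.
  rows.forEach((row) => body.appendChild(row));
}
```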
-
OpenAI completion parameters, e.g.:
- top_p
- temperature
- etc.

https://platform.openai.com/docs/api-reference/chat/create
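For reference, a minimal sketch of passing these parameters with the official `openai` Node package; the model name and parameter values below are placeholders, not recommendations:

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model
    messages: [{ role: "user", content: "Say hello." }],
    temperature: 0.2, // lower values make sampling more deterministic
    top_p: 0.9,       // nucleus-sampling cutoff
    max_tokens: 256,  // cap on generated tokens
  });
  console.log(response.choices[0].message.content);
}

main();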
-
At least some portion of the most recent LLM results should be stored on the investigation object and loaded when returning to the investigation, with the ability to "re-run", which clears those result…
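A minimal sketch of what that could look like, assuming hypothetical field names rather than the actual schema:

```ts
// Hypothetical shape for persisting the latest LLM results on an investigation.
interface InvestigationLlmResult {
  prompt: string;
  completion: string;
  generatedAt: string; // ISO timestamp of the run
}

interface Investigation {
  id: string;
  startTime: string;
  endTime: string;
  llmResults: InvestigationLlmResult[]; // loaded when the investigation is reopened
}

// "Re-run" clears the stored results before requesting fresh ones.
function rerunLlm(investigation: Investigation): Investigation {
  return { ...investigation, llmResults: [] };
}
```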
-
The time range used for the LLM should be locked to the start/end of the investigation. TBD exactly how these time ranges relate, but the existing implementation just always uses "now" for the end of …
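One possible shape for this, as a sketch with illustrative names only:

```ts
// Derive the LLM query window from the investigation itself instead of
// defaulting the end of the range to "now".
function llmTimeRange(investigation: { startTime: string; endTime?: string }) {
  return {
    start: investigation.startTime,
    // Fall back to "now" only while the investigation is still open.
    end: investigation.endTime ?? new Date().toISOString(),
  };
}
```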
-
### System Info
- CPU: x86_64, Intel(R) Xeon(R) Platinum 8470
- CPU/Host memory size: 1TB
- GPU:
  - 4x H100 96GB
- Libraries
  - TensorRT-LLM: main, 0.15.0 (commit: b7868dd1bd1186840e3755b97ea3d3a73dd…
-
# React
- Create a UI for saving the Azure parameters.
- Create a toggle UI for switching between OpenAI and Azure.
# Backend
- Add a request body on the backend that accepts the Azure parameters, and implement the logic that sets them on the Interpreter (see the sketch after this list).
- Make sure the API key and other settings are loaded and set again when the service restarts.
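A rough sketch of the request body and the provider switch, assuming hypothetical field names; `AzureOpenAI` is the Azure client exported by the official `openai` package:

```ts
import OpenAI, { AzureOpenAI } from "openai";

// Settings saved from the React UI; field names are assumptions for illustration.
interface ProviderSettings {
  provider: "openai" | "azure"; // value of the OpenAI/Azure toggle
  apiKey: string;
  azureEndpoint?: string;       // e.g. https://<resource>.openai.azure.com
  azureDeployment?: string;     // deployment name used as the model
  azureApiVersion?: string;     // e.g. "2024-06-01"
}

// Build the client for the Interpreter from the saved settings, so the same
// values can be re-applied after a restart.
function createClient(settings: ProviderSettings) {
  if (settings.provider === "azure") {
    return new AzureOpenAI({
      apiKey: settings.apiKey,
      endpoint: settings.azureEndpoint,
      deployment: settings.azureDeployment,
      apiVersion: settings.azureApiVersion,
    });
  }
  return new OpenAI({ apiKey: settings.apiKey });
}
```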
-
### Describe the bug
Hello, we are trying to configure the local machine as the local host with DeepSeek. We can open our
local host website, but we can't select anything in the second menu. We n…
-
### Description
Sorry for the silly question.
Does Kotaemon have a built-in local LLM? I am not connected to any model, yet document analysis is working.
How do I connect to my local LLM? I see o…
-
When I tried to use our LLM model in the basic chat bot and "talk to bot", it says > regardless of which model is selected.