-
### What happened?
I would like to begin by expressing my sincere gratitude to the authors for their dedication and effort in developing this work.
To provide context for the issue I am encounter…
-
Hi, following your code I compiled and deployed the whisper-large-v3-turbo model and hit the error below. I see that 24.09-trtllm-python-py3 supports tensorrt-llm 0.13.0. Did the build succeed on your side?
```
Traceback (most recent call last):
File "/workspace/TensorRT-LLM/exam…
-
Hi, awesome project!
I'm on the doorstep of my first query, but I'm stuck.
This is the Ollama server API endpoint:
```bash
curl http://10.4.0.100:33821/api/version
{"version":"0.4.2"}
```
T…
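For a first query, a minimal sketch against Ollama's `/api/generate` endpoint may help. The host/port come from the snippet above; the model name `llama3` is an assumption, so adjust it to whatever `ollama list` reports on your server:

```python
import json
import urllib.request

# Host/port from the version check above; change to match your server.
OLLAMA_URL = "http://10.4.0.100:33821"

def build_generate_payload(model, prompt):
    """Build the JSON body Ollama's /api/generate expects.

    stream=False asks for a single JSON response instead of a
    newline-delimited stream of partial chunks.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model, prompt):
    """Send one prompt and return the model's text response."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=build_generate_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (model name is a placeholder; pull it first with `ollama pull`):
# print(generate("llama3", "Say hello in one word."))
```

If this returns 404, the model likely isn't pulled yet on that server.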
-
## Description
Model artifacts are in the (TRT-LLM) LMI model format:
```
aws s3 ls ***
    PRE 1/
2024-10-25 14:59:…
```
-
https://github.com/dh1011/llm-term
-
- [ ] [awesome-llm-planning-reasoning/README.md at main · samkhur006/awesome-llm-planning-reasoning](https://github.com/samkhur006/awesome-llm-planning-reasoning/blob/main/README.md?plain=1)
# awesom…
-
I realize OpenVINO was originally built for vision models, but I'm interested in using it to fine-tune LLMs. There appears to be fine-tuning support for ViT models but not for language models…
-
### Create std_function for LLM Confirmation including:
- [std_functions](https://github.com/SparkRibeiro21/charmie_ws/blob/main/src/charmie_std_functions/charmie_std_functions/task_ros2_and_std_fu…
-
I am running a vLLM proxy server with my fine-tuned local LLM and have the proxy server's URL. How can I use it within Knowledge-Table the same way as the OpenAI servers? Thanks.
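Since vLLM's server exposes an OpenAI-compatible API under `/v1`, tools that accept a custom OpenAI base URL can usually be pointed straight at it. A minimal sketch of such a request, assuming placeholder values for the proxy URL and the model name (use whatever name vLLM was started with):

```python
import json
import urllib.request

# Assumed placeholders: replace with your vLLM proxy URL and served model name.
VLLM_BASE_URL = "http://localhost:8000/v1"
MODEL = "my-finetuned-model"

def build_chat_payload(model, user_message):
    """Build the request body for the OpenAI-compatible /chat/completions route."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }).encode()

def chat(user_message):
    """POST one user message and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{VLLM_BASE_URL}/chat/completions",
        data=build_chat_payload(MODEL, user_message),
        headers={
            "Content-Type": "application/json",
            # vLLM typically accepts any key unless --api-key was set at launch.
            "Authorization": "Bearer dummy",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

In practice, tools that speak the OpenAI protocol usually only need the base URL and a (possibly dummy) API key swapped in.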
-
**Research Question:** What is the best way to connect LLM outputs to a reactive UI?
Potential solutions to research, explore, and experiment with:
- Application state control
- JSON schema to update UI…