-
### 🐛 Describe the bug
Hi,
We use `torch.compile` to run GPTJ3.6B model training on our GPU platforms, but we got some dynamo errors and the process aborted. The error happens when runnin…
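For context, compiled training loops of this shape are usually enough to reproduce dynamo failures; below is a minimal sketch with a tiny placeholder model standing in for the reporter's actual GPT-J setup (model, sizes, and optimizer are assumptions):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Tiny placeholder model standing in for the 3.6B GPT-J from the report.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 512),
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# torch.compile wraps the module; dynamo traces it on the first call,
# which is typically where compile/graph-break errors surface.
compiled = torch.compile(model)

def train_step(batch, target):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(compiled(batch), target)
    loss.backward()
    optimizer.step()
    return loss.item()

x = torch.randn(8, 512, device=device)
print(train_step(x, torch.randn(8, 512, device=device)))
```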
-
A ToolCall is not generated from the response of the llama 3.1 model served by LM Studio when using the LangChain framework connected through ChatOpenAI.
The same tool call works fine with ollama for the same …
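A minimal sketch of the setup being described, assuming LM Studio's OpenAI-compatible server on its default port 1234 and the `langchain-openai` package; the tool and the model identifier are hypothetical examples:

```python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return the weather for a city (hypothetical example tool)."""
    return f"Sunny in {city}"

# Point ChatOpenAI at LM Studio's OpenAI-compatible endpoint.
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # LM Studio ignores the key, but the client requires one
    model="llama-3.1-8b-instruct",  # assumed model identifier in LM Studio
)

llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What is the weather in Paris?")
print(response.tool_calls)  # empty with LM Studio per the report, populated with ollama
```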
-
### Do you need to file an issue?
- [ ] I have searched the existing issues and this bug is not already filed.
- [ ] My model is hosted on OpenAI or Azure. If not, please look at the "model providers…
-
### Describe the issue
I'd like to ask which later version of pyautogen will support 'register_for_llm', because I'm using the local model chatGLM, which needs openai float:
if base_currency == quote_currency:
…
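For reference, a sketch of how `register_for_llm` is used in pyautogen 0.2+, modeled on the currency-calculator example from the autogen docs that the truncated fragment above appears to come from; the agents and the local chatGLM endpoint config are assumptions:

```python
from typing import Literal
from autogen import AssistantAgent, UserProxyAgent

CurrencySymbol = Literal["USD", "EUR"]

# Assumed llm_config pointing at a local OpenAI-compatible chatGLM server.
llm_config = {
    "config_list": [
        {"model": "chatglm3-6b", "base_url": "http://localhost:8000/v1", "api_key": "none"}
    ]
}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy", human_input_mode="NEVER", code_execution_config=False
)

# register_for_llm advertises the tool schema to the model;
# register_for_execution lets the user proxy actually run it.
@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Exchange rate between two currencies.")
def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:
    if base_currency == quote_currency:
        return 1.0
    return 1.1  # placeholder rate for illustration
```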
-
Can the ollama URL be configured to point to a remote box?
Or try using an SSH tunnel to make the remote ollama appear to be local.
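Both routes can look roughly like the sketch below; the hostnames are placeholders. Ollama's server and clients respect the `OLLAMA_HOST` environment variable, and its HTTP API listens on port 11434 by default:

```python
# Option 1: point clients directly at the remote box (assumes ollama on the
# remote was started with OLLAMA_HOST=0.0.0.0 so it accepts remote connections):
#   export OLLAMA_HOST=http://remote-box:11434
#
# Option 2: SSH tunnel, so the remote ollama appears local:
#   ssh -N -L 11434:localhost:11434 user@remote-box

import requests

# With the tunnel up, the default local URL now reaches the remote server.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "hello", "stream": False},
)
print(resp.json()["response"])
```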
-
### System Info
I am experimenting with TRT LLM and `flan-t5` models. My simple goal is to build engines with different configurations and tensor parallelism, then review performance. I have a DGX syst…
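As a rough sketch, and only under the assumption of a recent TensorRT-LLM release that ships the high-level `LLM` API (encoder-decoder models like flan-t5 may instead require the repo's dedicated example scripts), tensor parallelism is selected like this:

```python
from tensorrt_llm import LLM, SamplingParams

# tensor_parallel_size splits the engine across GPUs on the node;
# the model name and TP degree here are assumptions for illustration.
llm = LLM(model="google/flan-t5-xl", tensor_parallel_size=2)

outputs = llm.generate(
    ["Translate English to German: How are you?"],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```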
-
**Title:** Automatically label medical data from diagnosis reports
**Project Lead:** Frank Langbein, frank@langbein.org
**Description:** We wish to automatically label medical diagnosis data (MRI,…
-
### Describe the bug
When providing an assistant ID for GPTAssistantAgent, the code path at line 117 always has a None value for the variables "instructions" and "specified_tools". This is because the…
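A sketch of the setup that appears to trigger this, assuming an autogen version where the assistant ID is passed through `llm_config`; the IDs and config values are placeholders:

```python
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

# Reusing an existing OpenAI Assistant by ID; per the report, the retrieved
# assistant's instructions/tools are not picked up on this code path.
assistant = GPTAssistantAgent(
    name="assistant",
    llm_config={
        "config_list": [{"model": "gpt-4", "api_key": "sk-..."}],
        "assistant_id": "asst_abc123",  # placeholder assistant ID
    },
)
```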
-
Verifying the output of the glm4-9b-chat model in the following way, the serving side reports an error:
```shell
curl --request POST \
  --url http://127.0.0.1:8000/v1/chat/completions \
  --header 'content-type: application/json' \
  --data '{
  "model": "glm-4-9…
```
-
Hi all,
I am facing the following issue when using HuggingFaceEndpoint for my custom fine-tuned model in my repository "Nithish-2001/RAG-29520hd0-1-chat-finetune", which is public, together with gradio.
llm_…
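The truncated `llm_…` line presumably instantiates the endpoint; a minimal sketch of that pattern with the `langchain_huggingface` package, where the generation parameters and token are illustrative assumptions:

```python
from langchain_huggingface import HuggingFaceEndpoint

# Assumed instantiation against the public Hub repo from the report.
llm = HuggingFaceEndpoint(
    repo_id="Nithish-2001/RAG-29520hd0-1-chat-finetune",
    max_new_tokens=256,                  # illustrative generation settings
    temperature=0.7,
    huggingfacehub_api_token="hf_...",   # placeholder token
)

print(llm.invoke("Hello, who are you?"))
```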