-
### The Feature
Allow the user to configure a dummy tool call. Use this if the provider requires `tools` to be passed in when any of the blocks contains a `tool_use` object (e.g. Anthropic).
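A minimal sketch of what such a dummy tool entry could look like, using the Anthropic Messages API tool shape (the `noop` name and the helper function are illustrative, not part of any existing config):

```python
# Some providers reject a request that contains `tool_use` blocks in the
# conversation history unless a matching `tools` array is also supplied.
# This builds a placeholder entry in the Anthropic tool schema
# (name/description/input_schema); the "noop" name is an assumption.
def make_dummy_tool(name: str = "noop") -> dict:
    """Build a placeholder tool definition (illustrative names)."""
    return {
        "name": name,
        "description": "Placeholder tool so the provider accepts tool_use blocks.",
        "input_schema": {"type": "object", "properties": {}},
    }

tools = [make_dummy_tool()]
```

The empty `properties` object keeps the schema valid while guaranteeing the model never has meaningful arguments to call it with.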
### Motivation, p…
-
### Describe the feature you'd like to request
Capture the full context of an OpenAI Assistant run as a trace in Langfuse
### Describe the solution you'd like to see
- capture context of assistant,…
-
Hello.
Thanks to your real-time Whisper work, I tried STT on my computer.
I want to use this package on my Jetson Nano, but when I run it there, the CPU and memory usage is very high and the…
ghost updated 10 months ago
-
### Which component is this feature for?
OpenAI Instrumentation
### 🔖 Feature description
I want to log a variable's value in the trace of an LLM call, where the LLM is called from an async task. Currently the a…
-
I've noticed that running on dev never has this problem, but main_prod.py on Lambda might.
The way it's designed, the reply listener triggers for first-time messages as well, but it should early te…
-
As llama.cpp is now the best backend for open-source models, and llama-cpp-python (used as the Python backend for Python-powered GUIs) has built-in OpenAI API support with function (tools) calling sup…
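As a sketch of what that integration looks like on the wire: llama-cpp-python exposes an OpenAI-compatible endpoint, so a standard chat-completions payload with a `tools` array can be POSTed to it. The server address, model name, and `get_weather` tool below are all assumptions for illustration; check your own server setup.

```python
import json

# OpenAI-style chat request with function (tool) calling, in the shape
# accepted by OpenAI-compatible servers such as llama-cpp-python's.
# All names here (model, tool, city) are illustrative placeholders.
payload = {
    "model": "local-model",  # placeholder; the server serves whatever model it loaded
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for this example
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

# This JSON body would be POSTed to <server>/v1/chat/completions.
body = json.dumps(payload)
```

Because the request shape matches OpenAI's, existing OpenAI client libraries can usually be pointed at the local server just by overriding the base URL.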
luzik updated 8 months ago
-
Since we worked on typia, I'll take this.
-
Adding support for local models (e.g. through llama.cpp) would make this project even more impactful. Many local models, especially at high parameter counts, come pretty close to ChatGPT 3.5 Turbo, so …
-
You are Eliezer Yudkowsky, with a strong security mindset. You will be given prompts that will be fed to a superintelligent AI in the form of a large language model that functions as a chatbot. Your j…