-
**Describe the bug**
I keep running into this error when using the openai agents:
```fish
> cargo run
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.09s
Running `ta…
-
**Is the feature request related to a problem?**
Yes. Currently, Gorilla has a limited set of APIs that it can access. This limits the functionality and potential use cases for LLMs using Gorilla. Ad…
-
We need to let our users catch regressions in their prompts before they go unnoticed. To achieve this, we must implement a way for our users to run a test suite on their own …
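A minimal sketch of the kind of assertion-based check such a test suite could run per prompt (the expectation format and `check_regressions` helper here are hypothetical illustrations, not an existing API):

```python
def check_regressions(output: str, expectations: list[str]) -> list[str]:
    """Return the expected substrings that the model output fails to contain.

    An empty return value means the prompt still behaves as expected;
    a non-empty one flags a regression.
    """
    return [e for e in expectations if e.lower() not in output.lower()]


# Hypothetical usage: compare a model's answer against required substrings.
failures = check_regressions(
    "Paris is the capital of France.",
    ["Paris", "France"],
)
assert failures == []  # no regressions detected
```

In practice the expectations would live alongside each prompt in the user's test suite, so a run can report exactly which prompts regressed.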
-
Hi @Akintunde102 @AyoOdumark,
If you're trying to test other LLMs (codellama, wizardcoder, etc.) with GPT-Engineer, we just open-sourced a 1-click proxy to translate openai calls to huggingface, …
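The core idea of such a proxy, as I understand it, is that the client keeps sending OpenAI-shaped requests and only the base URL changes. A stdlib-only sketch of that URL rewrite (the proxy address is a placeholder, not the actual deployment):

```python
from urllib.parse import urlsplit, urlunsplit


def redirect_to_proxy(url: str, proxy_base: str = "http://localhost:8000") -> str:
    """Swap the scheme and host of an OpenAI API URL for the local proxy's,
    keeping the request path (e.g. /v1/chat/completions) intact, so the
    proxy can translate the call for a HuggingFace-hosted model."""
    proxy = urlsplit(proxy_base)
    original = urlsplit(url)
    return urlunsplit(
        (proxy.scheme, proxy.netloc, original.path, original.query, original.fragment)
    )


print(redirect_to_proxy("https://api.openai.com/v1/chat/completions"))
# -> http://localhost:8000/v1/chat/completions
```

Pointing the OpenAI client's base URL at the proxy achieves the same effect without touching application code.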
-
My use cases may be non-traditional, but I'm sharing in case there are others.
I often use the eval outputs for further data analysis (e.g., in a spreadsheet). I'd prefer if the promptfoo-generated t…
-
### System Info
- CPU Architecture x86_64
- GPU: NVIDIA T4 * 4 (AWS g4dn.12xLarge)
- TensorRT-LLM v0.10.0
### Who can help?
[QiJune](https://github.com/QiJune)
@byshiue
### Information
…
-
Hello, FlexFlow team!
Thank you for your outstanding work! I am attempting to reproduce the experimental results from the paper "SpecInfer: Accelerating Generative Large Language Model Serving with…
-
**Describe the bug**
I am getting a function error when testing the connection to Ollama; it seems I get an error stating this. To me it seems that, because of the model and the Ollama results, it's not working w…
-
## Current Code
Used the [pipecat example code here](https://github.com/pipecat-ai/pipecat/blob/main/examples/foundational/15-switch-voices.py) to define the context and pass it to OpenAILLMContext.
…