-
@HishamYahya to experiment with DSPy optimization using the chess game, to gain insights into the best configuration for automatic prompt optimization.
In particular, we need to understand:
- best choic…
-
https://ollama.com/ is a way to run Llama models locally. It serves models at a local URL.
I'd like to use it in dspy_nodes, but some modifications might be needed.
I'd be happy to make a pull reques…
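For context, a minimal stdlib-only sketch of talking to that local URL: Ollama serves a REST API at `http://localhost:11434` by default, and its `/api/generate` route accepts a JSON body with `model`, `prompt`, and `stream` fields. The model name below (`llama3`) is just an example and must already be pulled locally; this is a sketch, not the dspy_nodes integration itself.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def build_generate_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,   # e.g. "llama3"; must already be pulled locally
        "prompt": prompt,
        "stream": False,  # single JSON response instead of a token stream
    }
    return f"{OLLAMA_URL}/api/generate", json.dumps(payload).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    """POST the request to the local server and return the completion text."""
    url, body = build_generate_request(model, prompt)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `ollama serve` running, `generate("llama3", "Say hi")` would return the model's reply; a dspy_nodes adapter would wrap a call like this behind DSPy's LM interface.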
-
I was trying to run the notebook using the groq/Llama3 API
```
from llama_index.llms.groq import Groq
llm = Groq(model="llama3-8b-8192", api_key="your_api_key")
from llama_index.core import Setti…
-
### Which component is this feature for?
All Packages
### 🔖 Feature description
Instrument [DSPy](https://github.com/stanfordnlp/dspy) framework
### 🎤 Why is this feature needed?
-
### ✌️ How d…
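As a rough illustration of what such an instrumentor does (this is not the OpenInference or OpenLLMetry API, just a stdlib sketch): wrap a framework entry point so that every call is recorded as a span with a name, duration, and inputs. The `predict` function and `SPANS` list below are hypothetical stand-ins for a DSPy call and a span exporter.

```python
import functools
import time

# Hypothetical stand-in for a framework entry point (e.g. an LM call in DSPy).
def predict(question):
    return f"answer to {question}"

SPANS = []  # collected trace spans; a real instrumentor would export these

def instrument(fn):
    """Wrap fn so each call is recorded as a span, as an instrumentor would."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        SPANS.append({
            "name": fn.__name__,
            "duration_s": time.perf_counter() - start,
            "input": args,
        })
        return result
    return wrapper

predict = instrument(predict)
```

Real instrumentation packages do essentially this via monkey-patching at `instrument()` time, emitting OpenTelemetry spans instead of appending to a list.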
-
Environment:
python==3.10.14
a new conda env, just `pip install dspy-ai`
Error:
```
File "./Dspy/prodect.py", line 1, in
    import dspy
File "./miniconda3/envs/dspy/lib/python3.12/site-packag…
```
-
Hi, I have started to use DSPy; unfortunately, I am running into issues with the ReAct module.
I have this code:
```
class Mod(dspy.Module):
    def __init__(self):
        super().__init__()…
-
-
## 🚀 Feature Request
**💡 Got a brilliant idea?**
Using either OpenInference or OpenLLMetry, add an instrumentor for DSPy. This is the first of several test features we'll use to determine which is…
-
According to the comment, if users do not specify `prompt_model` and `task_model`, the globally configured LM will be used. However, this does not seem to be the case when executing the code:
```
im…
-
### The Feature
Allow users to use litellm in Spark notebooks without needing to explicitly set an API key or API base.
### Motivation, pitch
Make it easier for users to use DSPy in Databricks n…
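One possible shape for this behavior (a hypothetical sketch, not litellm's or DSPy's actual API): prefer explicitly passed credentials, then fall back to environment variables a Databricks runtime commonly exports (`DATABRICKS_TOKEN`, `DATABRICKS_HOST`), and only error if nothing is found. The helper name and the exact variable precedence are assumptions.

```python
import os

def resolve_litellm_config(api_key=None, api_base=None, env=os.environ):
    """Resolve connection settings: explicit args > environment > error.

    Hypothetical helper illustrating the requested fallback; DATABRICKS_TOKEN
    and DATABRICKS_HOST are the env vars a Databricks runtime typically sets.
    """
    resolved_key = api_key or env.get("DATABRICKS_TOKEN") or env.get("OPENAI_API_KEY")
    resolved_base = api_base or env.get("DATABRICKS_HOST")
    if resolved_key is None:
        raise ValueError("No API key provided and none found in the environment")
    return {"api_key": resolved_key, "api_base": resolved_base}
```

In a Spark notebook, this would let `resolve_litellm_config()` succeed with no arguments at all, since the workspace token and host are already in the environment.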