According to the comment, if users do not specify `prompt_model` and `task_model`, the globally configured LM should be used. However, that does not seem to be the case when executing the following code:
```python
import dspy
from dspy.datasets.gsm8k import GSM8K, gsm8k_metric

# Set up the LM.
lm = dspy.OpenAI(model="gpt-4o-mini", max_tokens=250)
dspy.settings.configure(lm=lm)

# Load math questions from the GSM8K dataset.
gsm8k = GSM8K()
gsm8k_trainset, gsm8k_devset = gsm8k.train[:10], gsm8k.dev[:10]


class CoT(dspy.Module):
    def __init__(self):
        super().__init__()
        self.prog = dspy.ChainOfThought("question -> answer")

    def forward(self, question):
        return self.prog(question=question)


from dspy.teleprompt import MIPROv2

# Set up the optimizer: we want to "bootstrap" (i.e., self-generate) 4-shot examples of our CoT program.
config = dict(max_bootstrapped_demos=4, max_labeled_demos=4)

# Optimize! Use the `gsm8k_metric` here. In general, the metric is going to tell the optimizer how well it's doing.
optimizer = MIPROv2(metric=gsm8k_metric, num_candidates=3)
optimized_cot = optimizer.compile(CoT(), trainset=gsm8k_trainset)
```
It throws the following error:
```
Do you wish to continue? (y/n): y
Traceback (most recent call last):
  File "/Users/chen.qian/Documents/mlflow_team/dspy/tmp.py", line 32, in <module>
    optimized_cot = optimizer.compile(CoT(), trainset=gsm8k_trainset)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chen.qian/Documents/mlflow_team/dspy/dspy/teleprompt/mipro_optimizer_v2.py", line 222, in compile
    proposer = GroundedProposer(
               ^^^^^^^^^^^^^^^^^
  File "/Users/chen.qian/Documents/mlflow_team/dspy/dspy/propose/grounded_proposer.py", line 260, in __init__
    self.data_summary = create_dataset_summary(
                        ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chen.qian/Documents/mlflow_team/dspy/dspy/propose/dataset_summary_generator.py", line 51, in create_dataset_summary
    observation = dspy.Predict(DatasetDescriptor, n=1, temperature=1.0)(examples=order_input_keys_in_string(trainset[0:upper_lim].__repr__()))
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chen.qian/Documents/mlflow_team/dspy/dspy/predict/predict.py", line 91, in __call__
    return self.forward(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chen.qian/Documents/mlflow_team/dspy/dspy/predict/predict.py", line 104, in forward
    assert lm is not None, "No LM is loaded."
           ^^^^^^^^^^^^^^
AssertionError: No LM is loaded.
```
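Explicitly passing the LM should sidestep the assertion (an untested sketch, using the `prompt_model`/`task_model` kwargs that the comment refers to):

```python
# Untested workaround sketch: pass the globally configured LM explicitly
# instead of relying on the documented fallback.
optimizer = MIPROv2(
    metric=gsm8k_metric,
    num_candidates=3,
    prompt_model=lm,  # the same LM configured via dspy.settings.configure
    task_model=lm,
)
optimized_cot = optimizer.compile(CoT(), trainset=gsm8k_trainset)
```

But that defeats the purpose of the documented default behavior.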
Checking out the code, the `self.prompt_model` assignment does not read from the global setting by default. Is that a bug, or am I missing something? @omkar-sh