stanfordnlp / dspy

DSPy: The framework for programming—not prompting—foundation models
https://dspy-docs.vercel.app/
MIT License
17.78k stars 1.35k forks

Error with Custom Local Model #1075

Closed: xkpacx closed this issue 2 weeks ago

xkpacx commented 4 months ago

Hello,

I am facing an issue with dspy using a custom LM. The LM is Mistral-7B-Instruct-v0.2, deployed via a local inference server in LM Studio. According to LM Studio, this is how the model is called over the local API:

```python
# Example: reuse your existing OpenAI setup
from openai import OpenAI

# Point to the local server
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    messages=[
        {"role": "system", "content": "Always answer in rhymes."},
        {"role": "user", "content": "Introduce yourself."},
    ],
    temperature=0.7,
)

print(completion.choices[0].message)
```

I am running an extraction use case on YouTube transcripts and have defined several classes to help with the task:

```python
from pydantic import BaseModel, Field
from typing import List, Optional, Literal
import json
from openai import OpenAI
from dsp import LM
import dspy
from dspy import InputField, OutputField, Signature, predictor
from dspy.functional import TypedPredictor
from langchain_community.document_loaders import YoutubeLoader


class IngredientSchemaDSPy(BaseModel):
    """
    Schema representing ingredients (as strings) and their quantities.
    Optimized to work with DSPy.
    """
    name: str = Field(..., description="Name of the ingredient.")
    quantity: int = Field(..., description="The exact quantity or amount of the ingredient.")
    unit: str = Field(..., description="The unit in which the quantity is specified, e.g., g, mg.")
    type: List[Literal[
        "Vegetables", "Fruits", "Meats", "Seafood", "Dairy", "Grains",
        "Legumes", "Nuts and Seeds", "Spices and Herbs", "Fats and Oils", "Sweeteners"
    ]] = Field(default_factory=list)

    nutritional_content: List[Literal[
        "Protein-rich", "Carbohydrates", "Fats", "Fiber", "Vitamins and Minerals"
    ]] = Field(default_factory=list)

    cuisine: List[Literal[
        "Italian", "Mexican", "Chinese", "Indian"
    ]] = Field(default_factory=list)

    allergenic_potential: List[Literal[
        "Common allergens", "Non-allergenic"
    ]] = Field(default_factory=list)

    dietary_restrictions: List[Literal[
        "Vegan", "Vegetarian", "Keto", "Gluten-free"
    ]] = Field(default_factory=list)

    processing_level: List[Literal[
        "Unprocessed", "Minimally processed", "Processed", "Ultra-processed"
    ]] = Field(default_factory=list)


class RecipeSchemaDSPy(BaseModel):
    """Schema representing a recipe with ingredients (as strings) and their quantities. Optimized to work with DSPy."""
    # Note to self: Replace ...
    name: str = Field(description="The name of the recipe.")
    ingredients: IngredientSchemaDSPy = Field(description="The ingredients of the recipe.")
    protein: float = Field(description="The amount of protein contained in the recipe.")
    carbs: float = Field(description="The amount of carbs contained in the recipe.")
    fat: float = Field(description="The amount of fat contained in the recipe.")
    calories: int = Field(..., description="The calories which the recipe contains.")
    preparation_steps: str = Field(description="The preparation steps.")
    additional_notes: str = Field(description="Any additional notes.")


class AssertReasonDSPy(Signature):
    """Extract recipes from the given video transcript and make them fit the provided schema in items."""
    context: str = InputField(description="Extract recipes from the following cooking video transcript.")
    transcript: str = InputField(description="The video transcript.")
    items: List[RecipeSchemaDSPy] = OutputField(description="The output schema.")


class Mistral(LM):
    """Custom dsp LM client wrapping LM Studio's OpenAI-compatible endpoint."""

    def __init__(self):
        self.model = "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF"
        self.api_key = "lm-studio"
        self.base_url = "http://localhost:1234/v1"
        self.kwargs = {'temperature': 0.7}

    def basic_request(self, prompt: str, temperature: float, **kwargs):
        self.kwargs.update(kwargs)
        client = OpenAI(base_url=self.base_url, api_key=self.api_key)
        completion = client.chat.completions.create(
            model=self.model,
            messages=[{"role": "system", "content": prompt}],
            temperature=temperature
        )

        try:
            content = completion.choices[0].message.content
            if not isinstance(content, str):
                raise ValueError("Expected content to be a string, received type: {}".format(type(content).__name__))
            print(content)
            return content
        except (AttributeError, IndexError, TypeError) as e:
            print("Error handling API response:", e)
            return None

    def __call__(self, prompt: str, temperature: float = None, **kwargs):
        if temperature is None:
            temperature = self.kwargs.get('temperature', 0.7)
        self.kwargs.update(kwargs)
        return self.basic_request(prompt, temperature, **kwargs)
```

which I am calling like this:

```python
import os
import json
from openai import OpenAI
import langchain
from langchain_community.document_loaders import YoutubeLoader
from langchain.output_parsers import PydanticOutputParser
from llama_index.llms.openai import OpenAI as llamaOpenAI
from llama_index.program.lmformatenforcer import (
    LMFormatEnforcerPydanticProgram,
)
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import build_transformers_prefix_allowed_tokens_fn
from recipe_extraction import Mistral2, Mistral
from recipe_extraction import Recipe, RecipeSchema, RecipeSchemaDSPy, AssertReasonDSPy
import dspy
from dsp import LM
from dspy import InputField, OutputField, Signature
from dspy.functional import TypedPredictor
import phoenix as px
from notion import NotionAPI

lm = Mistral()
dspy.settings.configure(lm=lm)

notion_token = "secret_loremipsum"
notion = NotionAPI(notion_token)
entries = notion.query_database("loremipsum")
recipe = Recipe(entries[0].name, entries[0].url)
transcript = recipe.load_video_and_extract_transcript()

cot_predictor = dspy.TypedPredictor(AssertReasonDSPy)
print(cot_predictor)
# Prints:
# predictor = Predict(AssertReasonDSPy(context, transcript -> items
#     instructions='Extract recipes from the given video transcript and make them fit the provided schema in items.'
#     context = Field(annotation=str required=True description='Extract recipes from the following cooking video transcript.' json_schema_extra={'__dspy_field_type': 'input', 'prefix': 'Context:', 'desc': '${context}'})
#     transcript = Field(annotation=str required=True description='The video transcript.' json_schema_extra={'__dspy_field_type': 'input', 'prefix': 'Transcript:', 'desc': '${transcript}'})
#     items = Field(annotation=List[RecipeSchemaDSPy] required=True description='The output schema.' json_schema_extra={'__dspy_field_type': 'output', 'prefix': 'Items:', 'desc': '${items}'})
# ))

context_description = "Extract recipes from the following cooking video transcript."
prediction = cot_predictor(
    context=context_description,
    transcript=transcript)
```

However, the TypedPredictor raises:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[7], line 1
----> 1 prediction = cot_predictor(
      2     context=context_description,
      3     transcript=transcript)

File /opt/homebrew/Caskroom/miniforge/base/envs/food.copilot/lib/python3.12/site-packages/dspy/primitives/program.py:26, in Module.__call__(self, *args, **kwargs)
     25 def __call__(self, *args, **kwargs):
---> 26     return self.forward(*args, **kwargs)

File /opt/homebrew/Caskroom/miniforge/base/envs/food.copilot/lib/python3.12/site-packages/dspy/functional/functional.py:190, in TypedPredictor.forward(self, **kwargs)
    188     value = completion[name]
    189     parser = field.json_schema_extra.get("parser", lambda x: x)
--> 190     parsed[name] = parser(value)
    191 except (pydantic.ValidationError, ValueError) as e:
    192     errors[name] = _format_error(e)

File /opt/homebrew/Caskroom/miniforge/base/envs/food.copilot/lib/python3.12/site-packages/dspy/functional/functional.py:152, in TypedPredictor._prepare_signature.<locals>.<lambda>(x, from_json)
    145 from_json = lambda x, type_=type_: type_.model_validate_json(x)
    146 schema = json.dumps(type_.model_json_schema())
    147 signature = signature.with_updated_fields(
    148     name,
    149     desc=field.json_schema_extra.get("desc", "")
    150     + (". Respond with a single JSON object. JSON Schema: " + schema),
    151     format=lambda x, to_json=to_json: (x if isinstance(x, str) else to_json(x)),
--> 152     parser=lambda x, from_json=from_json: from_json(_unwrap_json(x)),
    153     type_=type_,
    154 )
    155 else:  # If input field
    156     format_ = lambda x: x if isinstance(x, str) else str(x)

File /opt/homebrew/Caskroom/miniforge/base/envs/food.copilot/lib/python3.12/site-packages/dspy/functional/functional.py:282, in _unwrap_json(output)
    281 def _unwrap_json(output):
--> 282     output = output.strip()
    283     if output.startswith("```"):
    284         if not output.startswith("```json"):

AttributeError: 'builtin_function_or_method' object has no attribute 'strip'
```

tom-doerr commented 4 months ago

I think you should be able to use the existing OpenAI connection class and just set api_base to http://localhost:1234/v1 (and, of course, the api_key). The class that handles connecting to the OpenAI API is called GPT3 in the code, if you want to have a look: https://github.com/stanfordnlp/dspy/blob/main/dsp/modules/gpt3.py. This is the corresponding documentation for the OpenAI class: https://dspy-docs.vercel.app/api/language_model_clients/OpenAI. However, the documentation doesn't mention that you can set the api_base/base_url; maybe this should be added.
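For reference, that would look roughly like this (a sketch, not tested; parameter names follow the GPT3 class linked above, with the model name taken from your snippet):

```python
import dspy

# Sketch: dspy.OpenAI is the GPT3 client linked above; api_base points it
# at LM Studio's OpenAI-compatible server instead of api.openai.com.
lm = dspy.OpenAI(
    model="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    api_base="http://localhost:1234/v1/",
    api_key="lm-studio",
    model_type="chat",
)
dspy.settings.configure(lm=lm)
```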

xkpacx commented 4 months ago

Thank you @tom-doerr! I tried using the OpenAI connection class, but I am getting another error:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[29], line 1
----> 1 prediction = cot_predictor(
      2     context=context_description,
      3     transcript=transcript)

File /opt/homebrew/Caskroom/miniforge/base/envs/food.copilot/lib/python3.12/site-packages/dspy/primitives/program.py:26, in Module.__call__(self, *args, **kwargs)
     25 def __call__(self, *args, **kwargs):
---> 26     return self.forward(*args, **kwargs)

File /opt/homebrew/Caskroom/miniforge/base/envs/food.copilot/lib/python3.12/site-packages/dspy/functional/functional.py:180, in TypedPredictor.forward(self, **kwargs)
    178 signature = self._prepare_signature()
    179 for try_i in range(self.max_retries):
--> 180     result = self.predictor(**modified_kwargs, new_signature=signature)
    181     errors = {}
    182     parsed_results = []

File /opt/homebrew/Caskroom/miniforge/base/envs/food.copilot/lib/python3.12/site-packages/dspy/predict/predict.py:49, in Predict.__call__(self, **kwargs)
     48 def __call__(self, **kwargs):
---> 49     return self.forward(**kwargs)

File /opt/homebrew/Caskroom/miniforge/base/envs/food.copilot/lib/python3.12/site-packages/dspy/predict/predict.py:91, in Predict.forward(self, **kwargs)
     88 template = signature_to_template(signature)
     90 if self.lm is None:
...
--> 191 completed_choices = [c for c in choices if c["finish_reason"] != "length"]
    193 if only_completed and len(completed_choices):
    194     choices = completed_choices

TypeError: 'NoneType' object is not iterable
```

tom-doerr commented 4 months ago

The server doesn't seem to be sending you any completion; you could try to capture the error message the server might be sending back. Maybe calling the LM directly could help:

```python
completion_text = lm('This is a test')
```

Altriaex commented 4 months ago

I had the same problem. In particular, I first set the API endpoint to http://localhost:1234/v1, which is incorrect. The correct endpoint is http://localhost:1234/v1/ (note the trailing slash).

But then I got stuck: when I tried `completion_text = lm('This is a test')`, my server did not receive any request. It is possible, however, to send a request directly with `dsp.settings.lm.request("??")`.

So my guess is that the misconfigured LM is cached somewhere.

Update: removing the folder at `cachedir = os.environ.get('DSP_CACHEDIR') or os.path.join(Path.home(), 'cachedir_joblib')` seems to have solved my problem.
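If you want to script the cleanup, a minimal sketch that mirrors the path logic quoted above:

```python
import os
import shutil
from pathlib import Path

# DSP_CACHEDIR overrides the default joblib cache location in the home directory.
cachedir = os.environ.get('DSP_CACHEDIR') or os.path.join(Path.home(), 'cachedir_joblib')
shutil.rmtree(cachedir, ignore_errors=True)  # drop any stale cached LM responses
```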

arnavsinghvi11 commented 3 months ago

Yes, I believe the caching needs to be improved to avoid misconfigured LMs breaking behavior, although I believe different api_base specifications would already lead to different cache entries?

okhat commented 2 weeks ago

(blob below copy-pasted here since I'm closing related issues)

Thanks for opening this! We released DSPy 2.5 yesterday. I think the new dspy.LM and the underlying dspy.ChatAdapter will probably resolve this problem.

Here's the (very short) migration guide; it should typically take you 2-3 minutes to change the LM definition and you should be good to go: https://github.com/stanfordnlp/dspy/blob/main/examples/migration.ipynb
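For a local OpenAI-compatible server like LM Studio, the new setup should look roughly like this (a sketch following the migration guide; the openai/ prefix selects an OpenAI-compatible client, with the model name and port taken from this thread):

```python
import dspy

# DSPy >= 2.5: dspy.LM with an openai/ prefix talks to any
# OpenAI-compatible endpoint, including LM Studio's local server.
lm = dspy.LM('openai/TheBloke/Mistral-7B-Instruct-v0.2-GGUF',
             api_base='http://localhost:1234/v1',
             api_key='lm-studio')
dspy.configure(lm=lm)
```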

Please let us know if this resolves your issue. I will close for now but please feel free to re-open if the problem persists.