microsoft / flashcards-workshop

Create self-study Flashcards with Microsoft Fabric and Azure OpenAI
MIT License
16 stars 6 forks

Sample code fails with APIRemovedInV1 error due to OpenAI API changes in openai package version >=1.0.0 #1

Open kelcho-spense opened 1 month ago

kelcho-spense commented 1 month ago


Description: While running the sample code provided in the tutorial on app.fabric.microsoft.com, I encountered an error related to the OpenAI API. The code uses openai.ChatCompletion.create(), which is no longer supported in openai package version 1.0.0 and above.

Error Message:

APIRemovedInV1:

You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

Steps to Reproduce:

  1. Open a notebook on app.fabric.microsoft.com.
  2. Run the sample code provided at this step of the tutorial: https://moaw.dev/workshop/?src=gh%3Amicrosoft%2Fflashcards-workshop%2Fmain%2Fworkshop.md&step=3#create-the-flashcards-prompt

```python
import openai

openai.api_key = "your_api_key"
openai.api_base = "https://your_openai_endpoint/"
openai.api_type = 'azure'
openai.api_version = '2023-05-15'
deployment_name = 'your_deployment_name'

class LearnAssistant:
    def __init__(self, openai, deployment_name):
        self._openai = openai
        self._deployment_name = deployment_name

    def generate_questions(self, text):
        system_message = "Your system message here"
        user_message = text
        return self.call_openai(
            self._deployment_name,
            system_message=system_message,
            user_message=user_message
        )

    def call_openai(self, deployment_name, system_message, user_message):
        response = self._openai.ChatCompletion.create(
            engine=deployment_name,
            messages=[
                {"role": "system", "content": system_message},
                {"role": "user", "content": user_message}
            ]
        )
        return response['choices'][0]['message']['content']

# Usage
assistant = LearnAssistant(openai, deployment_name)
result = assistant.generate_questions("Your input text here")
print(result)
```

  3. Observe the APIRemovedInV1 error.

Expected Behavior: The code should execute without errors and generate the expected questions and answers using the OpenAI API.

Possible Solutions:

  1. Update the Code to Use the New API: Modify the code to comply with the new OpenAI API interface as per the migration guide.

    Example modification (note that with openai>=1.0.0, `self._openai` should be an `AzureOpenAI` client instance rather than the `openai` module):

    ```python
    def call_openai(self, deployment_name, system_message, user_message):
        response = self._openai.chat.completions.create(
            model=deployment_name,
            messages=[
                {"role": "system", "content": system_message},
                {"role": "user", "content": user_message}
            ]
        )
        return response.choices[0].message.content
    ```
  2. Pin the openai Package to an Older Version: Install an older version of the openai package that still supports ChatCompletion, as suggested in the error message:

    ```shell
    pip install openai==0.28
    ```
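If you are unsure which interface your environment provides, a small stdlib-only check can tell you whether the installed openai package is 1.x. This is only a sketch; the helper name `is_openai_v1` is mine, not part of the SDK or the tutorial:

```python
# Sketch: detect whether the installed openai package uses the new (>=1.0.0)
# client-based interface. `is_openai_v1` is a hypothetical helper.
from importlib.metadata import version, PackageNotFoundError

def is_openai_v1(version_string: str) -> bool:
    """True if the major version is 1 or higher (new client-based API)."""
    return int(version_string.split(".")[0]) >= 1

try:
    installed = version("openai")
    print(installed, "-> new API" if is_openai_v1(installed) else "-> legacy API")
except PackageNotFoundError:
    print("openai is not installed")
```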

Request: Please update the sample code in the tutorial to be compatible with the latest version of the openai package or provide instructions to install a compatible version of the package.


Note to Maintainers: This issue affects users following the tutorial and using the latest version of the openai package. Updating the code or providing guidance will improve the user experience and prevent confusion.

Screenshots: attached to the original issue.

videlalvaro commented 1 month ago

Hi @kelcho-spense thanks for opening the issue.

Do you mind using the following code to create the OpenAI client:

```python
from openai import AzureOpenAI

deployment_name = AZURE_OPENAI_CHAT_DEPLOYMENT

client = AzureOpenAI(
    api_key=AZURE_OPENAI_API_KEY,
    azure_endpoint=AZURE_OPENAI_ENDPOINT,
    api_version=AZURE_OPENAI_API_VERSION
)
```

The following code for the LearnAssistant class:

```python
class LearnAssistant:

    _openai = None
    _deployment_name = None

    def __init__(self, client, deployment_name):
        self.name = "Learn Assistant"
        self._openai = client
        self._deployment_name = deployment_name

    def generate_questions(self, text):
        system_message = """
        You are an assistant designed to help people learn from tutorials.
        You will receive a Markdown document, and extract from it pairs of questions and answers that will help the reader learn about the text.
        Questions and answers should be based on the input text.
        Extract at least 5 different pairs of questions and answers. Questions and answers should be short.
        Output should be valid JSON format.
        Here's an example of your output format: [{"Q": "What is the name of the assistant?", "A": "Learn Assistant"}]
        """
        user_message = text

        return self.call_openai(
            self._deployment_name,
            system_message=system_message,
            user_message=user_message
        )

    def call_openai(self, deployment_name, system_message, user_message):
        response = self._openai.chat.completions.create(
            model=deployment_name,
            messages=[
                {"role": "system", "content": system_message},
                {"role": "user", "content": user_message}
            ]
        )

        return response.choices[0].message.content
```

and finally this way of generating questions:

```python
genQas = LearnAssistant(client, deployment_name).generate_questions(input_text)
```
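Since the system prompt asks for a JSON array of Q/A pairs, the returned string can then be parsed with the stdlib `json` module. A quick sketch, using a hypothetical sample response rather than real model output:

```python
import json

# Hypothetical sample of what generate_questions() could return, matching
# the JSON format requested in the system message.
sample_output = '[{"Q": "What is the name of the assistant?", "A": "Learn Assistant"}]'

flashcards = json.loads(sample_output)
for card in flashcards:
    print(f"Q: {card['Q']} | A: {card['A']}")
```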

Let me know if these changes address the API issues, so I can update the tutorial accordingly.

kelcho-spense commented 4 weeks ago

Hi @videlalvaro I was able to make the code work and successfully finished the workshop. I also have some feedback from the perspective of someone who is getting started with Python & Fabric.

```python
from openai import AzureOpenAI

AZURE_OPENAI_API_KEY = "Your_OPENAI_API_KEY"
AZURE_OPENAI_ENDPOINT = "YOUR_OPENAI_ENDPOINT"
AZURE_OPENAI_API_VERSION = "OPENAI_API_VERSION"  # NOTE: if you are using Azure OpenAI Studio, get your api_version from the endpoint Target URI
AZURE_OPENAI_CHAT_DEPLOYMENT = "OPENAI_CHAT_DEPLOYMENT"

deployment_name = AZURE_OPENAI_CHAT_DEPLOYMENT

client = AzureOpenAI(
    api_key=AZURE_OPENAI_API_KEY,
    azure_endpoint=AZURE_OPENAI_ENDPOINT,
    api_version=AZURE_OPENAI_API_VERSION
)
```

Create the Flashcards Prompt

```python
class LearnAssistant:

    _openai = None
    _deployment_name = None

    def __init__(self, client, deployment_name):
        self.name = "Learn Assistant"
        self._openai = client
        self._deployment_name = deployment_name

    def generate_questions(self, text):
        system_message = """
        You are an assistant designed to help people learn from tutorials.
        You will receive a Markdown document, and extract from it pairs of questions and answers that will help the reader learn about the text.
        Questions and answers should be based on the input text.
        Extract at least 5 different pairs of questions and answers. Questions and answers should be short.
        Output should be valid JSON format.
        Here's an example of your output format: [{"Q": "What is the name of the assistant?", "A": "Learn Assistant"}]
        """
        user_message = text

        return self.call_openai(
            self._deployment_name,
            system_message=system_message,
            user_message=user_message
        )

    def call_openai(self, deployment_name, system_message, user_message):
        response = self._openai.chat.completions.create(
            model=deployment_name,
            messages=[
                {"role": "system", "content": system_message},
                {"role": "user", "content": user_message}
            ]
        )

        return response.choices[0].message.content
```


- I also realized that if you return ```response['choices'][0]['message']['content']``` you run into the error `'ChatCompletion' object is not subscriptable`. This happens because in openai>=1.0.0 the response is an object, not a dict, so dictionary-style indexing fails. Use dot notation instead: ```return response.choices[0].message.content```
- Step **Create a Fabric Data Pipeline** - these steps are a little misleading because the Fabric UI has changed a lot; it took me hours to find where some options and buttons are. I highly recommend updating them to match the current UI.
For example, step one says: _Select Data pipeline to create a new data pipeline. Give it the name flashcards_pipeline, and then select Create_
![Image](https://github.com/user-attachments/assets/388c1079-263b-4a60-87b9-2974babf6c48)

Currently you have to scroll down to **Recommended items to create** to see **Data pipeline**:

![Image](https://github.com/user-attachments/assets/135908c1-b348-4a4c-a60f-cfa7ce0aa4d0)

This is one of many small changes that can give someone new a hard time.
Overall, it's an amazing resource from which I learned a lot.
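As an aside, the "not subscriptable" error mentioned in the feedback above can be reproduced without calling the API at all, since openai>=1.0.0 returns model objects rather than dicts. A minimal stand-in (`SimpleNamespace` here only mimics the nested shape of a response; it is not the real SDK type):

```python
from types import SimpleNamespace

# Mimic the nested shape of a chat completion response in openai>=1.0.0.
response = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="Learn Assistant"))]
)

# Attribute access works:
print(response.choices[0].message.content)

# Dict-style access raises TypeError, just like on the real response object:
try:
    response["choices"][0]["message"]["content"]
except TypeError as err:
    print("TypeError:", err)
```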