openai / openai-python

The official Python library for the OpenAI API
https://pypi.org/project/openai/
Apache License 2.0

Moderation Endpoint Schema Mismatch for illicit and illicit_violent fields #1786

Open · bhumkong opened this issue 1 month ago

bhumkong commented 1 month ago

Confirmed this is an issue with the Python library and not the underlying OpenAI API

Describe the bug

Description: The results returned by the moderation endpoint do not match the declared response schema.

Details: The illicit and illicit_violent fields of both Categories and CategoryScores are declared as required (bool and float respectively) in the response models, yet the parsed response contains None for both.

Expected Behavior: The returned values conform to the schema: either the API populates these fields, or the models declare them as optional.

Additional Notes: It is surprising that Pydantic does not raise a validation error for these mismatches and instead lets None values through; constructing a Categories object manually with any None value fails validation.
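A minimal sketch of why no validation error surfaces, assuming the library builds response models with Pydantic's model_construct, which skips validation (the Categories class below is a simplified stand-in, not the library's actual model):

from pydantic import BaseModel, ValidationError

class Categories(BaseModel):
    # simplified stand-in for openai.types.moderation.Categories
    illicit: bool
    illicit_violent: bool

try:
    # direct construction runs validation, so None is rejected
    Categories(illicit=None, illicit_violent=None)
except ValidationError as e:
    print(e)  # bool_type errors for both fields

# model_construct bypasses validation entirely, so None slips through
c = Categories.model_construct(illicit=None, illicit_violent=None)
print(c.illicit)  # None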

To Reproduce

Call the moderation endpoint and inspect the illicit and illicit_violent fields on response.results[0].categories and response.results[0].category_scores.

Result I'm getting:

response.results: [
    Moderation(
        categories=Categories(
            harassment=False,
            harassment_threatening=False,
            hate=False,
            hate_threatening=False,
            illicit=None,
            illicit_violent=None,
            self_harm=False,
            self_harm_instructions=False,
            self_harm_intent=False,
            sexual=False,
            sexual_minors=False,
            violence=False,
            violence_graphic=False,
            self-harm=False,
            sexual/minors=False,
            hate/threatening=False,
            violence/graphic=False,
            self-harm/intent=False,
            self-harm/instructions=False,
            harassment/threatening=False,
        ),
        category_applied_input_types=None,
        category_scores=CategoryScores(
            harassment=0.000255020015174523,
            harassment_threatening=1.3588138244813308e-05,
            hate=2.8068381652701646e-05,
            hate_threatening=1.0663524108167621e-06,
            illicit=None,
            illicit_violent=None,
            self_harm=9.841909195529297e-05,
            self_harm_instructions=7.693658517382573e-06,
            self_harm_intent=7.031533459667116e-05,
            sexual=0.013590452261269093,
            sexual_minors=0.0031673426274210215,
            violence=0.00022930897830519825,
            violence_graphic=4.927426198264584e-05,
            self-harm=9.841909195529297e-05,
            sexual/minors=0.0031673426274210215,
            hate/threatening=1.0663524108167621e-06,
            violence/graphic=4.927426198264584e-05,
            self-harm/intent=7.031533459667116e-05,
            self-harm/instructions=7.693658517382573e-06,
            harassment/threatening=1.3588138244813308e-05,
        ),
        flagged=False,
    ),
]
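The mismatch can be surfaced explicitly by re-validating the parsed categories; a sketch assuming Categories is importable from openai.types.moderation and behaves as a standard Pydantic v2 model:

from openai.types.moderation import Categories

# round-trip the parsed model through validation to expose the schema mismatch
data = response.results[0].categories.model_dump()
Categories.model_validate(data)  # raises ValidationError: illicit should be bool, got None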

Code snippets

import openai

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.moderations.create(input="text")  # default moderation model
print(response.results[0])  # illicit and illicit_violent print as None
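For what it's worth, a possible workaround, assuming the illicit categories are only populated by the newer omni moderation models (they were introduced alongside omni-moderation, so the legacy text-moderation models would not return them):

import openai

client = openai.OpenAI()
response = client.moderations.create(
    model="omni-moderation-latest",  # explicitly request the omni model
    input="text",
)
# with the omni model these fields should be populated instead of None
print(response.results[0].categories.illicit)
print(response.results[0].categories.illicit_violent)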

OS

macOS

Python version

Python 3.12.4

Library version

openai 1.51.2