openai-php / client

⚡️ OpenAI PHP is a supercharged community-maintained PHP API client that allows you to interact with the OpenAI API.
MIT License

allow getting response from StreamResponse object #140

Closed skukunin closed 1 year ago

skukunin commented 1 year ago

It is useful to have access to the response object from StreamResponse. In the current implementation, if the API call fails before you actually start iterating over the chunks, you have no way to handle the error because you cannot access the response code or response body. This small pull request fixes that problem.

gehrisandro commented 1 year ago

Hi @skukunin

Can you give us an example of how to run into an error which is not handled yet? It sounds like something the library should handle properly instead of giving access to the raw response.

skukunin commented 1 year ago

Hi @gehrisandro

Your example of streamed chat request:

$stream = $client->chat()->createStreamed([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);

If at this point the response has error code 400 (for example because the context limit was exceeded), there is no way to iterate over the chunks. The application fails, but no exception is thrown and $stream instanceof StreamResponse === true, so there is no way to handle the error at the application level. Nothing happens at all.
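The handling this pull request would enable could look roughly like the sketch below. Note the getResponse() accessor and the shape of the error payload are assumptions based on this PR and on OpenAI's JSON error format, not a confirmed part of the library's public API:

```php
<?php
// Sketch: inspect the underlying PSR-7 response before iterating,
// assuming the PR exposes it via a getResponse() accessor (hypothetical name).
$stream = $client->chat()->createStreamed([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);

$response = $stream->getResponse(); // hypothetical accessor added by this PR

if ($response->getStatusCode() !== 200) {
    // On failure the body holds OpenAI's JSON error payload,
    // e.g. a context-length-exceeded message.
    $error = json_decode((string) $response->getBody(), true);
    throw new \RuntimeException($error['error']['message'] ?? 'OpenAI request failed');
}

foreach ($stream as $chunk) {
    echo $chunk->choices[0]->delta->content ?? '';
}
```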

Example of response:

OpenAI\Responses\StreamResponse::__set_state(array(
   'responseClass' => 'OpenAI\\Responses\\Chat\\CreateStreamedResponse',
   'response' => 
  Nyholm\Psr7\Response::__set_state(array(
     'reasonPhrase' => 'Bad Request',
     'statusCode' => 400,
     'headers' => 
    array (
      'date' => 
      array (
        0 => 'Sun, 18 Jun 2023 19:35:59 GMT',
      ),
      'content-type' => 
      array (
        0 => 'application/json',
      ),
      'content-length' => 
      array (
        0 => '281',
      ),
      'access-control-allow-origin' => 
      array (
        0 => '*',
      ),
      'openai-organization' => 
      array (
        0 => 'user-l1frjpcqq4vjem1vqd1cwdgq',
      ),
      'openai-processing-ms' => 
      array (
        0 => '175',
      ),
      'openai-version' => 
      array (
        0 => '2020-10-01',
      ),
      'strict-transport-security' => 
      array (
        0 => 'max-age=15724800; includeSubDomains',
      ),
      'x-ratelimit-limit-requests' => 
      array (
        0 => '3500',
      ),
      'x-ratelimit-limit-tokens' => 
      array (
        0 => '90000',
      ),
      'x-ratelimit-remaining-requests' => 
      array (
        0 => '3499',
      ),
      'x-ratelimit-remaining-tokens' => 
      array (
        0 => '85903',
      ),
      'x-ratelimit-reset-requests' => 
      array (
        0 => '17ms',
      ),
      'x-ratelimit-reset-tokens' => 
      array (
        0 => '2.73s',
      ),
      'x-request-id' => 
      array (
        0 => '4102d798e3f3a175a4e9254201a368b7',
      ),
      'cf-cache-status' => 
      array (
        0 => 'DYNAMIC',
      ),
      'server' => 
      array (
        0 => 'cloudflare',
      ),
      'cf-ray' => 
      array (
        0 => '7d95fa7f3f6777ad-KBP',
      ),
      'alt-svc' => 
      array (
        0 => 'h3=":443"; ma=86400',
      ),
    ),
     'headerNames' => 
    array (
      'date' => 'date',
      'content-type' => 'content-type',
      'content-length' => 'content-length',
      'access-control-allow-origin' => 'access-control-allow-origin',
      'openai-organization' => 'openai-organization',
      'openai-processing-ms' => 'openai-processing-ms',
      'openai-version' => 'openai-version',
      'strict-transport-security' => 'strict-transport-security',
      'x-ratelimit-limit-requests' => 'x-ratelimit-limit-requests',
      'x-ratelimit-limit-tokens' => 'x-ratelimit-limit-tokens',
      'x-ratelimit-remaining-requests' => 'x-ratelimit-remaining-requests',
      'x-ratelimit-remaining-tokens' => 'x-ratelimit-remaining-tokens',
      'x-ratelimit-reset-requests' => 'x-ratelimit-reset-requests',
      'x-ratelimit-reset-tokens' => 'x-ratelimit-reset-tokens',
      'x-request-id' => 'x-request-id',
      'cf-cache-status' => 'cf-cache-status',
      'server' => 'server',
      'cf-ray' => 'cf-ray',
      'alt-svc' => 'alt-svc',
    ),
     'protocol' => '1.1',
     'stream' => 
    Nyholm\Psr7\Stream::__set_state(array(
       'stream' => NULL,
       'seekable' => true,
       'readable' => true,
       'writable' => false,
       'uri' => NULL,
       'size' => NULL,
    )),
  )),
))

If you can tell me how to deal with such a situation using the library, I would much appreciate it. But if the response status is != 200, could you at least throw an exception with a clear message?

gehrisandro commented 1 year ago

@skukunin Thanks for your explanation.

I think this will be fixed here: https://github.com/openai-php/client/pull/150/commits/f22b6063e44b217e06009cd4cb6ad5582d3eecf0. It should now throw an error if one occurs in a streamed response.

It would be great if you could test whether this works for you.
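With that commit, the failure should surface as a throwable, so it can be caught at the application level. A minimal sketch of what that might look like, assuming the client's ErrorException is the type thrown for API errors (verify against the version you run):

```php
<?php
use OpenAI\Exceptions\ErrorException;

try {
    $stream = $client->chat()->createStreamed([
        'model' => 'gpt-3.5-turbo',
        'messages' => [
            ['role' => 'user', 'content' => 'Hello!'],
        ],
    ]);

    foreach ($stream as $chunk) {
        echo $chunk->choices[0]->delta->content ?? '';
    }
} catch (ErrorException $e) {
    // A 400 response (e.g. context length exceeded) now surfaces here
    // instead of failing silently before iteration starts.
    error_log('OpenAI stream failed: ' . $e->getMessage());
}
```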

By the way, I am currently working on a PR to give the user access to all the meta information returned in the response headers in a nice, fully typed, object-oriented way, instead of giving access to the plain response.

skukunin commented 1 year ago

Yes, it resolves the problem I faced as well. Thank you!

> By the way, I am currently working on a PR to give the user access to all the meta information returned in the response headers in a nice, fully typed, object-oriented way, instead of giving access to the plain response.

This is great. Looking forward to using it. Really helpful.