Open Frank-Buss opened 2 weeks ago
Does anyone want to review this? I need it for my project. I can use my branch, but it would be better to integrate the change so it works for everybody. BTW, I needed it because Whisper returns nonsense for silence, which is easily fixed by testing whether no_speech_prob > 0.4 in the verbose JSON response.
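For context, here is a minimal sketch of how the silence check described above could look. The field names `no_speech_prob` and `text` come from the API's verbose_json segment payload; the struct and function names are illustrative assumptions, not part of this PR.

```swift
import Foundation

// Hypothetical decoding of a verbose_json transcription segment;
// the struct name is illustrative, the field names match the API payload.
struct TranscriptionSegment: Codable {
    let text: String
    let noSpeechProb: Double

    enum CodingKeys: String, CodingKey {
        case text
        case noSpeechProb = "no_speech_prob"
    }
}

// Keep only segments that likely contain real speech,
// dropping the nonsense Whisper produces for silence.
func speechOnly(_ segments: [TranscriptionSegment]) -> [TranscriptionSegment] {
    segments.filter { $0.noSpeechProb <= 0.4 }
}
```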
What
This fixes the encoding of verboseJson. I tested it with verboseJson, and it still works with text as well.
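A minimal sketch of the kind of fix this implies: giving the Swift enum case an explicit snake_case raw value so Codable serializes it as the API expects. The type and property names here are assumptions for illustration, not the exact declarations in the library.

```swift
import Foundation

// Hypothetical enum (names assumed): the explicit raw value makes
// Codable emit "verbose_json" instead of the case name "verboseJson".
enum ResponseFormat: String, Codable {
    case json
    case text
    case vtt
    case srt
    case verboseJson = "verbose_json"
}

// Illustrative query wrapper to show the encoded wire format.
struct Query: Codable {
    let responseFormat: ResponseFormat

    enum CodingKeys: String, CodingKey {
        case responseFormat = "response_format"
    }
}

let data = try JSONEncoder().encode(Query(responseFormat: .verboseJson))
print(String(data: data, encoding: .utf8)!)  // {"response_format":"verbose_json"}
```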
Why
I got this error: Transcription failed: APIErrorResponse(error: OpenAI.APIError(message: "[{\'type\': \'enum\', \'loc\': (\'body\', \'response_format\'), \'msg\': \"Input should be \'json\', \'text\', \'vtt\', \'srt\' or \'verbose_json\'\", \'input\': \'verboseJson\', \'ctx\': {\'expected\': \"\'json\', \'text\', \'vtt\', \'srt\' or \'verbose_json\'\"}}]", type: "invalid_request_error", param: nil, code: nil))
because verboseJson was not encoded as verbose_json.
Affected Areas
AudioTranscriptionQuery