Closed justwjx closed 3 weeks ago
Looks reasonable. I've added an option to the GUI and command line in v0.6.6 https://github.com/machinewrapped/gpt-subtrans/releases/tag/v0.6.6
It's hard to tell if the OpenAI library is actually using the httpx client but it seems to work. It needs the url to be specified so I've only enabled the option if an api base is set.
It's a pre-release version pending testing - let me know if it works for you and I'll promote the release.
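As an aside, one way to check whether the SDK is actually routing requests through the supplied client is to raise the httpx logger to INFO - its "HTTP Request: ..." lines are only emitted for requests httpx itself sends (a minimal sketch, assuming the standard "httpx" logger name):

```python
import logging

# Enable httpx's own logger so each request it sends is printed, e.g.
# INFO: HTTP Request: POST https://api.xty.app/v1/chat/completions "HTTP/1.1 200 OK"
logging.basicConfig(level=logging.INFO)
logging.getLogger("httpx").setLevel(logging.INFO)
```

If those lines show up in the log, requests are going through httpx.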
This is now included in the latest official release. https://github.com/machinewrapped/gpt-subtrans/releases/tag/v0.6.7
Additionally, there is a new "Local Server" provider which uses httpx natively to communicate with the server, rather than going through the OpenAI SDK. It might be worth trying rather than using OpenAI with a custom endpoint.
Thanks, I'll test with the latest release, 0.6.7.
After several exchanges with OpenAI, an error occurs like the one below (--httpx was added):
ERROR: Failed to translate subtitles: Unexpected error communicating with OpenAI
Error: Unexpected error communicating with OpenAI
Traceback (most recent call last):
  File "/home/wjx/gpt-subtrans/PySubtitle/Providers/OpenAI/ChatGPTClient.py", line 45, in _send_messages
    if result.usage:
AttributeError: 'str' object has no attribute 'usage'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/wjx/gpt-subtrans/gpt-subtrans.py", line 123, in
Thanks. It's a bit mysterious, apparently the client returned a string instead of the expected response type. I can add some validation to mitigate the consequences and retry the request in the next release.
It looks like it's not related to --httpx; even after removing --httpx, this error still happens:
ERROR: Failed to translate subtitles: Unexpected error communicating with OpenAI
Error: Unexpected error communicating with OpenAI
Traceback (most recent call last):
  File "/home/wjx/gpt-subtrans/PySubtitle/Providers/OpenAI/ChatGPTClient.py", line 45, in _send_messages
    if result.usage:
AttributeError: 'str' object has no attribute 'usage'
It's peculiar, that method shouldn't return a string... the next release will validate the return type, so at least it'll raise a proper error.
The additional validation is in https://github.com/machinewrapped/gpt-subtrans/releases/tag/v0.6.8 ... it won't prevent the error, but it should at least recover from it more gracefully. If it yields more information about the response that's triggering the problem I'd be interested to know what it says.
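For reference, the validation could look something like this minimal sketch (hypothetical names, not the actual gpt-subtrans code):

```python
# Hypothetical sketch of the added validation: fail with a clear error
# if the client returns a string instead of a completion object, so the
# opaque AttributeError becomes a report of what the server actually sent.
def validate_completion(result):
    if isinstance(result, str):
        raise TypeError(f"Client returned a string instead of a completion: {result[:100]}")
    if not hasattr(result, "usage"):
        raise TypeError(f"Unexpected response type: {type(result).__name__}")
    return result
```

A guard like this makes the failure recoverable: the caller can log the offending response and retry the request.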
Hello, any idea what the following error means? It appears when I add --httpx:
Error: Unable to create provider client: Invalid type for url. Expected str or httpx.URL, got <class 'bool'>: True
Traceback (most recent call last):
  File "/home/wjx/gpt-subtrans/PySubtitle/SubtitleTranslator.py", line 64, in __init__
    self.client : TranslationClient = self.translation_provider.GetTranslationClient(self.settings)
  File "/home/wjx/gpt-subtrans/PySubtitle/Providers/Provider_OpenAI.py", line 67, in GetTranslationClient
    return ChatGPTClient(client_settings)
  File "/home/wjx/gpt-subtrans/PySubtitle/Providers/OpenAI/ChatGPTClient.py", line 19, in __init__
    super().__init__(settings)
  File "/home/wjx/gpt-subtrans/PySubtitle/Providers/OpenAI/OpenAIClient.py", line 35, in __init__
    http_client = httpx.Client(base_url=openai.base_url, follow_redirects=True) if self.api_base and self.settings.get('use_httpx') else None
  File "/home/wjx/gpt-subtrans/envsubtrans/lib/python3.10/site-packages/httpx/_client.py", line 643, in __init__
    super().__init__(
  File "/home/wjx/gpt-subtrans/envsubtrans/lib/python3.10/site-packages/httpx/_client.py", line 179, in __init__
    self._base_url = self._enforce_trailing_slash(URL(base_url))
  File "/home/wjx/gpt-subtrans/envsubtrans/lib/python3.10/site-packages/httpx/_urls.py", line 119, in __init__
    raise TypeError(
TypeError: Invalid type for url. Expected str or httpx.URL, got <class 'bool'>: True
And here is the command line used to run it:

source ~/gpt-subtrans/envsubtrans/bin/activate
python3 ~/gpt-subtrans/scripts/gpt-subtrans.py "$file" -l Chinese --description "$film_description" --instructionfile "instructions (Whispered).txt" -b https://api.xty.app/v1 --httpx

The .env file looks like this:

PROVIDER=OpenAI
OPENAI_API_KEY=sk-xxxxx
With limited coding knowledge, I modified gpt-subtrans.py as a temporary workaround; the changed part is shown below:
provider = "OpenAI"
default_model = os.getenv('OPENAI_MODEL') or "gpt-3.5-turbo"

parser = CreateArgParser(f"Translates an SRT file using an OpenAI model")
parser.add_argument('-k', '--apikey', type=str, default=None, help=f"Your OpenAI API Key (https://platform.openai.com/account/api-keys)")
parser.add_argument('-b', '--apibase', type=str, default="https://hk.xty.app/v1", help="API backend base address.")
parser.add_argument('-m', '--model', type=str, default=None, help="The model to use for translation")
parser.add_argument('--httpx', type=str, default="https://hk.xty.app/v1", help="Use the httpx library for custom api_base requests. May help if you receive a 307 redirect error.")
args = parser.parse_args()

logger_options = InitLogger("gpt-subtrans", args.debug)
Continuing from the above: the good news is that it is now running without the program quitting unexpectedly so far; the bad news is that the 307 error still appears randomly:
INFO: HTTP Request: POST https://hk.xty.app/v1/chat/completions "HTTP/1.1 307 Temporary Redirect"
INFO: HTTP Request: POST https://hk.xty.app/v1/chat/completions?retry=2 "HTTP/1.1 200 OK"
INFO: Scene 1 batch 2: 15 lines and 0 untranslated.
Thanks - it was a silly typo, I just pushed a quick fix (not tested but it should be correct).
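For reference, the offending line in the traceback built the client with base_url=openai.base_url, which had apparently ended up holding the boolean use_httpx setting rather than the URL string. A minimal sketch of the intended guard (hypothetical helper name, not the actual code):

```python
# Hypothetical reconstruction of the fix: only build an httpx client when
# both the api_base URL and the use_httpx flag are set, and pass the URL
# string (not the boolean flag) as base_url.
def build_httpx_client_kwargs(api_base, use_httpx):
    """Return kwargs for httpx.Client, or None if httpx shouldn't be used."""
    if api_base and use_httpx:
        return {"base_url": api_base, "follow_redirects": True}
    return None
```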
It's a shame the 307 error still happens though.
You can try using the llm-subtrans command instead, it should be something like...
scripts/llm-subtrans.py "$file" --server https://hk.xty.app --endpoint /v1/chat/completions --chat -l Chinese [...]
It uses a very simple httpx-based client to contact the server directly, without going through OpenAI's libraries. It's set to follow redirects so maybe it will handle those 307 errors better than the OpenAI client.
Closing this issue for lack of activity - hopefully this means it is no longer a problem.
Although it is not the official API base, it works consistently and stably; I only randomly encounter a 307 error. The detailed message follows:
INFO: HTTP Request: POST https://api.xty.app/v1/chat/completions "HTTP/1.1 307 Temporary Redirect"
ERROR: Failed to translate subtitles: Error code: 307
Error: Error code: 307
Traceback (most recent call last):
  File "/home/wjx/gpt-subtrans/PySubtitle/Providers/OpenAI/ChatGPTClient.py", line 34, in _send_messages
    result = self.client.chat.completions.create(
  File "/home/wjx/miniconda3/envs/sub/lib/python3.10/site-packages/openai/_utils/_utils.py", line 270, in wrapper
    return func(*args, **kwargs)
  File "/home/wjx/miniconda3/envs/sub/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 645, in create
    return self._post(
  File "/home/wjx/miniconda3/envs/sub/lib/python3.10/site-packages/openai/_base_client.py", line 1088, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/home/wjx/miniconda3/envs/sub/lib/python3.10/site-packages/openai/_base_client.py", line 853, in request
    return self._request(
  File "/home/wjx/miniconda3/envs/sub/lib/python3.10/site-packages/openai/_base_client.py", line 930, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.APIStatusError: Error code: 307
The resolution for the 307 error is to pass an "http_client" configured for the API base provider. Since I lack coding knowledge, could the following fix be merged into the next release?
https://flowus.cn/share/bf106afc-6e3c-4b52-b7e4-bf23b3ec7587
```python
from openai import OpenAI
import httpx

client = OpenAI(
    base_url="https://api.xty.app/v1",
    api_key="sk-xxx",
    http_client=httpx.Client(
        base_url="https://api.xty.app/v1",
        follow_redirects=True,
    ),
)

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion)
```