yufeikang / raycast_api_proxy


Country, region, or territory not supported #43

Closed littleblack111 closed 3 months ago

littleblack111 commented 3 months ago

I tried using the OpenAI base URL locally (via curl) and it worked. This is the log:

raycast  | 2024-07-14 11:10:11,112 MainThread main.py        :72 INFO    : Received request to /api/v1/me
raycast  | 2024-07-14 11:10:11,114 MainThread main.py        :109 INFO    : Received request to /api/v1/ai/models
raycast  | 2024-07-14 11:10:11,115 MainThread utils.py       :95 INFO    : Received request: GET https://backend.raycast.com/api/v1/me
raycast  | 2024-07-14 11:10:11,117 MainThread utils.py       :95 INFO    : Received request: GET https://backend.raycast.com/api/v1/ai/models
raycast  | INFO:     192.168.8.2:42828 - "GET /api/v1/ai/models HTTP/1.1" 200 OK
raycast  | INFO:     192.168.8.2:42822 - "GET /api/v1/me HTTP/1.1" 200 OK
raycast  | INFO:     192.168.8.2:42838 - "GET /api/v1/currencies HTTP/1.1" 401 Unauthorized
raycast  | INFO:     192.168.8.2:42852 - "GET /api/v1/currencies/crypto?symbols=ADA,AVAX,BCH,BNB,BSV,BTC,DASH,DOGE,DOT,EOS,ETC,ETH,LTC,LUNA,MATIC,NEO,SHIB,SOL,TRX,USDT,XLM,XMR,XRP HTTP/1.1" 401 Unauthorized
raycast  | INFO:     192.168.8.2:42858 - "POST /api/v1/ai/chat_completions HTTP/1.1" 200 OK
raycast  | 2024-07-14 11:10:14,949 MainThread models.py      :291 ERROR   : OpenAI error: Error code: 403 - {'error': {'code': 'unsupported_country_region_territory', 'message': 'Country, region, or territory not supported', 'param': None, 'type': 'request_forbidden'}}
raycast  | INFO:     192.168.8.2:42866 - "GET /api/v1/currencies/crypto?symbols=ADA,AVAX,BCH,BNB,BSV,BTC,DASH,DOGE,DOT,EOS,ETC,ETH,LTC,LUNA,MATIC,NEO,SHIB,SOL,TRX,USDT,XLM,XMR,XRP HTTP/1.1" 401 Unauthorized
2024-07-14 11:13:58,380 MainThread utils.py       :95 INFO    : Received request: GET https://backend.raycast.com/api/v1/me
2024-07-14 11:13:58,380 MainThread utils.py       :107 DEBUG   : Forwarding request to https://backend.raycast.com/api/v1/me
2024-07-14 11:13:58,665 MainThread utils.py       :128 DEBUG   : Response https://backend.raycast.com/api/v1/me, status code: 200, data=b'{"id":"b95dccb9-b693-4911-9532-a96edc4cb673","name":"littleblack111","handle":"littleblack11111","bio":"","twitter_handle":"","github_handle":"","location":"","initials":"li","avatar_placeholder_color":"#D36CDD","slack_community_username":null,"slack_community_user_id":null,"created_at":1695864920,"website_anchor":"littleblack111.com","website":"https://littleblack111.com","username":"littleblack11111","avatar":"https://files.raycast.com/asel0t1ktc58g8hjctuqc58iq2at","client_flags":{"pro_plan_walkthrough_shown":true},"eligible_for_pro_features":false,"eligible_for_ai":false,"eligible_for_gpt4":false,"eligible_for_developer_hub":true,"eligible_for_bext":false,"eligible_for_file_search_beta":false,"eligible_for_ai_beta_features":false,"can_use_referral_codes":false,"eligible_for_ai_citations":true,"eligible_for_cloud_sync":false,"eligible_for_application_settings":false,"can_upgrade_to_pro":true,"can_manage_billing":false,"subscription":null,"stripe_subscription_id":null,"stripe_subscription_status":null,"stripe_subscription_interval":null,"has_running_subscription":false,"has_payments":true,"stripe_subscription_canceled_at":null,"stripe_subscription_current_period_end":null,"has_active_subscription":false,"can_cancel_subscription":false,"can_view_billing":false,"can_modify_subscription_interval":false,"any_organization_has_active_subscription":false,"any_organization_has_running_subscription":false,"any_organization_has_better_ai":false,"can_upgrade_to_better_ai":[],"better_ai_subscription_ids":[],"email":"littleblack11111@gmail.com","has_pro_features":false,"has_better_ai":false,"receive_extension_issues_weekly_summary":true,"has_developer_extensions":false,"stripe_customer_id":"cus_OuaGsGYb06z4aU","admin":false,"publishing_bot":false,"broken_raycast_client":false,"can_apply_for_free_trial":true,"organizations":[],"swag_info":null
}'
INFO:     192.168.8.2:40700 - "GET /api/v1/me HTTP/1.1" 200 OK
2024-07-14 11:13:59,041 MainThread utils.py       :128 DEBUG   : Response https://backend.raycast.com/api/v1/ai/models, status code: 200, data=b'{"models":[{"id":"openai-gpt-3.5-turbo","name":"GPT-3.5 Turbo","description":"GPT-3.5 Turbo is OpenAI\xe2\x80\x99s fastest model, making it ideal for tasks that require quick response times with basic language processing capabilities.\\n","availability":"public","status":null,"features":["chat","quick_ai","commands","api","emoji_search"],"suggestions":["chat","quick_ai","commands"],"capabilities":{"web_search":"full","image_generation":"full"},"abilities":{"web_search":{"toggleable":true},"image_generation":{"model":"dall-e-2"}},"in_better_ai_subscription":false,"model":"gpt-3.5-turbo","provider":"openai","provider_name":"OpenAI","provider_brand":"openai","speed":3,"intelligence":3,"requires_better_ai":false,"context":16},{"id":"openai-gpt-4","name":"GPT-4","description":"GPT-4 is OpenAI\xe2\x80\x99s most capable model with broad general knowledge, allowing it to follow complex instructions and solve difficult problems.\\n","availability":"public","status":null,"features":["chat","quick_ai","commands","api","emoji_search"],"suggestions":[],"capabilities":{"web_search":"full","image_generation":"full"},"abilities":{"web_search":{"toggleable":true},"image_generation":{"model":"dall-e-3"}},"in_better_ai_subscription":true,"model":"gpt-4","provider":"openai","provider_name":"OpenAI","provider_brand":"openai","speed":1,"intelligence":4,"requires_better_ai":true,"context":8},{"id":"openai-gpt-4-turbo","name":"GPT-4 Turbo","description":"GPT-4 Turbo from OpenAI has a big context window that fits hundreds of pages of text, making it a great choice for workloads that involve longer 
prompts.\\n","availability":"public","status":"beta","features":["chat","quick_ai","commands","api","emoji_search"],"suggestions":[],"capabilities":{"web_search":"full","image_generation":"full"},"abilities":{"web_search":{"toggleable":true},"image_generation":{"model":"dall-e-3"}},"in_better_ai_subscription":true,"model":"gpt-4-turbo","provider":"openai","provider_name":"OpenAI","provider_brand":"openai","speed":2,"intelligence":5,"requires_better_ai":true,"context":127},{"id":"openai-gpt-4o","name":"GPT-4o","description":"GPT-4o is the most advanced and fastest model from OpenAI, making it a great choice for complex everyday problems and deeper conversations.\\n","availability":"public","status":"beta","features":["chat","quick_ai","commands","api","emoji_search"],"suggestions":["chat"],"capabilities":{"web_search":"full","image_generation":"full"},"abilities":{"web_search":{"toggleable":true},"image_generation":{"model":"dall-e-3"},"vision":{"formats":["image/png","image/jpeg","image/webp","image/gif"]}},"in_better_ai_subscription":true,"model":"gpt-4o","provider":"openai","provider_name":"OpenAI","provider_brand":"openai","speed":3,"intelligence":5,"requires_better_ai":true,"context":127},{"id":"anthropic-claude-haiku","name":"Claude 3 Haiku","description":"Claude 3 Haiku is Anthropic\'s fastest model, with a large context window that makes it ideal for analyzing code, documents, or large amounts of text.\\n","availability":"public","status":null,"features":["chat","quick_ai","commands","api"],"suggestions":["quick_ai"],"capabilities":{"web_search":"full"},"abilities":{"web_search":{"toggleable":true},"vision":{"formats":["image/png","image/jpeg","image/webp","image/gif"]}},"in_better_ai_subscription":false,"model":"claude-3-haiku-20240307","provider":"anthropic","provider_name":"Anthropic","provider_brand":"anthropic","speed":3,"intelligence":3,"requires_better_ai":false,"context":200},{"id":"anthropic-claude-sonnet","name":"Claude 3.5 
Sonnet","description":"Claude 3.5 Sonnet from Anthropic has enhanced intelligence with increased speed. It excels at complex tasks like visual reasoning or workflow orchestrations.\\n","availability":"public","status":null,"features":["chat","quick_ai","commands","api"],"suggestions":["commands","chat"],"capabilities":{"web_search":"full"},"abilities":{"web_search":{"toggleable":true},"vision":{"formats":["image/png","image/jpeg","image/webp","image/gif"]}},"in_better_ai_subscription":true,"model":"claude-3-5-sonnet-20240620","provider":"anthropic","provider_name":"Anthropic","provider_brand":"anthropic","speed":3,"intelligence":5,"requires_better_ai":true,"context":200},{"id":"anthropic-claude-opus","name":"Claude 3 Opus","description":"Claude 3 Opus is Anthropic\'s most intelligent model, with best-in-market performance on highly complex tasks. It stands out for remarkable fluency.\\n","availability":"public","status":null,"features":["chat","quick_ai","commands","api"],"suggestions":[],"capabilities":{"web_search":"full"},"abilities":{"web_search":{"toggleable":true},"vision":{"formats":["image/png","image/jpeg","image/webp","image/gif"]}},"in_better_ai_subscription":true,"model":"claude-3-opus-20240229","provider":"anthropic","provider_name":"Anthropic","provider_brand":"anthropic","speed":1,"intelligence":4,"requires_better_ai":true,"context":200},{"id":"perplexity-llama-3-sonar-small-32k-online","name":"Llama 3 Sonar Small","description":"Perplexity\'s Llama 3 Sonar Small is built for speed. 
It quickly gives you helpful answers using the latest internet knowledge while minimizing hallucinations.\\n","availability":"public","status":null,"features":["chat","quick_ai","commands","api","emoji_search"],"suggestions":["quick_ai"],"capabilities":{"web_search":"always_on"},"abilities":{"web_search":{"toggleable":false}},"in_better_ai_subscription":false,"model":"llama-3-sonar-small-32k-online","provider":"perplexity","provider_name":"Perplexity","provider_brand":"perplexity","speed":3,"intelligence":1,"requires_better_ai":false,"context":28},{"id":"perplexity-llama-3-sonar-large-32k-online","name":"Llama 3 Sonar Large","description":"Perplexity\'s most advanced model, Llama 3 Sonar Large, can handle complex questions. It considers current web knowledge to provide well-reasoned, in-depth answers.\\n","availability":"public","status":null,"features":["chat","quick_ai","commands","api","emoji_search"],"suggestions":["quick_ai"],"capabilities":{"web_search":"always_on"},"abilities":{"web_search":{"toggleable":false}},"in_better_ai_subscription":true,"model":"llama-3-sonar-large-32k-online","provider":"perplexity","provider_name":"Perplexity","provider_brand":"perplexity","speed":2,"intelligence":2,"requires_better_ai":true,"context":28},{"id":"groq-llama3-70b-8192","name":"Llama 3 70B","description":"Llama 3 70B from Meta is the most capable openly available LLM which can serve as a tool for various text-related tasks. 
Powered by Groq.\\n","availability":"public","status":null,"features":["chat","quick_ai","commands","api","emoji_search"],"suggestions":["commands"],"capabilities":{},"abilities":{"web_search":{"toggleable":true}},"in_better_ai_subscription":false,"model":"llama3-70b-8192","provider":"groq","provider_name":"Meta","provider_brand":"meta","speed":5,"intelligence":4,"requires_better_ai":false,"context":8},{"id":"groq-mixtral-8x7b-32768","name":"Mixtral 8x7B","description":"Mixtral 8x7B from Mistral is an open-source model that demonstrates high performance in generating code and text at an impressive speed. Powered by Groq.\\n","availability":"public","status":null,"features":["chat","quick_ai","commands","api","emoji_search"],"suggestions":[],"capabilities":{},"abilities":{"web_search":{"toggleable":true}},"in_better_ai_subscription":false,"model":"mixtral-8x7b-32768","provider":"groq","provider_name":"Mistral","provider_brand":"mistral","speed":5,"intelligence":3,"requires_better_ai":false,"context":32}],"default_models":{"chat":"openai-gpt-3.5-turbo","quick_ai":"openai-gpt-3.5-turbo","commands":"openai-gpt-3.5-turbo","api":"openai-gpt-3.5-turbo","emoji_search":"openai-gpt-3.5-turbo"}}'
INFO:     192.168.8.2:40684 - "GET /api/v1/ai/models HTTP/1.1" 200 OK
2024-07-14 11:14:01,937 MainThread main.py        :45 DEBUG   : Received chat completion request: {'debug': False, 'image_generation_tool': True, 'locale': 'en-HK', 'messages': [{'author': 'user', 'content': {'text': 'hi'}}], 'model': 'gpt-4o', 'provider': 'openai', 'source': 'quick_ai', 'system_instruction': 'markdown', 'thread_id': 'F375FC6A-2DE1-41B6-8819-468BBBDE929D', 'web_search_tool': True}
2024-07-14 11:14:01,937 MainThread models.py      :280 DEBUG   : openai chat stream: True
INFO:     192.168.8.2:53486 - "POST /api/v1/ai/chat_completions HTTP/1.1" 200 OK
2024-07-14 11:14:02,001 MainThread models.py      :291 ERROR   : OpenAI error: Error code: 403 - {'error': {'code': 'unsupported_country_region_territory', 'message': 'Country, region, or territory not supported', 'param': None, 'type': 'request_forbidden'}}
littleblack111 commented 3 months ago

The base URL points to a forwarding server in a supported country. The error Country, region, or territory not supported is received when connecting to api.openai.com from a region that is not allowed (which I am in):

curl https://api.openai.com/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer sk-xx" -d '{
  "model": "gpt-3.5-turbo",
  "messages": [{"role": "user", "content": "hiiiiaaaaaa"}],
  "stream":true,
  "temperature": 0.7
}'
{"error":{"code":"unsupported_country_region_territory","message":"Country, region, or territory not supported","param":null,"type":"request_forbidden"}}
yufeikang commented 3 months ago

OpenAI can determine your location based on your IP address. You can deploy the service in a supported country or region.

littleblack111 commented 3 months ago

Yes. The base URL is deployed as a relay to OpenAI in a supported location. But this doesn't work through the proxy, while curl does.

yufeikang commented 3 months ago

If you are using Cloudflare Workers as a relay, it won't work because the worker will pass the client's information to OpenAI.
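For a self-hosted relay, one possible mitigation (a sketch, not part of this project or of Cloudflare's behaviour) is to drop the headers that reveal the original client's address before forwarding upstream, so OpenAI only sees the relay's own IP. Cloudflare, for example, adds CF-Connecting-IP and X-Forwarded-For to proxied requests:

```python
# Sketch of a header filter for a hypothetical self-hosted relay: strip
# headers that carry the original client's IP before forwarding upstream.
CLIENT_IP_HEADERS = {
    "cf-connecting-ip",   # added by Cloudflare
    "true-client-ip",     # Cloudflare Enterprise variant
    "x-forwarded-for",
    "x-real-ip",
}

def strip_client_headers(headers: dict) -> dict:
    """Return a copy of the headers with client-identifying entries removed."""
    return {k: v for k, v in headers.items() if k.lower() not in CLIENT_IP_HEADERS}
```

Note that this only helps when you control the relay process itself; a stock Cloudflare Worker forwards these headers before your code could remove them.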

littleblack111 commented 3 months ago

But it worked when I tested it via curl. Perhaps it's caused by /api/v1/currencies? From the log, it seems to keep returning 401.

yufeikang commented 3 months ago

The /api/v1/currencies API is used for currency conversion. In the latest version of Raycast, I haven't seen any calls to this API. If you have more detailed logs, please provide them to me. I just conducted a test and didn't find any information indicating that geographical location data is being transmitted to the OpenAI server through this project. Therefore, I can't provide more detailed assistance. You might want to add more logging to your relay to test further.

littleblack111 commented 3 months ago

I see. The problem seems to be that openai removed support for the OPENAI_API_BASE environment variable, so I cannot change the base URL and point it at the relay service (see https://github.com/openai/openai-python/issues/745). They changed it to OPENAI_BASE_URL.
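The rename can be illustrated with a small sketch (the relay URL is a placeholder): since openai-python v1, the client honours OPENAI_BASE_URL while the pre-v1 OPENAI_API_BASE variable is silently ignored.

```python
import os

def resolve_base_url() -> str:
    # Mirrors openai-python v1 behaviour: only OPENAI_BASE_URL is consulted;
    # the pre-v1 OPENAI_API_BASE variable has no effect.
    return os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")

# Setting the old variable does nothing in v1 — requests still go direct.
os.environ["OPENAI_API_BASE"] = "https://relay.example.com/v1"
print(resolve_base_url())  # https://api.openai.com/v1

# The new variable is picked up, so the relay is actually used.
os.environ["OPENAI_BASE_URL"] = "https://relay.example.com/v1"
print(resolve_base_url())  # https://relay.example.com/v1
```

So with only OPENAI_API_BASE set, the proxy keeps talking to api.openai.com directly, which matches the 403 seen in the log above.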

I made a PR to fix the documentation: #44