Traceback (most recent call last):
File "F:\Ai\GalTransl-v4.2.0\GalTransl\Backend\SakuraTranslate.py", line 135, in translate
async for data in ask_stream:
File "F:\Ai\GalTransl-v4.2.0\GalTransl\Backend\revChatGPT\V3.py", line 292, in ask_stream_async
raise t.APIConnectionError(
GalTransl.Backend.revChatGPT.typings.APIConnectionError: 503 Service Unavailable
Please check if there is a problem with your network connection
Please check that the input is correct, or you can resolve this issue by filing an issue
Project URL: https://github.com/acheong08/ChatGPT
[04-12 16:19:10][INFO]-> Error: 503 Service Unavailable, retrying shortly
I've tested it, and the Sakura model's backend API is definitely working:
{"function":"print_timings","id_slot":0,"id_task":64,"level":"INFO","line":339,"msg":" total time = 322.57 ms","t_prompt_processing":322.557,"t_token_generation":0.009,"t_total":322.56600000000003,"tid":"19108","timestamp":1712909484}
{"function":"update_slots","id_slot":0,"id_task":64,"level":"INFO","line":1637,"msg":"slot released","n_cache_tokens":0,"n_ctx":2048,"n_past":84,"n_system_tokens":0,"tid":"19108","timestamp":1712909484,"truncated":false}
{"function":"update_slots","level":"INFO","line":1655,"msg":"all slots are idle","tid":"19108","timestamp":1712909484}
{"function":"log_server_request","level":"INFO","line":2707,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":8637,"status":200,"tid":"11160","timestamp":1712909484}
{"function":"launch_slot_with_task","id_slot":0,"id_task":66,"level":"INFO","line":1015,"msg":"slot is processing task","tid":"19108","timestamp":1712909484}
{"function":"update_slots","id_slot":0,"id_task":66,"level":"INFO","line":1939,"msg":"kv cache rm [p0, end)","p0":0,"tid":"19108","timestamp":1712909484}
{"function":"print_timings","id_slot":0,"id_task":66,"level":"INFO","line":313,"msg":"prompt eval time = 122.31 ms / 100 tokens ( 1.22 ms per token, 817.56 tokens per second)","n_prompt_tokens_processed":100,"n_tokens_second":817.5612148959652,"t_prompt_processing":122.315,"t_token":1.22315,"tid":"19108","timestamp":1712909484}
{"function":"print_timings","id_slot":0,"id_task":66,"level":"INFO","line":329,"msg":"generation eval time = 548.85 ms / 26 runs ( 21.11 ms per token, 47.37 tokens per second)","n_decoded":26,"n_tokens_second":47.37160473133012,"t_token":21.109692307692306,"t_token_generation":548.852,"tid":"19108","timestamp":1712909484}
{"function":"print_timings","id_slot":0,"id_task":66,"level":"INFO","line":339,"msg":" total time = 671.17 ms","t_prompt_processing":122.315,"t_token_generation":548.852,"t_total":671.1669999999999,"tid":"19108","timestamp":1712909484}
{"function":"update_slots","id_slot":0,"id_task":66,"level":"INFO","line":1637,"msg":"slot released","n_cache_tokens":0,"n_ctx":2048,"n_past":125,"n_system_tokens":0,"tid":"19108","timestamp":1712909484,"truncated":false}
{"function":"update_slots","level":"INFO","line":1655,"msg":"all slots are idle","tid":"19108","timestamp":1712909484}
{"function":"log_server_request","level":"INFO","line":2707,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":8637,"status":200,"tid":"11160","timestamp":1712909484}
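The backend can also be checked independently of GalTransl by querying the server directly. A minimal sketch (the helper names here are my own, not GalTransl code; it assumes the llama.cpp server is listening on 127.0.0.1:8080 and exposes the standard `/completion` JSON API):

```python
# Direct check of the llama.cpp /completion endpoint, bypassing GalTransl.
# Assumptions: server at 127.0.0.1:8080 with the standard llama.cpp JSON API;
# build_payload/query_completion are illustrative names only.
import json
import urllib.request

def build_payload(prompt: str, n_predict: int = 64) -> dict:
    """Build a non-streaming llama.cpp /completion request body."""
    return {"prompt": prompt, "n_predict": n_predict, "stream": False}

def query_completion(base_url: str, prompt: str) -> str:
    """POST the prompt to <base_url>/completion and return the generated text."""
    body = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        base_url + "/completion",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))["content"]

# Usage (with the server running):
#   print(query_completion("http://127.0.0.1:8080", "将下面的日文文本翻译成中文:テスト"))
```

If a direct request like this returns text while GalTransl still reports 503, the failure is somewhere between GalTransl and the server (endpoint/port setting, or a proxy) rather than in the model itself.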
Any help would be appreciated.
[04-12 16:19:09][INFO]-> Input: 将下面的日文文本翻译成中文:咲來「ってか、白鷺学園だったらあたしと一緒じゃん。\nセンパイだったんですねー」
(the same 503 Service Unavailable traceback and retry message as above follows here)
Sakura model output at this point:
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 1600.00 MiB
llama_new_context_with_model: KV self size = 1600.00 MiB, K (f16): 800.00 MiB, V (f16): 800.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 1188.00 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 307.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 14.00 MiB
llama_new_context_with_model: graph nodes = 1524
llama_new_context_with_model: graph splits = 2
{"function":"init","level":"INFO","line":700,"msg":"initializing slots","n_slots":1,"tid":"19108","timestamp":1712909227}
{"function":"init","id_slot":0,"level":"INFO","line":712,"msg":"new slot","n_ctx_slot":2048,"tid":"19108","timestamp":1712909227}
{"function":"main","level":"INFO","line":2853,"msg":"model loaded","tid":"19108","timestamp":1712909227}
{"built_in":true,"chat_example":"<|im_start|>system\nYou are a helpful assistant<|im_end|>\n<|im_start|>user\nHello<|im_end|>\n<|im_start|>assistant\nHi there<|im_end|>\n<|im_start|>user\nHow are you?<|im_end|>\n<|im_start|>assistant\n","function":"main","level":"INFO","line":2878,"msg":"chat template","tid":"19108","timestamp":1712909227}
{"function":"main","hostname":"127.0.0.1","level":"INFO","line":3496,"msg":"HTTP server listening","n_threads_http":"19","port":"8080","tid":"19108","timestamp":1712909227}
{"function":"update_slots","level":"INFO","line":1655,"msg":"all slots are idle","tid":"19108","timestamp":1712909227}
{"function":"log_server_request","level":"INFO","line":2707,"method":"OPTIONS","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":8006,"status":200,"tid":"22048","timestamp":1712909267}
{"function":"launch_slot_with_task","id_slot":0,"id_task":0,"level":"INFO","line":1015,"msg":"slot is processing task","tid":"19108","timestamp":1712909267}
{"function":"update_slots","id_slot":0,"id_task":0,"level":"INFO","line":1939,"msg":"kv cache rm [p0, end)","p0":0,"tid":"19108","timestamp":1712909267}
Program settings
common:
  saveLog: false # Write logs to a file? [True/False]
  workersPerProject: 1 # Threads: number of files translated concurrently (a single file is always single-threaded)
  language: "ja2zh-cn" # Source language 2(to) target language. [zh-cn/zh-tw/en/ja/ko/ru/fr]
  linebreakSymbol: "\r\n" # Line-break symbol this project uses in its JSON; used for problem detection and auto-fix, does not affect translation.
  skipH: false # Skip sentences that may trigger sensitive-word detection. [True/False]
  skipRetry: false # If enabled, skip the retry loop when result parsing fails and use "Fail Translation" as a placeholder. [True/False]
  retranslFail: false # On restart, retranslate all "Fail Translation" sentences. [True/False]
  retranslKey: "" # On restart, retranslate sentences whose problem or pre_jp field contains this keyword, e.g. "残留日文"
  gpt.numPerRequestTranslate: 10 # Sentences per translation request; recommended value < 15
  gpt.streamOutputMode: true # Streaming output; has no effect with multiple threads. [True/False]
NewBing/GPT4
gpt.enableProofRead: false # (NewBing/GPT4) Enable post-translation proofreading. (Not recommended; long unmaintained) [True/False]
gpt.numPerRequestProofRead: 7 # (NewBing/GPT4) Sentences per proofreading request; changing this is not recommended
gpt.recordConfidence: false # (NewBing/GPT4) Record confidence and doubtful sentences; turning this off with the GPT4 API saves tokens. (To be deprecated) [True/False]
GPT3.5/GPT4
gpt.restoreContextMode: true # (GPT3.5/4) On restart, restore the previous round's translated context. [True/False]
Plugin settings
plugin:
  filePlugin: file_galtransl_json # File-reading plugin; file_galtransl_json by default
  textPlugins: # Text-processing plugins, executed in order
    - (project_dir)text_common_2333 # Example; prefix the "- " with # to disable a plugin
Proxy settings
proxy:
  enableProxy: false # Enable proxy? [True/False]
  proxies:
Dictionary settings
dictionary:
  defaultDictFolder: Dict # Common dictionary folder, relative to the program directory; an absolute path also works
  usePreDictInName: false # Apply the pre-translation dictionary to the name field; can be used to rename characters [True/False]
  usePostDictInName: false # Apply the post-translation dictionary to the name field; can be used to localize names [True/False]
All dictionaries, and the entries within each dictionary, are applied in order from top to bottom.
Pre-translation dictionary
preDict:
GPT dictionary
gpt.dict:
Post-translation dictionary
postDict:
Translation backend settings
backendSpecific:
  GPT35: # GPT3.5 API / third-party claude-3-sonnet relay
    tokens:
Automatic problem analysis settings
problemAnalyze:
  problemList: # List of problems to detect