machinewrapped / gpt-subtrans

Open Source project using LLMs to translate SRT subtitles

Is it compatible with SakuraLLM? #157

Open · DDXDB opened this issue 3 months ago

DDXDB commented 3 months ago

SakuraLLM is a specialized Japanese-to-Chinese LLM. It can be deployed locally on Windows via llama.cpp, but it has special prompt requirements, so the existing gpt-subtrans can hardly be used with it. SakuraLLM repository: https://github.com/SakuraLLM/Sakura-13B-Galgame

Prompt construction:

v0.8

input_text = ""  # the Japanese text to translate
query = "将下面的日文文本翻译成中文:" + input_text  # "Translate the Japanese text below into Chinese:"
prompt = "<reserved_106>" + query + "<reserved_107>"
v0.9

input_text = ""  # the Japanese text to translate
query = "将下面的日文文本翻译成中文:" + input_text  # "Translate the Japanese text below into Chinese:"
# System prompt (in Chinese): "You are a light-novel translation model that can
# translate Japanese into fluent Simplified Chinese in the style of Japanese light
# novels, using context to resolve pronouns correctly and never adding pronouns
# that are not in the original."
prompt = (
    "<|im_start|>system\n"
    "你是一个轻小说翻译模型,可以流畅通顺地以日本轻小说的风格将日文翻译成简体中文,"
    "并联系上下文正确使用人称代词,不擅自添加原文中没有的代词。<|im_end|>\n"
    "<|im_start|>user\n" + query + "<|im_end|>\n"
    "<|im_start|>assistant\n"
)
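For reference, here is a minimal sketch of sending the v0.9 prompt to a local llama.cpp server via its /completion endpoint; the host, port, sampling parameters and sample input are assumptions for illustration, not something SakuraLLM prescribes:

import requests

input_text = "今日はいい天気ですね。"  # hypothetical Japanese line to translate
query = "将下面的日文文本翻译成中文:" + input_text
prompt = (
    "<|im_start|>system\n"
    "你是一个轻小说翻译模型,可以流畅通顺地以日本轻小说的风格将日文翻译成简体中文,"
    "并联系上下文正确使用人称代词,不擅自添加原文中没有的代词。<|im_end|>\n"
    "<|im_start|>user\n" + query + "<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# llama.cpp's built-in server accepts raw prompts on /completion and returns
# the generated text in the "content" field of the JSON response.
response = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": prompt, "n_predict": 256, "stop": ["<|im_end|>"]},
)
print(response.json()["content"])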
machinewrapped commented 3 months ago

I have a branch that adds support for llama.cpp via ollama (https://github.com/machinewrapped/gpt-subtrans/tree/ollama-support), but it's currently on hold because the ollama server stops responding after the first request, and I need help understanding why (https://github.com/ollama/ollama-python/issues/109).

Customising prompts for the model is possible, but it would require modifying the code: the ollama provider would need to return a custom TranslationClient that implements _request_translation and formats messages according to the model's requirements. There may be a way to make this data-driven so it can be done without code changes, but that will need some thought.
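As a rough illustration of that shape, here is a hypothetical client sketch; the import paths, constructor, model tag and _request_translation signature are all assumptions based on the class names above, not the actual gpt-subtrans interface:

# Hypothetical sketch only: the import paths, constructor and
# _request_translation signature are assumed, not the real gpt-subtrans API.
from ollama import Client
from PySubtitle.TranslationClient import TranslationClient  # assumed module path

SAKURA_SYSTEM_PROMPT = (
    "你是一个轻小说翻译模型,可以流畅通顺地以日本轻小说的风格将日文翻译成简体中文,"
    "并联系上下文正确使用人称代词,不擅自添加原文中没有的代词。"
)

class SakuraTranslationClient(TranslationClient):
    def __init__(self, settings):
        super().__init__(settings)
        self.client = Client(host="http://localhost:11434")  # default ollama port

    def _request_translation(self, prompt, temperature=0.1):
        # Wrap the request in SakuraLLM's expected system/user structure
        # instead of the generic gpt-subtrans instruction prompt.
        response = self.client.chat(
            model="sakura-13b",  # hypothetical model tag
            messages=[
                {"role": "system", "content": SAKURA_SYSTEM_PROMPT},
                {"role": "user", "content": "将下面的日文文本翻译成中文:" + prompt},
            ],
            options={"temperature": temperature},
        )
        return response["message"]["content"]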

If the model returns the translation in a specific format rather than following the instructions, the client would need to provide a custom TranslationParser as well. That's probably not difficult: subclassing TranslationParser and implementing ProcessTranslation to extract the text with a regex should be enough. In fact, I'll make some changes so it's easier to create a parser with custom regex templates, which could then be data-driven too.
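Such a parser might look something like this sketch; the base-class interface, the shape of the translation argument and the return type are assumptions, and the regex is illustrative rather than tuned to Sakura's actual output:

# Hypothetical sketch only: the base-class interface and the shape of the
# `translation` argument are assumed, not the real gpt-subtrans API.
import re
from PySubtitle.TranslationParser import TranslationParser  # assumed module path

class SakuraTranslationParser(TranslationParser):
    # Illustrative pattern: one numbered translation per line, e.g. "1. 译文"
    LINE_PATTERN = re.compile(r"^(\d+)[.)]\s*(.+)$", re.MULTILINE)

    def ProcessTranslation(self, translation):
        # Extract {line number: translated text} pairs from the raw response
        # instead of expecting the standard structured format.
        return {
            int(number): text.strip()
            for number, text in self.LINE_PATTERN.findall(translation.text)
        }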

TL;DR: yes, but not without writing code as it stands.

DDXDB commented 3 months ago

The main branch can already communicate with llama.cpp via the OpenAI API, and it appears to work. The main problem is the prompt; please formally support llama.cpp (not the llama.cpp Python bindings) if possible.
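For reference, a minimal sketch of that setup, pointing the openai client at llama.cpp's OpenAI-compatible server; it assumes a llama.cpp server (e.g. llama-server) is listening on port 8080 with a chat template that applies the ChatML tags server-side:

from openai import OpenAI

# The API key is required by the client but ignored by a local llama.cpp server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

input_text = "今日はいい天気ですね。"  # hypothetical Japanese line to translate
completion = client.chat.completions.create(
    model="sakura-13b",  # llama.cpp serves whatever model it loaded; the name is just a label
    messages=[
        {"role": "user", "content": "将下面的日文文本翻译成中文:" + input_text},
    ],
)
print(completion.choices[0].message.content)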