SkyLull opened 2 months ago
I think we can use structured output to request reasoning alongside each translation. By passing only the corresponding source paragraph and prompting for a chain-of-thought summary in zh-TW first, we can eliminate the need to give the model the whole RST source (which was only there to give the LLM global context).
This might also improve translation quality.
However, we'd need additional code to decide where to cut a paragraph, and some more experiments (duh). Rough sketch of the idea below.
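A minimal sketch of what I have in mind, using the OpenAI Python SDK's structured-output parsing. The model name, the field names in `TranslationResult`, and the prompt wording are placeholders for illustration, not what the current code does:

```python
from openai import OpenAI
from pydantic import BaseModel

class TranslationResult(BaseModel):
    reasoning_zh_tw: str    # CoT summary of the paragraph, written in zh-TW first
    translation_zh_tw: str  # final Traditional Chinese translation

client = OpenAI()

def translate_paragraph(paragraph: str) -> TranslationResult:
    """Translate a single RST paragraph, asking the model to reason first."""
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-mini",  # assumed model; use whatever the project configures
        messages=[
            {"role": "system", "content": (
                "You translate Python documentation from English to Traditional "
                "Chinese (zh-TW). First summarize the paragraph in zh-TW as your "
                "reasoning, then produce the final translation. Preserve all "
                "reStructuredText markup."
            )},
            {"role": "user", "content": paragraph},
        ],
        response_format=TranslationResult,
    )
    return completion.choices[0].message.parsed
```

The structured response forces the model to emit the reasoning field before the translation field, which is where the CoT-first effect would come from.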
Right now the code puts the whole RST document into the prompt, which consumes a lot of input tokens. Maybe the code could first ask GPT to summarize the whole RST, and then pass only that summary plus the paragraph being translated, to reduce token usage?
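Roughly like this, assuming the same OpenAI client as above; the function names, model, and prompt wording are placeholders:

```python
from openai import OpenAI

client = OpenAI()

def summarize_rst(rst_source: str) -> str:
    """One-time call per file: compress the whole RST document into a short summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system", "content": (
                "Summarize this reStructuredText document in a few sentences so a "
                "translator knows its overall topic and terminology."
            )},
            {"role": "user", "content": rst_source},
        ],
    )
    return response.choices[0].message.content

def translate_with_summary(summary: str, paragraph: str) -> str:
    """Per-paragraph call: pass only the summary plus the paragraph itself."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Translate the paragraph to Traditional Chinese (zh-TW), keeping "
                "reStructuredText markup intact. Document context:\n" + summary
            )},
            {"role": "user", "content": paragraph},
        ],
    )
    return response.choices[0].message.content

# summary = summarize_rst(whole_rst)
# translated = [translate_with_summary(summary, p) for p in paragraphs]
```

Each per-paragraph prompt would then cost roughly summary + paragraph tokens instead of whole-document + paragraph tokens, at the price of one extra summarization call per file.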