I have two ideas to improve the dictionary:

1 - When the dictionary is large it returns an error:
```
400
{
  error: {
    message: "This model's maximum context length is 4097 tokens. However, your messages resulted in 5904 tokens. Please reduce the length of the messages.",
    type: 'invalid_request_error',
    param: 'messages',
    code: 'context_length_exceeded'
  }
}
```
I think the full dictionary is sent with every string to be translated, so it would be good to filter it down to only the entries that actually occur in the string being translated (see the sketch below).
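Something like this is what I have in mind; a minimal sketch in TypeScript, assuming the dictionary is a flat source-term → translation map (`filterDictionary` is a hypothetical name, not gpt-po's actual API):

```typescript
// Hypothetical sketch: keep only dictionary entries whose source term
// actually appears in the string about to be translated, so the prompt
// stays well under the model's context limit.
type Dictionary = Record<string, string>;

function filterDictionary(dictionary: Dictionary, text: string): Dictionary {
  const lowered = text.toLowerCase();
  const filtered: Dictionary = {};
  for (const [term, translation] of Object.entries(dictionary)) {
    // Case-insensitive substring match; a real implementation might
    // prefer word-boundary or tokenized matching instead.
    if (lowered.includes(term.toLowerCase())) {
      filtered[term] = translation;
    }
  }
  return filtered;
}

// Usage: only the matching entries get sent along with each string.
// const relevant = filterDictionary(fullDictionary, msgid);
```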
2 - When I work with many languages I have to swap the dictionary for each language. It would be nice to have one dictionary per language (e.g. dictionary-spanish.json, dictionary-italian.json, dictionary-simplified-chinese.json) or to be able to pass the path of the dictionary through a parameter (e.g. gpt-po --po it_IT.po --lang "italian" --dic <dictionary>). A sketch of the lookup I imagine follows.
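For example, the resolution order could be: an explicit --dic path first, then a per-language file, then a default dictionary.json. A minimal sketch, assuming Node.js; the `resolveDictionaryPath` helper and the dictionary.json fallback name are my assumptions, not anything gpt-po provides today:

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical sketch: decide which dictionary file to load.
// An explicit --dic path wins; otherwise try a per-language file
// (dictionary-<language>.json); finally fall back to dictionary.json.
function resolveDictionaryPath(
  language: string,
  dicOption?: string,
  baseDir: string = process.cwd()
): string {
  if (dicOption) {
    return dicOption;
  }
  const perLanguage = path.join(
    baseDir,
    `dictionary-${language.toLowerCase().replace(/\s+/g, "-")}.json`
  );
  if (fs.existsSync(perLanguage)) {
    return perLanguage;
  }
  return path.join(baseDir, "dictionary.json");
}

// e.g. gpt-po --po it_IT.po --lang "italian" --dic ./my-dict.json
//   resolveDictionaryPath("italian", "./my-dict.json") -> "./my-dict.json"
//   resolveDictionaryPath("simplified chinese")
//     -> "<cwd>/dictionary-simplified-chinese.json" (if it exists)
```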