Translators need to be able to provide translations for a word's glosses, descriptions, notes, and other fields.
[ ] Our first step is to open a custom notebook for the file that stores a lemma, so the translator can translate the descriptions and glosses for each meaning of the word. Those changes should be written back to the file when the notebook is saved.
[ ] We'll need to identify and build an editor for strings that are shared throughout the lexicon. Things like the names of semantic domains and parts of speech should be translated once. We may need to adapt the data format we receive from UBS in order to facilitate this kind of normalization.
[ ] We'll need to identify what supporting information the notebook should display for each field to aid the translator's work.
[ ] Once the notebook is working for a minimal set of fields, we can expand to the remaining fields that need to be translated.
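As a rough illustration of the first step, the sketch below shows one possible shape for a lemma file and a merge function that writes a translator's notebook edits back into the entry on save. All field names (`lemma`, `senses`, `gloss`, `definition`) and the `applyTranslations` helper are assumptions for illustration, not the actual UBS format or our API.

```typescript
// Hypothetical data model for a lemma entry. Field names are
// placeholders, not the real UBS schema.
interface Sense {
  id: string;
  gloss: string;       // source-language gloss
  definition: string;  // source-language description
}

interface LemmaEntry {
  lemma: string;
  senses: Sense[];
}

// A translator's notebook edits for one sense; omitted fields are left untouched.
interface SenseTranslation {
  senseId: string;
  gloss?: string;
  definition?: string;
}

// Merge notebook edits into a copy of the entry, mirroring the
// "save notebook -> write back to file" step.
function applyTranslations(entry: LemmaEntry, edits: SenseTranslation[]): LemmaEntry {
  const bySense = new Map(edits.map(e => [e.senseId, e]));
  return {
    lemma: entry.lemma,
    senses: entry.senses.map(s => {
      const e = bySense.get(s.id);
      return e
        ? { ...s, gloss: e.gloss ?? s.gloss, definition: e.definition ?? s.definition }
        : s;
    }),
  };
}

// Example: translating one of two senses while leaving the other untouched.
const entry: LemmaEntry = {
  lemma: "λόγος",
  senses: [
    { id: "s1", gloss: "word", definition: "a unit of speech" },
    { id: "s2", gloss: "reason", definition: "a rational account" },
  ],
};

const saved = applyTranslations(entry, [{ senseId: "s1", gloss: "palabra" }]);
```

In a real extension this merge would live in the notebook serializer, so that serializing the notebook produces the updated lemma file rather than a separate document.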
Other features to consider:
- Incorporate other extensions in the Codex ecosystem, such as spell checking and an AI copilot
- Show the translation history for a single field inline