HB9ocq opened 7 years ago
Hi, while I know how important internationalization is, and how attractive the use of multiple languages in the user interface and documentation seems for the user, there are many risks associated with it:
Each change in documentation then needs to be translated into multiple languages which the contributor of the change typically will not be able to write/read. So we risk inconsistency. New documentation/menu entries will only be available in a subset of languages; which should the German user fall back to if there is only English and French, etc.? Keeping all the information consistent is a lot of effort.
In summary, I don't think the effort is spent in the right place. The technical approach, btw, is not a problem I see, just to be clear about that. The solution sketched above makes sense, but it must be embedded in the "standard" Eclipse (and CoIDE) workflows.
So my question is: why not stick to English? What is the benefit of multilingual menu help texts? And will the benefit outweigh the problems associated with it? You have to see it in the long run. It is hard to get rid of something that, once started, is expected to stay (even if it has only been done in moderate quality).
In a nutshell, I don't think the effort is worth it. I would rather aim for well-maintained, easy-to-read documentation in English plus strict use of English only in the UI. And I would think about extracting it in a form that is easily machine-translatable. That way we can offload the language maintenance to a machine. I know automatic translations are not even near perfect, but at least everyone can trigger such a translation after each change without investing much time.
BTW, the removal of the menu ids was opportunistic, since these have not been maintained or used for a while now. Nothing against using these again with a proper system for assigning the "right" id to them. The flash size is not a problem here.
73 Danilo
The reason for thinking about multi-language support is mainly the handbook texts rather than the language in the running firmware. In our German discussion group there are many, many OMs who do not speak / cannot read English texts easily. That's the reason why they are discussing in our German forum and not in the Yahoo NG... The same applies internationally. I get many emails from all over the world regarding mcHF, and most of the users first tell me that they do not understand English texts well. Many problems came up in the Yahoo NG because of misunderstandings.
Of course I know it is difficult to keep it up to date. But if there is no possibility to create translated menu texts, there never will be any. And we will never know whether it could have worked.
So I am thinking at this time only about the possibility of creating different handbooks.
73 de Andreas, DF8OE
I can imagine a possibility where contributions can be made using a web interface. Users who are logged in to GitHub can open a page where they can select menu entries, which then appear on a new page in English. There are banners selectable for the different languages. If a translation is already available it is automatically put into the text editing window (if not, the window is empty). Users can now type in their translations and save the result. During my builds a bash script tests whether something has changed and inserts the text automatically.
So there are four stages: 1) the existing structure (which stays untouched); 2) an only-online-reachable structure where users who need no programming knowledge or special tools can insert their translations; 3) a bash script that creates local files for each language from the web server; 4) a possibility to build handbooks from these new files.
I can do the server/PHP/editing stuff. And I can offer my own server to store the contents (GitHub does not work for this because I need PHP and MySQL access).
EVERYBODY can work on translations.
Handbooks get updated regularly and hopefully grow in different languages.
73 de Andreas, DF8OE
I agree with all of Danilo's points: maintenance costs manpower, and once a feature is there, expectations are triggered. The technical implementation is then more a matter of taste than a really hard problem :-)
Even though in my issue-opening post I wrote about I18N in the transceiver, it is probably best to tackle the "handbook" part first (and maybe stop there), to avoid dealing with tedious variant builds of the binaries and to not lock out 0.5MB FLASH devices.
So the first and most important work is to get some critical amount of reasonably well translated texts.
@df8oe - with all those users asking for, say, German: where are their inputs for translated texts? I for one am ready to accept any written form of such input, be it in a dedicated thread in the I40 forum, or here in GH as GitHub Pages or the like.
In this spirit I think putting up an "empty table" (in the GH-Wiki) to solicit user contributed translations could possibly lower the entry barrier.
A possible long-run vision may lie in "dynamically reloadable" UI texts.
Let me explain first and then try to circle the necessary preconditions on how to get there. (btw: the ideas here may be of use for "channel storage" and "trx programming transfer from one mcHF to another" too)
Having dynamically reloadable texts would allow "overloading" the default UI texts from "some source". The easiest "some source" from the user's point of view would be reading a text file off a mass storage device (USB-connected pen drive or (u)SD card). If mcHF were able to put an empty template file onto the mass storage, every user could then edit this file and put in his favorite texts. The user can then keep them for himself or feed them back to the developers to make them available to the world - so much for offloading the translation manpower to the public :-) If mcHF finds a translated text in a (partially filled) file, it displays it; otherwise it stays with the built-in default text (presumably English). Switching to any language is then a matter of changing the mass storage device, or rather THAT file on it.
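A minimal sketch of that overlay idea in C - all names (`ui_text_get`, `ui_text_set_overlay`, `MAX_UI_TEXTS`, the sample menu labels) are illustrative assumptions, not existing mcHF code; the actual file parsing off the mass storage device is left out:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: built-in English defaults plus an overlay table
 * that a (not shown) parser fills at boot from a text file on the mass
 * storage device. An empty overlay slot means "no translation loaded". */
#define MAX_UI_TEXTS 256
#define MAX_TEXT_LEN 32

static const char* const ui_default_texts[MAX_UI_TEXTS] = {
    [0] = "DSP NR Strength",        /* sample labels, not real entries */
    [1] = "RX/TX Freq Xlate",
};

static char ui_overlay_texts[MAX_UI_TEXTS][MAX_TEXT_LEN];

/* Called by the (hypothetical) file parser for each "idx=text" line. */
void ui_text_set_overlay(unsigned idx, const char* text)
{
    if (idx < MAX_UI_TEXTS) {
        strncpy(ui_overlay_texts[idx], text, MAX_TEXT_LEN - 1);
        ui_overlay_texts[idx][MAX_TEXT_LEN - 1] = '\0';
    }
}

/* The UI asks here for every label: translated text if present,
 * built-in default otherwise. */
const char* ui_text_get(unsigned idx)
{
    if (idx >= MAX_UI_TEXTS) return "";
    if (ui_overlay_texts[idx][0] != '\0')
        return ui_overlay_texts[idx];
    return ui_default_texts[idx] ? ui_default_texts[idx] : "";
}
```

The RAM cost is the overlay table (here 256 * 32 bytes); keeping the defaults in FLASH and only the overlay in RAM matches the "fallback to built-in English" behaviour described above.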
As is obvious, what is needed is a `const char[4] id` - a mnemonic clearly has advantages over a purely numeric (`enum`) one.
Ok, enough of my brain farts, err, brainstorming for now.
I find it a pity that the short 3-char "identifier" `MenuDescriptor.id` has been removed without replacement. In my opinion it would come in handy when moving toward I18N/multilanguage. I know this id is somewhat redundant with the (unfortunately still) anonymous enum of all menus. I read that the id was not maintained anymore and that it wasn't displayed anywhere... well, that's how it is now.
Additionally, the "dual interpretation" of `const uint16_t number` is better modelled with a `union`, as sometimes its value is from the anonymous enum of the menus and at other times it is an `enum MENU_KIND`. Even though we are in the (embedded) C world: let's leverage the compiler's help with type safety - right?
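To make the `union` idea concrete, here is a sketch; the enum values, field names, and `MenuDescriptor` layout are illustrative assumptions based on the discussion above, not the actual source:

```c
#include <stdint.h>

/* Sketch: a tag says which interpretation of the old "number" field is
 * valid, and the union gives each interpretation its own typed member.
 * All names below are hypothetical. */
typedef enum { MENU_STOP, MENU_TEXT, MENU_GROUP, MENU_ITEM } MENU_KIND;
typedef enum { MENU_DSP = 0, MENU_CW, MENU_DISPLAY } MenuId;

typedef struct {
    MENU_KIND kind;           /* the tag */
    union {
        MenuId   menu;        /* valid when kind is MENU_GROUP/MENU_ITEM */
        uint16_t raw;         /* other interpretations of the old field */
    } number;
    const char* label;
} MenuDescriptor;
```

With suitable warnings enabled (e.g. GCC's `-Wenum-conversion`), mixing up the two value spaces now gets flagged instead of silently sharing one `uint16_t`.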
Multilanguage
A simple approach could be to multiply today's file `drivers/ui/menu/ui_menu_structure.c` into one copy per language to be supported. Controlled by an environment variable (a kind of LANG), a run of make can produce a FW binary containing the specified language for the LCD (let's start with only one language at a time on the transceiver) by listing only one language file in the list of all source files (variable SRC in the Makefile), e.g. by copying the appropriate language file named WITH the language shorthand over the one WITHOUT a language shorthand.
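As a variation on the copy-over approach, the selection could also happen in the C preprocessor, with make passing a define instead of swapping files; the `MCHF_LANG_*` macro names and the sample labels are assumptions for illustration only:

```c
#include <string.h>

/* Sketch: make would pass e.g. CFLAGS += -DMCHF_LANG_DE, and exactly one
 * language's table ends up in the binary; English is the default when no
 * language macro is defined. Labels are illustrative, not real entries. */
#if defined(MCHF_LANG_DE)
static const char* const ui_menu_label_dsp = "DSP Einstellungen";
#elif defined(MCHF_LANG_FR)
static const char* const ui_menu_label_dsp = "Reglages DSP";
#else   /* English default build */
static const char* const ui_menu_label_dsp = "DSP Settings";
#endif
```

Either way (file swap or define), only one language's strings occupy FLASH, which keeps the 0.5MB devices in the game.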
Independent of this I can adapt my Python scripts in `support/ui/menu` so that they will generate MD tables and GV graphs for the "handbook" in ALL available languages in parallel at any time. My proposition for file names with language shorthands is `*_mdtable_${LANG}.md` and `*_graph_${LANG}.(gv|svg|png)`.
The advantage, as per my own experience, is that using a "diff view" courtesy of the editor of your choice, two randomly picked languages can be edited side by side. This eases creation and maintenance of translated texts A LOT.
It is clear that this is redundant given today's usage of 9 distributed arrays holding menu data. If later more/all languages find their way into the transceiver (e.g. devices with 2MB FLASH), it is also clear that some work must be done, as up to 9 distributed arrays * number of languages would be needed - or alternatively a more clever data structure, which may also come with a fallback to English for missing translations of texts.
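Such a "more clever data structure" with English fallback could look like this minimal sketch - one flat string table per language, `NULL` marking a missing translation; table contents, index names, and `menu_text()` are illustrative assumptions:

```c
#include <stddef.h>

/* Sketch: one table per language, indexed by a common text id.
 * NULL means "not translated yet" and falls back to English. */
enum { TXT_DSP, TXT_CW, TXT_COUNT };

static const char* const texts_en[TXT_COUNT] = { "DSP Settings", "CW Mode" };
static const char* const texts_de[TXT_COUNT] = { "DSP Einstellungen", NULL };

/* Selected at runtime (or at build time for single-language binaries). */
static const char* const* active_lang = texts_de;

const char* menu_text(unsigned idx)
{
    if (idx >= TXT_COUNT) return "";
    const char* t = active_lang[idx];
    return t ? t : texts_en[idx];   /* English fallback for gaps */
}
```

This keeps partially translated languages usable from day one, which lowers the entry barrier for contributors mentioned earlier in the thread.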
To be on the safe side I'd use additional per-language macros `IoMenuDesc${LANG}(..)`.
Additional Thoughts (no particular order)
- stashing texts of more than one language into the very same file `ui_menu_structure.c` (e.g. using multiple macros like `UiMenuDesc(..)`) makes editing VERY UGLY. Already the very long lines today are barely acceptable.
- with the now disappeared/removed `const char[4] id` there would have been a mnemonic indirection (instead of just the anonymous numeric enum of the menus) so as to further factor out handbook texts into additional language files. Of course this runs against the now very pleasant practice of having to edit exactly one and only one file, `ui_menu_structure.c`.
- how about going the opposite direction: all (text) data about the menus gets edited primarily in e.g. Python (or JSON, or ...); the build process then generates C source accordingly (what today is `ui_menu_structure.c`). Build runs can then be controlled by characteristics of the target HW (size of FLASH) and use source for one or for multiple languages.
- the extracted English-only text parts, which today end up in the MarkDown table at `ui_menu_structure_mdtable.md`, weigh in(*) at around the following figures:
(* the computer did the counting, down to the byte. But due to the MarkDown and all the whitespace I made some deliberate simple reductions - and so far it is just ASCII, no UTF-8 nor Unicode... with packed strings some more space savings could be achieved...)
73 de Stephan HB9ocq