Open snomos opened 9 months ago
What you are suggesting here is a wholesale change from .ftl to .po files, even though Fluent was what was proposed and agreed to in the tender. As such, this is not a bug but an enhancement.
https://github.com/projectfluent/fluent/wiki/Fluent-vs-gettext
If .ftl fills the place of .po files, that is nice. The question then becomes: is it possible to handle .md translation using the Fluent format, e.g. by turning paragraphs in .md files into Fluent units? Then, if just part of the original .md doc is changed, we can mark those changed translations as fuzzy, and the translator knows exactly what should be changed.
Yes, easily. As described in the documentation, in an .mdx file you have access to the t function, which gives you strings from .ftl files.
Example:

`index.en.ftl`:

```fluent
foo =
    This is an entire paragraph.
bar =
    This is a second paragraph.
```

`index.mdx`:

```mdx
{t("foo")}
{t("bar")}
```
> we can mark those changed translations as fuzzy, and the translator knows exactly what should be changed.
The .ftl format supports arbitrary comments prefixed with # anywhere in the document, so if that is the workflow you prefer, you can slap a # fuzzy comment on any modified strings.
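A minimal sketch of what such a convention could look like (the message and comment text here are illustrative, not an existing convention in this codebase):

```fluent
# fuzzy: English source paragraph was edited; translation needs review
foo =
    This is an entire paragraph, recently changed upstream.
```

Since Fluent comments attach to the message that follows them, a tool or a translator scanning for `# fuzzy` can find exactly the strings that need attention.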
Final comment, as I'm going to finish for the weekend now: the .ftl files can live basically anywhere in the codebase and they will be detected. However, they overlay the .ftl files of parent directories, so keeping .ftl files local to articles, components or layouts is not a mistake.
It means that strings that aren't meant to be used globally live closest to the actual component consuming them. This means you do not have to think up unique identifiers for specific pages, as the "local scope" of these strings will never interfere with other components and their own local translations.
You can test this by copying the example, creating say "example2", modifying the content of those strings but not the identifiers, and see that example and example2 do not interfere with each other even with the same string ids.
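A hypothetical illustration of that scoping (the paths and message ids are invented for the example):

```fluent
# pages/example/index.en.ftl
greeting = Hello from example

# pages/example2/index.en.ftl
greeting = Hello from example2
```

Each page's `t("greeting")` resolves against its own local file first, so the shared id never collides between the two pages.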
So what I am really asking for is something to help manage the localisation process. Just storing the .ftl files in git does not help with that, even with all the Fluent benefits. This is not a critique of Fluent, or of the present implementation, just an observation that it is not enough.
Something like Pontoon perhaps. Or whatever. Something that:
This may be overkill, but even now it is too much work for me to manage the localisations. I don't want that work; I just want to tell localisers: go here, localise. Nothing else. At the moment I am not able to.
This article looks promising, but then does not follow through. Irritating.
I have tested Pontoon on my own machine against Borealium, and it looks promising.
Very good! :)
Most aspects of the site and setup are very easy and nice to work with. There is one exception: localisation. Although the general goal of reducing the localisation effort as much as possible was achieved, the work still needs to be done, and it is not easy, for a couple of reasons:
A possible solution is a tool that extracts all and only the strings to be translated, together with some context info, and stores them in a separate file (possibly in a separate repository, for more relaxed write access). Then another tool merges the localisations back. The extracted strings could be stored in .po files (suggested by @albbas), since support for localising such files is good, and it makes it easy to do incremental translations, identify source language changes in parts of a document (as opposed to the whole document), etc. In the case of markdown content, something like mdpo could be used. This could be considered a bug, since support for easy localisation (with references to existing tools within the GiellaLT infra) was part of the public tender specification.
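The extract step could be sketched roughly as follows. This is only a toy illustration of the idea, not an existing tool: it handles only simple single-line `id = value` Fluent messages (no multiline values, attributes or placeables), and the `msgctxt` convention for round-tripping is an assumption, not an established format.

```python
import re

# Matches simple single-line Fluent messages of the form `id = value`.
# Multiline values, attributes and placeables are out of scope here.
MSG_RE = re.compile(r"^([A-Za-z][A-Za-z0-9_-]*)\s*=\s*(.+)$")

def extract_messages(ftl_text):
    """Collect `id = value` pairs from Fluent source text."""
    messages = {}
    for line in ftl_text.splitlines():
        match = MSG_RE.match(line)
        if match:
            messages[match.group(1)] = match.group(2).strip()
    return messages

def to_po(messages, source_name):
    """Render extracted strings as minimal PO entries, keeping the
    Fluent id in msgctxt so translations can be merged back later."""
    entries = []
    for msg_id, value in messages.items():
        entries.append(
            f'#: {source_name}\n'
            f'msgctxt "{msg_id}"\n'
            f'msgid "{value}"\n'
            'msgstr ""\n'
        )
    return "\n".join(entries)
```

A real implementation would want a proper Fluent parser and a proper PO library rather than regexes and string formatting, but the shape of the pipeline (extract, translate, merge back) would be the same.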