redbadger / crux

Cross-platform app development in Rust
https://redbadger.github.io/crux/
Apache License 2.0

Question about adding translation resources from Core to Shell #166

Closed pjankiewicz closed 7 months ago

pjankiewicz commented 8 months ago

Hi Crux team. Great project! I'm looking for a way to provide the Shell with translations. How do you think it should be exposed? I already set the language in the Core from the app settings, and in the Core there is a "tr!" macro that handles the translations. This works for the translations that are part of the ViewModel. But there are also translations that are part of the interface itself (button names, section names, etc.) and translations of enum variants.

I had a couple of ideas:

  1. Add an additional entry point to expose synchronous functions from the Core - this is like a reverse capability - it would mean adding a new function to the UDL.
  2. Send an Event from the Shell to request the translation, and then maybe use the KV capability to store it? Then read it from KV in the Shell? - this seems like a very roundabout way, but it would work without any changes to Crux.
  3. Share the translation resources in both the Core and the Shell - I don't like this because I would have to guarantee that the Shell can also use https://projectfluent.org.
  4. I was also playing around with custom serialisers for enums that would convert the variant name into a translated name, which could actually work, but only for enums (rough sketch below).
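
A rough sketch of idea 4, assuming the "tr!" macro returns a translated String (the enum and the keys here are just examples):

use serde::{Serialize, Serializer};

// Serialise the variant as its translated display string instead of its Rust name.
pub enum Difficulty {
    Beginner,
    Intermediate,
    Advanced,
}

impl Serialize for Difficulty {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        let key = match self {
            Difficulty::Beginner => "difficulty-beginner",
            Difficulty::Intermediate => "difficulty-intermediate",
            Difficulty::Advanced => "difficulty-advanced",
        };
        serializer.serialize_str(&tr!(key))
    }
}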

Generally I lean towards making the Core responsible for translations.

I would be grateful for some guidance. I could contribute a solution if you find any of the above interesting.

StuartHarris commented 8 months ago

Hey Paweł, thanks for raising this. I think your intuition is correct, and the core should be responsible. But the core needs to remain shell-agnostic, possibly by using a "translation" capability to fetch translations from the outside world. I probably have an overly simplistic view of your use case (for instance I don't quite understand why enum variants need translating). Do you have some example code you can share?
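
For illustration, a small capability along those lines might look roughly like the capability examples in the Crux book. Everything below (TranslateOperation, the translate method, the event constructor) is invented for the sketch, and the Capability/Effect wiring is omitted:

use crux_core::capability::{CapabilityContext, Operation};
use serde::{Deserialize, Serialize};

// The operation the core sends to the shell: "give me the string for this key".
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)]
pub struct TranslateOperation {
    pub key: String,
}

impl Operation for TranslateOperation {
    // The shell resolves the key and sends the translated string back.
    type Output = String;
}

pub struct Translate<Ev> {
    context: CapabilityContext<TranslateOperation, Ev>,
}

impl<Ev> Translate<Ev>
where
    Ev: Send + 'static,
{
    pub fn new(context: CapabilityContext<TranslateOperation, Ev>) -> Self {
        Self { context }
    }

    // Ask the shell for a translation and feed the result back in as an event.
    pub fn translate<F>(&self, key: &str, make_event: F)
    where
        F: FnOnce(String) -> Ev + Send + 'static,
    {
        let ctx = self.context.clone();
        let key = key.to_string();
        self.context.spawn(async move {
            let translated = ctx.request_from_shell(TranslateOperation { key }).await;
            ctx.update_app(make_event(translated));
        });
    }
}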

pjankiewicz commented 8 months ago

Thanks for the answer. I'm testing the 2nd approach.

In the Core I have this event handler (writing the requested translation to the KV store):

Event::RequestTranslation(message) => {
    caps.key_value
        .write(message, tr!(message).into_bytes(), |_| Event::None);
}

In the Shell (GLOBAL_STATE is the KV store in my web-yew shell):

impl RootComponent {
    pub fn translate(&self, message: &str) -> String {
        // Send the event to the Core, which writes the translation into the KV store.
        let callback = Callback::from(move |_msg| {});
        core::update(
            &self.core,
            Event::RequestTranslation(message.to_string()),
            &callback,
        );
        // Then read the translated string back out of the shared KV state.
        let translation = GLOBAL_STATE
            .read(&message)
            .unwrap_or("NOT_TRANSLATED".to_string());
        translation
    }
}

And in the view:

<button>
     {root.translate("ui-change-language")}
</button>

This works, so that's cool.

When it comes to variants: a variant like the one below (or any, really) will have to be translated too - it will be presented as a form field, a list item, etc. Even for English you need to map it to a display string.

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub enum Difficulty {
    Beginner,
    Intermediate,
    Advanced,
}

I was thinking about some custom serde serialisation logic which adds the translation on the fly - but this is quite complex, because the serialisation/deserialisation in the Shell would have to be manipulated too.

Or the Shell can also translate it using Core functionality (that's probably the easiest if I choose the 2nd solution):

let translation = match difficulty {
    Difficulty::Beginner => root.translate("difficulty-beginner"),
    Difficulty::Intermediate => root.translate("difficulty-intermediate"),
    Difficulty::Advanced => root.translate("difficulty-advanced"),
};

pjankiewicz commented 8 months ago

@StuartHarris I went down a slightly different road. The translations that are defined in the Core (i.e. are part of the ViewModel) are translated by the Core. But for translations that are requested from the Shell I defined an additional function:

namespace shared {
  bytes process_event([ByRef] bytes msg);
  bytes handle_response([ByRef] bytes uuid, [ByRef] bytes res);
  bytes view();
  string translate([ByRef] string msg);
};

The Core keeps track of the current locale, and I change it whenever the settings change:

// Global language setting
lazy_static! {
    pub static ref CURRENT_LOCALE: RwLock<LanguageIdentifier> = RwLock::new(langid!("en-US"));
}

I think that using message passing for this use case is overkill.
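
For reference, the body of that translate function might look roughly like this. It's only a sketch: it assumes the Fluent resources are embedded in the library, and bundle_for is an invented helper that builds a FluentBundle for a given locale:

use fluent::{FluentBundle, FluentResource};
use unic_langid::LanguageIdentifier;

// Sketch only: resolve a message id against the bundle for the current locale,
// falling back to the key itself when there is no translation.
pub fn translate(msg: &str) -> String {
    let locale: LanguageIdentifier = CURRENT_LOCALE.read().unwrap().clone();
    let bundle: FluentBundle<FluentResource> = bundle_for(&locale);

    let Some(message) = bundle.get_message(msg) else {
        return msg.to_string();
    };
    let Some(pattern) = message.value() else {
        return msg.to_string();
    };

    let mut errors = vec![];
    bundle
        .format_pattern(pattern, None, &mut errors)
        .into_owned()
}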

Additionally, I have a translation script written in Python that goes through all the Rust and Swift files, searches for calls to the translation functions, and updates the locale files with new translation keys.

StuartHarris commented 8 months ago

Hey @pjankiewicz, I'm not too sure that this is a good idea :-) The bridge interface is deliberately very small, stable, and focused purely on the mechanics of passing messages between the core and the shell. It doesn't feel right to me to introduce domain specific functions into this IDL. I still think that the way to go with this is to write a small "translation" Capability and use that to communicate between the core and the shell for translations. Is there a specific reason why you think this would not work?

StuartHarris commented 8 months ago

Ahh, thinking about this a bit more, I can see how that extra function on the IDL could help for parts of the UI not directly rendered in the loop, which would probably make it the most practical thing to do.

pjankiewicz commented 8 months ago

@StuartHarris Thank you for the response. Translation as a capability feels awkward, but then again that's me saying it without completely understanding the Core implementation. I tried to do it by interacting with KV, but the solution wasn't very good (I didn't like that a simple translation needed an exchange of 2 events, and for a page full of things to translate it grows quickly).

For example, in our app we separated events into Commands (read-only access to the Model) and Events (mutable access to the Model) in a traditional CQRS sense, so going this route, translation is just a synchronous, read-only Query. I'm not sure whether, instead of a dedicated translation function, a more generic query entry point would be better from your point of view?

namespace shared {
  bytes process_event([ByRef] bytes msg);
  bytes handle_response([ByRef] bytes uuid, [ByRef] bytes res);
  bytes view();
  bytes query([ByRef] bytes msg);
};

in the Core the query could be just:

fn query(msg: QueryRequest) -> QueryResponse
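
For illustration only (these types are not part of the current code), the request/response pair could be a couple of serde enums that the query function dispatches on:

use serde::{Deserialize, Serialize};

// Hypothetical shapes for a synchronous, read-only query entry point.
#[derive(Serialize, Deserialize)]
pub enum QueryRequest {
    Translate { key: String },
    // ...other read-only queries against the Model
}

#[derive(Serialize, Deserialize)]
pub enum QueryResponse {
    Translation(String),
}

pub fn query(msg: QueryRequest) -> QueryResponse {
    match msg {
        // `translate` is assumed to be the Fluent lookup described above.
        QueryRequest::Translate { key } => QueryResponse::Translation(translate(&key)),
    }
}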

I would be happy to discuss further how we are using Crux at the moment. We also implemented an event log and snapshots backed by KV (that's why we needed to separate Commands and Events).

charypar commented 8 months ago

Sorry to come late to this conversation, and also that I'm about to confuse matters further 😅

I think my broad instinct would be to handle localisation in the shells.

  1. The platforms typically have built-in mechanisms for this (e.g. iOS has Localizable.strings)
  2. The majority of text in your UI will be straight-up static text, not involved in the business logic (and quite possibly different across platforms), and therefore the core is unaware of it

If the core must put some localised text in the view model, I'd go the way of using a localisation key, which gets translated into the localised string in the shell as it's put on screen. The tricky bit is internationalisation of numbers/dates etc. I think I'd lean towards representing that information as data, and formatting it in the shell as well.
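
A minimal sketch of that shape (the field names here are invented, not from any real app): the view model carries keys and raw values, and the shell does all the locale-specific rendering.

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ViewModel {
    pub title_key: String, // e.g. "ui-settings-title", resolved by the shell
    pub updated_at: i64,   // unix timestamp; the shell formats it per locale
    pub item_count: u32,   // the shell handles number formatting/pluralisation
}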

If the core does anything at all with translations, I'd try to limit it to handling how they are loaded from a remote system, if that's something you wish to do. Although even there, I'd advise against it and instead fetch them at build time, producing plain configuration files; otherwise you're relying on the translation service for your application to work, and a copy update can inadvertently break live apps (remember, several previous versions of mobile apps will be live at the same time). It's a bit of a nightmare.

So much for my general instincts, which may not necessarily be correct 😄.

More relevant to your situation – if it genuinely does make more sense for the core to handle localisation, then your approach with an additional exposed function, making it a whole separate subsystem, seems sensible. My only suggestion would be that the locale should be passed in as an argument along with the translation key - the shell is ultimately the source of this information (often set on an OS level), and it avoids duplicating and synchronising the locale selection state.
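
As a purely illustrative sketch of that suggestion (format_with_bundle is an invented helper over the embedded Fluent resources):

use unic_langid::{langid, LanguageIdentifier};

// The shell passes its locale on every call, so the core holds no locale state.
pub fn translate(locale: &str, key: &str) -> String {
    let langid: LanguageIdentifier = locale.parse().unwrap_or_else(|_| langid!("en-US"));
    format_with_bundle(&langid, key)
}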

Even with that approach, I'd still avoid the Crux app knowing about localisation if at all possible. And if it can't be avoided, keep the current locale in your model.

I'm aware this isn't exactly helping, and I'm giving somewhat conflicting guidance to Stu. Localisation has always struck me as quite a complicated, tricky, annoying problem space, and as you can see, opinions differ. Hard to give clear universally applicable guidance. Happy to discuss further though!

pjankiewicz commented 8 months ago

@charypar Thank you for this explanation. In the project we are using Fluent, which is a little more flexible than whatever the platform offers as a default solution. Since you can't be sure the platform supports this localisation format, it could be part of the Core.

There are a couple of ways to separate things:

  1. The Shell handles localisation using each platform's own standard.
  2. The Shell handles localisation using one standard (like Fluent) - then we can share the locale assets.
  3. The Shell provides a localisation-assets capability; the Core requests which locales are available and has access to the assets.
  4. The Shell requests translations; the Core implements them all and embeds all the assets in the library (this is what we do right now).

There are also 2 aspects to this:

  1. Translations that come from the Core - they are part of the data you share in the view model - but again, this can also be done by sharing a message to be translated along with its arguments.
  2. Translations of UI elements and other things like variant names for pickers etc.

For now we defined 2 functions:

#[derive(Debug, PartialEq)]
pub struct FLTMessage<'s> {
    pub msg: String,
    pub fluent_args: Option<HashMap<String, FluentValue<'s>>>,
}

#[derive(Debug, Clone, Serialize, Deserialize, Display, Eq, PartialEq)]
pub enum TranslationKey {
    Name,
    Description,
}

// builds a translation key possibly with arguments
pub trait Translatable {
    fn translate(&self, key: &TranslationKey) -> FLTMessage;
}

#[derive(Debug, Clone, Default, Serialize, Deserialize, PartialEq, Eq, Display, Translatable)]
pub enum DistanceUnit {
    #[default]
    Meters,
    Miles,
}

And it is used like this in the SwiftUI shell:

Picker(selection: $appSettings.units.distance, label: Text(translate("ui-settings-distance-unit"))) {
    Text(translateVariant(TranslateEnumVariant(object: .distanceUnit(.meters), translation_key: .name))).tag(DistanceUnit.meters)
    Text(translateVariant(TranslateEnumVariant(object: .distanceUnit(.miles), translation_key: .name))).tag(DistanceUnit.miles)
}
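
For illustration (this impl and the message ids are assumed, not taken from the project), a hand-written impl of the Translatable trait for DistanceUnit might look like:

impl Translatable for DistanceUnit {
    fn translate(&self, key: &TranslationKey) -> FLTMessage {
        // Each (variant, key) pair maps to a Fluent message id with no arguments.
        let msg = match (self, key) {
            (DistanceUnit::Meters, TranslationKey::Name) => "distance-unit-meters-name",
            (DistanceUnit::Miles, TranslationKey::Name) => "distance-unit-miles-name",
            (DistanceUnit::Meters, TranslationKey::Description) => "distance-unit-meters-description",
            (DistanceUnit::Miles, TranslationKey::Description) => "distance-unit-miles-description",
        };
        FLTMessage {
            msg: msg.to_string(),
            fluent_args: None,
        }
    }
}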

But again, I see your point. There are many ways to implement this. For example, we could define a type that we share from the Core (FLTMessage) and let the Shell figure out how to translate it. Then whenever we return a translatable String from the Core, we could use FLTMessage instead.

Another important thing is that the application will be fully offline. I don't intend to connect to any external service, so all the translations should be embedded in the application.

pjankiewicz commented 8 months ago

There is a fifth way: treat translation as a capability and build that capability in Rust as a library that is shared the same way the Core is, but as a separate thing. This solves the problem of the platform not being able to handle the localisation format out of the box.

I feel this would probably be the best approach. I think out of pure laziness and lack of skills we decided to cram as much functionality as possible into the Core :).

charypar commented 8 months ago

That all makes sense @pjankiewicz! I think the fifth way is, at the end of the day, quite similar to extending the FFI interface. Whichever is easier for you is probably fine.

In principle, while they are exposed using the same mechanism, your translation calls and the Crux interface don't interact in any way; they just happen to live in the same library.