Open CloneMMDDCVII opened 3 weeks ago
I see a couple of ways this could work. Primarily, it would be a feature under Settings, as its own page, named something like 'AI Assistant'.
After toggling the feature on, two (or three, possibly four) radio buttons should appear in a vertical list to select which AI model is used. The first radio button would be 'Use on-device model', which would take advantage of the on-device Gemini Nano model through the Google AI Edge SDK and would work on supported devices.
The second option would be a field asking for an API endpoint to whatever LLM you'd like, be it served from providers like OpenAI or self-hosted via Ollama (whether on-device, e.g. through Termux or MLC Chat, or on a computer owned by the user).
A third option could cover the case where Mozilla creates its own 'Gemini Nano' alternative, or wants to implement a small model themselves, alongside the on-device model. Note, of course, that such a model would still be considerably large (AICore takes ~1.5 GB of storage), so Thunderbird couldn't ship its own model by default; it would be a separate download from the 'AI Assistant' page if the user decides to use the Mozilla-provided/hosted model.
Alternatively, or as a fourth option, a dropdown could list 'common' AI models such as GPT-4o, Claude 3, Llama 3.1 or Mixtral for the user to choose from.
Regardless, the app would have to check whether the device even meets the requirements to run such a model locally; otherwise the on-device/in-app model options would be grayed out, informing the user that the device doesn't meet the hardware requirements.
There should be a note at the bottom informing the user that while local models may not collect any data from the e-mails (depending on the model, of course), it's very likely that third-party models accessed through APIs from OpenAI, Hugging Face, etc. will collect and use the contents of the e-mails for training purposes, so this feature should not be used on sensitive e-mails.
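The settings structure described above could be sketched roughly as below. This is only an illustration: the class and field names (`AiAssistantSettings`, `AiProvider`, `meetsOnDeviceRequirements`) and the 3 GB RAM / Android 12 thresholds are assumptions, not actual Thunderbird or Google AI Edge SDK APIs.

```java
// Hypothetical sketch of the 'AI Assistant' settings model described above.
// None of these names come from Thunderbird or the Google AI Edge SDK.
public class AiAssistantSettings {

    // The radio-button choices: on-device Gemini Nano, a custom API
    // endpoint, an optional Mozilla-hosted model, and a preset picker.
    public enum AiProvider {
        ON_DEVICE,        // Gemini Nano via the Google AI Edge SDK
        CUSTOM_ENDPOINT,  // user-supplied URL (OpenAI-compatible, Ollama, ...)
        MOZILLA_HOSTED,   // separate download, not shipped by default
        PRESET_MODEL      // dropdown of common models (GPT-4o, Claude 3, ...)
    }

    // Illustrative hardware gate for the on-device/in-app options.
    // Real requirements would come from the SDK; these numbers are made up.
    public static boolean meetsOnDeviceRequirements(long totalRamBytes, int sdkInt) {
        long minRamBytes = 3L * 1024 * 1024 * 1024; // assume >= 3 GB of RAM
        int minSdkInt = 31;                          // assume Android 12+
        return totalRamBytes >= minRamBytes && sdkInt >= minSdkInt;
    }
}
```

In the settings UI, the on-device and in-app radio buttons would simply be disabled when `meetsOnDeviceRequirements(...)` returns false, with the hardware-requirements note shown alongside.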
Hi @CloneMMDDCVII and @alextecplayz. Thank you for the very well thought out ideas, it looks like you've put more than just a little bit of thought into this.
I think this is a great idea, and an easy way to experiment with AI support without going all in on the "Your virtual AI assistant" tagline. I especially like the idea of using the on-device LLM on supported devices. I know there are limitations especially on a phone, which makes using the APIs of other AIs tempting. Personally I'm not very happy about that, because it also means that others receiving my emails might add my content to an AI training set. It is hard to avoid these days, of course.
We should definitely experiment with some light AI help in 2025. I'd love to immediately turn this into a plan and get started, though right now we're focusing a lot on the release feedback and how we can add more stability to Thunderbird for Android.
It sounds like you know what you are talking about, is this something you'd be interested in building?
@kewisch I work mostly on implementing existing software to fit user needs, product feedback and speccing of work, and my coding abilities are limited to small-scale utilitarian scripts rather than user-facing features where interoperability, maintainability, security and performance are paramount.
I'd love to test or hunt around for compatible underlying technologies, possibly hack a proof of concept together, but it wouldn't be anywhere close to production code, if it worked at all.
Possibly better to patiently wait for the team to process the recent release feedback and see what not-code I can contribute when the "AI" support discussion has time to pick back up.
--edit just realised I sent this from my other github account. woops
Checklist
App version
8.0-b5
Problem you are trying to solve
Emails are messy, and the first few lines are often a very poor indicator of the contents of the message. Unlike instant messaging, the content of the notification only serves as a prompt to read further, as opposed to an overview of the content.
Compact and default views currently do not provide a meaningful overview of the contents prior to opening the message. Deciding on the value of an email currently requires opening and reading every email, making the inbox more of a to-do list than a dashboard from which one can, at a glance, archive, keep or delete a message.
Suggested solution
Offer users the option to have an LLM parse the content of the email and create a short digest, providing a more meaningful overview of the message.
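A minimal version of this could hand the model a capped slice of the message body with a fixed instruction. The sketch below is illustrative only: the prompt wording, the 4000-character cap, and the `buildDigestPrompt` name are assumptions, and the actual model call (on-device or via a configured API endpoint) is left out.

```java
// Hypothetical prompt builder for the email-digest feature described above.
public class EmailDigest {

    // Cap the body so a small on-device model isn't handed an oversized input.
    // The 4000-character limit is an arbitrary placeholder, not a real SDK limit.
    private static final int MAX_BODY_CHARS = 4000;

    public static String buildDigestPrompt(String subject, String body) {
        String clipped = body.length() > MAX_BODY_CHARS
                ? body.substring(0, MAX_BODY_CHARS)
                : body;
        return "Summarize the following email in one or two sentences.\n"
                + "Subject: " + subject + "\n"
                + "Body:\n" + clipped;
    }
}
```

The returned string would then be sent to whichever provider the user selected in settings, and the model's reply shown in place of the first-lines preview.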
Screenshots / Drawings / Technical details
The model chosen should be (in order of importance):
There may be considerations,
For a minimum viable feature, there should be
Risks involved: