Open knolleary opened 9 months ago
With K8s ingress we can direct different paths to different backend instances if needed
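To illustrate the idea, a minimal Ingress sketch that splits two paths across two backend Services. All names, paths, and ports here are placeholders, not FlowFuse's actual service layout:

```yaml
# Illustrative only: service names, paths and ports are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flowfuse-split
spec:
  rules:
    - http:
        paths:
          - path: /api            # API traffic to one backend
            pathType: Prefix
            backend:
              service:
                name: forge-api
                port:
                  number: 3000
          - path: /ws             # websocket/comms traffic to another
            pathType: Prefix
            backend:
              service:
                name: forge-comms
                port:
                  number: 3000
```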
NOTE: The following is a very early idea, barely fleshed out, and ultimately it may not be viable for many reasons. But I wanted to share it early doors in case there is any merit in it, and so it helps us avoid travelling one path only to change direction when it doesn't solve the issues.
I had thought about this (for different reasons) some time ago. My thought experiment was to add layers between the core FlowFuse application and the devices. Let's call such a layer a "FlowFuse Agent" or "FFA" for ease of discussion.
The FFA would be able to support 1 ~ n connections, with n being a soft target chosen for the best ratio of performance vs distribution.
- There could be one or multiple FFAs (for horizontal scaling).
- The FFAs aggregate/tunnel comms to the cloud/master FlowFuse app.
- Supporting on-prem means the majority of traffic (especially multiple devices pulling snapshots, or multiple device tunnels) stays on site (or, in the cloud case, stays off the main FlowFuse app).
- These FFAs could/should provide resilience (e.g. run them at 75% design capacity; if one fails, the others take over).
- The FFAs would host the MQTT broker and ACLs, reducing the hits on the FlowFuse app to first-time loading (and dynamic updates).
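The "75% design capacity" point can be made concrete with a small capacity-planning sketch. This is not FlowFuse code; the function name and the N+1-style failover check are my own illustration of the reasoning above:

```python
import math

def agents_needed(device_count, conns_per_agent, design_utilisation=0.75):
    """Estimate how many FFA instances are needed for device_count
    connections, running each agent at only design_utilisation of its
    connection capacity so the survivors can absorb the load of a
    failed peer (illustrative sketch, not a FlowFuse API).
    """
    usable = conns_per_agent * design_utilisation
    n = math.ceil(device_count / usable)
    # Resilience check: if one agent fails, the remaining n-1 agents
    # (at full capacity) must still carry every connection.
    while (n - 1) * conns_per_agent < device_count:
        n += 1
    return n

# e.g. 1000 devices, 500 connections per agent -> 3 agents at ~67% load
print(agents_needed(1000, 500))
```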
While out of scope at this point, for the complete vision of the approach:
--
This architecture is quite common in Manufacturing where hundreds of tools on the shop floor communicate with a local aggregator on the internal (private) network. The aggregator orders and streams data to its parent, etc.
For on-site Eng/IT/IS, the obvious benefits are security and simplification (no exposing devices to the internet, no special VLANs/proxies/network provisioning), plus resilience against internet outages.
For the FF app, the benefits come in the form of much-reduced traffic and fewer connections.
I'm going to split this out into separate stories for the four topics identified - all linked from the task list above.
One we missed: database model updates need to run on only one instance, or we get race conditions.
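One common shape for that fix is to gate migrations behind an exclusive lock so only the first instance to start applies them. A minimal sketch, assuming a hypothetical `run_migrations_once` helper; in a real multi-process deployment the `threading.Lock` would be a database advisory lock (e.g. Postgres `pg_advisory_lock`), but the logic is the same:

```python
import threading

_migration_lock = threading.Lock()
_migrated = False

def run_migrations_once(apply_migrations):
    """Run apply_migrations exactly once across competing workers.

    Returns True for the worker that applied the migrations,
    False for everyone else. (Illustrative only; not FlowFuse code.)
    """
    global _migrated
    with _migration_lock:
        if _migrated:
            return False  # another instance already migrated
        apply_migrations()
        _migrated = True
        return True
```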
@MarianRaphael @joepavitt Can you update this epic? I think there are some issues and things missing.
@ZJvandeWeg this was on me to update from our conversation last week. Will make it so.
From a scheduling perspective, when do we expect these outstanding tasks to be worked on @MarianRaphael @knolleary?
Description
As we build experience of running FF with an increasing workload, we need to look at how it will continue to scale.
The most immediate solution to scaling is to run two instances of the app with load-balancing in front to distribute the work. However, there are a few blockers preventing us from doing this today.
This epic is where we will identify and document them, spinning off separate tasks to address each one.
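The "two instances behind a load balancer" model above can be pictured as simple round-robin routing. A toy sketch; the backend names are placeholders, and a real deployment would use the K8s Service/Ingress layer rather than application code:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model of a load balancer spreading requests across
    FlowFuse app instances (names are illustrative)."""

    def __init__(self, backends):
        self._pool = cycle(backends)

    def route(self, request):
        # Hand each incoming request to the next backend in turn.
        backend = next(self._pool)
        return backend, request

lb = RoundRobinBalancer(["forge-app-1", "forge-app-2"])
```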