ndrean opened 11 months ago
The app runs SQLite on a single node on a VM in Paris, France. First, some latency tests:
Since I am located in France at the moment, latency is 30ms.
Using a VPN to relocate to Japan, latency increases to 500ms,
and relocating to the US, latency is 150ms.
To distribute this, you use the Fly.io DNS discovery library and replicate/sync the embedded SQLite database in some way. It currently lives on a volume attached to the VM running my single node in one region.
I failed to deploy a distributed version with a distributed SQLite db, so I opened an issue on Fly.io asking how to distribute the SQLite db. Writing to the file system works fine, and I understand you should use a kind of proxy, LiteFS, which uses a local volume that is replicated/synced with the other locations. Normally... However, I don't really believe that Fly will consider this: my ambition is probably too high considering their availability and the low attractiveness of this issue.
Approaching, but not there yet: a distributed Phoenix+SQLite+LiteFS on Fly.io.
From the LiteFS lease docs: "LiteFS only allows a single node to be the primary at any given time. The primary node is the only one that can write data to the database. The other nodes are called replicas and they provide a read-only copy."
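For reference, the lease is configured in `litefs.yml`. A minimal sketch, adapted from the LiteFS docs (the paths and the `PRIMARY_REGION` env var are the conventional ones, not taken from my repo):

```yml
# litefs.yml — minimal sketch
fuse:
  dir: "/litefs"            # mount point where the app opens the database
data:
  dir: "/var/lib/litefs"    # internal LiteFS data, kept on the Fly volume
lease:
  type: "consul"            # Fly provides a Consul cluster for the lease
  candidate: ${FLY_REGION == PRIMARY_REGION}  # only this region may become primary
  promote: true             # promote this node on startup if it is a candidate
```

With `candidate: false`, a node can never take the lease, which is how the read-only replicas are enforced.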
Lots of moving parts, so plenty of reasons to fail. It probably works, but I am not sure it really does: Presence is ok, but the click counts are not. When connected to a remote node, the "local" click count renders correctly, but the click counts of the other nodes are reset. I need to click to refresh the correct click count, which is in fact not lost.
I have 2 nodes, one in CDG and one in NRT. Two sessions are online.
I update the NRT connection: I get the correct click count for NRT, but not for CDG:
I update the CDG connection: I get the correct click count for CDG, but not for NRT:
I click on NRT, and everything is updated:
🤔
All I am doing is:
The rendering happens when the socket assigns change, of course.
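The click flow could look something like the sketch below inside the LiveView. This is my illustration, not the repo's code: the `Counter` module, the `App.PubSub` name, and the `"clicks"` topic are all hypothetical.

```elixir
# Hypothetical LiveView callbacks (assumes `use Phoenix.LiveView`,
# which brings in assign/3; Counter is an illustrative store).
def handle_event("click", _params, socket) do
  region = System.get_env("FLY_REGION", "local")
  count = Counter.increment(region)
  # Tell every other session, on every node, about the new count.
  Phoenix.PubSub.broadcast(App.PubSub, "clicks", {:click, region, count})
  {:noreply, assign(socket, :counts, Map.put(socket.assigns.counts, region, count))}
end

def handle_info({:click, region, count}, socket) do
  # The re-render is triggered because the :counts assign changes.
  {:noreply, assign(socket, :counts, Map.put(socket.assigns.counts, region, count))}
end
```

The bug described above is consistent with the `:counts` assign being seeded only from the local node on mount, so a broadcast is needed before the remote counts appear.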
I ended up removing LiteFS and used `:erpc.call`. Every write (and every read when a node starts) to the SQLite db is made to one node, the "primary" node. This works because Fly assigns a `FLY_REGION` to each node, so I just set an env var, say `PRIMARY_REGION=cdg`, on the node that starts a db, and all the other nodes will find it once clustered. Then Litestream
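The write-forwarding idea can be sketched as follows. This is an assumption of how it might be wired, not the repo's actual code: the module name and `find_primary/0` helper are mine.

```elixir
# Hypothetical sketch: route every db write to the primary node via :erpc.
defmodule App.PrimaryWriter do
  # The primary is the node whose FLY_REGION matches PRIMARY_REGION.
  defp primary_region, do: System.get_env("PRIMARY_REGION", "cdg")

  def primary?, do: System.get_env("FLY_REGION") == primary_region()

  # Run locally on the primary, otherwise forward the call over the cluster.
  def write(mod, fun, args) do
    if primary?() do
      apply(mod, fun, args)
    else
      :erpc.call(find_primary(), mod, fun, args)
    end
  end

  defp find_primary do
    # Naive discovery: ask each clustered node for its FLY_REGION.
    Enum.find(Node.list(), fn node ->
      :erpc.call(node, System, :get_env, ["FLY_REGION"]) == primary_region()
    end)
  end
end
```

`:erpc.call/4` raises if the target node is down, so a real version would need to handle the primary disappearing; the sketch ignores that.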
You have this "local-first" paradigm, mostly for mobile-first apps. You use your local embedded SQLite database, and to keep every connected client in sync, you add a layer that syncs with a Postgres database. This is where ElectricSQL comes into play.
I understand that this is really for mobile-first, and more precisely offline-capable, apps. This means no LiveView, but an SPA/PWA client.
Today, I made the distributed version of this, as your link is dead. This means you run each app on a different port and cluster them, and everything works. I plan to test the deployment on Fly.io. Any interest in publishing this as a branch?
https://github.com/ndrean/elixir-hiring-project/tree/updated-version
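Running two clustered instances locally might look like this (the ports, node names, and cookie are illustrative, not from the repo):

```shell
# Terminal 1: first instance on port 4000
PORT=4000 iex --name a@127.0.0.1 --cookie secret -S mix phx.server

# Terminal 2: second instance on port 4001
PORT=4001 iex --name b@127.0.0.1 --cookie secret -S mix phx.server

# Then, from node b's iex shell, join the cluster:
#   iex> Node.connect(:"a@127.0.0.1")
```

In practice a library such as libcluster can replace the manual `Node.connect/1`.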
They (Fly.io) say it is a 2-hour job. Really? For example, some events are counted twice: the proposed code is wrong. And it is not totally straightforward to sync all the views when a user leaves... It took me the whole day to make it work, just in time for the rugby!