Open only-cliches opened 4 years ago
Architecture-wise, I believe this will be accomplished best by adding a mod folder to the layout, which contains each module binary. On domain startup, a domain will have an Admin-provided list of enabled modules, stored as a space-separated string under "domain-modules/
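A minimal sketch of reading that list, assuming the Admin store just hands back the raw space-separated string (the exact key layout is left open above):

```rust
// Sketch only: split the space-separated module list into names. How the raw string is
// fetched from the Admin store is not shown here.
fn enabled_modules(raw: &str) -> Vec<String> {
    raw.split_whitespace().map(str::to_owned).collect()
}
```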
The admin server also manages the list of enabled modules. When a module is loaded, it will run as a child process managed by the Admin server. Once a module process spins up, it opens an admin-only socket ("mod_rocksdb_admin", for example). On that socket, Admin will transmit the list of active domains requesting access. For each domain sent, a separate socket will be activated on the module side (e.g. "mod_rocksdb_mydomain.com"), providing the necessary separation of data. Domains will only connect to their dedicated socket, so they don't have to transmit domain metadata on each request.
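A rough sketch of the module-side end of that handshake, assuming plain Unix domain sockets and a /tmp socket directory (both are assumptions, not settled choices):

```rust
use std::os::unix::net::UnixListener;

// Sketch only: Admin has already sent the list of active domains over the admin-only
// socket. One dedicated listener per domain means requests never carry domain metadata.
fn open_domain_sockets(module: &str, domains: &[String]) -> std::io::Result<Vec<UnixListener>> {
    let mut listeners = Vec::new();
    for domain in domains {
        // e.g. "mod_rocksdb_mydomain.com" for module "rocksdb" and domain "mydomain.com"
        let path = format!("/tmp/mod_{module}_{domain}");
        listeners.push(UnixListener::bind(path)?);
    }
    Ok(listeners)
}
```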
For now, the command transmission protocol will be blind-idiot simple: a single leading byte to indicate the format, followed by a payload that is either a JSON object, or a JSON object followed by raw bytes.
JSON example:
0{ "cmd": "PUT_DATA",
"args": {
"key": "mykey",
"value": "myval"
}
}
JSON+bytes example:
1{ "cmd": "UPLOAD_FILE",
"args": {
"name": "bird.jpg",
"owner": "myUsername"
}
}>>>
!#$Fa43tF#Q$#@FGE^g%#Q3G6h75h^W56H^%Ge6y56U65eY3wT%u^R8... etc.
Where args contains arbitrary keys and values defined by the module-specific API. Replies to Admin and to the domains will follow the same protocol, but with a slightly different JSON structure:
1{ "result": "/* Arbitrary JSON */"
}>>>
yu6ysY^5u$%Y65U4U... etc.
I think this is general enough to do just about everything we want, without having to do expensive stunts like base64 encoding binary data.
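To make the framing concrete, here's a minimal encoder sketch (serde_json is an assumed dependency, and the helper name is just for illustration):

```rust
// '0' = bare JSON command, '1' = JSON followed by the ">>>" fence and raw bytes.
fn frame(json: &serde_json::Value, raw: Option<&[u8]>) -> Vec<u8> {
    let mut out = Vec::new();
    match raw {
        None => {
            out.push(b'0');
            out.extend_from_slice(json.to_string().as_bytes());
        }
        Some(bytes) => {
            out.push(b'1');
            out.extend_from_slice(json.to_string().as_bytes());
            out.extend_from_slice(b">>>");
            out.extend_from_slice(bytes);
        }
    }
    out
}
```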
Just putting this all down in text so that I can scaffold it out quicker, and have a clear overview of the implementation. I had originally thought I wanted to run these sockets as full-fledged HTTP servers, but that's not really adding any value on top of an RPC. Also, of course, none of these modules will be exposed directly to the web. All external interactions are going to be mediated through both Deno and Warp, but mostly Deno. All of the basic readiness checks and help routes will be served via standard commands instead of URLs. Public-facing HTTP routes should be handled on the domain side, according to expected interfaces such as fs, kv, and so on. All other usage should pass through server-side Deno services, using the provided interface file, such as might be provided by "cmd": "GET_INTERFACE".
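As a hedged sketch, a module could answer that command by returning its bundled interface file; the "interface.ts" file name and the reply shape are assumptions:

```rust
// Hypothetical GET_INTERFACE handler on the module side: reply with the bundled
// TypeScript interface file so the server-side Deno services can use it.
fn get_interface_reply() -> std::io::Result<String> {
    let ts = std::fs::read_to_string("interface.ts")?;
    // Same framing as above: leading '0', then a JSON object with a "result" key.
    Ok(format!("0{}", serde_json::json!({ "result": ts })))
}
```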
One issue I've yet to solve is how to differentiate user sessions on the server side. Modules will just send messages to the open socket, not necessarily tied to web client sessions. A socket per user might be necessary? That or a routing component on the server, possibly keyed to user sessions via cookie or something.
I don't think there needs to be a specific way to keep user sessions separate like that at the module level; the API you've described is perfect.
The only question is how do we add/remove domains dynamically?
Just a bit more detail... I expect the user sessions to stop at Deno; the modules may have access to them for each request, but we don't need to silo the sessions like you've described.
We do need to silo the responses, because they have to go back to the correct requests. All responses on a socket are visible to all listeners, so I need to add some sort of addressing component to avoid giving the wrong data to the wrong thread/request. This is a really low-level networking primitive we're dealing with.

Having slept on it: the socket listener can hold an atomic int, which is incremented for each module request. The listener will send the int as the first line of the request, and the response will echo it back. Pending requests will be held in a hashmap of <int, fn> or <int, oneshot>, and that way the response can call back to the thread that requested it. The request structure will look like the following:
```
Request:
3!3#4.>3   // 8-byte request id, wrapping on overflow
1          // 1-byte format indicator
{ ... }    // JSON object, etc.
>>>        // JSON/binary fence marker

Response:
3!3#4.>3   // matching 8-byte id
0          // 1-byte format indicator
{ ... }    // JSON
```
An atomic int is a really easy way to prevent ID races, and it avoids having to use an entire UUID for something so small. This also means that we don't have to couple the number of sockets to the number of domains. A socket pool would be viable, should that become necessary, and easy to swap out, since a request can be responded to on any of them, so long as the ID is respected.
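A rough sketch of that listener-side bookkeeping, assuming tokio's oneshot channels (the struct and method names are illustrative, not final):

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Mutex;
use tokio::sync::oneshot;

// One of these per socket: register() hands out the next request id plus a channel the
// requesting task waits on; the socket's read loop calls complete() when a response
// echoes that id back.
struct Pending {
    next_id: AtomicU64,
    waiting: Mutex<HashMap<u64, oneshot::Sender<Vec<u8>>>>,
}

impl Pending {
    fn register(&self) -> (u64, oneshot::Receiver<Vec<u8>>) {
        // fetch_add wraps on overflow, matching the 8-byte wrapping id described above.
        let id = self.next_id.fetch_add(1, Ordering::Relaxed);
        let (tx, rx) = oneshot::channel();
        self.waiting.lock().unwrap().insert(id, tx);
        (id, rx)
    }

    fn complete(&self, id: u64, response: Vec<u8>) {
        if let Some(tx) = self.waiting.lock().unwrap().remove(&id) {
            let _ = tx.send(response); // wake the task that issued this request id
        }
    }
}
```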
Excellent, that API sounds perfect.
I don't think we want a socket pool; the static socket setup you described initially is perfect. For each module we have an administrative socket and one socket for each domain.
So if there are 3 modules and 3 domains we have: 3 admin sockets (one for each module) and 9 domain sockets (each domain gets its own socket on each module).
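Just to make the count explicit, a tiny sketch of that static layout (names follow the earlier convention and are illustrative):

```rust
// For M modules and D domains this yields M admin sockets plus M * D domain sockets,
// e.g. 3 + 9 = 12 sockets for 3 modules and 3 domains.
fn socket_names(modules: &[&str], domains: &[&str]) -> Vec<String> {
    let mut names = Vec::new();
    for m in modules {
        names.push(format!("mod_{m}_admin"));
        for d in domains {
            names.push(format!("mod_{m}_{d}"));
        }
    }
    names
}
```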
// example.com (domain render HTML)
[x] A -> admin api -> grab A
[x] B1 -> admin api -> grab ONLY B1
[ ] B2 -> ignore
Implement a simple Redis-like module for testing (sketched below).
(public API) xxxx.com/modules/fs/XXXXXXX -> socket (domain and URL and POST/PUT/GET + request data) -> native module
(private API, only in server via Deno application services) xxx.com/services/db/get/user -> Deno runtime -> socket -> module
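For the Redis-like test module, the command handling could be as small as this sketch; PUT_DATA comes from the example earlier in the thread, GET_DATA is an assumed counterpart, and the socket plumbing is omitted:

```rust
use std::collections::HashMap;

// In-memory command handler for a toy Redis-like test module. Replies reuse the
// "result" shape shown above.
fn handle(store: &mut HashMap<String, String>, cmd: &serde_json::Value) -> serde_json::Value {
    let args = &cmd["args"];
    match cmd["cmd"].as_str() {
        Some("PUT_DATA") => {
            store.insert(
                args["key"].as_str().unwrap_or_default().to_owned(),
                args["value"].as_str().unwrap_or_default().to_owned(),
            );
            serde_json::json!({ "result": "OK" })
        }
        Some("GET_DATA") => {
            serde_json::json!({ "result": args["key"].as_str().and_then(|k| store.get(k)) })
        }
        _ => serde_json::json!({ "result": { "error": "unknown command" } }),
    }
}
```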
Socket API
Module can say "I'm ready" (backend API)
Module can say with one JSON File:
Module can say "I have an error, send error to admin" (backend API; sketched below)
Module has access to a scoped admin database; the scope is also limited to domains
Module needs to provide a private ts file for the private API
Module should have a socket response with the module admin API HTML, which talks to the admin database and adjusts settings
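A small sketch of the module-to-Admin notifications in that list; "READY" and "ERROR" are assumed command names (only GET_INTERFACE is named elsewhere in this thread):

```rust
use tokio::io::AsyncWriteExt;
use tokio::net::UnixStream;

// Module -> Admin notification over the admin-only socket, using the same framing as
// everything else: leading '0', then a JSON command object.
async fn notify_admin(admin: &mut UnixStream, cmd: serde_json::Value) -> std::io::Result<()> {
    admin.write_all(format!("0{cmd}").as_bytes()).await
}

// e.g. notify_admin(&mut sock, serde_json::json!({ "cmd": "READY" })).await?;
//      notify_admin(&mut sock, serde_json::json!({ "cmd": "ERROR", "args": { "msg": "..." } })).await?;
```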