In this branch: https://github.com/riesgos/async/tree/deploy-lrz
Ok, I'm still having issues, but I think we're on the right track.
This is what our setup currently looks like:
outside vm │ inside vm inside vm │ outside vm
│ │
│ │
│ │
│ /riesgosfiles/<id>
│ ┌──────┼┐ ┌─────┐
│ ┌─────────────┐◄─────────────────┼ proxy│┼───────┤wps │
│ │filestorage │ └──────┼┘ └─────┘
│ └─────────────┘◄─────────────┐ │ ▲
│ │ │ │
│ │ │ │
│ │ │ │
│ │ │ │
┌──────────┐ │ ┌──────────────┐ ┌───┴─────┐ │ │
│ browser ├──────────┼──────────────────►│ queue │◄─────────┤ wrapper ├────┼───────────┘
└────┬─────┘ │ └──────────────┘ └───┬─────┘ │
│ │ │ │
│ /backend/api │ │
│ ┌────┴─┼┐ ┌───────────┐ ┌──────────────┐ │ │
└────────►│ proxy│┼──►│ fast-api │───► │ database │◄──────────┘ │
order └──────┼┘ └───────────┘ └──────────────┘ jobForOrder │
│ order │
│ │
│ │
At the moment an issue still occurs: Hugo can send out orders, and those orders do show up in the database and in the frontend. However, no jobs appear in the database.
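(A minimal sketch of the kind of check I mean, assuming a postgres database behind sqlalchemy; the connection string and table names are placeholders, not the actual schema:)

import sqlalchemy as sa

# placeholder connection string; adjust to the actual database container
engine = sa.create_engine("postgresql://postgres:postgres@localhost:5432/riesgos")

with engine.connect() as conn:
    # "orders" and "jobs" are guessed table names
    n_orders = conn.execute(sa.text("SELECT count(*) FROM orders")).scalar()
    n_jobs = conn.execute(sa.text("SELECT count(*) FROM jobs")).scalar()
    print(f"orders: {n_orders}, jobs: {n_jobs}")  # currently: orders > 0, but jobs == 0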
It is my understanding that it is the job of the wrappers to write jobs into the database. So either the wrappers are misconfigured (are filestorage.access and filestorage.endpoint set to the wrong values?), or the orders from the browser never reach the queue in the first place. If the latter is the case, my suspicion is that we'd need something like this:
outside vm │ inside vm inside vm │ outside vm
│ │
│ │
│ │
│ /riesgosfiles/<id>
│ ┌──────┼┐ ┌─────┐
│ ┌─────────────┐◄─────────────────┼ proxy│┼───────┤wps │
│ │filestorage │ └──────┼┘ └─────┘
│ └─────────────┘◄─────────────┐ │ ▲
│ │ │ │
│ │ │ │
│ │ │ │
/backend/queue │ │ │
┌──────────┐ ┌────┴─┼┐ ┌──────────────┐ ┌───┴─────┐ │ │
│ browser ├───│ proxy│┼─────────────────►│ queue │◄─────────┤ wrapper ├────┼───────────┘
└────┬─────┘ └──────┼┘ └──────────────┘ └───┬─────┘ │
│ │ │ │
│ /backend/api │ │
│ ┌────┴─┼┐ ┌───────────┐ ┌──────────────┐ │ │
└────────►│ proxy│┼──►│ fast-api │───► │ database │◄──────────┘ │
order └──────┼┘ └───────────┘ └──────────────┘ jobForOrder │
│ order │
│ │
│ │
with the proxy accepting the frontend's message and passing it on to the queue.
I have seen websockets being blocked or broken by firewalls and proxies in the past (mostly when not using TLS). Generally they aren't very reliable when there is no direct connection... also because nginx needs some extra configuration to properly proxy websocket connections. Unfortunately, there is no browser client for pulsar that doesn't use websockets.
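(A minimal sketch of how one could test whether the websocket handshake itself is what fails, assuming python's websockets package; the host and topic below are placeholders:)

import asyncio

import websockets  # pip install websockets

async def check():
    # pulsar's websocket producer endpoint has the form
    # /ws/v2/producer/persistent/<tenant>/<namespace>/<topic>;
    # the host is a placeholder for our deployment
    url = "ws://example-host/ws/v2/producer/persistent/public/default/test"
    try:
        async with websockets.connect(url):
            print("websocket handshake succeeded")
    except Exception as e:
        print(f"websocket handshake failed: {e}")

asyncio.run(check())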
If this is the problem, I guess we could change the current, broken connection
browser ---> ws ---> firewall ---> ws ---> queue
to something like this:
browser ---> HTTP ---> reverse-proxy ---> fast-api ---> pulsar-client ---> queue
Unfortunately, I don't know fast-api well enough to get this done quickly. What do you think, @nbrinckm and @bpross-52n ?
PS: current state is here: https://github.com/riesgos/async/tree/deploy-lrz
Actually, now that I think of it: When fast-api receives an order from the browser, could and should it notify pulsar of that order?
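(A rough sketch of what that could look like, assuming the python pulsar-client; the broker url, topic name, and Order model are placeholders, not the actual project code:)

import pulsar  # pip install pulsar-client
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
# "queue" is the container name from the diagram; 6650 is pulsar's default binary port
client = pulsar.Client("pulsar://queue:6650")
producer = client.create_producer("persistent://public/default/new-order")  # topic name is a guess

class Order(BaseModel):
    # hypothetical model; the real one lives in fast-api's code
    order_constraints: dict

@app.post("/order")
def create_order(order: Order):
    # persist the order as before (omitted here), then notify pulsar
    producer.send(order.json().encode("utf-8"))
    return {"status": "sent to queue"}

That way the browser would only ever talk plain HTTP to the reverse-proxy, and the pulsar connection stays inside the VM.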
Part of our setup for the eqexplorer, for the nginx container to integrate the pulsar ws api:
location /ws/v2/ {
    proxy_pass http://pulsar:8080/ws/v2/;
    # websockets require HTTP/1.1 and an explicit protocol upgrade
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
However, you have to update the container name (I think queue) and check the port.
Alright; the deployment looks quite alright now.