FZambia opened this issue 5 months ago
During the last two weeks I was working on Centrifugo v6. The focus in v6 is configuration refactoring – making it unified, structured, more explicit.
In the current state it's becoming very hard to maintain and add new features, so as part of Centrifugo v6 I'm implementing a configuration framework which is simple to read and extend in the code. Also, I want to provide a unified way to set secrets for all entities inside arrays – like inside the list of proxies or the list of consumer configurations. And a unified config key validation (shown now as warnings on Centrifugo v5 start).
To show one example of how the config will change, this is what we could have in Centrifugo v5 now:
{
  "token_hmac_secret_key": "XXX",
  "admin_password": "XXX",
  "admin_secret": "XXX",
  "api_key": "XXX",
  "allowed_origins": ["http://localhost:3000"],
  "presence": true,
  "namespaces": [
    {"name": "ns", "presence": true}
  ]
}
In Centrifugo v6 it becomes:
{
  "client": {
    "token": {
      "hmac_secret_key": "XXX"
    },
    "allowed_origins": [
      "http://localhost:3000"
    ]
  },
  "admin": {
    "password": "XXX",
    "secret": "XXX"
  },
  "http_api": {
    "key": "XXX"
  },
  "channel": {
    "without_namespace": {
      "presence": true
    },
    "namespaces": [
      {
        "name": "ns",
        "presence": true
      }
    ]
  }
}
Or in YAML:
---
client:
  token:
    hmac_secret_key: XXX
  allowed_origins:
    - http://localhost:3000
admin:
  password: XXX
  secret: XXX
http_api:
  key: XXX
channel:
  without_namespace:
    presence: true
  namespaces:
    - name: ns
      presence: true
Only the top-level structure is different; nested keys like channel options stay the same. In the code, the configuration will be a single readable Go struct instead of different structs manually crafted using viper key getters.
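The "single readable Go struct" point can be illustrated with a sketch like the following. Field names simply mirror the v6 JSON example above – this is not the actual Centrifugo source, just a minimal demonstration of one-step decoding:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Config mirrors the v6 JSON example above. The layout is illustrative,
// not the real Centrifugo struct.
type Config struct {
	Client struct {
		Token struct {
			HMACSecretKey string `json:"hmac_secret_key"`
		} `json:"token"`
		AllowedOrigins []string `json:"allowed_origins"`
	} `json:"client"`
	Admin struct {
		Password string `json:"password"`
		Secret   string `json:"secret"`
	} `json:"admin"`
	Channel struct {
		WithoutNamespace struct {
			Presence bool `json:"presence"`
		} `json:"without_namespace"`
		Namespaces []struct {
			Name     string `json:"name"`
			Presence bool   `json:"presence"`
		} `json:"namespaces"`
	} `json:"channel"`
}

// parseConfig decodes the whole tree in one step – no per-key getters.
func parseConfig(data []byte) (Config, error) {
	var cfg Config
	err := json.Unmarshal(data, &cfg)
	return cfg, err
}

func main() {
	cfg, err := parseConfig([]byte(`{"channel":{"namespaces":[{"name":"ns","presence":true}]}}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Channel.Namespaces[0].Name) // ns
}
```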
Hello @FZambia!
In my custom server (on top of the Centrifuge library) I use this approach to configure outgoing proxy connections. Maybe some of my ideas will seem interesting to you.
Configure all outgoing connections
# Outgoing connections configuration
connections:
  connection-name-01: # connection name
    nats: # connection type ("nats", "http" or "grpc")
      # options specific for NATS connection
      address:
        - nats://127.0.0.1:4222
        - nats://localhost
      # possible credentials (or nothing if insecure connect)
      jwt-token: "" # priority 0 (if enabled)
      nkey: "" # priority 1 (if enabled), only for NATS
      user: "" # priority 2 (if enabled)
      password: ""
      token: "" # priority 3 (if enabled), only for NATS
      # TLS configuration (if enabled)
      tls:
        cert: "" # path to cert file
        key: ""
        insecure-skip-verify: false
        server-name: ""
        # other TLS options ...
  connection-name-02:
    grpc:
      # options specific for gRPC connection
      url: grpc://127.0.0.1:12000
      credentials-key: authorization
      credentials-value: qwerty
      # any other options ...
      tls:
        # TLS options ...
  connection-name-03:
    http:
      # options specific for HTTP connection
      url: https://127.0.0.1:8443
      user: alex
      password: qwerty
      # any other options ...
Organize connections into pools
# Proxy connections pool configuration
proxies:
  dev-proxy: # proxy pool name
    timeout: "15s" # default timeout 5s (if not set)
    connections:
      connection-name-01:
        endpoint: dev # for NATS or gRPC it means subject "dev.<method>", may be empty
        priority: 1 # default 0, maximum 255 (uint8); if all priorities are equal it means "roundrobin"
        timeout: "2s" # overwrite default timeout; if no response before timeout, call next priority connector
      connection-name-03:
        endpoint: api/dev/proxy # for HTTP it means "<addr>/api/dev/proxy/<method>", may be empty
        priority: 2
        headers:
          - Cookie
  prod-proxy:
    connections:
      connection-name-01:
        endpoint: v1
      connection-name-03:
        endpoint: api/v1/proxy
Define default proxy settings
# Default proxy settings
defaults:
  proxy: # map calls to a proxy from the pool
    connect: prod-proxy
    refresh: prod-proxy
    rpc: dev-proxy
Define overrides for namespaces
namespace:
  personal:
    history-size: 100
    join-leave: true
    force-join-leave: true
    proxy:
      publish: dev-proxy
Thx @matsuev, I guess I found a couple of ideas from your conf to consider:
- namespaces or proxies as a map instead of an array can be a good move, but I have considerations that the array fits better: a name is not limited to being a valid map key in YAML/TOML, and existing configurations won't break. Will try to check whether it's possible without breaking changes.
- removing the granular_proxy_mode option – and somehow combining global and granular proxy configurations to work together. So may be part of this initiative.
But need to stop at some point and find a good balance between changes and user difficulties during v5->v6 config migrations.
Another feature of Centrifugo v6 – the possibility to get a configuration file with defaults for all available configuration options. It will be possible using a command like:
centrifugo defaultconfig -c config.json
centrifugo defaultconfig -c config.yaml
centrifugo defaultconfig -c config.toml
Also, in dry-run mode the result will be printed to STDOUT instead of a file:
centrifugo defaultconfig -c config.json --dry-run
Finally, it's possible to provide this command a base configuration file – the result will then inherit option values from the base file and extend it with defaults for everything else:
centrifugo defaultconfig -c config.json --dry-run --base existing_config.json
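The --base inheritance described above boils down to a recursive merge where base values win and defaults fill the gaps. A rough illustrative sketch of that semantics (my own assumption of how it behaves, not the actual implementation):

```go
package main

import "fmt"

// mergeWithDefaults merges a base config into a defaults config: values
// present in base win, everything else comes from defaults. Nested maps
// are merged recursively. Purely illustrative.
func mergeWithDefaults(defaults, base map[string]any) map[string]any {
	out := make(map[string]any, len(defaults))
	for k, v := range defaults {
		out[k] = v
	}
	for k, v := range base {
		if bm, ok := v.(map[string]any); ok {
			if dm, ok := out[k].(map[string]any); ok {
				out[k] = mergeWithDefaults(dm, bm)
				continue
			}
		}
		out[k] = v
	}
	return out
}

func main() {
	defaults := map[string]any{
		"admin": map[string]any{"password": "", "secret": ""},
	}
	base := map[string]any{
		"admin": map[string]any{"password": "XXX"},
	}
	merged := mergeWithDefaults(defaults, base)
	fmt.Println(merged["admin"].(map[string]any)["password"]) // XXX
}
```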
One more feature of Centrifugo v6 – possibility to use separate Redis configurations for broker functionality and for presence management.
Attaching an example of a full JSON file with default configuration values, generated using the command:
centrifugo defaultconfig -c config.sample.json
In addition to defaultconfig, added a defaultenv command:
❯ ./centrifugo defaultenv
CENTRIFUGO_ADDRESS=""
CENTRIFUGO_ADMIN_ENABLED=false
CENTRIFUGO_ADMIN_EXTERNAL=false
CENTRIFUGO_ADMIN_HANDLER_PREFIX=""
CENTRIFUGO_ADMIN_INSECURE=false
CENTRIFUGO_ADMIN_PASSWORD=""
CENTRIFUGO_ADMIN_SECRET=""
...
It prints all config options as environment variables with their default values to STDOUT. It also supports a base config file to inherit values from:
./centrifugo defaultenv -b config.json
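The mapping from config keys to environment variable names visible in the defaultenv output follows an easy pattern. A tiny sketch of that derivation (illustrative only, assuming dotted config paths as input):

```go
package main

import (
	"fmt"
	"strings"
)

// envName derives an environment variable name from a dotted config path,
// following the pattern visible in the defaultenv output above
// (e.g. "admin.password" -> "CENTRIFUGO_ADMIN_PASSWORD").
func envName(path string) string {
	return "CENTRIFUGO_" + strings.ToUpper(strings.ReplaceAll(path, ".", "_"))
}

func main() {
	fmt.Println(envName("admin.password")) // CENTRIFUGO_ADMIN_PASSWORD
}
```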
Here is what I am thinking to do regarding proxy configuration in v6 at this point. Main points:
- connect and refresh proxies move to the client level, each can be configured with its own set of options.
- channel proxies move to the channel level, and it will be possible to explicitly set a different configuration for each type, which reduces the need to use named_proxies in many cases.
- proxies -> named_proxies – we still need more granular control for channel namespaces, but now it's clear that only channel-related proxies should be set in the array.
- the rpc proxy is also separated, since rpc does not have any relation to the channel.
For now, refactored proxy configuration in a bit different way:
---
client:
  proxy:
    connect:
      enabled: true
      endpoint: http://localhost:3000/centrifugo/connect
    refresh:
      enabled: false
      endpoint: http://localhost:3000/centrifugo/refresh
rpc:
  proxy:
    endpoint: http://localhost:3000/centrifugo/rpc
  without_namespace:
    proxy_enabled: true
  namespaces:
    - name: xxx
      proxy_enabled: true
      proxy_name: example
channel:
  proxy:
    subscribe:
      endpoint: http://localhost:3000/centrifugo/subscribe
    publish:
      endpoint: http://localhost:3000/centrifugo/publish
    sub_refresh:
      endpoint: http://localhost:3000/centrifugo/sub_refresh
    subscribe_stream:
      endpoint: grpc://localhost:3000
  without_namespace:
    subscribe_proxy_enabled: true
  namespaces:
    - name: notification
      subscribe_proxy_enabled: true
      subscribe_proxy_name: example
proxies:
  - name: example
    endpoint: grpc://localhost:12000
One more thing considering for Centrifugo v6 – a feature called headers emulation, to be used in Centrifugo event proxies. It should become a part of the centrifuge-js SDK.
The WebSocket browser API does not allow setting custom HTTP headers, which makes implementing authentication for browser WebSocket connections harder. With Centrifugo JWT authentication it works pretty well, but proxy still requires careful thinking each time.
Centrifugo can help here by providing a feature called headers emulation. Centrifugo users can provide a custom headers map to the browser SDK (centrifuge-js) constructor; these headers are then sent in the first message to Centrifugo, and Centrifugo has an option to translate them to native HTTP headers of the outgoing proxy request – abstracting away the specifics of the WebSocket protocol in a secure way. This can drastically simplify the integration from the auth perspective since the backend may re-use existing code.
I already have an MVP, so maybe (I will still evaluate for some time) it will be possible to do sth like this soon in centrifuge-js:
const centrifuge = new Centrifuge(
  "ws://host/connection/websocket",
  { headers: { "Authorization": "Bearer XXX" } });
And Centrifugo will deliver Authorization as an HTTP header in the connect proxy request, and it can deliver it in all other proxy request types too.
It would be nice if we could dynamically update the allowed_origins list – the ability to reload this config, or making it work like proxy_connect_endpoint, for example an allowed_origins_endpoint: if it's defined then the list of origins will be returned from it.
@holoyan hello, could you explain better? Do you have frequently changing origins? And I did not understand the second part about making it like proxy_connect_endpoint.
Hello @FZambia , sorry if my previous comment wasn't clear. Let me explain it another way.
Basically, we have a project where domains frequently change. New domains can be added to the list at any time, and we need these changes to be reflected in the Centrifugo server settings.
As a solution, I was suggesting something similar to proxy_connect_endpoint. Every time a new connection is established, Centrifugo would make an API call to that endpoint (our backend) to retrieve the list of allowed origins. For example:
config.json
{
  "allowed_origins_endpoint": "https://example.com/api/origins"
}
@holoyan, thanks, clear now. Maybe you have third-level domains and can use a mask for the allowed origins, it's supported already – like "allowed_origins": ["https://*.example.com"] – so connections from https://xxx.example.com and https://yyy.example.com will be accepted. Or are the domains from where clients connect absolutely different? And another question if the mask does not work – how many different origins do you have?
Domains are absolutely different, so we currently use a wildcard (*). The checks are performed only during the proxy_connect process, where we proxy the Origin header and perform some validation.
@holoyan I see, do you want to remove that process, stop using connect proxy and move to JWT auth? Is this the end goal? If you will keep connect proxy for auth – then current solution seems viable.
Generally, it's better to open a separate issue for this and describe everything there. It's not directly related to the Centrifugo v6 roadmap – it requires a separate discussion. Please provide as many details as you can: the number of origins you expect to have, the rate at which origins are added/removed, whether a cache with configurable TTL is acceptable for origins, and so on.
This is an issue to collect information about Centrifugo v6 in one place. There is no date for the release defined yet. This post will be updated.