Open. Midou36O opened this issue 2 years ago.
I would love this as well. The current Docker setup does not work. Trying to create an account results in an unhelpful network error. Either the app needs to be fixed or the documentation needs to be updated.
Unable to deploy it either. Stuck at a nondescript "Network Error" when trying to log in remotely. Locally it works just fine, however.
@maxwelljens: You need to edit the .env config file. Otherwise it defaults to an address which resolves to localhost. When you're using it locally, that of course resolves to the server itself; outside of that network, though, it would resolve to the visitor's own localhost, where the server is not hosted. I'm not quite sure why it can't use relative URLs, but the readme does mention that the default config only works for localhost.
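For reference, the URL-related variables to override are the ones that appear later in this thread. A minimal sketch with a placeholder domain (chat.example.com is hypothetical; adjust the scheme and subdomains to your setup):

```
# Sketch of the .env overrides, using a hypothetical domain.
REVOLT_APP_URL=https://chat.example.com
REVOLT_PUBLIC_URL=https://api.chat.example.com
VITE_API_URL=https://api.chat.example.com
REVOLT_EXTERNAL_WS_URL=wss://ws.chat.example.com
AUTUMN_PUBLIC_URL=https://autumn.chat.example.com
JANUARY_PUBLIC_URL=https://january.chat.example.com
```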
@sigaloid Thanks for your reply.
I am aware of the .env file configuration. I changed every address accordingly and port forwarded every relevant port. It doesn't work in spite of that.
Try pressing F12 when logging in. That will let you see which network address it's going to. If it's still "local.revolt.chat:8000" then it's an issue with the config not being loaded correctly somehow. Otherwise it could be a bug in your web proxy
Oops. I just tried deploying it and hit the same problem :p it seems my .env config is not being picked up in the docker-compose.
@sigaloid Interesting. Good to know I am not the only one with this problem. Do let everyone know in this issue if you find a solution or workaround. Thanks.
Okay, so: basic setup, just cloned the repo, edited .env to have different URLs for the API, but it still sends requests to http://local.revolt.chat:8000/auth/account/create. This seems to me like a problem with the environment variables, because no configuration changes are actually going through to the Delta server.
Basically, changing the API server in the .env has zero effect on the resulting containers. I did this on a fresh install. It's looking like a bug in the docker-compose setup to me at this point; maybe the env variables in the docker-compose.yml are not being properly retrieved?
Or maybe the ones in revoltchat/delta/docker-compose.yml supersede it... That would make sense, because those are the ones being used.
Anyway, I have yet to find a solution. Unfortunately this is blocking me from deploying this awesome software ;(
I just cloned the repo and am able to get a local version up and running just using docker compose up.
I did have to adjust some of the port bindings for my host machine due to conflicts, but that was obvious in the logs. I.e. I changed 5000 to 42069 for the web app: 5000:5000 became 42069:5000, and I can access Revolt using 127.0.0.1:42069 or local.revolt.chat:42069.
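For context, that remap corresponds to a one-line change in the web service's port mapping in docker-compose.yml (a sketch; the service name is assumed from the stock compose file):

```yaml
web:
  ports:
    - "42069:5000"   # host:container; only the host side changes
```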
What do the logs look like when you run docker compose up?
For example, try changing REVOLT_APP_URL in the .env to some random domain and see if that's where the network requests go when you restart it. (Press F12 and try registering.)
This is the scenario I had trouble with
Same issue over here, nothing I do is working. Tried adding URLs in the docker-compose.yml environment but to no avail
https://asciinema.org/a/fqCutcwDYY2jK4mMcI6ewj1W8
Okay, so I re-pulled the docker container on my local PC rather than my VPS (sudo docker pull revoltchat/server:master) and did this to the .env file (even though it's gibberish). Oddly enough, when I attempted to register, it successfully sent it to the new domain I entered in the .env file. This means that I did succeed in making my config changes save.
I don't even know what I did differently (I did basically everything the same). And I still cannot make it work on the server.
I've been trying to fix this and running into a wall for days now. It's gotta be something really basic I'm overlooking.
Starting from a fresh Docker install, my Fedora box can follow the directions and persist config changes, but my Debian box cannot. Side by side, completely brand new environments... What OS are you on, @Razorback360 @mirkoRainer @maxwelljens?
@sigaloid My server is on Fedora 34.
macOS, Apple Silicon
I'm on Ubuntu.
I think you need to refresh the cache. I tried opening the website in the browser's private browsing (incognito) mode, and the problem was solved.
Here is an nginx example.
map $http_host $revolt_upstream {
    example.com          http://127.0.0.1:5000;
    api.example.com      http://127.0.0.1:8000;
    ws.example.com       http://127.0.0.1:9000;
    autumn.example.com   http://127.0.0.1:3000;
    january.example.com  http://127.0.0.1:7000;
    vortex.example.com   http://127.0.0.1:8080;
}

server {
    listen 80;
    listen 443 ssl http2;
    server_name example.com *.example.com;
    # SSL

    if ($http_upgrade) {
        # Rewrite WebSocket upgrade requests onto a dedicated internal path.
        # The random suffix just prevents conflicts with other services.
        rewrite ^(.*)$ /ws_78dd759593f041bc970fd7eef8b0c4af$1;
    }

    location / {
        proxy_pass $revolt_upstream;
        proxy_set_header Host $host;
    }

    location /ws_78dd759593f041bc970fd7eef8b0c4af/ {
        # Note the trailing slash here.
        proxy_pass $revolt_upstream/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Connection $http_connection;
        proxy_set_header Upgrade $http_upgrade;
        # Important: prevents long-idle WebSocket connections from being
        # disconnected by the read timeout.
        proxy_read_timeout 24h;
    }
}
Wow. My goodness. Is this some sort of caching issue? I disabled cache, refreshed, rebooted, etc., and nothing fixed the issue. Are these resources aggressively cached?
Regardless, the fix is to NOT START THE DOCKER CONTAINERS until you've set up the .env file. I started the containers to make sure the images were downloaded, then stopped them to configure it. Thank you so much @SurpriseLon, that saved me from a few more days of confusion!!
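To make the "configure before first start" rule harder to get wrong, a small pre-flight check can refuse to proceed while the .env still contains the stock local.revolt.chat defaults. This is just an illustrative sketch; the function name and file paths are made up:

```shell
# check_env: fail if a Revolt .env still points at the stock local defaults.
# Hypothetical helper; run it before `docker compose up`.
check_env() {
  env_file="$1"
  if grep -q 'local\.revolt\.chat' "$env_file"; then
    echo "stale default hosts found in $env_file; edit it before starting containers" >&2
    return 1
  fi
  echo "ok: $env_file"
}

# Demo on a generated sample file:
printf 'REVOLT_APP_URL=https://chat.example.com\n' > /tmp/sample.env
check_env /tmp/sample.env
```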
The solution mentioned here does not work for me. I have completely deleted the docker-compose stack, removed all containers and their data, configured the .env, then deployed the compose again, and yet it is still the same issue. I am running Ubuntu 20.04.3 LTS.
A few things I can think of: are you editing the .env.example? Make sure you cp .env.example .env and change the config in .env. Then, try in incognito mode or in a fresh browser: the JavaScript containing the server address is heavily cached, and the filename isn't dynamically changed based on its hash, so the browser will never re-request it. I even did F12 > disable cache and it didn't help. Only trying in incognito plus Ctrl+F5'ing helped.
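The root cause described above (a bundle filename that never changes, so browsers keep serving the cached copy) is exactly what content-hashed filenames avoid: when the content changes, the name changes, and the browser must re-request it. A quick sketch of the idea, with made-up values:

```shell
# Illustration of content-hashed bundle names, the scheme that avoids stale
# caches: a change to the file's content changes its name too.
# Values here are made up for the example; this is not Revolt's build output.
content='VITE_API_URL=https://chat.example.com'
hash=$(printf '%s' "$content" | sha256sum | cut -c1-8)
echo "vendor.${hash}.js"
```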
(Quoting the nginx example above.)
On this nginx config, would I need to make all those domains? Also, can I not just reverse proxy those domains to the ports that they are on?
Yes, you can.
So this doesn't exactly answer my question; there were two in the sentence, and just a "yes" doesn't help. Sorry for any confusion.
Hello everyone.
Sorry for my inactivity here (I had no PC). I've since put HTTPS on my server. Problem: now it's giving me "network error". Could my other site (also on HTTPS) cause this problem?
I'm also running into a myriad of issues when trying to convert an insecure instance of Revolt to HTTPS. I'm using the nginx config detailed in this same thread. Other than that, I've made no changes to my .env or docker-compose.yml beyond those that were required to get the insecure HTTP-hosted version of my Revolt chat running. Any ideas? There seems to be an issue with an HTTP GET, and a "Blocked loading mixed active content "http://[domain]:[port]/"" error in the web dev console that I cannot resolve.
(Quoting the earlier advice: you need to edit the .env config file; otherwise it defaults to an address that resolves to localhost.)
I can see this screen, but I get this error message even before entering the password.
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://local.revolt.chat:8000/.
That's on Firefox. On Chrome I don't get exactly the same error messages:
Failed to load resource: net::ERR_SOCKET_NOT_CONNECTED :8000/1:
Uncaught (in promise) Error: Network Error createError.js:16
at Np (vendor.aeef7222.js:12)
at XMLHttpRequest.u.onerror (vendor.aeef7222.js:13)
I use Ubuntu 20.04.2 LTS.
I tried to do a complete Docker reset, but the containers still didn't work.
Let that sink in: not even docker system prune -a fixed the problem of the .env not applying.
Just dropping in to mention that I'm consolidating all issues regarding self-hosting into this issue. I'll probably eventually be able to get around to writing something up but I currently don't have any time.
HE RESPONDS, finally this will actually be fixed
"Thank you for finally finding time to respond, as I understand that life happens and volunteer open source projects aren't always a priority." FTFY. ;)
Voso hosting has been an issue for me, @insertish. Everything else I can run fine, even using Vercel for the front end and just using the VPS, of course, for the backend.
I strongly recommend against hosting the voice server, since it's still under heavy development, but if you want to try anyway, please use the corresponding issue to find information: https://github.com/revoltchat/vortex/issues/23#issuecomment-1113594595
"Thank you for finally finding time to respond, as I understand that life happens and volunteer open source projects aren't always a priority." FTFY. ;)
I am actually putting in a couple dozen hours every month, despite being in exam season and having to manage another major project and a side job. However, all of that time is currently going into polishing the next major update, which is almost ready for release.
Self-hosted instances can already migrate to it but may encounter minor bugs; I think we've gotten through most of the issues.
Glad this is now getting attention :)
Same, same problem :o thanks for working on it
I've set up a self-hosted instance, but I'm getting some interesting "not implemented yet" errors when trying to register:
thread 'rocket-worker-thread' panicked at 'not yet implemented: <emailaddress>', /usr/local/cargo/git/checkouts/rauth-d390fb78242db219/8a3791a/crates/rauth/src/database/dummy.rs:28:9
stack backtrace:
0: 0x557d8e38ea0d - std::backtrace_rs::backtrace::libunwind::trace::h7401910188046071
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5
1: 0x557d8e38ea0d - std::backtrace_rs::backtrace::trace_unsynchronized::h8dff7aa2924f24e9
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x557d8e38ea0d - std::sys_common::backtrace::_print_fmt::h07ca90b544a24df2
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/sys_common/backtrace.rs:66:5
3: 0x557d8e38ea0d - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h9331308a3088c05f
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/sys_common/backtrace.rs:45:22
4: 0x557d8e3b553c - core::fmt::write::h61c349b2e024d424
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/core/src/fmt/mod.rs:1196:17
5: 0x557d8e387651 - std::io::Write::write_fmt::h84ca4b318d095fa7
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/io/mod.rs:1654:15
6: 0x557d8e3905b5 - std::sys_common::backtrace::_print::hd3c72b1e40e79baf
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/sys_common/backtrace.rs:48:5
7: 0x557d8e3905b5 - std::sys_common::backtrace::print::ha4da5a270383a62c
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/sys_common/backtrace.rs:35:9
8: 0x557d8e3905b5 - std::panicking::default_hook::{{closure}}::h8c6cca5f381817ba
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/panicking.rs:295:22
9: 0x557d8e3902d6 - std::panicking::default_hook::he91089be4e889ce2
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/panicking.rs:314:9
10: 0x557d8e390bfa - std::panicking::rust_panic_with_hook::he7ddebf187262887
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/panicking.rs:702:17
11: 0x557d8e390a37 - std::panicking::begin_panic_handler::{{closure}}::h8722b5623900e5b6
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/panicking.rs:588:13
12: 0x557d8e38eec4 - std::sys_common::backtrace::__rust_end_short_backtrace::hb889cd97ae575020
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/sys_common/backtrace.rs:138:18
13: 0x557d8e390769 - rust_begin_unwind
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/panicking.rs:584:5
14: 0x557d8cbff193 - core::panicking::panic_fmt::h81550e582c787e06
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/core/src/panicking.rs:142:14
15: 0x557d8d99bae3 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::h2c0569324677ec48
16: 0x557d8d1233af - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::hfaeddd5adb8c12b8
17: 0x557d8ce7a52b - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::h0982a9e6e79f51cc
18: 0x557d8ce6094f - rocket::server::hyper_service_fn::{{closure}}::{{closure}}::h96c0d24670f766fa
19: 0x557d8cfd6a9d - tokio::runtime::task::harness::poll_future::h75ea8736854dc9ef
20: 0x557d8cfd8cee - tokio::runtime::task::harness::Harness<T,S>::poll::hfbe63f9e93ae0f86
21: 0x557d8e3389cf - std::thread::local::LocalKey<T>::with::h674cec49ffb2e666
22: 0x557d8e353b43 - tokio::runtime::thread_pool::worker::Context::run_task::h1297ed749e40de3a
23: 0x557d8e3530ae - tokio::runtime::thread_pool::worker::Context::run::ha0f93c6c9dc4d14d
24: 0x557d8e35c737 - tokio::macros::scoped_tls::ScopedKey<T>::set::h7d3248257f0528d6
25: 0x557d8e352aeb - tokio::runtime::thread_pool::worker::run::h4d79910df954c3ec
26: 0x557d8e348c71 - <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll::h8bbed5a09b69e370
27: 0x557d8e337c29 - tokio::runtime::task::harness::Harness<T,S>::poll::h12208adb3417de05
28: 0x557d8e34855a - tokio::runtime::blocking::pool::Inner::run::hf975627a590c0261
29: 0x557d8e33bb72 - std::sys_common::backtrace::__rust_begin_short_backtrace::h2771ddd274e841a6
30: 0x557d8e34011f - core::ops::function::FnOnce::call_once{{vtable.shim}}::hcf05d4d4a25edc3b
31: 0x557d8e395783 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hd06934b23a7f2fcd
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/alloc/src/boxed.rs:1951:9
32: 0x557d8e395783 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hb092aa1c4d31e227
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/alloc/src/boxed.rs:1951:9
33: 0x557d8e395783 - std::sys::unix::thread::Thread::new::thread_start::hea70d8d092b89098
at /rustc/bb8c2f41174caceec00c28bc6c5c20ae9f9a175c/library/std/src/sys/unix/thread.rs:108:17
34: 0x7f2765769fa3 - start_thread
35: 0x7f2765510eff - clone
36: 0x0 - <unknown>
ERROR _ > Handler create_account panicked
When I look into https://github.com/insertish/rauth/blob/master/crates/rauth/src/database/dummy.rs I see a whole lot of todo! macros; should I downgrade the container versions I'm using?
Never mind, it seems I wasn't bringing all containers online correctly (I'm doing this in K8s); it's running without any errors now :)
Same issue for me.
Changes to the .env aren't being compiled into the app. For some reason it keeps trying to connect to http:// <url> :8000 despite me removing any references to the port in the .env.
Here are my configs:
.env
MONGODB=mongodb://database
REDIS_URI=redis://redis/
REVOLT_APP_URL=https://revolt.stokoe.dev
REVOLT_PUBLIC_URL=https://revolt.stokoe.dev/api
VITE_API_URL=https://revolt.stokoe.dev/api
REVOLT_EXTERNAL_WS_URL=https://revolt.stokoe.dev/ws
AUTUMN_PUBLIC_URL=https://revolt.stokoe.dev/autumn
JANUARY_PUBLIC_URL=https://revolt.stokoe.dev/january
# VOSO_PUBLIC_URL=https://revolt.stokoe.dev/vortex
REVOLT_UNSAFE_NO_CAPTCHA=1
# REVOLT_HCAPTCHA_KEY=0x0000000000000000000000000000000000000000
# REVOLT_HCAPTCHA_SITEKEY=10000000-ffff-ffff-ffff-000000000001
REVOLT_UNSAFE_NO_EMAIL=1
# REVOLT_SMTP_HOST=smtp.example.com
# REVOLT_SMTP_USERNAME=noreply@example.com
# REVOLT_SMTP_PASSWORD=CHANGEME
# REVOLT_SMTP_FROM=Revolt <noreply@example.com>
REVOLT_INVITE_ONLY=0
REVOLT_MAX_GROUP_SIZE=150
REVOLT_VAPID_PRIVATE_KEY=xxxxxxxxxxxxxxxxxxxx
REVOLT_VAPID_PUBLIC_KEY=xxxxxxxxxxxxxxxxx
AUTUMN_S3_REGION=minio
AUTUMN_S3_ENDPOINT=http://minio:9000
MINIO_ROOT_USER=minioautumn
MINIO_ROOT_PASSWORD=minioautumn
AWS_ACCESS_KEY_ID=minioautumn
AWS_SECRET_ACCESS_KEY=minioautumn
# VOSO_MANAGE_TOKEN=CHANGEME
nginx
map $http_host $revolt_upstream {
    revolt.stokoe.dev          http://127.0.0.1:5000;
    revolt.stokoe.dev/api      http://127.0.0.1:8000;
    revolt.stokoe.dev/ws       http://127.0.0.1:9000;
    revolt.stokoe.dev/autumn   http://127.0.0.1:3000;
    revolt.stokoe.dev/january  http://127.0.0.1:7000;
    revolt.stokoe.dev/vortex   http://127.0.0.1:8080;
}

server {
    server_name revolt.stokoe.dev *.revolt.stokoe.dev;
    listen 80;
    listen 443 ssl http2;

    ssl_certificate /etc/letsencrypt/live/stokoe.dev-0002/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/stokoe.dev-0002/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    if ($http_upgrade) {
        # Rewrite WebSocket upgrade requests onto a dedicated internal path.
        rewrite ^(.*)$ /ws_78dd759593f041bc970fd7eef8b0c4af$1;
    }

    location / {
        proxy_pass $revolt_upstream;
        proxy_set_header Host $host;
    }

    location /ws_78dd759593f041bc970fd7eef8b0c4af/ {
        # Note the trailing slash here.
        proxy_pass $revolt_upstream/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Connection $http_connection;
        proxy_set_header Upgrade $http_upgrade;
        # Important: prevents long-idle WebSocket connections from being
        # disconnected by the read timeout.
        proxy_read_timeout 24h;
    }
}
And sometimes (100% of the time if hard-refreshing):
I've tried docker system prune -a, and docker volume rm $(docker volume ls -q) to remove all volumes, plus rm -rf revolt and rebuilding the .env. Nothing seems to work.
A lot of these reported issues can be resolved by clearing your browser cache after making changes to the .env. Or run each attempt in a new private browsing tab.
I might be able to write a quick guide for Caddy v2 if there is any interest.
I'd love a Caddy v2 config/guide, and/or just a complete guide to getting Revolt running on nginx; it'd be nice to have.
Here are the relevant parts of my Caddy config. It's not complicated, because I didn't want to deal with rewrites or anything fancy. It requires setting up multiple subdomains, in my case with Cloudflare. I also use a wildcard cert with a custom Caddy v2 build including said Cloudflare DNS component. I have my container with this here with some basic docs to get it running if you want to go that route.
I did not set up Vortex, so voice is not working. It looks like it requires building the container yourself, which I wasn't going to do, because after I got into the web client and had a friend join, we realized that video/screen sharing was not implemented (and I couldn't find any talk of adding it). So I will not be using it at this time, but will monitor the progress for the future.
So I have set up revolt, revoltapi, revoltjanuary, and revoltws as subdomains. They will all be using HTTPS with this Caddyfile config, so no additional port forwarding (outside of 443/80) was necessary for me. You can omit the wildcard_cert directives if you don't want to worry about using a wildcard DNS challenge with Let's Encrypt, of course.
You'll want to change the IP addresses to your server's IP. The X-Frame-Options directive fixed my CORS issues, so make sure you change that appropriately as well. Alternatively, Access-Control-Allow-Origin might also solve CORS.
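On that last point: the response header that actually governs cross-origin requests is Access-Control-Allow-Origin, so if X-Frame-Options alone doesn't do it, a variant of the API site block could set it instead. A hedged sketch using the same hypothetical domains and addresses as the Caddyfile below:

```
revoltapi.server.cloud {
    reverse_proxy 192.168.69.150:8001 {
        header_up X-Real-IP {remote_host}
        # Allow the web app origin to read API responses cross-origin.
        header_down Access-Control-Allow-Origin "https://revolt.server.cloud"
    }
}
```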
(wildcard_cert) {
    tls my.account@gmail.com {
        dns cloudflare <cloudflare_api_token>
        resolvers 1.1.1.1
    }
}

revolt.server.cloud {
    encode gzip
    import wildcard_cert
    reverse_proxy 192.168.69.150:5000 {
        header_up X-Real-IP {remote_host}
    }
}

revoltapi.server.cloud {
    encode gzip
    import wildcard_cert
    reverse_proxy 192.168.69.150:8001 {
        header_up X-Real-IP {remote_host}
        header_down X-Frame-Options "allow-from https://revolt.server.cloud"
    }
}

revoltws.server.cloud {
    encode gzip
    import wildcard_cert
    reverse_proxy 192.168.69.150:9000 {
        header_up X-Real-IP {remote_host}
        header_down X-Frame-Options "allow-from https://revolt.server.cloud"
    }
}

revoltautumn.server.cloud {
    encode gzip
    import wildcard_cert
    reverse_proxy 192.168.69.150:3000 {
        header_up X-Real-IP {remote_host}
        header_down X-Frame-Options "allow-from https://revolt.server.cloud"
    }
}

revoltjanuary.server.cloud {
    encode gzip
    import wildcard_cert
    reverse_proxy 192.168.69.150:7000 {
        header_up X-Real-IP {remote_host}
        header_down X-Frame-Options "allow-from https://revolt.server.cloud"
    }
}
Here's my docker-compose.yml. I think I only changed a port or two due to conflicts, and of course the directory mount points! I was running this on an Unraid server with docker-compose. Make sure you put the .env file in the same directory as your docker-compose.yml too.
version: '3.8'

services:
  # MongoDB database
  database:
    image: mongo
    restart: always
    volumes:
      - ./data/db:/mnt/user/appdata/revolt/data/db

  # Redis server
  redis:
    image: eqalpha/keydb
    restart: always

  # API server (delta)
  api:
    image: ghcr.io/revoltchat/server:20220715-1
    env_file: .env
    depends_on:
      - database
      - redis
    ports:
      - "8001:8000"
    restart: always

  # Events service (quark)
  events:
    image: ghcr.io/revoltchat/bonfire:20220715-1
    env_file: .env
    depends_on:
      - database
      - redis
    ports:
      - "9000:9000"
    restart: always

  # Web App (revite)
  web:
    image: ghcr.io/revoltchat/client:master
    env_file: .env
    ports:
      - "5000:5000"
    restart: always

  # S3-compatible storage server
  minio:
    image: minio/minio
    command: server /data
    env_file: .env
    volumes:
      - ./data/minio:/data
    ports:
      - "10000:9000"
    restart: always

  # Create buckets for minio.
  createbuckets:
    image: minio/mc
    depends_on:
      - minio
    env_file: .env
    entrypoint: >
      /bin/sh -c "
      while ! curl -s --output /dev/null --connect-timeout 1 http://minio:9000; do echo 'Waiting minio...' && sleep 0.1; done;
      /usr/bin/mc alias set minio http://minio:9000 $MINIO_ROOT_USER $MINIO_ROOT_PASSWORD;
      /usr/bin/mc mb minio/attachments;
      /usr/bin/mc mb minio/avatars;
      /usr/bin/mc mb minio/backgrounds;
      /usr/bin/mc mb minio/icons;
      /usr/bin/mc mb minio/banners;
      /usr/bin/mc mb minio/emojis;
      exit 0;
      "

  # File server (autumn)
  autumn:
    image: ghcr.io/revoltchat/autumn:1.1.5
    env_file: .env
    depends_on:
      - database
      - createbuckets
    environment:
      - AUTUMN_MONGO_URI=mongodb://database
    ports:
      - "3000:3000"
    restart: always

  # Metadata and image proxy (january)
  january:
    image: ghcr.io/revoltchat/january:master
    ports:
      - "7000:7000"
    restart: always
Here's my .env. Note that REVOLT_EXTERNAL_WS_URL was changed to wss:// because the site would be served over HTTPS, which does not allow the original unencrypted ws:// connection from an HTTPS page.
##
## Quark configuration
##
# MongoDB
MONGODB=mongodb://database
# Redis
REDIS_URI=redis://redis/
# URL to where the Revolt app is publicly accessible
REVOLT_APP_URL=https://revolt.server.cloud
# URL to where the API is publicly accessible
REVOLT_PUBLIC_URL=https://revoltapi.server.cloud
VITE_API_URL=https://revoltapi.server.cloud
# URL to where the WebSocket server is publicly accessible
REVOLT_EXTERNAL_WS_URL=wss://revoltws.server.cloud
# URL to where Autumn is publicly available
AUTUMN_PUBLIC_URL=https://revoltautumn.server.cloud
# URL to where January is publicly available
JANUARY_PUBLIC_URL=https://revoltjanuary.server.cloud
# URL to where Vortex is publicly available
# VOSO_PUBLIC_URL=https://voso.server.cloud
##
## hCaptcha Settings
##
# If you are sure that you don't want to use hCaptcha, set to 1.
REVOLT_UNSAFE_NO_CAPTCHA=1
# hCaptcha API key
# REVOLT_HCAPTCHA_KEY=0x0000000000000000000000000000000000000000
# hCaptcha site key
# REVOLT_HCAPTCHA_SITEKEY=10000000-ffff-ffff-ffff-000000000001
##
## Email Settings
##
# If you are sure that you don't want to use email verification, set to 1.
REVOLT_UNSAFE_NO_EMAIL=1
# SMTP host
# REVOLT_SMTP_HOST=smtp.example.com
# SMTP username
# REVOLT_SMTP_USERNAME=noreply@example.com
# SMTP password
# REVOLT_SMTP_PASSWORD=CHANGEME
# SMTP From header
# REVOLT_SMTP_FROM=Revolt <sagiri@server.cloud>
##
## Application Settings
##
# Whether to only allow users to sign up if they have an invite code
REVOLT_INVITE_ONLY=0
# Maximum number of people that can be in a group chat
REVOLT_MAX_GROUP_SIZE=150
# VAPID keys for push notifications
# Generate using this guide: https://gitlab.insrt.uk/revolt/delta/-/wikis/vapid
# --> Please replace these keys before going into production! <--
REVOLT_VAPID_PRIVATE_KEY=Generate your own key
##
## Autumn configuration
##
# S3 Region
AUTUMN_S3_REGION=minio
# S3 Endpoint
AUTUMN_S3_ENDPOINT=http://minio:9000
# MinIO Root User
MINIO_ROOT_USER=minioautumn
# MinIO Root Password
MINIO_ROOT_PASSWORD=minioautumn
# AWS Access Key ID
AWS_ACCESS_KEY_ID=minioautumn
# AWS Secret Key
AWS_SECRET_ACCESS_KEY=minioautumn
##
## Vortex configuration
##
# VOSO_MANAGE_TOKEN=Your_secret_token
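Since several people in this thread found that their .env edits were silently ignored, here is a quick diagnostic sketch: before starting the stack, scan the env file for the localhost default that produces the `http://local.revolt.chat:8000` requests described above. The file path and the default hostname are assumptions based on the symptoms reported in this issue.

```shell
#!/bin/sh
# check_env: warn if an env file still contains the localhost default
# that makes the client call http://local.revolt.chat:8000.
check_env() {
  if grep -q 'local\.revolt\.chat' "$1"; then
    echo "WARNING: $1 still contains local.revolt.chat defaults"
  else
    echo "OK: no localhost defaults found in $1"
  fi
}

# Example against a throwaway file (point it at your real .env instead):
printf 'REVOLT_PUBLIC_URL=https://revoltapi.server.cloud\n' > /tmp/revolt.env
check_env /tmp/revolt.env
# prints: OK: no localhost defaults found in /tmp/revolt.env
```

If the file looks right but the containers still use the defaults, run `docker compose config` to see what Compose actually substitutes; note that Compose only reads a file literally named `.env` in the directory it is invoked from.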
Has anyone got it running through nginx and HTTPS (Let's Encrypt)? The last time I tested the stack was back in April, when HTTPS didn't work for me at all :/
I'd appreciate a set of working configs very much.
Hi there, has anyone already got something working through nginx HTTPS with SWAG? Thank you!
Hey, I'm experiencing an issue with logging in. A user can create an account successfully, and it seems like they can log in, but they do not get a session. When a new user logs in for the first time they're presented with a username field; upon entering a desired username it errors out.
After a user refreshes the page they're presented with the log-in screen again, indicating their session isn't saved, and they get an error after logging in again.
Is something not configured correctly or is this a known issue?
Edit: This is now fixed by changing to subdomains instead of domain paths.
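Since the fix above was moving each service to its own subdomain, here is a sketch of what the corresponding nginx server blocks could look like, using the subdomains from the .env posted earlier. The upstream container names (`api`, `events`) and ports are assumptions; match them to the service names and ports in your docker-compose.yml, and repeat the pattern for Autumn and January.

```nginx
# API subdomain (upstream name and port are assumptions)
server {
    listen 443 ssl;
    server_name revoltapi.server.cloud;

    # ssl_certificate / ssl_certificate_key (e.g. from Let's Encrypt) go here

    location / {
        proxy_pass http://api:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# WebSocket subdomain: without the Upgrade/Connection headers, clients
# can log in but never receive events.
server {
    listen 443 ssl;
    server_name revoltws.server.cloud;

    location / {
        proxy_pass http://events:9000;   # container name assumed
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```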
Definitely not trying to be discouraging here; this project is doing great work. I do want to point out, however, that the main reason we as a company are interested in moving off Discord to something open source is the self-hosting angle.
For data protection this is a major win. The readme of this repo and the apparent lack of TLC it gets, as apparent from this year-old request, are very telling that Revolt is going to shift toward a SaaS business model or otherwise discourage self-hosting. I hope that you reconsider and find a way of monetizing self-hosted open source instead; IMO discouraging self-hosting will be a major hurdle to project adoption.
Let me make this very clear: I am, for the most part, a solo dev on this project. I account for around 92% of the code and manage pretty much everything; there is a lot of work left to do, and it's not feasible for me to maintain every single aspect of this project consistently.
Maintaining just revolt.chat on its own is already pretty demanding, and even I don't have the deployment fully figured out yet, so I don't know how I'm supposed to advise others on setting up their instances when I haven't reached a point where I'm comfortable saying this is how you should run things in production.
There's also a bunch of things I want to improve with the configuration as well which will probably land whenever I get around to it.
I want to really stress (in response to "as apparent from this year old request") that I am an effectively solo developer who is currently studying at university; there are at least 200 issues on the revite repository, and I just don't have the time to cover all of this.
I do also want to touch on this:
Thanks for the response. Well, as I say, I would like to encourage you; I'm not claiming to know your motives or plans. Based on your response, perhaps this was purely situational. Frankly, I assumed you had more help, given the quality of the work. This project has every capacity to be a great startup already, and startups taking this stance would generally be doing so with the plan of offering a managed-hosting SaaS product (which is usually the conclusion drawn when self-hosting is discouraged in an AGPL project).
I definitely hear you; I overestimated how many hands you had helping. I am just describing the first impression for a company that wants to support the project. This repo is effectively a marketing funnel for businesses running open software stacks; whether that is intentional or not, it is very good at that purpose.
If you do go the SaaS route, more power to you. If you are not trying to monetize at all, that's understandable too. However, I still submit that if you work out a monetization strategy for open-source self-hosting, many of us will be happy to pay even while development is ongoing.
I believe you can gain more contributors, sponsors and wider adoption by doing so.
Unfortunately we do not have much experience with Rust, though a few C/C++ concepts translate. We do have extensive experience with deployments, web languages, and JS/TS. Happy to lend a hand if we can ease any pain points, whether techie or business...y
If anyone is still having issues with the reverse proxy, try using Nginx Proxy Manager. You can run it as a Docker container and just add it to the same Docker network as your Revolt containers. It might be overkill, but it works pretty well.
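Attaching the proxy to the Revolt network could look something like the compose fragment below. This is a sketch: the network name `revolt_default` is an assumption (Compose derives the default network name from the project directory, so check `docker network ls` for yours), and the service name `npm` is arbitrary.

```yaml
# docker-compose.yml for Nginx Proxy Manager, joined to the existing
# Revolt compose network so it can reach services by container name.
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      - "80:80"
      - "443:443"
      - "81:81"      # admin UI
    networks:
      - revolt_default

networks:
  revolt_default:
    external: true   # created by the Revolt compose project, name assumed
```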
I've updated the README to include more information regarding setting up a custom domain and hopefully streamlined the whole process by putting a reverse proxy in front of everything.
I'm not really sure what, if anything, is missing from the repository at this point.
Hello, I tried self-hosting Revolt through Docker, and sure enough the guide provided in the readme worked. The problem is that when creating an account on localhost I always get an error; furthermore, trying to proxy it through nginx (using my own knowledge, which isn't really this advanced) made things worse, as I didn't even know what to proxy and what not. Would it be possible to write an actual complete guide to self-hosting Revolt? This definitely would help people decentralize the service!