mozilla / fxa

Monorepo for Mozilla Accounts (formerly Firefox Accounts)
https://mozilla.github.io/ecosystem-platform/
Mozilla Public License 2.0
602 stars 210 forks

Update self-hosting docs #3652

Closed jaredhirsch closed 3 years ago

jaredhirsch commented 4 years ago

There are some self-hosting docs (https://mozilla-services.readthedocs.io/en/latest/howtos/run-fxa.html) that seem to date to before the monorepo existed.

We should get those up to date, and maybe pull them into the ecosystem repo as well.

Issue is synchronized with this Jira task. Issue number: FXA-24

immanuelfodor commented 4 years ago

Huge +1 for the documentation update initiative! The current state of this project, with all the involved microservices and moving parts, is not suitable for self-hosting even though the project is open source. There are many outdated guides on different sites, and I even saw complaints that it would take an experienced sysadmin 3 days at best to set everything up for production πŸ˜€

I suggest creating a docker-compose file/project that builds/pulls all the services in separate containers, and links them together with suitable environment and volume configurations.
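A rough sketch of what such a compose project could look like (the service names, images, and build contexts here are illustrative guesses for a couple of the services from the stack, not the real FxA images):

```yaml
# Illustrative sketch only: the real stack runs ~20 services.
version: "3"
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "true"
    volumes:
      - ./data/mysql:/var/lib/mysql   # persist account data across restarts
  redis:
    image: redis:5
  auth-server:
    build: ./packages/fxa-auth-server  # hypothetical build context
    ports:
      - "9000:9000"
    depends_on:
      - mysql
      - redis
```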

To give you a head start on the docs, here are my notes on setting up an FxA server on an Ubuntu VM - without success, but at least I tried πŸ˜€ Maybe I shouldn't have started by setting up a dev env in the first place, but as there are so few docs on how to do it, it was the only way to begin.

VM specs

At least:

Docker install

Run as root after logging into the VM:

# general VM update
apt update && apt upgrade -y

# cleanup of any old docker remnants
apt remove -y docker docker-engine docker.io docker-compose && apt purge -y docker-ce

# install the latest docker; the script adds its own apt source, so we can update it via apt later
cd ~
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
cat /etc/apt/sources.list.d/docker.list
rm get-docker.sh

# optional: install latest docker-compose and bash completion
curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose

bash

docker -v
# Docker version 19.03.5, build 633a0ea838
docker-compose version
# docker-compose version 1.25.0, build 0a186604
# docker-py version: 4.1.0
# CPython version: 3.7.4
# OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019

systemctl status docker

# add the "ubuntu" unprivileged user to the "docker" group to skip writing sudo all the time
groupadd docker
gpasswd -a ubuntu docker
service docker restart

Install FXA dependencies

Run as the unprivileged ubuntu user after logging into the VM:

# originally I wanted to skip "openjdk-11-jre", but it is needed for some JS compression during npm install
# however, skip "firefox" as it is a selenium dependency and needed only for running tests
sudo apt install -y build-essential git libgmp3-dev graphicsmagick python-virtualenv python-dev pkg-config libssl-dev curl openjdk-11-jre

# dep: nvm to install proper nodejs and npm version
wget -O nvm-install-update.sh https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.2/install.sh
chmod +x nvm-install-update.sh
./nvm-install-update.sh
# check the added nvm load code snippet and reload bash
tail ~/.bashrc
bash

# dep: nodejs 12 latest LTS
nvm ls-remote
nvm install 12.14.0
nvm which default
which node
which npm

# dep: grunt
npm install -g grunt grunt-cli

# optional dep: maildev
npm install -g maildev

# dep: rust
wget -O rustup.sh https://sh.rustup.rs
chmod +x rustup.sh
./rustup.sh
# Select "2) Customize installation"
# Leave "Default host triple" blank, hit "enter"
# Type "nightly" for "Default toolchain"
# Type "default" for "Profile"
# Type "y" for "Modify PATH variable?"
# Select "1) Proceed with installation"

# check the added rust load code snippet and reload bash
tail ~/.profile
bash

Install FXA from source

Run as the unprivileged ubuntu user:

# clone the project to a new empty folder:
mkdir -p ~/github/fxa
git clone https://github.com/mozilla/fxa.git ~/github/fxa
cd ~/github/fxa

# install npm packages and start the project
npm install
npm start

# optional: check the open ports
sudo netstat -tulpn

# check the services state at any time
./pm2 status

# check the service logs
./pm2 logs
# ctrl+c to exit

# stop the whole project
npm stop

# check installed images, some of the start scripts pull images
docker image ls

Example output of the ./pm2 status command above:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ App name                            β”‚ id β”‚ version β”‚ mode β”‚ pid   β”‚ status β”‚ restart β”‚ uptime β”‚ cpu β”‚ mem       β”‚ user   β”‚ watching β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ 123done PORT 8080                   β”‚ 16 β”‚ 0.0.2   β”‚ fork β”‚ 9015  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 54.2 MB   β”‚ ubuntu β”‚ disabled β”‚
β”‚ 321done UNTRUSTED PORT 10139        β”‚ 17 β”‚ 0.0.2   β”‚ fork β”‚ 9027  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 53.8 MB   β”‚ ubuntu β”‚ disabled β”‚
β”‚ Fake SQS/SNS PORT 4100              β”‚ 3  β”‚ 2.0.0   β”‚ fork β”‚ 8896  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 2.9 MB    β”‚ ubuntu β”‚ disabled β”‚
β”‚ Fortress PORT 9292                  β”‚ 15 β”‚ 0.0.2   β”‚ fork β”‚ 9011  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 40.3 MB   β”‚ ubuntu β”‚ disabled β”‚
β”‚ MySQL server PORT 3306              β”‚ 0  β”‚ 2.0.0   β”‚ fork β”‚ 8890  β”‚ online β”‚ 0       β”‚ 12s    β”‚ 0%  β”‚ 3.0 MB    β”‚ ubuntu β”‚ disabled β”‚
β”‚ auth-server db mysql PORT 8000      β”‚ 9  β”‚ 2.0.0   β”‚ fork β”‚ 8912  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 3.2 MB    β”‚ ubuntu β”‚ disabled β”‚
β”‚ auth-server key server PORT 9000    β”‚ 10 β”‚ 0.35.2  β”‚ fork β”‚ 8918  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 49.9 MB   β”‚ ubuntu β”‚ disabled β”‚
β”‚ auth-server local mail helper       β”‚ 8  β”‚ 1.152.1 β”‚ fork β”‚ 8906  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 54.8 MB   β”‚ ubuntu β”‚ disabled β”‚
β”‚ browserid-verifier PORT 5050        β”‚ 18 β”‚ 0.10.1  β”‚ fork β”‚ 9059  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 47.3 MB   β”‚ ubuntu β”‚ disabled β”‚
β”‚ content-server PORT 3030            β”‚ 11 β”‚ 1.152.1 β”‚ fork β”‚ 8925  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 98.2 MB   β”‚ ubuntu β”‚ disabled β”‚
β”‚ email-service PORT 8001             β”‚ 7  β”‚ 2.0.0   β”‚ fork β”‚ 8903  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 3.0 MB    β”‚ ubuntu β”‚ disabled β”‚
β”‚ event-broker                        β”‚ 21 β”‚ 0.35.2  β”‚ fork β”‚ 10176 β”‚ online β”‚ 0       β”‚ 6s     β”‚ 0%  β”‚ 49.6 MB   β”‚ ubuntu β”‚ disabled β”‚
β”‚ google-firestore-emulator PORT 8006 β”‚ 5  β”‚ 2.0.0   β”‚ fork β”‚ 8900  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 2.9 MB    β”‚ ubuntu β”‚ disabled β”‚
β”‚ google-pubsub-emulator PORT 8005    β”‚ 4  β”‚ 2.0.0   β”‚ fork β”‚ 8898  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 2.8 MB    β”‚ ubuntu β”‚ disabled β”‚
β”‚ memcached PORT 11211                β”‚ 2  β”‚ 2.0.0   β”‚ fork β”‚ 8894  β”‚ online β”‚ 0       β”‚ 12s    β”‚ 0%  β”‚ 3.0 MB    β”‚ ubuntu β”‚ disabled β”‚
β”‚ payments server PORT 3031           β”‚ 19 β”‚ 0.35.2  β”‚ fork β”‚ 9677  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 48.6 MB   β”‚ ubuntu β”‚ disabled β”‚
β”‚ profile-server PORT 1111            β”‚ 12 β”‚ 2.0.0   β”‚ fork β”‚ 8932  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 3.2 MB    β”‚ ubuntu β”‚ disabled β”‚
β”‚ profile-server static dev PORT 1112 β”‚ 13 β”‚ 1.152.1 β”‚ fork β”‚ 8960  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 55.6 MB   β”‚ ubuntu β”‚ disabled β”‚
β”‚ profile-server worker PORT 1113     β”‚ 14 β”‚ 1.152.1 β”‚ fork β”‚ 8978  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 57.3 MB   β”‚ ubuntu β”‚ disabled β”‚
β”‚ pushbox PORT 8002                   β”‚ 22 β”‚ 2.0.0   β”‚ fork β”‚ 10184 β”‚ online β”‚ 0       β”‚ 6s     β”‚ 0%  β”‚ 2.8 MB    β”‚ ubuntu β”‚ disabled β”‚
β”‚ redis PORT 6379                     β”‚ 1  β”‚ 2.0.0   β”‚ fork β”‚ 8892  β”‚ online β”‚ 0       β”‚ 12s    β”‚ 0%  β”‚ 2.9 MB    β”‚ ubuntu β”‚ disabled β”‚
β”‚ support admin panel PORT 7100       β”‚ 20 β”‚ 0.35.2  β”‚ fork β”‚ 9693  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 48.1 MB   β”‚ ubuntu β”‚ disabled β”‚
β”‚ sync server PORT 5000               β”‚ 6  β”‚ 2.0.0   β”‚ fork β”‚ 8901  β”‚ online β”‚ 0       β”‚ 11s    β”‚ 0%  β”‚ 3.0 MB    β”‚ ubuntu β”‚ disabled β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

This yields a working local dev env, but it's not really useful for any remote synchronization. It was mainly intended to run on a laptop (with macOS, judging from the npm darwin references).

Switch architecture from dev to prod FxA

And this is where I failed, maybe because the monorepo was never intended to run as a production self-hosted service. The problems I encountered, which need improvement:

Treat these notes with a grain of salt; they mainly provide context on what I tried:

# firewall configuration of the VM, run as root or all with sudo:
apt install -y ufw
ufw allow OpenSSH
ufw allow 3030/tcp comment fxa-content-server
ufw allow 9000/tcp comment fxa-auth-server
ufw allow 9010/tcp comment fxa-oauth-server
ufw allow 1111/tcp comment fxa-profile-server
ufw allow 5000/tcp comment fxa-sync-server
ufw enable
ufw status verbose

# local git user config in the cloned repo to be able to stash, commit, branch, etc
git config user.name ubuntu
git config user.email ubuntu@fxa-home

# create a backup of the pm2 config, in case we remove any service from the default startup
cp mysql_servers.json{,-orig}

# replace URLs like the ones below with the VM's FQDN
#    auth: 'http://127.0.0.1:9000/v1',
#    content: 'http://127.0.0.1:3030/',
#    token: 'http://localhost:5000/token/1.0/sync/1.5',
#    oauth: 'http://127.0.0.1:9010/v1',
#    profile: 'http://localhost:1111/v1'
find . -type f -exec grep "127\.0\.0\.1" {} \; -exec sed -i -E 's/127\.0\.0\.1:(9000|3030|5000|9010|1111)/fxa-home.local.lan:\1/g' {} \; -print
find . -type f -exec grep "localhost" {} \; -exec sed -i -E 's/localhost:(9000|3030|5000|9010|1111)/fxa-home.local.lan:\1/g' {} \; -print

# (re)start all the services with the replaced local LAN domain:
npm stop
npm start
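To sanity-check the sed expression before running it across the whole tree, it can be dry-run on a single sample line (fxa-home.local.lan is the example FQDN from these notes):

```shell
# Dry-run the substitution on one sample config line.
sample="    auth: 'http://127.0.0.1:9000/v1',"
replaced=$(printf '%s\n' "$sample" | \
  sed -E 's/127\.0\.0\.1:(9000|3030|5000|9010|1111)/fxa-home.local.lan:\1/g')
echo "$replaced"   # →     auth: 'http://fxa-home.local.lan:9000/v1',
```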

Now these base URLs (as seen in: https://github.com/vladikoff/fxa-dev-launcher/blob/master/profile.js) should be available:

But for a start, there is no service running on port 9010. Also, the content server on port 3030 asks for an email address but then hangs with a spinner on the button. Several OCSP POST requests are sent in the background due to inline-script violations, and many errors are printed to the debug console. This is where I gave up after a day of trying.

I really hope somebody gets down to this issue and creates a stable and secure guide that actually works for all the self-hosters :) If you have any comments on the steps above, please feel free to make suggestions. Maybe I was only a few steps away from succeeding, or I might have overlooked some (to you) obvious parts out of frustration.

jaredhirsch commented 4 years ago

Wow, thanks for the detailed notes @immanuelfodor! This is really helpful material.

Just a general note, the dev-fxacct mailing list is a good place (very low traffic) to watch for future updates on self-hosting beyond this bug: https://mail.mozilla.org/listinfo/dev-fxacct

immanuelfodor commented 4 years ago

Thanks for the suggestion, I subscribed to the list, awaiting moderation.

jackyzy823 commented 4 years ago

After a few days' work, I managed to run a self-hosted FxA service based on commit 594790996. Here are the steps for your reference. @immanuelfodor @6a68

Assume you have a domain called example.local. You should point fxa.example.local, api.fxa.example.local, oauth.fxa.example.local, profile.fxa.example.local, and token.fxa.example.local to your server.

You should generate certificates for these domains. For simplicity, you can just generate two certs:

1) generate a *.fxa.example.local cert called wild.fxa.example.local.cer

2) generate a *.example.local cert called wild.example.local.cer, or an fxa.example.local cert called fxa.example.local.cer

Then clone the repo and do all the usual setup (install dependencies and docker) except npm start.

Then do some modifications.

1. Install and configure nginx. (How to install nginx is skipped here.) Create an nginx conf at /etc/nginx/sites-enabled/fxa, replace all example.local occurrences with your domain, and replace ssl_certificate and ssl_certificate_key with your cert paths.

server {
    server_name fxa.example.local;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /home/ubuntu/cert/wild.example.local.cer;
    ssl_certificate_key /home/ubuntu/cert/wild.example.local.key;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;
    location / {
        proxy_pass http://127.0.0.1:3030;
    }
    location /favicon.ico {
        proxy_pass http://127.0.0.1:3030/favicon.ico;
    }

}

server {
    server_name api.fxa.example.local;
    server_name oauth.fxa.example.local;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /home/ubuntu/cert/wild.fxa.example.local.cer;
    ssl_certificate_key /home/ubuntu/cert/wild.fxa.example.local.key;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;
    location / {
        proxy_pass http://127.0.0.1:9000;
    }

}

server {
    server_name profile.fxa.example.local;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /home/ubuntu/cert/wild.fxa.example.local.cer;
    ssl_certificate_key /home/ubuntu/cert/wild.fxa.example.local.key;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;
    location ^~/img/ {
        proxy_pass http://127.0.0.1:1112/;
    }

    location / {
        proxy_pass http://127.0.0.1:1111;
    }
}

server {
    server_name token.fxa.example.local;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /home/ubuntu/cert/wild.fxa.example.local.cer;
    ssl_certificate_key /home/ubuntu/cert/wild.fxa.example.local.key;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }
}

2. Modify mysql_servers.json. Note: remember to replace example.local with your domain.

1) change the "env" of "auth-server key server PORT 9000":
-        "SIGNIN_CONFIRMATION_FORCE_EMAIL_REGEX": "^sync.*@restmail\\.net$"
+        "SIGNIN_CONFIRMATION_FORCE_EMAIL_REGEX": "^sync.*@restmail\\.net$",
+       "ISSUER":"api.fxa.example.local",
+       "PUBLIC_URL":"https://api.fxa.example.local",
+       "OAUTH_URL":"https://oauth.fxa.example.local",
+       "AUTH_SERVER_URL":"https://api.fxa.example.local",
+       "VERIFICATION_URL":"http://127.0.0.1:5050/v2"

2) change the "env" of "content-server PORT 3030":

-        "NODE_ENV": "development"
+        "NODE_ENV": "development",
+       "PUBLIC_URL":"https://fxa.example.local",
+       "FXA_OAUTH_URL":"https://oauth.fxa.example.local",
+       "FXA_URL":"https://api.fxa.example.local",
+       "FXA_PROFILE_URL":"https://profile.fxa.example.local",
+       "FXA_PROFILE_IMAGES_URL":"https://profile.fxa.example.local"

3) change the "env" of "profile-server PORT 1111". Note: you can customize your profile image URL, e.g. use profile-img.fxa.example.local and configure an nginx server block to proxy_pass to 127.0.0.1:1112. Or, like me, use the same domain as the profile server but add a location ^~/img/ dispatcher in nginx:

-        "DB": "mysql"
+        "DB": "mysql",
+       "IMG_PROVIDERS_FXA":"^https://profile.fxa.example.local/img/a/[0-9a-f]{32}$",
+       "IMG_URL":"https://profile.fxa.example.local/img/a/{id}"
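As a quick sanity check (my addition, not part of the original steps), the IMG_PROVIDERS_FXA pattern can be tested against a sample image URL, since the {id} in IMG_URL is a 32-character hex id (dots are escaped here for strictness):

```shell
# Verify that a URL built from IMG_URL matches the IMG_PROVIDERS_FXA regex.
# The id below is a made-up 32-char hex value.
url="https://profile.fxa.example.local/img/a/0123456789abcdef0123456789abcdef"
if printf '%s\n' "$url" | grep -Eq '^https://profile\.fxa\.example\.local/img/a/[0-9a-f]{32}$'; then
  result=match
else
  result=no-match
fi
echo "$result"   # match
```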

4) change the "env" of "browserid-verifier PORT 5050". Note: since api.fxa.example.local now runs over https, browserid-verifier must look up api.fxa.example.local/.well-known/browserid via https. Note: extend http_timeout to avoid a frequent re-sign-in problem:

-        "FORCE_INSECURE_LOOKUP_OVER_HTTP": "true"
+        "FORCE_INSECURE_LOOKUP_OVER_HTTP": "false",
+       "HTTP_TIMEOUT":"60"

5) change the "args" of "pushbox PORT 8002". Note: if you host the fxa services on Windows/macOS, you can set server_inner_ip to host.docker.internal (not tested). Note: since pushbox runs in a docker container, you cannot set server_inner_ip to 127.0.0.1. Note: if you use AWS or another cloud provider, you can use a private IP like 172.16.x.x, 192.168.x.x, or 10.x.x.x:

-      "args": "3306 root@mydb:3306",
+      "args": "3306 root@<server_inner_ip>:3306",

3. Modify _scripts. 1) _scripts/mysql.sh. Note: if you want to keep your data, make a dir somewhere (mkdir -p data/mysql and chmod a+w data):

 docker run --rm --name=mydb \
+  -v <persistence path>/data/mysql:/var/lib/mysql \
   -e MYSQL_ALLOW_EMPTY_PASSWORD=true \

2) _scripts/syncserver.sh. Note: without --network="host", the container cannot reach services on the host's 127.0.0.1 (such as browserid-verifier on port 5050). Note: since we use nginx to proxy_pass, we set SYNCSERVER_FORCE_WSGI_ENVIRON to true to avoid the host check:

-docker run --rm --name syncserver \
+docker run --rm --network="host"  --name syncserver \
   -p 5000:5000 \
-  -e SYNCSERVER_PUBLIC_URL=http://127.0.0.1:5000 \
+  -v <persistence path>/data:/tmp \
+  -e SYNCSERVER_PUBLIC_URL=https://token.fxa.example.local \
   -e SYNCSERVER_BROWSERID_VERIFIER=http://$HOST_ADDR:5050 \
   -e SYNCSERVER_SECRET=5up3rS3kr1t \
   -e SYNCSERVER_SQLURI=sqlite:////tmp/syncserver.db \
   -e SYNCSERVER_BATCH_UPLOAD_ENABLED=true \
-  -e SYNCSERVER_FORCE_WSGI_ENVIRON=false \
+  -e SYNCSERVER_FORCE_WSGI_ENVIRON=true \

3) _scripts/pushbox.sh. Note: pushbox needs to reach mysql, whose port is published to the host; without --network="host" we cannot reach the host's :3306:

-  docker run --rm --name pushbox \
+  docker run --rm --network="host" --name pushbox \

Finally, we can just npm start to start all the services!

About the client: edit user.js under your Firefox profile and replace example.local with your domain.

user_pref("identity.fxaccounts.auth.uri","https://api.fxa.example.local/v1");
user_pref("identity.fxaccounts.remote.oauth.uri","https://oauth.fxa.example.local/v1");
user_pref("identity.fxaccounts.remote.profile.uri","https://profile.fxa.example.local/v1");
user_pref("identity.fxaccounts.remote.root","https://fxa.example.local/");
user_pref("identity.sync.tokenserver.uri","https://token.fxa.example.local/token/1.0/sync/1.5");

About profile images: they are stored in fxa/packages/fxa-profile-server/var/public.

About the email verification code: under the fxa folder, run ./pm2 logs "auth-server local mail helper" to find the code.

Security issues:

  1. Since redis and mysql run in docker, ufw cannot protect their ports. Use at your own risk!

  2. MySQL is not password protected and is exposed to the public network.

TODO:

  1. Delete pushbox garbage from mysql automatically, according to pushbox's documentation.

  2. Make syncserver work with a mysql db.

  3. Put all of this in docker and expose only 443 (which would avoid the pushbox<->mysqldb and syncserver<->browserid-verifier communication problems).

ABOUT the communication problem: as far as I know, under bridge networking (the default), 127.0.0.1 is the docker container's own localhost, NOT the host's localhost. So the pushbox container cannot reach the mysql container via 127.0.0.1:3306, and the syncserver container cannot reach browserid-verifier via 127.0.0.1:5050. We need --network="host" (host networking) to make 127.0.0.1 work. Maybe there are better ways to solve this (such as putting everything in one docker setup, or running the services on different servers). Please let me know, thanks.

immanuelfodor commented 4 years ago

OMG, this is awesome work! I'm thankful for your days of experimentation in the name of the self-hosting community, @jackyzy823! It should truly run in a single docker container (or maybe with one docker-compose file), but I think this is progress; we are not far from a Dockerfile that unifies the setup process once and for all.

immanuelfodor commented 4 years ago

One suggestion if somebody starts to experiment with a Dockerfile from now on: to ease cert generation, maybe we can use dashes instead of dots in the subdomain names for two-part domains, so that we only need one wildcard cert. For example, *.example.local would then cover both fxa.example.local and api-fxa.example.local (instead of api.fxa.example.local previously).

jackyzy823 commented 4 years ago

Hello everyone, after another few days I finally managed to run a self-hosted FxA in a production-like way using docker-compose. Here's my project: https://github.com/jackyzy823/fxa-selfhosting

And following @immanuelfodor's suggestion, I replaced fxa.example.local with www.fxa.example.local, so one wildcard cert is enough for all subdomains.

About security: only two ports are exposed, 0.0.0.0:443 (nginx) and 127.0.0.1:9001 (fxa-auth-local-mail-helper), so I think it is far more secure than before. However, use it at your own risk :)

immanuelfodor commented 4 years ago

OMG, this is fantastic news, so promising, I'll definitely try it out at the weekend.

jackyzy823 commented 4 years ago

Feel free to ask any questions you encounter while trying it. I'll try to help solve them.

immanuelfodor commented 4 years ago

One thing that came up just by looking at the code: is it safe to do a recursive sed on all the files to replace the www string with something else? In my env, it's already taken.

jackyzy823 commented 4 years ago

It's safe. Replacing www. (with the trailing dot) seems much safer than replacing www alone.

immanuelfodor commented 4 years ago

I went through your repo and started to configure everything, here are my notes, improvement ideas and questions.


When using a wild cert only, docker-compose config gives an error:

ERROR: Named volume "$WILD_CERT:/certs/fxaprofile.cer:ro" is used in service "nginx" but no declaration was found in the volumes section.

It can be fixed manually as docker compose env substitution only support inline default values:

-      - ${PROFILE_CERT:-$WILD_CERT}:/certs/profile.cer:ro
-      - ${PROFILE_CERTKEY:-$WILD_CERTKEY}:/certs/profile.key:ro
+      - ${WILD_CERT}:/certs/profile.cer:ro
+      - ${WILD_CERT_KEY}:/certs/profile.key:ro
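For reference, the `${VAR:-default}` syntax comes from shell parameter expansion, where the default may itself be another variable; the error above suggests docker-compose, unlike a shell, leaves a nested `$WILD_CERT` in the default position unexpanded. In a shell it behaves like this:

```shell
# ${VAR:-fallback} expands to $VAR if set and non-empty, else to the fallback.
WILD_CERT=/certs/wild.example.local.cer
unset PROFILE_CERT
first=${PROFILE_CERT:-$WILD_CERT}     # unset -> falls back to the wildcard cert
PROFILE_CERT=/certs/profile.cer
second=${PROFILE_CERT:-$WILD_CERT}    # set -> explicit value wins
echo "$first"    # /certs/wild.example.local.cer
echo "$second"   # /certs/profile.cer
```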

The init file could handle altering the compose file according to the cert properties (wildcard or not). Or, if not automated, it could give instructions on what to change in the compose file based on the env config.

Also, all the env variables are named *_CERTKEY except the wild one, which is *_CERT_KEY; that's a bit error-prone (I had to debug it :D).


Recursive replacement of the subdomains can be done with the commands below, in case it's useful for others. In fact, I replaced all the subdomains to my liking.

# check all places involved
find . -type f -not -path "*/.git/*" -exec grep "www\." {} \; -print

# do the replace where needed
find . -type f -not -path "*/.git/*" -exec grep "www\." {} \; -exec sed -i 's/www\./mywww\./g' {} \;

Doing so marks many files as changed in git status, so I'd recommend providing config options for them.


The current setup assumes the nginx container faces the internet, but I suppose many folks would run this stack behind an existing reverse proxy, where SSL might be terminated. To support this scenario, it would be great to also listen on port 80 without any certs needed.


Shouldn't client IDs and secrets be auto-generated and then substituted by the init script?

As I don't really understand what these secrets are for, my main concern is whether the FXA stack, if open to the net, could be attacked when these are known and published. Other secrets are provided in the env file: https://github.com/jackyzy823/fxa-selfhosting/blob/master/.env.sample#L31 Why are these fixed?


I also personalized the sender email address, and I'm not sure those quotes are in the right place or needed at all: https://github.com/jackyzy823/fxa-selfhosting/blob/master/docker-compose.yml#L239 (I used "Name" <sender@host.com> and the config check was okay.)


Is the realwhatever supposed to be there? https://github.com/jackyzy823/fxa-selfhosting/blob/master/docker-compose.yml#L174 It seems like too easy a secret :)


Do I get it right that when I define these SMTP host and port variables, the fxa-auth-local-mail-helper service is not needed at all, so I can comment it out completely? https://github.com/jackyzy823/fxa-selfhosting/blob/master/docker-compose.yml#L242

The compose file is already so complex that it could be automatically generated from a template by the init script. This way, when using a custom SMTP host/port, the mail helper could be left out. Users could choose between the CLI email helper service, the email forwarder service, their own host/port without auth, or a 3rd party. The needed elements are already in the compose file.
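One low-tech way to do such generation (a sketch with made-up placeholder names, not an actual file from the repo) is a template with tokens that the init script substitutes via sed:

```shell
# Render a hypothetical docker-compose template: @TOKENS@ -> env values.
SMTP_HOST=smtp.example.local
SMTP_PORT=587
printf 'SMTP_HOST: "@SMTP_HOST@"\nSMTP_PORT: "@SMTP_PORT@"\n' > compose.tmpl
sed -e "s/@SMTP_HOST@/$SMTP_HOST/g" \
    -e "s/@SMTP_PORT@/$SMTP_PORT/g" \
    compose.tmpl > compose.generated.yml
cat compose.generated.yml
```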


I've done all my customizations to the env and the compose files, and I'm just about to start everything up, fingers crossed! :)

immanuelfodor commented 4 years ago

Oh, and I forgot to write that you did an amazing job putting these together! Wow, just wow!

immanuelfodor commented 4 years ago

There is a typo in the env var name: https://github.com/jackyzy823/fxa-selfhosting/blob/master/docker-compose.yml#L251

immanuelfodor commented 4 years ago

When setting up the FXA URL, I found the settings below; should I replace the URL here with my own for sync storage as well?

| Key | Value |
| --- | --- |
| webextensions.storage.sync.enabled | true |
| webextensions.storage.sync.serverURL | https://webextensions.settings.services.mozilla.com/v1 |

immanuelfodor commented 4 years ago

Based on the returned JSON, it seems it's another service that connects to the FXA auth stack: https://github.com/Kinto/kinto-fxa

jackyzy823 commented 4 years ago

> Based on the returned JSON, it seems it's another service that connects to the FXA auth stack: https://github.com/Kinto/kinto-fxa

You are right. Besides, I'm trying to integrate kinto into the self-hosting project to make Firefox Notes (the webextension and Android versions) work. However, both the webextension and the Android version need modification to support a custom fxa service and a custom kinto server.

Since webextensions.storage uses kinto too, I will try to make that work as well.

I'm still writing answers to the previous questions. :)

immanuelfodor commented 4 years ago

Wow, that would be great to have it in the stack, too!

In the meantime, it seems the desktop sync works; at least I could log in, and both the activation code and the account confirmation emails arrived. I still have to test that sync indeed works between FF instances, and also to connect an Android phone.

FYI, docker logs show lots of pushbox.local amazon SNS config errors.

Haha, sorry for writing that many points, I was just really looking into it :)

jackyzy823 commented 4 years ago

1. About wild certs: sorry for not testing that. I'll try to find a way to support both kinds of certs without modifying the docker-compose file.

2. About the subdomain name: making the subdomain configurable goes on my todo list.

3. About the reverse proxy: I'll take this into consideration.

4. About client IDs and secrets: it's all about the OAuth protocol.

These client IDs are predefined by Mozilla for their real products that act as OAuth relying parties (such as firefox-sync, accounts.firefox.com (aka content-server:prod), etc.).

So the fxa-oauth-server (inside fxa-auth-server) needs to know all the client IDs (from oauthserver-prod.json) to perform OAuth.

All these client IDs' secrets are hashed secrets, so they are safe to be seen.

> Other secrets are provided in the env file: https://github.com/jackyzy823/fxa-selfhosting/blob/master/.env.sample#L31 Why are these fixed?

We use our content-server (which is an OAuth client) to act as accounts.firefox.com.

Since we control the content-server, we can assign a new client ID to it or just use Mozilla's predefined one (which is what https://github.com/jackyzy823/fxa-selfhosting/blob/master/.env.sample#L31 does).

However, we cannot control firefox-sync (which is built into the browser), so we must keep the exact firefox-sync client ID in oauthserver-prod.json.

See: https://docs.telemetry.mozilla.org/datasets/fxa_metrics/attribution.html#service-attribution

https://github.com/mozilla/fxa-dev/blob/docker/roles/oauth/templates/config.json.j2

5. About other secrets.

They are used for internal communication, such as pushbox<->authserver, authserver<->oauthserver, etc., so I think it's safe to use a predefined secret (maybe).

> Is the realwhatever supposed to be there? https://github.com/jackyzy823/fxa-selfhosting/blob/master/docker-compose.yml#L174 It seems like too easy a secret :)

See: https://github.com/mozilla/fxa/blob/00e4ecd27059a80f07d90d8ad4535209e1bf6557/packages/fxa-auth-server/config/dev.json#L378

However, I'm not sure I'm doing this right.


> Do I get it right that when I define these SMTP host and port variables, the fxa-auth-local-mail-helper service is not needed at all, so I can comment it out completely?

Yes.

> The compose file is already so complex that it could be automatically generated based on a template via the init script. This way when using a custom SMTP host/port, it could be left out. Users could choose from the CLI email helper service, the email forwarder service, using their own host/port without auth, or using a 3rd party. The needed elements are already in the compose.

Is there any recommended method to templatize a docker-compose file? I've been struggling with this for a long time.

Finally, due to my poor English, I'm wondering whether I explained everything clearly. If anything confuses you, keep asking and I'll explain.

jackyzy823 commented 4 years ago

> Wow, that would be great to have it in the stack, too!
>
> In the meantime, it seems the desktop sync works, or at least I could login, and both the activation code and the account confirmation emails arrived. Still ahead of testing that sync indeed works between FF instances, and also connecting an Android phone.
>
> FYI, docker logs show lots of pushbox.local amazon SNS config errors.
>
> Haha, sorry for writing that much points, I was just looking really into it :)

Since we do not use goaws, pushbox requests the real AWS SQS URL, which is hardcoded. See https://github.com/mozilla-services/pushbox/blob/0a7f9eee128d5a540b9d5f5c69b250ae081922f8/src/sqs/mod.rs#L72

jackyzy823 commented 4 years ago

> In the meantime, it seems the desktop sync works, or at least I could login, and both the activation code and the account confirmation emails arrived. Still ahead of testing that sync indeed works between FF instances, and also connecting an Android phone.

For Android, you need to configure these:

"identity.fxaccounts.auth.uri":"https://api.$DOMAIN_NAME/v1",
"identity.fxaccounts.remote.oauth.uri":"https://oauth.$DOMAIN_NAME/v1",
"identity.fxaccounts.remote.profile.uri":"https://profile.$DOMAIN_NAME/v1",
"identity.fxaccounts.remote.webchannel.uri":"https://www.$DOMAIN_NAME",

and append/prepend https://www.$DOMAIN_NAME to "webchannel.allowObject.urlWhitelist"

jackyzy823 commented 4 years ago

More about Kinto. The hardest part of integrating Kinto is that it uses Postgres as its default backend. I don't want to maintain two databases (more memory required and more backup work), so I want to write a MySQL backend, provided they don't use SQL features that exist only in Postgres, and if I have time. :(

immanuelfodor commented 4 years ago

1. About wildcard certs, 2. about the subdomain name, 3. about the reverse proxy: okay, thanks!

4. About client IDs and secrets
a) So, if I get it right, FF browsers do an OAuth "log in" to the auth server on their own, and THEN the user logs in with their credentials? This effectively locks other browsers out of sync and prevents some tampering?
b) So it can't be that somebody from FF logs in to the server with some "master key" if the server is exposed, right? :D
c) But other users could still use my personal FxA/sync server to sync their browsers? Can the server be locked down somehow? Like HTTP Basic auth, with the user/pass added to the URL like https://user:pass@www.fxa.example.org. Would the browser still sync in this case?
d) If I add this Send object to the JSON: https://github.com/mozilla/fxa-dev/blob/docker/roles/oauth/templates/config.json.j2#L132 could a self-hosted Send connect to FxA by defining FXA_URL=https://www.fxa.example.org and FXA_CLIENT_ID=fced6b5e3f4c66b9 for the Send instance, also specifying the redirectUri of the Send instance in the JSON?

5. About other secrets. Thanks for the explanation; I randomized all of those env params with openssl rand -hex 16. I also added one in place of the realwhatever. Do we really need two secrets here?

6. Docker compose template. I know many k8s template helpers but not one for docker-compose; I think people just invent their own. Here is one in PHP: https://github.com/brettmc/docker-compose-generator :D It could also be done with sed in bash, or with a cat <<EOF heredoc that substitutes variables.

7. Pushbox. Thanks for the info, so I'll just ignore those errors from now on; it's their normal operation.

8. Android. Thanks, I've just updated the repo and got some merge conflicts :D I will continue with that tomorrow.

9. Kinto. I still have some gigs of RAM left, so it wouldn't be a problem for me to run two DBs.
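For reference, the secret randomization from point 5 could look roughly like this. The variable names below are placeholders for illustration, not the exact keys the compose file uses:

```shell
# Generate fresh 128-bit hex secrets and write them to an env file.
# FLOW_ID_KEY and OAUTH_SERVER_SECRET are hypothetical names here.
FLOW_ID_KEY=$(openssl rand -hex 16)
OAUTH_SERVER_SECRET=$(openssl rand -hex 16)
printf 'FLOW_ID_KEY=%s\nOAUTH_SERVER_SECRET=%s\n' \
  "$FLOW_ID_KEY" "$OAUTH_SERVER_SECRET" > .env
```

Each `openssl rand -hex 16` call produces 16 random bytes rendered as 32 hex characters, which is plenty for an HMAC-style secret.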

jackyzy823 commented 4 years ago

4. About client IDs and secrets a) So, if I get it right, FF browsers do an OAuth "log in" to the auth server on their own, and THEN the user logs in with their credentials? This effectively locks other browsers out of sync and prevents some tampering?

Yes

b) So it can't be that somebody from FF logs in to the server with some "master key" if the server is exposed, right? :D

Do you mean the staff of Mozilla? I think they can't.

c) But other users could still use my personal FxA/sync server to sync their browsers? Can the server be locked down somehow? Like HTTP Basic auth, with the user/pass added to the URL like https://user:pass@www.fxa.example.org. Would the browser still sync in this case?

Since Mozilla runs FxA as a public service, I don't think they designed a mechanism to forbid certain email addresses from registering.

So the only way I found is to configure your email sender to only send mail to allowed domains, so that other users cannot complete the verification step and cannot use the sync service.

HTTP Basic auth might work, but it's not realistic: many inter-server requests may be affected.

d) If I add this Send object to the JSON: https://github.com/mozilla/fxa-dev/blob/docker/roles/oauth/templates/config.json.j2#L132 could a self-hosted Send connect to FxA by defining FXA_URL=https://www.fxa.example.org and FXA_CLIENT_ID=fced6b5e3f4c66b9 for the Send instance, also specifying the redirectUri of the Send instance in the JSON?

Yes, but I haven't tested it.

I also added one in place of the realwhatever. Do we really need two secrets here?

Maybe not :confused:

I know many k8s template helpers but not one for docker-compose; I think people just invent their own. Here is one in PHP: https://github.com/brettmc/docker-compose-generator :D It could also be done with sed in bash, or with a cat <<EOF heredoc that substitutes variables.

Thanks for your advice. I think a Python script with YAML template support would be more flexible for env substitution and component selection.

https://github.com/k14s/ytt may be another good option.
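The heredoc idea quoted above can be sketched in a few lines. The service name and the PUBLIC_URL variable below are made up for illustration, not the stack's actual keys:

```shell
# DOMAIN_NAME is supplied by the user; because the EOF delimiter is
# unquoted, the shell expands ${DOMAIN_NAME} while writing the file.
DOMAIN_NAME=fxa.example.org
cat > docker-compose.generated.yml <<EOF
services:
  fxa-content-server:
    environment:
      - PUBLIC_URL=https://www.${DOMAIN_NAME}
EOF
```

Quoting the delimiter instead (`<<'EOF'`) would disable substitution, which is the main pitfall to watch for with this approach.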

immanuelfodor commented 4 years ago

Email: Great idea to limit the accepted domains; I use my own, so it could work. Are these variables for this case? https://github.com/jackyzy823/fxa-selfhosting/blob/master/docker-compose.yml#L204

Send: I added that block to the JSON and the two variables to Send, then restarted both, but I can't see any change. What should have happened? I know it's untested, but maybe you have a guess.

Templating: Yes, that tool is one of the k8s-related ones. Well, compose is YAML, so it could work; I was thinking about Docker-specialized tools, sorry.

Today's changes: pretty clever move to include the wildcard cert everywhere in the config if the user wants to use it! :D Less complexity, nice.

The FXA stack works fine so far on the LAN with desktop browser <-> Android sync.

Two bugs:

jackyzy823 commented 4 years ago
  • The orange text of the init script is impossible to copy-paste; it always results in empty lines when pasted into a text editor. Manjaro Linux with Konsole and Kate.

Sorry, I cannot reproduce it.

Are these variables for this case? https://github.com/jackyzy823/fxa-selfhosting/blob/master/docker-compose.yml#L204

This env is not for that case. It may be for forcing code verification when a user signs in.

Send: I added that block to the JSON and the two variables to Send, then restarted both, but I can't see any change. What should have happened? I know it's untested, but maybe you have a guess.

I tested Send. There is more to do than I thought, so I made a branch for it: https://github.com/jackyzy823/fxa-selfhosting/tree/send. Note: since the base images of fxa-content-server and fxa-auth-server are alpine, they do not contain envsubst by default, so you need to manually edit _init/content/contentserver-prod.json and _init/auth/oauthserver-prod.json, replacing ${DOMAIN_NAME} with the real one.
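Since envsubst is missing from the alpine images, a host-side sed pass is one possible workaround. This sketch operates on a stand-in file so it is self-contained; in practice the targets would be the two prod JSON files mentioned above:

```shell
# Stand-in config containing the literal ${DOMAIN_NAME} placeholder;
# the real files are _init/content/contentserver-prod.json and
# _init/auth/oauthserver-prod.json.
DOMAIN_NAME=fxa.example.org
printf '{ "public_url": "https://www.${DOMAIN_NAME}" }\n' > sample-prod.json
# Replace every occurrence of the placeholder in place with the real domain.
sed -i "s/\${DOMAIN_NAME}/${DOMAIN_NAME}/g" sample-prod.json
```

Note that `sed -i` without a suffix argument is GNU sed syntax; on BSD/macOS sed it would be `sed -i ''`.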

vbudhram commented 3 years ago

Thank you for filing this issue. This is not currently on our roadmap, and in an effort to focus our work, we are closing old issues that are unlikely to be addressed in the future. Thanks again.