Jazzepi opened this issue 3 years ago
I agree that the onboarding/development process is way too hard for new contributors. I had to hack together some scripts and an nginx config to develop outside of Docker.
I'm able to run a development instance on my box (without Docker) with these steps. My distro is Arch Linux.
```sh
yay -S postgresql nginx

# Only needed if installing postgres for the first time
sudo -iu postgres initdb -D /var/lib/postgres/data

sudo systemctl start postgresql
sudo -iu postgres createuser szuru -s
sudo -iu postgres psql -c "ALTER USER szuru PASSWORD 'dog';" # set a password to be used later
sudo -iu postgres createdb szuru
```
Copy `server/config.yaml.dist` to `server/config.yaml` and scroll to the bottom. Update these fields:

```yaml
## ONLY SET THESE IF DEPLOYING OUTSIDE OF DOCKER
debug: 1 # generate server logs?
show_sql: 1 # show sql in server logs?
data_url: data/
data_dir: /home/hiro/build/work/szurubooru/client/public/data

## usage: schema://user:password@host:port/database_name
## example: postgres://szuru:dog@localhost:5432/szuru_test
database: postgres://szuru:dog@localhost:5432/szuru
```
`/home/hiro/build/work/szurubooru/` is my checkout of the Git repo, `szuru:dog` is the username/password, and `localhost:5432/szuru` is the host/port/database name.
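You can sanity-check the connection string before wiring it into the config; `psql` accepts the same URI:

```sh
# should print a single row if the user, password, and database are correct
psql 'postgres://szuru:dog@localhost:5432/szuru' -c 'SELECT 1;'
```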
Build the client; subsequent builds can reuse the already-bundled `vendor.min.js` to save time.

```sh
cd ./client
npm install
npm run build
```
Run `alembic` to initialize the database schema. Virtualenv should be installed on the host system.

```sh
cd ../server
virtualenv python_modules
source python_modules/bin/activate
pip install -r requirements.txt
pip install -r dev-requirements.txt
alembic upgrade head
```
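If you want to confirm the schema landed, `alembic current` prints the revision the database is at:

```sh
# prints the current revision hash; "(head)" means you're fully migrated
alembic current
```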
Create a file called `nginx.dev.conf` with these contents and place it inside `./server`. This configuration makes the frontend reachable at http://localhost:8001 and the backend at http://localhost:6666.

```nginx
pid /tmp/nginx/nginx.pid;
error_log /tmp/nginx/error.log;
worker_processes 1;
daemon off;

events {
    worker_connections 1024;
}

http {
    client_max_body_size 100M;
    root /tmp/nginx/;
    access_log /tmp/nginx/access.log;
    client_body_temp_path /tmp/nginx/client_body/;
    fastcgi_temp_path /tmp/nginx/fastcgi/;
    proxy_temp_path /tmp/nginx/proxy/;
    scgi_temp_path /tmp/nginx/scgi/;
    uwsgi_temp_path /tmp/nginx/uwsgi/;
    include /etc/nginx/mime.types;

    server {
        listen 8001;

        location ~ ^/api$ {
            return 302 /api/;
        }

        location ~ ^/api/(.*)$ {
            if ($request_uri ~* "/api/(.*)") { # preserve PATH_INFO as-is
                proxy_pass http://127.0.0.1:6666/$1;
            }
        }

        location / {
            root /home/hiro/build/work/szurubooru/client/public;
            try_files $uri /index.htm;
        }
    }
}
```
Remember to update the `root` in the `location /` block with the path to your frontend assets.
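You can verify the file parses before using it; nginx's `-t` flag tests a configuration without starting the server:

```sh
# -p sets the prefix to the current directory, matching how run_dev.sh starts it
nginx -t -p . -c nginx.dev.conf
```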
Install `gunicorn` into the same virtualenv:

```sh
pip install gunicorn
```
Create a script called `run_dev.sh`, which handles rebuilding/restarting the backend and frontend. Place it in the root directory of the repo.

```sh
#!/bin/bash
mkdir -p /tmp/nginx/

pushd ./server
source python_modules/bin/activate
gunicorn szurubooru.facade:app --reload -b 127.0.0.1:6666 &
nginx -p . -c nginx.dev.conf &
popd

pushd ./client
pwd
npm run watch &
popd

wait
```
Run `./run_dev.sh`. Wait for the message `Bundled app JS` to appear, then go to http://localhost:8001. Hopefully at that point you should be able to see the front page. Create a user through the "Register" tab and they should be granted admin rights.

When the backend/frontend is started this way, `gunicorn` and `npm run watch` will watch for any changes to their respective files and restart the backend/rebundle the frontend as necessary.
I usually used print debugging or logging with the Python backend, which got reloaded on each change I made.
The tests are run using `pytest`, which is listed in `dev-requirements.txt`. Make sure you run `pip install -r dev-requirements.txt` from inside the `python_modules` virtualenv created earlier, and then run `pytest` in the `server` directory.
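For example (the `-k` filter expression below is just an illustration):

```sh
cd server
source python_modules/bin/activate
pytest              # run the whole suite
pytest -k tags      # run only tests whose names match "tags"
```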
Wow okay that's amazing. Thank you bunches. I'll take a look.
This is a really good guide. I will try to create some helper files sometime to make it easier to do dev work outside of Docker, using Python venv and the like.
TBH I consider development without Docker archaic. (Compare the volume of the above tutorial with just `docker-compose up` in the ideal world.) At the same time, having to rebuild after every change seems like a bug that should be mitigated with the use of Docker volumes and potentially a `docker-compose.dev.yml` file. Just my 2 cents.
There's no need afaik to have separate compose files these days. You can use "service profiles" instead. What I would propose doing here is to make separate containers specifically for development; off the top of my head, the only major difference would be in the server and client containers, where the development folder needs to be a mountpoint (instead of only the config file). From that point on, you can use `docker-compose exec`/`restart` to do things like updating or adding dependencies, or reloading the frontend and backend when you make changes.
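Something like this, sketched from memory (the service name, build path, and container mount point here are hypothetical, not taken from szurubooru's actual compose file):

```yaml
# docker-compose.yml -- hypothetical dev-only service using a compose profile
services:
  server-dev:
    profiles: ["dev"]        # only started when --profile dev is passed
    build: ./server
    volumes:
      # mount the whole source tree instead of just the config file,
      # so code edits land inside the running container
      - ./server/szurubooru:/opt/app/szurubooru
```

With that in place, `docker-compose --profile dev up` starts the dev variant while the regular services stay untouched (profiles need docker-compose 1.28+).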
I would like to point out that this would be very welcome for me. I have a desire to contribute, but the heavy Docker dependency as it stands makes doing that rather difficult: I'm someone who really doesn't like working from a theoretical model and then seeing where it bugs up, which Docker-based development in this situation tends to require.
Docker-based development sounds like a good idea, especially considering some dependencies are a PITA to set up (e.g. pyheif is not even supported on Windows).
I started a small PoC on this branch which mounts the server source code inside the Docker container. It uses hupper to auto-reload the server on changes. I haven't managed to attach my debugger to the container yet, but something like that should be possible. You do still need to restart the container to apply any new alembic migrations, but it would probably be overkill/unwanted to auto-reload those.
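For illustration, the reload wrapper boils down to something like this (the `run_reload.py` file name and the stdlib `wsgiref` server are stand-ins of mine; the branch itself wires the app up differently):

```python
# run_reload.py -- illustrative only, not the actual PoC code
from wsgiref.simple_server import make_server

import hupper


def main():
    # In the original process this becomes the file monitor and re-invokes
    # run_reload.main in a watched subprocess on every source change;
    # inside that subprocess it returns immediately.
    hupper.start_reloader("run_reload.main")

    # Only the monitored worker process gets here, so the app is imported
    # fresh on each restart.
    from szurubooru.facade import app

    make_server("127.0.0.1", 6666, app).serve_forever()


if __name__ == "__main__":
    main()
```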
The same setup could be created for the client code (which recently got an improved `--watch` mode).
I also replaced the `MOUNT_DATA` and `MOUNT_SQL` vars with a named volume, which is easier to work with. This allows us to do `docker-compose -f ./docker-compose.dev.yml down -v` to delete all your data and start anew. I guess you could still use bind mounts, but I don't really see a use case for that. So imo the default behavior should be named volumes (for the dev setup).

I haven't looked at the "service profiles" yet, but it should be easy enough to implement those. The current usage is `docker-compose -f ./docker-compose.dev.yml up --build`.
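For context, the difference in the compose file is roughly this (image name and volume name are illustrative, not necessarily what the branch uses):

```yaml
# named volume instead of a MOUNT_SQL-style bind mount (illustrative)
services:
  sql:
    image: postgres:alpine
    volumes:
      - dev-sql:/var/lib/postgresql/data

volumes:
  dev-sql:   # `docker-compose ... down -v` removes this volume and all data in it
```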
I had some basic questions about developing on the booru codebase.
It seems like the only way to iterate on the booru is to rebuild the docker image(s) every time you have a change, even if it's something simple like an HTML template or a CSS style.
https://github.com/rr-/szurubooru/wiki/Customizing-UI-Colors
Is there any way to rebuild the client-facing stuff without building the entire Docker image and starting the container anew? I saw a watch script in the client directory, but that doesn't seem to help if you've got everything baked into Docker images. Same question for the Python code in the web server: I assume that since it's interpreted, all you would need is a way to edit the mounted files. Is something like that available? Is there information on how to set up debugging for the Python running inside Docker? Are there any tests set up for the system? If so, how do we run them?