Closed: tobiasmcnulty closed this 7 years ago
Awesome! I suspect we can do away with the stack/node.sh
script because node is being installed with apk.
What do we feel is a reasonable way to go about migrations? I noticed @stefanfoulis has a Django specific "build" step that may relate to this.
Our current approach is that migrations are run as part of the container booting up as they're idempotent.
Hm, that might not be ideal here because the migrations are so slow & memory-intensive to compute (even if there are no new ones to apply). Perhaps it could be controlled by an environment variable? Or run by a one-off script that schedules the same image with the migrate command?
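A minimal sketch of the environment-variable idea (the RUN_MIGRATIONS name and its yes/no convention are assumptions for illustration, not anything agreed here):

```shell
# maybe_migrate: run Django migrations only when RUN_MIGRATIONS=yes, so a
# normal container boot skips the slow, memory-hungry step entirely.
# RUN_MIGRATIONS is a hypothetical variable name chosen for this sketch.
maybe_migrate() {
    if [ "${RUN_MIGRATIONS:-no}" = "yes" ]; then
        echo "running migrations"
        python manage.py migrate --noinput
    else
        echo "skipping migrations"
    fi
}
```

A one-off migration task would then be the same image scheduled with RUN_MIGRATIONS=yes, while the web containers leave it unset.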
Static media's giving me a lot of trouble. I gave up on whitenoise in favor of compress, but can't seem to get compress to run either:
CommandError: An error occurred during rendering /Users/tobias/.virtualenvs/rapidpro/lib/python2.7/site-packages/smartmin/templates/smartmin/users/user_failed.html: '/static/bower/select2/select2.js' isn't accessible via COMPRESS_URL ('/sitestatic/') and can't be compressed
Nevermind, I just discovered .travis.yml has the correct compress command:
python manage.py compress --extension=".haml" --settings=temba.settings_travis
I'm happy for this to be a separate step or perhaps accept it as a positional argument for the container's entry point?
The tricky thing is that this largely depends on how deploys are done. In our setup we use blue/green deploys: the load balancer's routes are only updated to point at the new app once its health checks report it as healthy, and only then is the old app taken out of rotation. By that point the migrations have already run.
@stefanfoulis what are your thoughts on this?
Sure, we could make the entry point a separate script that would optionally run collectstatic, compress, and migrate if specified?
yeah, that seems like the best option given what we know now.
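A sketch of what that entry point could look like (the step names, the "--" separator, and the MANAGE override are all illustrative assumptions, not the actual script):

```shell
#!/bin/sh
# Hypothetical entry point: each positional argument names an optional
# management step, and everything after "--" is exec'd as the main command,
# e.g.: /entrypoint.sh collectstatic compress migrate -- uwsgi --ini app.ini
# MANAGE is overridable so the dispatcher can be exercised without Django.
MANAGE="${MANAGE:-python manage.py}"

run_step() {
    case "$1" in
        collectstatic) $MANAGE collectstatic --noinput ;;
        compress)      $MANAGE compress --extension=".haml" ;;
        migrate)       $MANAGE migrate --noinput ;;
        *)             echo "unknown step: $1" >&2; return 1 ;;
    esac
}

while [ "$#" -gt 0 ]; do
    if [ "$1" = "--" ]; then
        shift
        exec "$@"
    fi
    run_step "$1"
    shift
done
```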
How can I help with this? Once this lands I was thinking of hooking up the Travis side of things to build the image, run some tests against it, and if successful, publish it to Docker Hub with the tag.
That sounds good. What else do you think is left to do before merging? I think it's mostly working, but feel free to give it a run & leave any feedback you see fit on the PR!
And let us know if you feel like we're on the wrong track here @stefanfoulis !
@tobiasmcnulty @smn looking great :-)
@tobiasmcnulty I'm happy to see this merged, then I can look at the travis & docker registry pushing side of things.
Thanks @smn. This still needs some work but it's basically operational. I'll go ahead and merge & we can tweak further as you & others test.
The geo library build step takes forever. This isn't a huge issue for me but if it is for others (not sure if Docker Hub is good about caching or not) we could split this off into a separate Dockerfile.
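If splitting it out ever becomes worthwhile, the shape would be something like this (the image name and helper script are placeholders, not anything in this repo):

```dockerfile
# geo-base/Dockerfile -- hypothetical split-out image whose only job is the
# slow GEOS/GDAL compile, so that layer gets cached (and tagged) on its own.
FROM alpine:3.5
COPY stack/build-geo-libs.sh /   # hypothetical: the existing geo build steps
RUN /build-geo-libs.sh

# The app image would then begin with:
#   FROM rapidpro/geo-base
# and only rebuild when app code changes, not on every geo rebuild.
```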
I also discovered we'll need a docker image for mage (https://github.com/rapidpro/mage). I don't have time to flesh this out further at the moment but here's what I have so far:
FROM openjdk:8-jdk-alpine
ENV MAGE_VERSION=0.1.82
# make sure tar and openssl are up to date
RUN set -ex \
    && apk add --no-cache openssl tar
WORKDIR /mage
RUN wget "https://github.com/rapidpro/mage/releases/download/v$MAGE_VERSION/mage-$MAGE_VERSION-bundle.tar.gz" && \
    tar -xvf mage-$MAGE_VERSION-bundle.tar.gz && \
    rm mage-$MAGE_VERSION-bundle.tar.gz
ENV REDIS_DATABASE=8
ENV TEMBA_HOST=localhost:8000 TEMBA_AUTH_TOKEN=none
ENV TWITTER_API_KEY=none TWITTER_API_SECRET=none
ENV SEGMENTIO_WRITE_KEY=none
ENV SENTRY_DSN=none
ENV LIBRATO_EMAIL=none LIBRATO_API_TOKEN=none
ENV STARTUP_CMD="java -jar mage.jar server config.yml"
EXPOSE 8027 8028
COPY stack/startup.sh /
CMD ["/startup.sh"]
startup.sh is pretty simple:
#!/bin/sh
export REDIS_HOST=$(echo $REDIS_URL | cut -d'/' -f3)
if [ "x$ENVIRONMENT" = "xproduction" ]; then
    export PRODUCTION=1
else
    export PRODUCTION=0
fi
$STARTUP_CMD
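For reference, the REDIS_HOST line just strips the scheme and database from the URL, keeping host:port; the URL below is only an example value:

```shell
# cut -d'/' -f3 splits on "/" and keeps the third field, which for a
# redis://host:port/db URL is the host:port part ("redis:" is field 1,
# the empty string between the slashes is field 2).
REDIS_URL="redis://redis.example.internal:6379/8"   # example value
REDIS_HOST=$(echo "$REDIS_URL" | cut -d'/' -f3)
echo "$REDIS_HOST"
```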
@nicpottier @ewheeler Could we (a) add a version of this to the mage repo or (b) get a mage-docker repo set up, managed by the same team as this one?
awesome, thanks for this. @nicpottier & @ewheeler do mage & rapidpro tags correlate at all? I suspect this isn't the case but it'd be good to know if a tagged release of rapidpro requires a specific tagged release of mage.
There isn't any relation between these right now, though I think we could start doing so. @rowanseymour any thoughts there?
Difficulty there is that RapidPro is tagged a lot more frequently than Mage which tends to be only updated a couple of times a year (basically whenever the schema of msgs_msg or contacts_contact changes). And when we do update Mage we'll sometimes make several tags without re-tagging RapidPro. Maybe we could just email the dev list whenever a new Mage version is released?
Added a rapidpro/mage-docker repo and created a Docker team with everybody on it.
I think we could at least version mage in a similar way to RapidPro, and have a convention that the last version equal to or less than the RapidPro version should be used.
That sounds reasonable to me. Thanks for creating the repo. I pushed up copies of the above scripts there and created the corresponding repo on Docker hub: https://hub.docker.com/r/rapidpro/mage/
Let's say we make an x.y.z release of RapidPro that requires some work in Mage. Typically we end up making more than one release of Mage (because it's been six months since Mage was last updated and we realise it needs some love, or we don't get things completely right the first time). The only way to accommodate that whilst keeping some relation between the versioning schemes would be something like x.y.z-n, where n is the nth Mage release for RapidPro version x.y.z.
Right, so adding an extra digit, i.e. x.y.z.a essentially.
I'm fine with that, it seems like it would certainly help anybody else trying to track.
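That convention ("last Mage version equal to or less than the RapidPro version") is easy to automate in a deploy script; pick_mage_tag below is a hypothetical helper written for this sketch, not something in either repo:

```shell
# pick_mage_tag RAPIDPRO_VERSION MAGE_TAG...
# Prints the newest Mage tag that is less than or equal to the RapidPro
# version, using GNU sort -V for version-aware ordering.
pick_mage_tag() {
    target="$1"; shift
    for tag in "$@"; do
        # keep the tag only if sort -V orders it at or before the target
        if [ "$(printf '%s\n%s\n' "$tag" "$target" | sort -V | head -n1)" = "$tag" ]; then
            echo "$tag"
        fi
    done | sort -V | tail -n1
}
```

So a deploy for RapidPro 1.4.0 with Mage tags 1.2.0, 1.3.1, and 1.5.0 available would select 1.3.1.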
@smn just saw your comment; I'd been doing some work in the same direction (vanilla Python based image) so I'm pushing it up here in case it's useful to you.
Still needs some work, but I believe this should be smaller than the official Python docker image since it's based on Alpine (compressed size is about 305MB, mostly due to the geo libraries).
Building GEOS and GDAL manually was sort of a pain while developing, but once it's built that layer should be cached & you won't have to worry about it.
I prefer uWSGI to Gunicorn because it survives a little better on its own, and it's entirely configurable via environment variables (Gunicorn is not).
To do:

- Static media (collectstatic and compress) + external static media hosting (e.g., S3)
- DEBUG off by default