bigbluebutton / greenlight

A really simple end-user interface for your BigBlueButton server.
GNU Lesser General Public License v3.0

skip yarn install on every docker instance start #5637

Open morya opened 9 months ago

morya commented 9 months ago

IMHO

The container shouldn't require an additional yarn install and rails assets:precompile run from bin/start on every start.

Perhaps we should pre-build this into the Docker image instead?

Like this, in /Dockerfile:

RUN yarn install --production --frozen-lockfile \
    && yarn build \
    && yarn cache clean

Is there a particular reason to delay the yarn install and build until startup?

morya commented 9 months ago

Running yarn install on every Docker instance start is a problem when running greenlight in an air-gapped environment.

And if we could do yarn install during the Docker build stage, the huge node_modules directory could also be removed afterwards.

That would shrink the image size for sure.

morya commented 9 months ago

This is the Dockerfile I am ~using~ trying (not finished, see below), with rails assets:precompile commented out in the bin/start file.

FROM ruby:alpine3.17 AS base

ARG RAILS_ROOT=/usr/src/app
ENV RAILS_ROOT=${RAILS_ROOT}
ARG RAILS_ENV
ENV RAILS_ENV=${RAILS_ENV:-production}
ARG NODE_ENV
ENV NODE_ENV=${RAILS_ENV}
ARG RAILS_LOG_TO_STDOUT
ENV RAILS_LOG_TO_STDOUT=${RAILS_LOG_TO_STDOUT:-true}
ARG RAILS_SERVE_STATIC_FILES
ENV RAILS_SERVE_STATIC_FILES=${RAILS_SERVE_STATIC_FILES:-true}
ARG PORT
ENV PORT=${PORT:-3000}
ARG VERSION_TAG
ENV VERSION_TAG=$VERSION_TAG
ENV PATH=$PATH:$RAILS_ROOT/bin
WORKDIR $RAILS_ROOT
RUN bundle config --local deployment 'true' \
    && bundle config --local without 'development:test'

FROM base as build

ARG PACKAGES='xz alpine-sdk libpq-dev imagemagick yarn build'
COPY Gemfile Gemfile.lock ./
RUN apk update \
    && apk add --update --no-cache ${PACKAGES} \
    && apk upgrade \
    && update-ca-certificates \
    && bundle install --no-cache \
    && bundle doctor

COPY . ./
ENV SECRET_KEY_BASE=1234

RUN yarn install --production --frozen-lockfile \
    && yarn cache clean \
    && rails assets:precompile

FROM base as prod

ARG PACKAGES='xz libpq-dev tzdata imagemagick bash'
COPY --from=build $RAILS_ROOT/vendor/bundle ./vendor/bundle
COPY --from=build $RAILS_ROOT/app/assets ./app/assets
RUN apk update \
    && apk add --update --no-cache ${PACKAGES} 
COPY . ./

EXPOSE ${PORT}
ENTRYPOINT [ "./bin/start" ]
ffdixon commented 9 months ago

Thanks for sharing!

Ithanil commented 8 months ago

This is the Dockerfile I am using, with rails assets:precompile commented out in the bin/start file.

Hi, thanks for sharing this excellent idea! I came back to this after a well-tested container image failed to start properly when deployed to production, because yarn couldn't fetch a certain package at that point in time. That got me thinking about how to avoid an accident like this in the future, and I remembered your issue.

However, when testing this out I wasn't able to use your Dockerfile as posted. In particular, I had to drop the package "build" (which doesn't exist and is also not used in the original Dockerfile) from the first/build PACKAGES, and add the tzdata package as in the second/prod PACKAGES. Furthermore, it's not necessary to include the package xz in the prod PACKAGES.

Most importantly though, I also had to copy the public/assets directory in addition to app/assets(/builds); the assets:precompile task puts a lot of output there as well.

In the end I arrived at this Dockerfile:

FROM ruby:alpine3.17 AS base

ARG RAILS_ROOT=/usr/src/app
ENV RAILS_ROOT=${RAILS_ROOT}
ARG RAILS_ENV
ENV RAILS_ENV=${RAILS_ENV:-production}
ARG NODE_ENV
ENV NODE_ENV=${RAILS_ENV}
ARG RAILS_LOG_TO_STDOUT
ENV RAILS_LOG_TO_STDOUT=${RAILS_LOG_TO_STDOUT:-true}
ARG RAILS_SERVE_STATIC_FILES
ENV RAILS_SERVE_STATIC_FILES=${RAILS_SERVE_STATIC_FILES:-true}
ARG PORT
ENV PORT=${PORT:-3000}
ARG VERSION_TAG
ENV VERSION_TAG=$VERSION_TAG
ENV PATH=$PATH:$RAILS_ROOT/bin
WORKDIR $RAILS_ROOT
RUN bundle config --local deployment 'true' \
    && bundle config --local without 'development:test'

FROM base as build

ARG PACKAGES='xz alpine-sdk libpq-dev tzdata imagemagick yarn'
RUN apk update \
    && apk upgrade \
    && update-ca-certificates \
    && apk add --no-cache ${PACKAGES}

COPY Gemfile Gemfile.lock ./
RUN bundle install --no-cache \
    && bundle doctor

COPY package.json yarn.lock ./
RUN yarn install --production --frozen-lockfile \
    && yarn cache clean

COPY . ./
ENV SECRET_KEY_BASE=DUMMY
RUN rails assets:precompile

FROM base as prod

ARG PACKAGES='libpq-dev tzdata imagemagick bash'
RUN apk update \
    && apk upgrade \
    && update-ca-certificates \
    && apk add --no-cache ${PACKAGES}

COPY . ./
COPY --from=build $RAILS_ROOT/vendor/bundle ./vendor/bundle
COPY --from=build $RAILS_ROOT/app/assets/builds ./app/assets/builds
COPY --from=build $RAILS_ROOT/public/assets ./public/assets

EXPOSE ${PORT}
ENTRYPOINT [ "./bin/start" ]

The multiple COPY instructions in the build stage could obviously be replaced by a single COPY . ./, but I think this way it is clearer which files are used at which point.
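
As a guard against the missing-assets problem described above, one could sanity-check both precompile output locations before shipping the prod stage. This is only a sketch; assets_present is a hypothetical helper, not part of Greenlight:

```shell
#!/bin/sh
# Hypothetical sanity check: after `rails assets:precompile`, both output
# locations should exist and be non-empty before the prod stage copies them.
assets_present() {
    root="$1"
    for dir in "$root/app/assets/builds" "$root/public/assets"; do
        # fail if the directory is missing or contains no files at all
        [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ] || return 1
    done
}
```

Running something like assets_present /usr/src/app in a RUN step right after precompiling would fail the image build with a non-zero exit instead of producing a broken image.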

EDIT: Even in this version there is stuff missing, which only becomes apparent when using external auth. I will update when I think I have a real final version. EDIT2: Yeah, of course: esbuild would have to be run again on start, because I'm using modifications that add SAML/LDAP, and they put their OMNIAUTH_PATH there.

morya commented 8 months ago

Sorry about that, the Dockerfile I posted did not work. I am a newbie to Ruby and Rails.

Thanks for sharing; it would be great if this Dockerfile were merged into the greenlight source.

Ithanil commented 8 months ago

@morya Unfortunately, the fact that esbuild depends on configuration variables basically breaks the whole idea.

farhatahmad commented 8 months ago

There likely is a way around this, though it's something I'd need to spend a lot of time investigating.

Ithanil commented 8 months ago

@farhatahmad @morya Just FYI: What I do now isn't pretty, but works.

First of all, the final Dockerfile:

FROM ruby:alpine3.17 AS base

ARG RAILS_ROOT=/usr/src/app
ENV RAILS_ROOT=${RAILS_ROOT}
ARG RAILS_ENV
ENV RAILS_ENV=${RAILS_ENV:-production}
ARG NODE_ENV
ENV NODE_ENV=${RAILS_ENV}
ARG RAILS_LOG_TO_STDOUT
ENV RAILS_LOG_TO_STDOUT=${RAILS_LOG_TO_STDOUT:-true}
ARG RAILS_SERVE_STATIC_FILES
ENV RAILS_SERVE_STATIC_FILES=${RAILS_SERVE_STATIC_FILES:-true}
ARG PORT
ENV PORT=${PORT:-3000}
ARG VERSION_TAG
ENV VERSION_TAG=$VERSION_TAG
ENV PATH=$PATH:$RAILS_ROOT/bin
WORKDIR $RAILS_ROOT
RUN bundle config --local deployment 'true' \
    && bundle config --local without 'development:test'

FROM base as build

ARG PACKAGES='xz alpine-sdk libpq-dev tzdata imagemagick yarn'
RUN apk update \
    && apk upgrade \
    && update-ca-certificates \
    && apk add --no-cache ${PACKAGES}

COPY Gemfile Gemfile.lock ./
RUN bundle install --no-cache \
    && bundle doctor

COPY package.json yarn.lock ./
RUN yarn install --production --frozen-lockfile \
    && yarn cache clean

COPY . ./
ENV SECRET_KEY_BASE_DUMMY=1
RUN rails assets:precompile

# remove anything not to be copied into prod
RUN rm -rf ./node_modules
RUN rails tmp:clear

FROM base as prod

ARG PACKAGES='libpq-dev tzdata imagemagick bash'
RUN apk update \
    && apk upgrade \
    && update-ca-certificates \
    && apk add --no-cache ${PACKAGES}

COPY --from=build $RAILS_ROOT .

EXPOSE ${PORT}
ENTRYPOINT [ "./bin/start" ]

The main difference to the above is that it avoids accidentally missing any assets by simply copying the whole RAILS_ROOT from the build stage, but only after explicitly deleting what isn't needed.

So, like above, rails assets:precompile is executed, including a (so far) unmodified esbuild. But what I do is prevent the deployment of gzipped assets, via config/application.rb:

    # don't serve compressed assets so we can sed-replace in bin/start
    config.assets.gzip = false

Then, in bin/start I have the following instead of assets:precompile:

# instead of rails assets:precompile we do this:
if [[ -n $OPENID_CONNECT_ISSUER ]]
then
    sed -i 's/\/auth\/saml/\/auth\/openid_connect/g' app/assets/builds/main.js public/assets/main-*.js
    sed -i 's/\/auth\/ldap/\/auth\/openid_connect/g' app/assets/builds/main.js public/assets/main-*.js
elif [[ -n $SAML_ENTITY_ID ]]
then
    sed -i 's/\/auth\/openid_connect/\/auth\/saml/g' app/assets/builds/main.js public/assets/main-*.js
    sed -i 's/\/auth\/ldap/\/auth\/saml/g' app/assets/builds/main.js public/assets/main-*.js
elif [[ -n $LDAP_SERVER ]]
then
    sed -i 's/\/auth\/openid_connect/\/auth\/ldap/g' app/assets/builds/main.js public/assets/main-*.js
    sed -i 's/\/auth\/saml/\/auth\/ldap/g' app/assets/builds/main.js public/assets/main-*.js
fi
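
Why the gzipped copies have to go can be shown with a tiny sketch (demo_stale_gzip is a made-up helper for illustration): sed rewrites only the plain .js file, while a precompiled .js.gz copy would keep the old auth path, so a server preferring the compressed variant would serve stale code.

```shell
#!/bin/sh
# Illustration: sed-editing the plain asset leaves a precompiled .gz stale.
demo_stale_gzip() {
    dir="$1"
    printf '/auth/saml' > "$dir/main.js"
    # what the gzip step of precompile would emit alongside the plain file
    gzip -c "$dir/main.js" > "$dir/main.js.gz"
    # the runtime rewrite from bin/start touches only the plain file
    sed -i 's|/auth/saml|/auth/openid_connect|g' "$dir/main.js"
}
```

After running this, main.js contains /auth/openid_connect while gzip -dc main.js.gz still yields /auth/saml, which is exactly the mismatch that config.assets.gzip = false avoids.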

This is obviously a bit more complicated than it would be in vanilla GL3, because we support several external authn methods (though not at the same time / on the same instance!). In vanilla GL3 this wouldn't even be necessary, because there is only one valid setting anyway. What would be necessary, though, is a way to support RELATIVE_URL_ROOT, which can't really be done by a simple sed replace. It could be done if a unique placeholder were inserted by esbuild, but then the variable could only be set on the first start of the container.
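
That placeholder idea could be sketched like this (configure_relative_root, the __RELATIVE_URL_ROOT__ placeholder, and the .assets_configured marker file are all assumptions for illustration, not existing Greenlight behavior). The marker file is what makes the substitution one-shot, which is why the variable could only be set on the first container start:

```shell
#!/bin/sh
# Hypothetical one-shot substitution of a build-time placeholder at first start.
configure_relative_root() {
    asset_dir="$1"   # e.g. app/assets/builds
    url_root="$2"    # runtime value of RELATIVE_URL_ROOT (may be empty)
    marker="$asset_dir/.assets_configured"
    if [ ! -f "$marker" ]; then
        # replace the placeholder that a modified esbuild step would bake in
        sed -i "s|__RELATIVE_URL_ROOT__|$url_root|g" "$asset_dir"/main*.js
        touch "$marker"   # guard: later restarts must not substitute again
    fi
}
```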

I hope that was in some way helpful. The solution might seem hacky / a lot of effort, but for me it was really worth it to get deterministic and much quicker container startups.