The frontend Dockerfile currently has many `RUN` instructions, which creates an image with 30 layers and almost 3 GB in size.
One technique to solve this is to batch as many commands as possible in the same `RUN` command (see for instance the first `RUN`, where we run apt).
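For illustration, a minimal sketch of the batched `RUN` (the package names are placeholders, not the ones from the actual Dockerfile):

```dockerfile
# Hypothetical sketch: one RUN for the whole apt step, cleaning the apt cache
# in the same layer so it never gets baked into the image.
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        build-essential \
        curl \
        ca-certificates \
    && rm -rf /var/lib/apt/lists/*
```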
Another optimisation that can improve the final size is the use of a multi-stage Docker build.
The icing on the cake is to have the final image (the one that will serve the frontend) not depend on `yarn` at all, but just serve the page built in a previous stage through a simple `nginx` container, which is a much smaller dependency than `yarn` + `node` and much more efficient as a server.
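A rough sketch of what the multi-stage + `nginx` approach could look like; the image tags and the `dist` output path are assumptions, not taken from the current Dockerfile:

```dockerfile
# Build stage: needs node + yarn, but nothing from here ends up in the final image.
FROM node:20-slim AS frontend-builder
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build

# Final stage: only nginx plus the static files produced above.
FROM nginx:alpine
COPY --from=frontend-builder /app/dist /usr/share/nginx/html
```

With this, the layers of the builder stage do not count towards the final image at all; only the `nginx` base and the static assets remain.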
To improve the development experience, one could also cache the dependencies of the cargo build, either manually or with something like cargo chef (I have never used it; I usually do it manually, but it is a PITA).
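For reference, the manual variant of that caching trick usually looks something like the sketch below (paths and crate layout are assumptions; this is the fiddly part that cargo chef automates):

```dockerfile
# Build a dummy crate first so dependency compilation gets its own cached layer,
# then copy the real sources and rebuild only the project code.
FROM rust:1 AS backend-builder
WORKDIR /app
COPY Cargo.toml Cargo.lock ./
RUN mkdir src \
    && echo "fn main() {}" > src/main.rs \
    && cargo build --release \
    && rm -rf src
COPY src ./src
RUN touch src/main.rs && cargo build --release
```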