NIAEFEUP / niployments-old

NIAEFEUP's deployments management

Docker build without cache #31

Open DoStini opened 2 years ago

DoStini commented 2 years ago

NIJobs had a problem deploying the new services on the master branch: the build was using the cached version from another branch.

Shouldn't we use a --no-cache flag here?
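For context, this is roughly what the change would look like (a sketch only; `IMAGE_TAG` and `BUILD_CONTEXT` are placeholder names, not the actual variables used in deploy-types.sh):

```shell
# Hypothetical sketch: forcing a clean build by disabling the layer cache.
# Every instruction in the Dockerfile is re-executed, ignoring any
# previously built layers.
docker build --no-cache -t "$IMAGE_TAG" "$BUILD_CONTEXT"
```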

https://github.com/NIAEFEUP/niployments/blob/4801cf6ab9385f270611ec303624b9937c1667d2/deployments/deploy-types.sh#L31

imnotteixeira commented 2 years ago

Which exactly was the problem? Can you post logs/give a better explanation: why was caching specifically a problem?

DoStini commented 2 years ago

Since the NIJobs beta and master versions use the same repository, Dockerfiles, and so on, the build is reusing the cache from the develop version, for example.

imnotteixeira commented 2 years ago

The image tag used when building the Dockerfile will be different (nijobs-fe---master vs nijobs-fe---develop) [0], so it's essentially a different image. The env file is different, so the build should pick up those changes and go from there. They use the same base image of node and (currently) the same npm dependencies, so that part could be cached (not sure if they are sharing that, since, again, the images are different).

I think we should really find the root cause here; disabling the Docker cache outright is not a good solution. It would make builds take much longer and consume resources we need to actually serve the apps' user requests, so we should avoid that.


[0] https://github.com/NIAEFEUP/niployments/blob/4801cf6ab9385f270611ec303624b9937c1667d2/deployments/deploy-types.sh#L17

miguelpduarte commented 2 years ago

@imnotteixeira is right here, btw. The images are different so this should not be an issue. Was there anything that pointed to this being a problem?

There might be cache shared between deploys, but at most this should cache dependencies (depending on the order of the instructions in the Dockerfile), given that the following steps copy the code, which will be different for each deploy.
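The layer-ordering point can be illustrated with a typical Node Dockerfile sketch (hypothetical; this is not the actual nijobs Dockerfile, and the base image tag is assumed):

```dockerfile
# Hypothetical Dockerfile illustrating which layers Docker can cache.
FROM node:16-alpine

WORKDIR /app

# Dependency layers: reused from cache as long as package*.json are
# unchanged, so builds for different branches may share them.
COPY package.json package-lock.json ./
RUN npm ci

# Code layer: COPY invalidates the cache whenever any file differs,
# so each deploy rebuilds from this point with its own code.
COPY . .

RUN npm run build
CMD ["npm", "start"]
```

In other words, a stale dependency cache is plausible, but stale application code should not be, since the `COPY . .` layer changes per deploy.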

I'd say to close this as a false positive but I want to first confirm there is not another underlying issue at play here.