GSS-Cogs / dd-cms

A data-driven content management system prototype, based on Plone/Volto and data blocks

Jenkins docker-compose race condition #350

Closed: ajtucker closed this issue 2 years ago

ajtucker commented 2 years ago

We're using docker-compose to run some integration tests on Jenkins, and because compose derives container names from the project name, which defaults to the name of the directory holding the compose file, concurrent Jenkins jobs can end up trying to use the same names for their containers.
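
Roughly, the collision looks like this (paths from this repo; the exact generated names vary by compose version):

# every concurrent job checks out into a workspace ending in
# tests/climate-change-v2, so each derives the same default project
# name and therefore tries to create identically named containers
cd tests/climate-change-v2
docker-compose up -d plone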

Steve suggested either locking/disabling concurrent builds, or passing docker-compose -p ${env.BUILD_TAG} so that each build gets its own project name.
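
A minimal sketch of both options (the stage is cut down to just the compose call; BUILD_TAG is Jenkins's built-in jenkins-${JOB_NAME}-${BUILD_NUMBER}):

pipeline {
    agent any
    // Option 1: serialise the job so container names can never collide
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage('End user tests') {
            steps {
                dir('tests/climate-change-v2') {
                    // Option 2: a per-build compose project name,
                    // lowercased to keep it valid for compose
                    sh "docker-compose -p ${env.BUILD_TAG.toLowerCase()} up -d"
                }
            }
        }
    }
}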

ajtucker commented 2 years ago

The current attempt is:

pipeline {
    agent any
    stages {
        stage('Frontend tests') {
            agent {
                dockerfile {
                    dir 'volto'
                    filename 'Dockerfile-test'
                    args '-u root:root'
                    reuseNode true
                }
            }
            steps {
                dir('volto') {
                    /* node_modules should normally just be a symlink left by a
                       previous run, but older Jenkins workspaces may still
                       have it as a real directory; rm -rf handles the link,
                       the directory and the already-absent cases alike */
                    sh "rm -rf ${env.WORKSPACE}/volto/node_modules"
                    /* this find would be a safer version of the git clean
                       below, but Jenkins still has various "dirty" workspaces
                       from before the caching attempts, so leave the git clean
                       in for a bit and switch back to this once it's merged
                       and those workspaces have cycled out */
                    /* sh "find ${env.WORKSPACE}/volto/src/addons -maxdepth 1 -type l -exec rm \\{\\} \\; " */
                    sh "git clean -f ${env.WORKSPACE}/volto/src/addons"
                    sh "ln -s /app/node_modules ${env.WORKSPACE}/volto/"
                    sh "for n in \$(find /app/src/addons -type d -mindepth 1 -maxdepth 1); do ln -sf \$n ${env.WORKSPACE}/volto/src/addons/\$(basename \$n); done;"
                    sh "yarn test-ci"
                }
            }
        }
        stage('End user tests') {
            steps {
                script {
                    def project = env.BUILD_TAG.toLowerCase()
                    dir('tests/climate-change-v2') {
                        sh "docker-compose -p ${project} build"
                        sh "docker-compose -p ${project} up -d plone"
                        sh "docker-compose -p ${project} up -d volto"
                        sh "docker-compose -p ${project} up -d proxy"
                        // compose tags the built test image <project>_test
                        def puppeteer = docker.image("${project}_test")
                        puppeteer.inside("--rm --entrypoint= --network ${project}_test_net") {
                            sh './run.sh'
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            script {
                dir('volto') {
                    junit allowEmptyResults: true, testResults: '*.xml'
                }
                dir('tests/climate-change-v2') {
                    cucumber 'test-results.json'
                    sh "docker-compose -p ${env.BUILD_TAG.toLowerCase()} down"
                }
            }
        }
    }
}
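
If repeating -p everywhere gets tedious, compose also reads the project name from the COMPOSE_PROJECT_NAME environment variable, so it could be set once at the top of the pipeline instead (untested sketch):

pipeline {
    agent any
    environment {
        // equivalent to passing -p to every docker-compose call below
        COMPOSE_PROJECT_NAME = "${env.BUILD_TAG.toLowerCase()}"
    }
    stages {
        stage('End user tests') {
            steps {
                dir('tests/climate-change-v2') {
                    sh 'docker-compose up -d'
                }
            }
        }
    }
    post {
        always {
            dir('tests/climate-change-v2') {
                sh 'docker-compose down'
            }
        }
    }
}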

We may also want to try docker-compose build --no-rm to avoid throwing intermediate layers away. I'm not sure we need the -p on docker-compose build, as it's the container names rather than the image names we're worried about, though it would be good to figure out how to name the images better.
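
On the image naming: as far as I can tell, compose v1 tags anything it builds as <project>_<service>, so the -p does end up in the image name too, and the docker.image("..._test") lookup above relies on exactly that. If we'd rather have a stable image name, an explicit image: next to build: in the compose file pins the tag; a sketch, with the service and build path assumed for illustration:

# tests/climate-change-v2/docker-compose.yml (fragment, illustrative)
version: "3"
services:
  test:
    build: ./test
    # without "image:", compose tags this build <project>_test;
    # with it, the tag stays the same whatever -p is passed
    image: dd-cms-climate-change-test:latest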