igor-ivanov opened 3 years ago
This issue has become critical since it appears regularly: http://hpc-master.lab.mtl.com:8080/blue/organizations/jenkins/LIBVMA-PRO/detail/LIBVMA-PRO/81/pipeline/1369 (see extra/x86_64/r-aa-zorro014/2).
What is the issue? I see the workspaces (WS) are fine; compare http://hpc-master.lab.mtl.com:8080/job/LIBVMA-PRO/81/ws/
Is it possible that CI uses WORKSPACE as a special variable and corrupts mine? As an example of the issue, compare abs_path and WORKSPACE below.
output:

```
[2021-03-26T11:26:52.581Z] # rel_path -----------------> ./contrib
[2021-03-26T11:26:52.581Z] # abs_path -----------------> /scrap/jenkins/workspace/LIBVMA-PRO/contrib
[2021-03-26T11:26:52.581Z]
[2021-03-26T11:26:52.581Z] # WORKSPACE ----------------> /home/jenkins/agent/workspace/LIBVMA-PRO
[2021-03-26T11:26:52.581Z] # BUILD_NUMBER -------------> 98
[2021-03-26T11:26:52.581Z] # TARGET -------------------> default
```
The actual script, `contrib/test_jenkins.sh` (paths quoted so directories with spaces do not break the expansion):

```shell
rel_path=$(dirname "$0")
abs_path=$(readlink -f "$rel_path")
echo
echo "# rel_path -----------------> ${rel_path} "
echo "# abs_path -----------------> ${abs_path} "
echo
source "${abs_path}/jenkins_tests/globals.sh"
echo
echo "# WORKSPACE ----------------> ${WORKSPACE} "
echo "# BUILD_NUMBER -------------> ${BUILD_NUMBER} "
echo "# TARGET -------------------> ${TARGET} "
echo
```
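To illustrate why `abs_path` and `WORKSPACE` can legitimately disagree: `dirname "$0"` plus `readlink -f` resolve relative to the script's own location on disk, while `WORKSPACE` is whatever the environment exports. A minimal sketch (all paths here are hypothetical stand-ins, not the real Jenkins layout):

```shell
#!/bin/bash
# Sketch: abs_path is derived from where the script file actually lives,
# so an exported WORKSPACE pointing elsewhere has no effect on it.
set -eu

script_dir=$(mktemp -d)              # stand-in for the checkout directory
cat > "${script_dir}/demo.sh" <<'EOF'
#!/bin/bash
rel_path=$(dirname "$0")
abs_path=$(readlink -f "$rel_path")
echo "$abs_path"
EOF
chmod +x "${script_dir}/demo.sh"

# Run from an unrelated directory with a conflicting WORKSPACE value:
cd /tmp
WORKSPACE=/home/jenkins/agent/workspace/DEMO "${script_dir}/demo.sh"
# prints the temp directory, not $WORKSPACE and not /tmp
```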
where `jenkins_tests/globals.sh` is:

```shell
#!/bin/bash
# ':=' assigns the default only when the variable is unset or empty,
# so a WORKSPACE already exported by Jenkins takes precedence over abs_path.
WORKSPACE=${WORKSPACE:=$abs_path}
WORKSPACE=${WORKSPACE:=$(pwd)}
BUILD_NUMBER=${BUILD_NUMBER:=0}
```
This issue appears 4 out of 5 times.
For example, on bare metal (r-aa-zorro014) the pipeline.log output shows PWD=/scrap/jenkins/workspace/LIBVMA-PRO and WORKSPACE=/home/jenkins/agent/workspace/LIBVMA-PRO before running contrib/test_jenkins.sh:
```
[Pipeline] unstash
[Pipeline] pwd
[Pipeline] sh
[2021-03-31T12:24:58.649Z] pwd=/scrap/jenkins/workspace/LIBVMA-PRO -- ws=/home/jenkins/agent/workspace/LIBVMA-PRO
[2021-03-31T12:24:58.649Z] XXX found
[2021-03-31T12:24:58.876Z] + set -eE
[2021-03-31T12:24:58.876Z] + bash -c 'shopt -s dotglob; rm -rf /home/jenkins/agent/workspace/LIBVMA-PRO/*'
[2021-03-31T12:24:58.919Z] ======================================================
```
There is a workaround that hides this Jenkins issue: setting WORKSPACE=$PWD explicitly in the Run step.
However, this workaround cannot be applied to the BlackDuck step.
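For steps where it can be applied, the workaround amounts to overriding the inherited variable before invoking anything that reads it. A minimal sketch (the commented-out call is hypothetical, only illustrating where the real test script would run):

```shell
#!/bin/bash
# Workaround sketch: force WORKSPACE to match the actual working directory
# so the Jenkins-exported value cannot leak into the scripts below.
set -eu
export WORKSPACE="$PWD"
echo "WORKSPACE forced to: $WORKSPACE"
# ./contrib/test_jenkins.sh   # hypothetical: would now see WORKSPACE == PWD
```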
As a result:
```
[2021-04-06T09:18:51.917Z] + set +x
[2021-04-06T09:18:51.917Z] Cloning into '/home/jenkins/agent/workspace/LIBVMA-PRO/blackduck'...
[2021-04-06T09:18:51.917Z] /home/jenkins/agent/workspace/LIBVMA-PRO/blackduck /home/jenkins/agent/workspace/LIBVMA-PRO
[2021-04-06T09:18:51.917Z] INFO:(run_bd_scan.sh) Using JAVA: /usr/bin/java
[2021-04-06T09:18:52.173Z] INFO:(run_bd_scan.sh) JAVA Version: 1.8.0_282
[2021-04-06T09:18:52.173Z] Required parameters value:
[2021-04-06T09:18:52.173Z] ====================================================
[2021-04-06T09:18:52.173Z] SPRING_APPLICATION_JSON = {"blackduck.url":"https://blackduck.mellanox.com/","blackduck.api.token":"ODMwOWYwMzEtODA2ZC00MzBjLWI1ZDEtNmFiMjBkYzQzMzkwOjNmNjExN2M1LWE2ZmEtNDZlYS1hZjRiLTZlNDgwNjAwOTVjNw=="}
[2021-04-06T09:18:52.173Z] PROJECT_NAME = libvma
[2021-04-06T09:18:52.173Z] PROJECT_VERSION = 0.1.0
[2021-04-06T09:18:52.173Z] PROJECT_SRC_PATH = /scrap/jenkins/workspace/LIBVMA-PRO/src
[2021-04-06T09:18:52.173Z] ----------------------------------------------------
[2021-04-06T09:18:52.173Z] INFO:(run_bd_scan.sh) Running: source scan
[2021-04-06T09:18:52.173Z] INFO:(run_bd_scan.sh) Dry Run: false
[2021-04-06T09:18:52.173Z] ERROR:(run_bd_scan.sh) Source scan failed. PROJECT_SRC_PATH should be directory
```
Cloning is done into the right path, /home/jenkins/agent/workspace/LIBVMA-PRO/blackduck, but PROJECT_SRC_PATH is set to /scrap/jenkins/workspace/LIBVMA-PRO/src.
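A hypothetical reconstruction of the check that fires in run_bd_scan.sh (the actual script is not shown in this thread): PROJECT_SRC_PATH inherits the stale /scrap/... workspace path, which does not exist on the agent, so a plain directory test fails.

```shell
#!/bin/bash
# Hypothetical sketch of the failing validation: the inherited path
# points at a workspace root that is absent on this agent.
PROJECT_SRC_PATH=/scrap/jenkins/workspace/LIBVMA-PRO/src   # stale value from env
if [ ! -d "$PROJECT_SRC_PATH" ]; then
    echo "ERROR:(run_bd_scan.sh) Source scan failed. PROJECT_SRC_PATH should be directory"
fi
```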
Added a fix as we discussed offline (moved the workspace variable calculation under the stage() context).
Please let me know if it helped.
I relaunched the job and see the issue again (http://hpc-master.lab.mtl.com:8080/blue/organizations/jenkins/LIBVMA-PRO/detail/LIBVMA-PRO/171/pipeline/844).
I also faced this issue in a similar scenario, when I had docker images and a physical server in the same matrix file. In my case the docker usage was not necessary, so after I removed the dockers and was left with the server only, the issue went away. @mike, when we use agentSelector, WORKSPACE comes from the Jenkins configuration, not from the docker image or the ci-demo user.
This issue originally appeared here: https://github.com/Mellanox-lab/libvma-pro/pull/33#discussion_r598849608
I am using following:
and periodically observe for bare-metal:
and for container:
For example: http://hpc-master.lab.mtl.com:8080/blue/organizations/jenkins/LIBVMA-PRO/detail/LIBVMA-PRO/76/pipeline/885 default/x86_64/fc31/1 and default/x86_64/r-aa-zorro014/1