tspascoal opened this issue 2 years ago
Hi @tspascoal,
Thanks for reporting this issue. We've investigated this issue before and are working on resolving it 👍
2022-08-22 Update: Correctly translating github.workspace causes some regressions, considering introducing github.host-workspace
Hello @fhammerl,
Did you make any progress on this issue?
Has anyone figured out a workaround for this?
We've started experiencing the same issues since we moved to running jobs inside containers, and what was most interesting is that we were using the ${{ github.workspace }} variable in all cases. Our pipeline looked like this:
```yaml
build:
  name: Build, Test & Publish
  runs-on: self-hosted
  container:
    image: internalregistry.io/internal-image:3
  steps:
    - uses: actions/checkout@v3
    - name: Build, Test and analyze
      shell: bash
      run: |
        dotnet test
    - name: Publish
      shell: bash
      run: |
        for f in $(find src -name '*.csproj'); do
          d=$(dirname $f)
          outputFolder=${{ github.workspace }}/${{ env.ARTIFACT }}/$d
          ( cd "$d" && dotnet publish --no-self-contained -c $BUILD_CONFIGURATION -o $outputFolder )
        done
    - name: Publish artifacts
      uses: actions/upload-artifact@v3.1.1
      with:
        name: ${{ env.ARTIFACT }}-v${{ github.run_number }}
        retention-days: 1
        path: ${{ github.workspace }}/${{ env.ARTIFACT }}/src
        if-no-files-found: error
```
And $outputFolder looked like this: /__w/<project_name>/<project_name>/<env.ARTIFACT>/src/SomeProject. But in the Publish artifacts step we were still getting this error:

```
Error: No files were found with the provided path: /__w/<project_name>/<project_name>/<env.ARTIFACT>/src. No artifacts will be uploaded.
```

Then we changed the workflow to this to overcome the problem, and it started working:
```yaml
build:
  name: Build, Test & Publish
  runs-on: self-hosted
  container:
    image: internalregistry.io/internal-image:3
  steps:
    - uses: actions/checkout@v3
    - name: Build, Test and analyze
      shell: bash
      run: |
        dotnet test
    - name: Publish
      shell: bash
      run: |
        echo "GITHUB_WORKSPACE=$GITHUB_WORKSPACE" >> $GITHUB_ENV
        for f in $(find src -name '*.csproj'); do
          d=$(dirname $f)
          outputFolder=${{ env.GITHUB_WORKSPACE }}/${{ env.ARTIFACT }}/$d
          ( cd "$d" && dotnet publish --no-self-contained -c $BUILD_CONFIGURATION -o $outputFolder )
        done
    - name: Publish artifacts
      uses: actions/upload-artifact@v3.1.1
      with:
        name: ${{ env.ARTIFACT }}-v${{ github.run_number }}
        retention-days: 1
        path: ${{ env.GITHUB_WORKSPACE }}/${{ env.ARTIFACT }}/src
        if-no-files-found: error
```
Now the paths in both steps look the same: /__w/<project_name>/<project_name>/<env.ARTIFACT>/src.
So, in short, we saved $GITHUB_WORKSPACE to an environment variable and started using it instead of ${{ github.workspace }}. Maybe we don't even need to save it and can just use it out of the box; we didn't try that.
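For what it's worth, a minimal sketch of that untried variant could look like the step below. It assumes ARTIFACT is defined as a workflow-level env variable (so the shell sees $ARTIFACT) and reuses the SomeProject path from the example above; it is not taken from the original workflow.

```yaml
- name: Publish (sketch)
  shell: bash
  run: |
    # Inside a container job the GITHUB_WORKSPACE environment variable already
    # holds the container path (/__w/...), so letting the shell expand it avoids
    # the untranslated ${{ github.workspace }} expression entirely.
    outputFolder="$GITHUB_WORKSPACE/$ARTIFACT/src/SomeProject"
    ( cd src/SomeProject && dotnet publish --no-self-contained -o "$outputFolder" )
```

The upload-artifact path is an expression rather than shell, though, so there you would still need the re-exported env.GITHUB_WORKSPACE (or, possibly, a path relative to the workspace).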
I started using artifacts. In the first job, push the file to an artifact:
```yaml
# Archive zip
- name: Archive artifact
  uses: actions/upload-artifact@v3
  with:
    name: files-${{ github.run_id }}.tar.gz
    retention-days: 1
    if-no-files-found: error
    path: |
      ${{ github.workspace }}/files-${{ github.run_id }}.tar.gz
```
In the second job, download the file in order to use it:
```yaml
# Fetch Dump from artifact storage
- uses: actions/download-artifact@v2
  with:
    name: files-${{ github.run_id }}.tar.gz
```
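A possible follow-up step, just to illustrate referencing the downloaded file afterwards; it assumes the default download location (the current working directory, i.e. the job workspace) and is not part of the original workflow:

```yaml
- name: Unpack dump (sketch)
  shell: bash
  run: |
    # Using a relative path keeps the step working both on the host and
    # inside a container job, where the workspace is mounted at /__w/...
    tar -xzf "files-${{ github.run_id }}.tar.gz"
```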
After all the other tasks, delete the artifact; we don't want to keep it beyond the job run:
```yaml
# Delete artifact
- uses: geekyeggo/delete-artifact@v2
  with:
    name: |
      files-${{ github.run_id }}.tar.gz
```
The infra people at our organisation wanted the workers to be ephemeral and not linked to storage. The "workaround" with the artifacts actually works quite well.
This is also an issue for me after switching to container-based workflows. It makes some GitHub Actions on the Marketplace that rely on this variable unusable inside containers, such as rsync.
One thing to note is that ${{ runner.workspace }} inside the run part of a step yields a different result than ${{ runner.workspace }} inside the working-directory of a step.
In my case, working-directory will spit out /__w/REPOSITORY/, while run will spit out /home/ubuntu/actions-runner/_work/REPOSITORY/.
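A small probe along these lines (purely illustrative, and reflecting the observation above rather than documented behaviour) makes the difference visible in the job log:

```yaml
- name: Compare runner.workspace evaluations
  # Per the observation above, the path used for working-directory may come out
  # translated (/__w/...), while the same expression inside run may not.
  working-directory: ${{ runner.workspace }}
  run: |
    pwd                                           # where working-directory actually landed
    echo "expression: ${{ runner.workspace }}"    # value substituted into the script
    echo "env var   : $RUNNER_WORKSPACE"          # value of the environment variable
```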
Does anyone have a solution for this? I am experiencing the same issue.
Hey @abhijeetdhoke,
I would suggest using environment variables until this PR is finalized and merged :relaxed:
Correctly translating github.workspace causes some regressions, considering introducing github.host-workspace

@fhammerl When I add github.host-workspace it is not recognized in GitHub Actions. Can you give a complete example?
Hey @abhijeetdhoke,
I would suggest using environment variables until this PR is finalized and merged ☺️
@nikola-jokic Do you mean for actions/checkout or where? Can you give an example?
@nikola-jokic
Here is my code
```yaml
name: azure bicep deployment

env:
  GitHubRepo: 'bicep-with-modules'
  CorrectedRootPath: ''

on:
  push:
    branches: [ Master ]
    paths:
      - '.github/workflows/BicepDev.yml'
  pull_request:
    branches: [ master ]
  workflow_dispatch:

jobs:
  bicep_build:
    name: bicep build
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Check out repository code
        uses: actions/checkout@v3
      - name: Login to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_SPN_CREDENTIAL }}
      - name: Remove Extra Folder
        run: |
          workspace_path="${{ github.workspace }}"
          echo "${{ github.workspace }}" # This will output 'workspace'
          corrected_workspace_path="$(dirname "$workspace_path")"
          echo "corrected_workspace_path=$corrected_workspace_path" >> "$GITHUB_ENV"
      - name: 'Build Azure Resource by bicep'
        working-directory: '${{ github.workspace }}/src/functionApp'
        run: |
          git config --global --add safe.directory $GITHUB_WORKSPACE
          az deployment group create --what-if --resource-group RG-fun-aapss --template-file './main.bicep' --parameters @parameters.json

  bicep_deployment:
    name: bicep deployment
    runs-on: ubuntu-latest
    needs: [bicep_build]
    environment: production
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3
      - name: Login to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_SPN_CREDENTIAL }}
      - name: 'deploy Azure Resource from bicep'
        working-directory: '${{ github.workspace }}/src/functionApp'
        run: |
          az deployment group create --resource-group RG-fun-aapss --template-file './main.bicep' --parameters @parameters.json
```
What I meant is that you can use $GITHUB_WORKSPACE. Your run should be something like:
```yaml
run: |
  cd $GITHUB_WORKSPACE && az deployment ...
```
The problem is that during workflow evaluation we weren't consulting the step host to translate github.workspace into a path that is usable inside a container. We implemented step-host-based evaluation under a feature flag, but it is currently turned off due to a problem with hashFiles. If the new context variable is accepted, then we will turn the feature flag on and the workflow syntax you just posted should evaluate correctly. But until then, I would suggest you use the environment variables whenever you are running inside the container :relaxed:
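Applied to the workflow posted above, the suggestion would look roughly like this (just a sketch; the resource names are copied from that workflow, and the step replaces its working-directory with a cd):

```yaml
- name: 'Build Azure Resource by bicep'
  run: |
    # $GITHUB_WORKSPACE is expanded by the shell at run time, so inside a
    # container job it resolves to the container path rather than the host path.
    cd "$GITHUB_WORKSPACE/src/functionApp"
    git config --global --add safe.directory "$GITHUB_WORKSPACE"
    az deployment group create --what-if --resource-group RG-fun-aapss --template-file './main.bicep' --parameters @parameters.json
```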
@nikola-jokic let me check and get back to you.
This is a very frustrating one. I am unable to deploy my helm charts in the container. I tried exporting $GITHUB_WORKSPACE, and even inside the env of the same step prior to execution it shows correctly:

GITHUB_WORKSPACE: /runner/_work/<repo>/<repo>

If I list this directory, everything is there. But then inside the container it uses /github/workspace instead. I even tried hard-coding the above path into the config that gets passed into the container, but that still doesn't work because there is a mapping of that path to /github/workspace:

-v "/runner/_work/helm-charts/helm-charts":"/github/workspace"

I don't think I can modify this part. I'm out of ideas.
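One thought, offered tentatively: since the -v mapping above bind-mounts the workspace into the action's container at /github/workspace, the action could be given a path relative to the workspace instead of an absolute host path, and that path should resolve on both sides of the mount. A rough sketch, with the action reference and file name left as placeholders:

```yaml
- name: Deploy helm chart (sketch)
  uses: <the deploy-helm-to-eks marketplace action>   # placeholder, not a real reference
  with:
    # Relative to the workspace, which is the same directory whether it is seen as
    # /runner/_work/<repo>/<repo> on the host or /github/workspace inside the container.
    config-files: charts/values.yaml
```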
@dylac I know you probably already tried it but my fix https://github.com/actions/runner/issues/2058#issuecomment-1361469076 works just fine. We're actively using it and it's a good fix while this path is not fixed.
I really appreciate your response, but I'm having trouble understanding how your example solves the problem. Even if I upload/download it prior to the container, how can I make that downloaded artifact available inside the container? The action I am using is:
https://github.com/marketplace/actions/deploy-helm-to-eks
My step is almost identical to "example 1" in the above link. In a previous step I'm editing a helm chart .yaml file and saving it to the workspace dir; this part works. But this action's config-files option only accepts a valid filesystem path, so I don't think I can run other bash commands or anything else inside of this step. Happy to hear if I missed something here, though.
I see the artifact actions allow you to choose a path to write to. Is there one that would be available in the container?
What I do differently is that I have the helm charts in a separate repo and I pull them in:
```yaml
- name: Checkout helm repo
  uses: actions/checkout@v3
  with:
    repository: woutersf/drupal-helm-charts-k8s
    path: upstream_helm
    token: ${{ secrets.GIT_CHECKOUT_TOKEN }}
    ref: '${{ inputs.aks_branch }}'
```
and then in a later step
```yaml
# Validate helm stuff to not leave a broken state.
- name: DRY RUN
  run: |
    cd helm
    helm upgrade --dry-run --timeout 15m --install -n ${{ inputs.app_key }}-${{ inputs.environment }} -f values-${{ inputs.environment }}.yaml --set deploy.runid=${{ github.run_id }} --set image.tag=${{ inputs.version }} --set image.version=${{ inputs.version }} ${{ inputs.app_key }} .
```
This takes the YAML files in the CI/CD and tries to apply them to Kubernetes; the files are available across the multiple steps.
If you really want the files on the pods, that's what kubectl cp is for, but that should not be needed to apply helm charts.
```yaml
steps:
  - name: Checkout
    uses: actions/checkout@v3
```
To move files in and out of a pod I use the following:
```yaml
# COPY FILE TO WORKSPACE
- name: COPY FILE FROM POD TO WORKSPACE
  shell: bash
  run: |
    kubectl cp -n ${{ inputs.app_key }}-${{ inputs.environment }} ${{ inputs.app_key }}-${{ inputs.environment }}/$POD_NAME:/tmp/${{ inputs.app_key }}-${{ inputs.environment }}-${{ github.run_id }}.sql.gz ${{ github.workspace }}/${{ inputs.app_key }}-${{ inputs.environment }}-${{ github.run_id }}.sql.gz
```
I would suspect that a checkout step first fetches the helm charts you need, and then in the next step your example would suffice, as sketched below.
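Putting the two together, something along these lines might be enough; the directory layout and the final deploy step are assumptions for illustration, not taken from either workflow:

```yaml
steps:
  - uses: actions/checkout@v3
  - name: Checkout helm repo
    uses: actions/checkout@v3
    with:
      repository: woutersf/drupal-helm-charts-k8s
      path: upstream_helm
      token: ${{ secrets.GIT_CHECKOUT_TOKEN }}
  - name: Verify the charts are in the workspace
    shell: bash
    run: ls upstream_helm   # a workspace-relative path, valid inside a container job as well
  # The marketplace deploy action would then receive a workspace-relative path,
  # e.g. config-files: upstream_helm/values.yaml (action reference omitted here).
```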
It's been almost 2 years; you could at least update the documentation so people would not hit this bug. The same goes for #2185 (and maybe other context variables where a path is involved). Why not just bind the paths 1:1 instead of making them shorter? That would solve all these problems.
I got a nice workaround for this (thanks to Pi AI for the help) by using the working-directory directive for the container and for the steps, like:
```yaml
jobs:
  build:
    name: Build
    defaults:
      run:
        working-directory: ./github-actions-docker
    runs-on: ubuntu-latest
    container:
      image: ghcr.io/me/myimage
    steps:
      - uses: actions/checkout@v4
      - name: Configure CMake
        run: |
          cmake -B build -DCMAKE_BUILD_TYPE=Debug
        env:
          CXX: g++-12
        working-directory: ${{ github.workspace }}
      - name: Build
        run: cmake --build build --config Debug -- -j 2
        working-directory: ${{ github.workspace }}
      - name: Run unit tests
        run: ./tester
        working-directory: ${{ github.workspace }}/build/test/doctests
```
Just beware not to use ${{ github.workspace }} directly in the run commands; use the working-directory directive instead.
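In other words, an illustrative contrast (not taken from the workflow above):

```yaml
- name: Fine
  working-directory: ${{ github.workspace }}
  run: cmake --build build                           # path is relative to working-directory
- name: Risky inside a container
  run: cmake --build ${{ github.workspace }}/build   # the expression may expand to the host path
```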
Any update on this? Even when setting a custom environment variable, the runner strips this out at run time:
Run echo "CUSTOM_WORKSPACE=/actions-runner/_work/<repo>/<repo>" >> $GITHUB_ENV
echo "CUSTOM_WORKSPACE=/actions-runner/_work/<repo>/<repo>" >> $GITHUB_ENV
shell: sh -e {0}
Run echo "CUSTOM WORKSPACE: $CUSTOM_WORKSPACE"
echo "CUSTOM WORKSPACE: $CUSTOM_WORKSPACE"
echo "github.workspace: /actions-runner/_work/<repo>/<repo>"
shell: sh -e {0}
env:
CUSTOM_WORKSPACE: /actions-runner/_work/<repo>/<repo>
CUSTOM WORKSPACE: /__w/<repo>/<repo>
github.workspace: /actions-runner/_work/<repo>/<repo>
Is there any solution to the runner referencing an absolute directory on the host? I'm running a docker container that needs to mount checked out directories.
If anyone comes across this, a fix that works but could be better is the following:
```yaml
- name: Set abs-path file
  run: |
    echo "${{ github.workspace }}" >> abs-path
    cat abs-path
    echo
```
This writes the ${{ github.workspace }} variable into a file on the runner. That gives the absolute host path, which can then be read in later steps (in my case, docker-in-docker for rootless builds on the runner). Any way I tried, the runner appears to rewrite /actions-runner/__work/<route>/ to /__w/<route>. Hopefully this saves some people a few hours...
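A hypothetical later step using the saved value (the image and mount target are made up for illustration, and assume Docker is available on the runner):

```yaml
- name: Use the saved host path for a bind mount
  shell: bash
  run: |
    HOST_WS="$(cat abs-path)"                           # absolute path on the runner host
    docker run --rm -v "$HOST_WS:/src" alpine:3 ls /src
```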
Not affiliated with GitHub but I think it is fair to assume that this inconsistency will never be fixed as it would break containers and actions that depend on the current behaviour.
@st3fan never say never. That's what major version upgrades are for. That way, it would be OK.
Describe the bug

github.workspace and runner.workspace don't point to container-valid paths when executing inside a container job. The values are also inconsistent with the values of the env variables GITHUB_WORKSPACE and RUNNER_WORKSPACE (both contain a valid path).

To Reproduce

Steps to reproduce the behavior: create a workflow with the following jobs.
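A minimal sketch of the kind of workflow described (the original snippet is not reproduced here; the job and step names are illustrative):

```yaml
jobs:
  host-job:
    runs-on: ubuntu-latest
    steps:
      - name: dump
        run: |
          echo "github.workspace: ${{ github.workspace }}"
          echo "runner.workspace: ${{ runner.workspace }}"
          echo "GITHUB_WORKSPACE: $GITHUB_WORKSPACE"
  container-job:
    runs-on: ubuntu-latest
    container: ubuntu:22.04
    steps:
      - name: dump
        run: |
          echo "github.workspace: ${{ github.workspace }}"
          echo "runner.workspace: ${{ runner.workspace }}"
          echo "GITHUB_WORKSPACE: $GITHUB_WORKSPACE"
```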
Expected behavior

On the container job, github.workspace and runner.workspace should point to a path in the directory /__w.

Runner Version and Platform

Version of your runner? 2.295.0
OS of the machine running the runner? ubuntu 20.04.4 (ubuntu-latest)

What's not working?

The values for github.workspace and runner.workspace are incorrect in the container job (and inconsistent with the respective env variables).

Job Log Output

If applicable, include the relevant part of the job / step log output here. All sensitive information should already be masked out, but please double-check before pasting here.

Output of the container dump step in the container job.

Runner and Worker's Diagnostic Logs

If applicable, add relevant diagnostic log information. Logs are located in the runner's _diag folder. The runner logs are prefixed with Runner_ and the worker logs are prefixed with Worker_. Each job run correlates to a worker log. All sensitive information should already be masked out, but please double-check before pasting here.