Closed. JamesIves closed this issue 2 years ago.
Hi @JamesIves, thanks for reporting this issue! We will try to reproduce it and get back to you as soon as we have more information.
Hi @JamesIves, I tried to reproduce the reported problem but didn't succeed; your workflow works for me (see this). However, I discovered the cause of the problem: during the cleanup you download too many files and therefore receive this error message from the OS. I am also sending a link that explains the error you received in more detail. Since this is not a runner problem, I will close this issue. Feel free to reopen it if you have more questions or doubts.
I am somewhat confused about how I can run the exact same workflow in a different project and get different results. I read up on the error so I'm somewhat familiar with it, but I don't really see how that is causing it here.
Could the size of the contents of the repo be the problem? And if so, is there a hard upper limit?
I researched this further and concluded that this is not a runner bug, so I'll leave this issue closed for now. Please take a look at this issue; it is about the same or a similar bug as the one you reported.
In case it's helpful to anyone else who runs into this: I hit this error in post-run steps, and it was because I had set an environment variable in `$GITHUB_ENV` that was too large (in my case a diff of Kubernetes changes). Removing that env variable fixed the issue for me.
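For anyone wondering why a large value in `$GITHUB_ENV` triggers this: everything written there is added to the environment of every subsequent process the runner starts, and Linux limits both the combined size of the arguments plus the environment (`ARG_MAX`) and the length of any single environment string (roughly 128 KiB on typical hosts). A rough sketch of a step that reproduces the same OS error, assuming a Linux runner (the 200000-byte value is just an illustrative size over that per-string limit):

```yaml
- name: Reproduce the OS limit (illustrative only)
  run: |
    getconf ARG_MAX                                 # total budget for argv + the environment
    BIG=$(head -c 200000 /dev/zero | tr '\0' 'x')   # build one ~200 KB string
    # exec-ing any process with that string in its environment fails with E2BIG
    BIG="$BIG" /bin/true || echo "exec failed: Argument list too long"
```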
I am also getting the `Argument list too long` error when trying to pass a large (~7k character) Terraform plan output between multiple steps in my workflow.
Why was this closed? I'm having lots of trouble trying to upload longer text streams.
The issue is still present on v6.4.0:

```
Run actions/github-script@v6.4.0
Error: An error occurred trying to start process '/runner/externals/node16/bin/node' with working directory '/runner/_work/repo-name/repo-name'. Argument list too long
```
I got the same one today:

```
Post Run actions/checkout@v3
Post job cleanup.
Error: An error occurred trying to start process '/home/runner/runners/2.305.0/externals/node16/bin/node' with working directory '/home/runner/work/huma-rpm-server/huma-rpm-server'. Argument list too long
```
Definitely still valid. We hit the issue trying to store the `git diff` output in an environment variable so that only one job in the workflow needs `fetch-depth: 0`.
The best way to work around this is to store large outputs in a file instead of in an env var.
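For example, a sketch of that approach when the output has to cross job boundaries; the branch name, file name, and artifact name below are placeholders, and within a single job a plain file in the workspace is enough without the artifact steps:

```yaml
# In the job that checks out with fetch-depth: 0
- name: Save the diff to a file instead of $GITHUB_ENV
  run: git diff origin/main...HEAD > changes.diff
- uses: actions/upload-artifact@v4
  with:
    name: changes-diff
    path: changes.diff

# In any later job that needs the diff
- uses: actions/download-artifact@v4
  with:
    name: changes-diff
- name: Use the diff
  run: wc -l changes.diff   # read the file only in the step that needs it
```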
Still no fix? We got the same issue with a very long Terraform plan output.
I found this Stack Overflow post where a user suggests truncating the Terraform plan result if the plan is too long. Add these steps to your workflow:
```yaml
- name: truncate terraform plan result
  run: |
    plan=$(cat <<'EOF'
    ${{ format('{0}{1}', steps.plan.outputs.stdout, steps.plan.outputs.stderr) }}
    EOF
    )
    # Store the filtered plan as a multiline env var named PLAN
    # (read below as process.env.PLAN).
    echo "PLAN<<EOF" >> $GITHUB_ENV
    echo "${plan}" | grep -v 'Refreshing state' >> $GITHUB_ENV
    echo "EOF" >> $GITHUB_ENV

- name: create comment from plan result
  uses: actions/github-script@0.9.0
  if: github.event_name == 'pull_request'
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      const output = `#### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
      #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
      <details><summary>Show Plan</summary>
      \`\`\`\n
      ${ process.env.PLAN }
      \`\`\`
      </details>
      *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`, Working Directory: \`${{ inputs.TF_WORK_DIR }}\`, Workflow: \`${{ github.workflow }}\`*`;
      github.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: output
      })
```
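If the filtered plan can still exceed the limit, one option is to add a hard size cap before writing to `$GITHUB_ENV`; the 100000-byte figure below is an arbitrary safety margin, not a documented threshold. A variant of the first step above with the cap added:

```yaml
- name: truncate terraform plan result
  run: |
    plan=$(cat <<'EOF'
    ${{ format('{0}{1}', steps.plan.outputs.stdout, steps.plan.outputs.stderr) }}
    EOF
    )
    # Cap the stored plan at ~100 KB so the environment stays well under the OS limit
    echo "PLAN<<EOF" >> $GITHUB_ENV
    echo "${plan}" | grep -v 'Refreshing state' | head -c 100000 >> $GITHUB_ENV
    echo "" >> $GITHUB_ENV   # make sure the closing delimiter starts on a fresh line
    echo "EOF" >> $GITHUB_ENV
```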
My solution is to run `terraform show` inside the comment step:
```yaml
- name: Plan
  id: plan
  working-directory: ${{ inputs.working_directory }}
  run: terraform plan -out tf.plan

- name: Comment Plan
  uses: actions/github-script@v7
  continue-on-error: true
  with:
    script: |
      const fs = require('fs');
      const { execSync } = require('child_process');
      // Get the terraform plan output
      const planOutput = execSync('terraform show -no-color tf.plan', {
        cwd: '${{ inputs.working_directory }}'
      }).toString();
      const {data: comments} = await github.rest.issues.listComments({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: context.payload.number,
      })
      const botComment = comments.find(comment => comment.user.id === 41898282 && comment.body.includes("${{ github.workflow }}"))
      if (!botComment) {
        github.rest.issues.createComment({
          issue_number: context.issue.number,
          owner: context.repo.owner,
          repo: context.repo.repo,
          body: `Terraform Plan for ${{ github.workflow }}:\n\`\`\`${planOutput}\`\`\``
        })
      } else {
        github.rest.issues.updateComment({
          owner: context.repo.owner,
          repo: context.repo.repo,
          comment_id: botComment.id,
          body: `Terraform Plan for ${{ github.workflow }}:\n\`\`\`${planOutput}\`\`\``
        })
      }
```
Describe the bug

Multiple GitHub-provided actions are failing in their post-run steps due to an `Argument list too long` error.

To Reproduce

[…] `fs.writeFile` function. `set-output` is true in the `fetch-api-data` step, as that makes the action save it as an environment variable:

```
An error occurred trying to start process '/home/runner/runners/2.288.1/externals/node16/bin/node' with working directory '/home/runner/work/project/project'. Argument list too long
```

Having looked at this briefly, I suspect this is occurring because there is some degree of logging behind the scenes that causes the argument list to overflow. If you toggle `ACTIONS_STEP_DEBUG` to true in the secrets menu you can see this. The data parsed is large, but I'm not really sure why that would be a problem.

Expected behavior

I am not really sure. If this is indeed the runner causing this due to logging, I would expect there to be some form of fail-safe to prevent it from crashing.

Runner Version and Platform

What's not working?

```
An error occurred trying to start process '/home/runner/runners/2.288.1/externals/node16/bin/node' with working directory '/home/runner/work/project/project'. Argument list too long
```

Job Log Output