Hey 👋 These issues are always quite tricky to debug, so let me ask some simple debugging questions first.

Could you share your full configuration for cloudflare/wrangler-action@3? Here's an example of what one of my projects looks like (note the inclusion of secret values in env):
```
Run cloudflare/wrangler-action@v3
  with:
    accountId: ***
    apiToken: ***
    wranglerVersion: 3.26.0
    secrets: TURBO_TOKEN
    command: deploy --minify
    quiet: false
  env:
    PNPM_HOME: /home/runner/setup-pnpm/node_modules/.bin
    TURBO_TOKEN: ***
```
In the "Uploading secrets..." group, is there any additional debugging information? Remember, groups can have additional information inside them; you have to expand them to find out more.

Instead of setting CLOUDFLARE_ACCOUNT_ID in the environment, specify it directly in the options for wrangler-action.
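Something like this (a minimal sketch; the repository secret names here are just examples, use whatever you actually have):

```yaml
- uses: cloudflare/wrangler-action@v3
  with:
    apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
    # pass the account ID as an action input instead of a CLOUDFLARE_ACCOUNT_ID env var
    accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
    command: deploy
```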
Any news on this? I also can't make the wrangler-action upload secrets (I use wrangler v3.48.0).

Here's my code:
```yaml
- name: Deploy to Cloudflare
  uses: cloudflare/wrangler-action@v3
  with:
    accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
    apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
    workingDirectory: ./apps/api/
    packageManager: bun
    environment: production
    quiet: false
    secrets: |
      RESEND_API_KEY
  env:
    RESEND_API_KEY: ${{ secrets.RESEND_API_KEY }}
```
And the result is always the same:
```
Finished processing secrets JSON file:
✨ 0 secrets successfully uploaded
✘ [ERROR] 🚨 1 secrets failed to upload
```
I was able to resolve this issue by reverting to the "legacy" secrets upload method (i.e. setting wranglerVersion: '3.3.0') to get more detailed logging. Doing this means you get logging per attempted secret upload, so if any individual secret has a problem you should get a clear message about why (rather than the bulk upload failing with a generic error).
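For example (a minimal sketch; keep the rest of your existing deploy step as it is, and note that MY_SECRET is just a placeholder):

```yaml
- uses: cloudflare/wrangler-action@v3
  with:
    apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
    accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
    # wrangler <= 3.3.0 uploads secrets one by one, logging each attempt separately
    wranglerVersion: '3.3.0'
    secrets: |
      MY_SECRET
  env:
    MY_SECRET: ${{ secrets.MY_SECRET }}
```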
Example with bulk upload, wrangler >= 3.4.0 (I used 3.50.0)
Example with wrangler <= 3.3.0
To "properly" fix it I guess CF would have to improve the error output on wrangler secret:bulk
Thanks Nick, reverting to wranglerVersion: 3.3.0 helped with logging, although in my case it's still not clear what's causing the error. My action logs look like this:
```
✨ Success! Uploaded secret SUPABASE_SERVICE_ROLE_KEY
✘ [ERROR] A request to the Cloudflare API (/accounts/123/workers/scripts/im--135348507-example_com-staging/secrets) failed.
  global variable USER_PAGERDUTY_API_KEY already set [code: 10053]
```
The logs seem to indicate USER_PAGERDUTY_API_KEY is set as a variable instead of a secret, but inspecting the Worker's Variables tab in the dashboard confirms USER_PAGERDUTY_API_KEY is a secret, as expected:
```
USER_PAGERDUTY_API_KEY    Value encrypted
```
It's not clear to me why the SUPABASE_SERVICE_ROLE_KEY secret gets uploaded correctly but USER_PAGERDUTY_API_KEY fails, when both of them are secrets that were previously set using Wrangler (when the script was created).
Happening to me as well. In my case, deleting the Worker and creating it again fixed the issue.

I was able to break it again by manually adding variables within the Cloudflare Dashboard (anything random, like test = foobar) and then trying to deploy again through the GitHub Action. Adding the environment variables would then fail with the error:
```
Finished processing secrets JSON file:
✨ 0 secrets successfully uploaded
✘ [ERROR] 🚨 4 secrets failed to upload
```
Deleting the manually added value again allowed me to deploy successfully, so my suspicion is that any discrepancy between the variables in the dashboard and those in the Action file is the cause of the issue.
config:
```yaml
- name: Deploy
  uses: cloudflare/wrangler-action@v3
  with:
    apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
    accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
    secrets: |
      RESEND_API_KEY
      EMAIL_USERNAME
      TEST_EMAIL_ADDRESS
  env:
    RESEND_API_KEY: ${{ secrets.RESEND_API_KEY }}
    EMAIL_USERNAME: ${{ secrets.EMAIL_USERNAME }}
    TEST_EMAIL_ADDRESS: ${{ secrets.TEST_EMAIL_ADDRESS }}
```
log: https://github.com/willin/resend-cloudflare-service-worker/actions/runs/10491270196/job/29059995916
```
✅ Wrangler installed
🔑 Uploading secrets...
/home/runner/.bun/bin/bunx wrangler secret:bulk
 ⛅️ wrangler 3.13.2 (update available 3.72.1)
---------------------------------------------
🌀 Creating the secrets for the Worker "email-sender-worker"
✘ [ERROR] uploading secret for key: RESEND_API_KEY:
  A request to the Cloudflare API (/accounts/***/workers/scripts/email-sender-worker/secrets) failed.
✘ [ERROR] uploading secret for key: TEST_EMAIL_ADDRESS:
  A request to the Cloudflare API (/accounts/***/workers/scripts/email-sender-worker/secrets) failed.
✘ [ERROR] uploading secret for key: EMAIL_USERNAME:
  A request to the Cloudflare API (/accounts/***/workers/scripts/email-sender-worker/secrets) failed.
Finished processing secrets JSON file:
✨ 0 secrets successfully uploaded
✘ [ERROR] 🚨 3 secrets failed to upload
If you think this is a bug then please create an issue at https://github.com/cloudflare/workers-sdk/issues/new/choose
Error: The process '/home/runner/.bun/bin/bunx' failed with exit code 1
Error: Failed to upload secrets.
Error: 🚨 Action failed
```
I just spent my share of time on this issue and found the solution. If you have already defined keys on your Worker, they need to be of type "secret" (Add -> value -> Encrypt), or you need to delete them. That way secret:bulk is able to create them OR update them. If a key is not a secret, it cannot be updated, so the action will fail. Of course, a secret:bulk delete doesn't exist; that's also why @gentlementlegen was successful by deleting his Worker.
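A quick way to check which names are already stored as real secrets (a minimal sketch of an extra workflow step; it assumes wrangler can run from the checkout and that wrangler.toml names the Worker):

```yaml
# Names that show up under "Variables" in the dashboard but are missing from this
# list are plaintext vars; they will block the secret upload until they are
# deleted or re-created as secrets.
- name: List existing Worker secrets
  run: npx wrangler secret list
  env:
    CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
    CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
```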
Thanks @kocendavid. It worked on the first try, but when I redeployed, it got replaced again with plaintext.
My workaround for supporting multiple custom vars and secrets: make sure your secret vars are already using the "secret" type in your Cloudflare Variables and Secrets configuration.
name = "xxxxxxxx"
compatibility_date = "2024-11-06"
main = "./dist/worker/index.js"
assets = { directory = "./dist/public", binding = "ASSETS" }
[vars]
NUXT_OAUTH_AUTH0_CLIENT_ID = ""
NUXT_OAUTH_AUTH0_DOMAIN = ""
# make sure to exclude vars inside the TOML file when using secrets so that when using this github action, your vars are not replaced with plaintext
# NUXT_OAUTH_AUTH0_CLIENT_SECRET = ""
# NUXT_SESSION_PASSWORD = ""
GitHub Actions YAML:
```yaml
- name: Deploy
  uses: cloudflare/wrangler-action@v3
  env:
    NUXT_SESSION_PASSWORD: ${{ secrets.NUXT_SESSION_PASSWORD }}
    NUXT_OAUTH_AUTH0_CLIENT_ID: ${{ secrets.NUXT_OAUTH_AUTH0_CLIENT_ID }}
    NUXT_OAUTH_AUTH0_CLIENT_SECRET: ${{ secrets.NUXT_OAUTH_AUTH0_CLIENT_SECRET }}
    NUXT_OAUTH_AUTH0_DOMAIN: ${{ secrets.NUXT_OAUTH_AUTH0_DOMAIN }}
  with:
    apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
    accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
    environment: 'production'
    vars: |
      NUXT_OAUTH_AUTH0_CLIENT_ID
      NUXT_OAUTH_AUTH0_DOMAIN
    secrets: |
      NUXT_SESSION_PASSWORD
      NUXT_OAUTH_AUTH0_CLIENT_SECRET
```
Also, it seems that vars is not documented in the README.md, so I had to check the code myself to confirm that it actually exists as a GitHub Action input.
Hi! As others have pointed out, this can happen if there is already a binding/environment variable set on the Worker with the same name as the secret. This includes non-secret text variables set on the Worker.
To fix this, remove the conflicting binding/environment variable from your Worker and try again.
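In other words (a rough sketch; MY_KEY is just a placeholder name), the same name should not be deployed as a plaintext var and also uploaded as a secret:

```yaml
# Conflicting shape that produces "global variable ... already set [code: 10053]":
# MY_KEY ends up as a plaintext var on the Worker (via the `vars:` input or a [vars]
# entry in wrangler.toml) and the action then also tries to upload it as a secret.
- uses: cloudflare/wrangler-action@v3
  with:
    apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
    accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
    # vars: |          # remove MY_KEY from here (and from [vars] in wrangler.toml)...
    #   MY_KEY
    secrets: |         # ...and keep it only here
      MY_KEY
  env:
    MY_KEY: ${{ secrets.MY_KEY }}
```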
The logging that wrangler outputs is admittedly not helpful here, so I've raised an issue to improve that in wrangler itself: https://github.com/cloudflare/workers-sdk/issues/7287
Closing this out in favor of the workers-sdk issue.
I just migrated to wrangler-action v3 and this started happening to me. It fails every time I run it.
Here's my workflow file:
I ran it in debug mode, but didn't see any useful logging.

I tried pinning wranglerVersion to the one in my package.json, but that didn't change anything.