Closed: timrettop closed this issue 4 years ago
Attempting double quotes around the values did not resolve the issue. Removing the single quotes entirely (so key=valuestring instead of key='valuestring') DOES resolve the issue when the values are hardcoded with -e in PODMAN_CMD; however, it results in a different error when running with the env file (that error could be due to changing the certificate hostname, since it works hardcoded):
2020/07/15 04:15:16 [INFO] [controller.mydomain.com] acme: Preparing to solve DNS-01
2020/07/15 04:15:41 [INFO] [controller.mydomain.com] acme: Cleaning DNS-01 challenge
2020/07/15 04:15:54 [WARN] [controller.mydomain.com] acme: cleaning up failed: failed to determine Route 53 hosted zone ID: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2020/07/15 04:15:54 [INFO] Deactivating auth: https://acme-v02.api.letsencrypt.org/acme/authz-v3/removed
2020/07/15 04:15:54 Could not obtain certificates:
error: one or more domains had a problem:
[controller.mydomain.com] [controller.mydomain.com] acme: error presenting token: route53: failed to determine hosted zone ID: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
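One possible explanation (an assumption on my part, not confirmed in this thread) is that podman's --env-file parsing takes each value literally and does not strip shell-style quoting, so a line like AWS_REGION='us-east-1' sets the variable to a string that includes the quote characters. A minimal sketch of that literal split-on-first-equals behavior:

```shell
# Simulate env-file parsing that splits on the first '=' and keeps
# everything after it verbatim, including any quote characters.
line="AWS_REGION='us-east-1'"   # hypothetical env-file line
key=${line%%=*}                 # text before the first '='
value=${line#*=}                # text after the first '=', quotes intact
echo "$key=$value"              # prints: AWS_REGION='us-east-1'
```

If lego then compares or uses the value verbatim, the stray quotes would plausibly break credential lookup, which would match the observation that unquoted values work.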
I ran into some similar weirdness when I was writing this initially. It seemed like lego wouldn't see certain variables when they were set via an env-file versus explicitly with -e, as though they weren't being exported into the lego container's environment. I never quite figured out whether the bug was in how podman sets the variables passed in, or in how lego expects environment variables to be set.
The env-file approach seemed the most versatile to me: I envisioned that people like yourself who aren't using Cloudflare as a DNS provider could just drop some additional variables into the file and change the challenge type. There's some more investigation to be done, though, to figure out where the problem lies and which way of setting those variables works across the most DNS providers.
What I didn't want to do was some for-loop business that looped through a file and cobbled together a long string of -e variable=foo -e variable=foo -e variable=foo arguments to pass into the container, because that seemed really inelegant.
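For reference, a sketch of that for-loop approach (hypothetical, not part of the actual script; the env-file path is illustrative):

```shell
#!/bin/sh
# Hypothetical sketch: loop through an env file and cobble together a
# long list of -e KEY=VALUE flags to hand to podman. Deliberately NOT
# used in the script, but shown for comparison.
ENV_FILE=${ENV_FILE:-/mnt/data/udm-le/udm-le.env}
ARGS=""
while IFS= read -r line; do
    # Skip blank lines and comments
    case "$line" in ''|'#'*) continue ;; esac
    ARGS="$ARGS -e $line"
done < "$ENV_FILE"
echo "podman run$ARGS ..."
```

Note this naive version would break on values containing spaces, which is part of why it is inelegant compared to --env-file.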
Great work on putting this together. I use route53 for my LE authentication and was running into an issue using this tool.
I set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_HOSTED_ZONE_ID, and AWS_REGION values in udm-le.env, so the contents of the file look like this (with secret replacing the actual strings):
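The file contents didn't survive in this thread; based on the variables named above, a reconstruction would look like this (values redacted as secret):

```shell
# udm-le.env (reconstruction; real values replaced with "secret")
AWS_ACCESS_KEY_ID=secret
AWS_SECRET_ACCESS_KEY=secret
AWS_HOSTED_ZONE_ID=secret
AWS_REGION=secret
```

Per the follow-up discussion about quoting, the values are shown unquoted here, since quotes in an env file can end up as part of the value itself.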
Digging into the script, I see that the command that is run is constructed like so:
podman run --env-file=/mnt/data/udm-le/udm-le.env -it --name=lego --network=host --rm -v /mnt/data/udm-le/lego/:/var/lib/lego/ hectormolinero/lego --dns route53 --email secret -d mydomain.com -d *.mydomain.com --key-type rsa2048 --accept-tos run && deploy_cert
Running the tool unmodified, I'd get the NoCredentialProviders error output quoted above.
I attempted to run this command outside of your script on the UDM shell and I get the same error:
# podman run --env-file /mnt/data/udm-le/udm-le.env --name=lego --network=host --rm -v /mnt/data/udm-le/lego/:/var/lib/lego/ hectormolinero/lego --dns route53 --email secret -d mydomain.com -d *.mydomain.com --key-type rsa2048 --accept-tos run && deploy_cert
However if I alter the shell command and replace the --env-file with -e values like this:
podman run -e AWS_ACCESS_KEY_ID='secret' -e AWS_SECRET_ACCESS_KEY='secret' -e AWS_HOSTED_ZONE_ID='secret' -e AWS_REGION='secret' -it --name=lego --network=host --rm -v /mnt/data/udm-le/lego/:/var/lib/lego/ hectormolinero/lego --dns route53 --email secret -d mydomain.com -d *.mydomain.com --key-type rsa2048 --accept-tos run && deploy_cert
I'm able to successfully pass the authentication.
HOWEVER, interestingly, attempting to hardcode those values into the PODMAN_CMD variable in your script DOES NOT work as it does on the shell.
Thoughts?
As far as I can tell, the environment variables do not get picked up from the file when the container runs, but I haven't been able to confirm that. Adding --log-level=debug to the podman command didn't output anything more useful (such as whether the environment variables are being set).