mianosm opened this issue 5 years ago
I have an issue with the UltraDNS DNS API script which causes it to fail about 95% of the time. It seems to occur when the request to add the record is made. When I set the script to output fully, I see it repeating `com` 100 times: it appears to be querying the UltraDNS API for any zone ending in `.com`, and then outputting that `.com` for every match it can find.
This area seems to be the culprit; it looks like it is used to strip the prefix from the request so that the API call can reach the right zone:

`h=SITE.com`

It keeps running that downwards until it ends up stripping the domain to a check for bare `com`.
It ends up with a call like:

```shell
curl -L --silent --dump-header /root/.acme.sh/http.header -g \
  --user-agent 'acme.sh/2.8.1 (https://github.com/Neilpang/acme.sh)' \
  -X POST -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer REDACTED' \
  'https://restapi.ultradns.com/v2//zones/com'
```
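To illustrate the behaviour being described, here is a hedged, standalone sketch of the suffix-walking logic `_get_root` uses (the `domain` value is a made-up example, not taken from the report): starting at the second label, it cuts one leading label off per iteration until nothing is left, so the last non-empty value it ever tests is the bare TLD.

```shell
# Hypothetical demo of the suffix-walking loop in _get_root:
# strip one leading DNS label per iteration and print each candidate zone.
domain="_acme-challenge.www.SITE.com"   # illustrative input
i=2
while true; do
  h=$(printf "%s" "$domain" | cut -d . -f "$i"-100)
  [ -z "$h" ] && break
  echo "$h"          # candidate zone checked against the API
  i=$((i + 1))
done
```

Run against that input, the candidates printed are `www.SITE.com`, `SITE.com`, and finally `com` — which is exactly the bare-TLD lookup seen in the curl call above when no earlier candidate matches.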
If it is getting down to a bare domain of `.com`, then it isn't finding the domain that you've requested a certificate for.
Are you certain that the domain you're using for SITE is available to your user, and that creating a record via the API works outside of acme.sh?
I'm unable to replicate this issue unless the domain isn't already listed and in existence within UltraDNS.
If you'd like to post a sanitized run of acme.sh with `--debug`, that would be more useful than what I assume is an edited strace.
Now that you have said that, I have worked out what the problem is: I have more than 100 zones on my account, and the API call that checks for the zone only gets the first 100 zones on the account, then leaves a link to the next 100 at the end of the response. I can prove this, because requesting a cert for a domain that sorts earlier alphabetically works. Ironically, I have actually been on a call with the UltraDNS support team, and they will send me a different way of requesting the zone list which will "hopefully" return all of the zones at once.
Ah, the organization I have access to that is using UltraDNS doesn't have over a hundred zones. If you want to share what the API call is, or provide a patch, either one should fix your issue — thanks @syndex101!
Waiting on the nice folks at UltraDNS to get back to me (will shout when they do).
Hello, I think I am a little further on with this (to be honest, I now think I may just have a temporary issue stopping me). UltraDNS have got back to me and told me to add `?limit=HIGHLIMIT&Offset=HIGHLIMIT` to my call to the REST API to get the zone list, which I think would look like this:
```shell
_get_root() {
  echo "GET_ROOT called"
  domain=$1
  i=2
  p=1
  while true; do
    h=$(printf "%s" "$domain" | cut -d . -f $i-100)
    _debug h "$h"
    _debug response "$response"
    if [ -z "$h" ]; then
      return 1
    fi
    # changed line:
    if ! _ultra_rest GET "zones?limit=1000&Offset=1000"; then
      return 1
    fi
    if _contains "${response}" "${h}." >/dev/null; then
      _domain_id=$(echo "$response" | _egrep_o "${h}")
      if [ "$_domain_id" ]; then
        _sub_domain=$(printf "%s" "$domain" | cut -d . -f 1-$p)
        _domain="${h}"
        _debug sub_domain "${_sub_domain}"
        _debug domain "${_domain}"
        return 0
      fi
      return 1
    fi
    p=$i
    i=$(_math "$i" + 1)
  done
  return 1
}
```
I can now see the record being added and deleted. The thing is, it is still failing, but now with occasional issues in the DNS check. It might have been temporary, but I think UltraDNS may not propagate the TXT record fast enough for the sleep to be sufficient. I will test more on Monday to make sure it is not a temporary bit of slowness; if not, I will raise the sleep timeout and see if that evens the problem out a little. Thanks, S
Nice.
I will do shortly thanks so much
Actually, this doesn't seem like a good fix at all. Ideally there would be a better way of finding out the number of zones available to the user and then iterating through them, instead of blindly trying a limit of 1000 with an offset of 1000.
It feels like kind of a blind attempt — unless Neustar has hidden/buried this so that it replies with all zones (given that it's starting at 1000 and returning 1000?).
This is the text of what they sent me directly:

> Getting a list of zones - Use "GET Zones of an Account". For this request the default limit is 100 zones returned. The maximum value is 1,000 zones per request. Use the offset parameter to specify where to start the next request. Request Limit: 1000 Offset: 1000 will return zones 1000-2000.
>
> `GET /v2/accounts/portaldemos/zones?limit=1000&Offset=1000`
Yep, that was my fear after reading the documentation. So first and foremost we need to figure out, or find out, how many zones there are.
Then we make calls based on that knowledge: if fewer than 100, business as usual; if more than 100, we can obtain 1000 at a time. The first request can be `?limit=1000&offset=0`, the next `?limit=1000&offset=1000`, and so on until we have all the zones. I'll try to see if I have time to dig into this at some point this week. For the time being, https://github.com/mianosm/acme.sh/tree/ultradns_2118_manydomains has a limit of 1000 without iteration, so it will work... until someone has more than 1000 zones (at which point they'll also fail at programmatically getting certificates).
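The offset-stepping idea above can be sketched as a small shell loop. This is a hedged illustration, not the shipped acme.sh code: `fetch_page` is a stub standing in for the real `_ultra_rest GET "zones?limit=...&offset=..."` call, and the 2500-zone account it simulates is invented for the example.

```shell
# Stub standing in for the UltraDNS zone-list call; pretends the
# account holds 2500 zones and returns one zone name per line.
TOTAL=2500
fetch_page() {
  l=$1
  o=$2
  end=$((o + l))
  [ "$end" -gt "$TOTAL" ] && end=$TOTAL
  n=$((o + 1))
  while [ "$n" -le "$end" ]; do
    echo "zone${n}.example.com"
    n=$((n + 1))
  done
}

# Walk the zone list in 1000-zone pages, bumping the offset until a
# page comes back short (or empty).
limit=1000
offset=0
all_zones=""
while true; do
  # Real client would call: _ultra_rest GET "zones?limit=${limit}&offset=${offset}"
  page=$(fetch_page "$limit" "$offset")
  [ -z "$page" ] && break
  all_zones="${all_zones}${page}
"
  # A short page means we have reached the end of the account's zones.
  [ "$(printf '%s\n' "$page" | grep -c .)" -lt "$limit" ] && break
  offset=$((offset + limit))
done
```

With the stubbed 2500 zones this makes three requests (offsets 0, 1000, 2000) and collects every zone, which is the behaviour the `?limit=1000&offset=0` / `?limit=1000&offset=1000` sequence described above is after.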
I have a really ugly idea... I am fairly sure I have read that the reports API can return the number of zones on the account for billing purposes; a call to that might pull the count, which could then be fed back into the limit.
I have also asked UDNS if there is a cleaner way to do this.
The total count of zones is available with a curl to https://restapi.ultradns.com/v2/zones/ and a `cut -d, -f4 | cut -d: -f3`. Going from there, we would need to figure out how best to implement the logic around a return of, say, fewer than 100 (the default) versus more than 100 (iterating, perhaps by way of modulo, increasing the offset by 100 at each request).
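A hedged sketch of that count-then-page idea: the `resultInfo` JSON below is illustrative, not a captured UltraDNS response, and plain `grep -Eo` stands in for the role acme.sh's `_egrep_o` helper plays in the hook.

```shell
# Illustrative resultInfo block of the kind a zone-list response carries
# (invented values, not a real API response).
response='{"resultInfo":{"totalCount":2500,"offset":0,"returnedCount":100},"zones":[]}'

# Pull the total zone count out of the response...
total=$(printf "%s" "$response" | grep -Eo '"totalCount":[0-9]+' | cut -d: -f2)

# ...then derive how many limit=1000 requests would cover it
# (ceiling division in shell arithmetic).
pages=$(( (total + 999) / 1000 ))
```

Anchoring on the `"totalCount":` key is sturdier than counting comma-separated fields with `cut -d, -f4`, since the field position shifts if the API ever reorders or adds keys.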
Yeah, probably a fair bit cleaner than mine.
Had a problem today: the `_domain_id` is returned `totalCount` times. Added a `uniq` and everything worked properly.
```diff
--- dnsapi/dns_ultra.sh.orig	2020-07-15 17:58:53.044038164 -0300
+++ dnsapi/dns_ultra.sh	2020-07-15 17:56:01.286048391 -0300
@@ -121,7 +121,7 @@
       return 1
     fi
     if _contains "${response}" "${h}." >/dev/null; then
-      _domain_id=$(echo "$response" | _egrep_o "${h}")
+      _domain_id=$(echo "$response" | _egrep_o "${h}" | uniq)
       if [ "$_domain_id" ]; then
         _sub_domain=$(printf "%s" "$domain" | cut -d . -f 1-$p)
         _domain="${h}"
```
I can provide a proper PR for this one.
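One note on why the patch works: `uniq` only collapses *adjacent* duplicate lines, which is sufficient here because `_egrep_o` emits its repeated matches for the zone consecutively. A quick demo (the repeated `SITE.com` matches are invented for illustration):

```shell
# Simulated _egrep_o output: the same zone matched once per occurrence
# in the response, printed on consecutive lines.
matches="SITE.com
SITE.com
SITE.com"

# uniq collapses the adjacent duplicates down to a single line.
deduped=$(printf "%s\n" "$matches" | uniq)
```

If the matches could ever arrive non-adjacent, `sort -u` (or the `head`-based approach mentioned later in PR #4128) would be the safer choice.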
@fzipi Thank you, I had the exact same issue and your fix worked for me. For some reason the domain is spread over multiple lines, and from that point on the curl requests are malformed.
```
[Wed 5 Aug 15:44:56 BST 2020] domain='somedomain.com'
[Wed 5 Aug 15:44:57 BST 2020] _domain_id='somedomain.com somedomain.com'
[Wed 5 Aug 15:44:57 BST 2020] _sub_domain='_acme-challenge.swb-uat.sys'
[Wed 5 Aug 15:44:57 BST 2020] _domain='somedomain.com'
[Wed 5 Aug 15:44:57 BST 2020] Getting txt records
[Wed 5 Aug 15:44:57 BST 2020] zones/somedomain.com somedomain.com/rrsets/TXT?q=value:_acme-challenge.swb-uat.sys.somedomain.com
[Wed 5 Aug 15:44:57 BST 2020] TOKEN='eyJhbGsfggNiJ9.eyJleHAiOjE1OTertgdfhherhdCI6MTU5NjYzODY5NiwiY2xpZdfgergIsInVzZXJuYW1lIjoiY2VydGlmaWNhdGVtYW5hZ2VtZW50In0.8LgVVYl65PBOXFaWJ-D6_QUbYGbCB10wOjSxcjSPx5Y'
[Wed 5 Aug 15:44:57 BST 2020] POST
```
Hello — the UltraDNS API calls have changed endpoints, rendering the zone requests blank. I updated to the new API endpoints (auth is still /v2) and fixed the multiple `_domain_id` issue similarly to @fzipi (using `head` instead).
It's on this PR: https://github.com/acmesh-official/acme.sh/pull/4128