hi @lmuhlha, in the second example it looks like your date range is returning more than 10,000 records, which the PD REST API doesn't allow... I've been meaning to work around that, so maybe I should do it sooner rather than later... your first example works fine for me... can you do the following please:
pd version
I suspect both issues are related to having too many (over 10,000) records returned by the query, but that will help me be sure so that I can fix the problem...
Thanks...
Thanks for the quick reply!
I'm using pagerduty-cli/0.1.6 darwin-arm64 node-v16.15.0
Just ran
pd rest:fetch -e oncalls -P 'since='2022-02-13 'until='2022-02-14 -t --sort escalation_policy.summary,-escalation_level -k start -k end -k escalation_policy.summary -k escalation_policy.id -k escalation_level -k schedule.summary -k schedule.id -k user.summary --output=csv > oncallexport-test.csv
and you're right, it worked fine. However, this command
pd log -O --since '2022-02-13' --until='2022-02-14' -k 'type' -k 'agent.id' -k 'incident.id' --filter 'created= 0[0-5]:' --sort agent.id --output=csv > incidentactivity-test.csv
Getting log entries 195/195 👍, 0/195 👎... done
returned a CSV that was empty apart from the headers.
Using the --debug flag for the log command with a week-long range:
[DEBUG] Error in response:
[DEBUG] Response Status: 400
[DEBUG] Response Data: {
  error: {
    message: 'Invalid Input Provided',
    code: 2001,
    errors: [ 'Offset must be less than 10001.' ]
  }
}
Also re-ran
pd rest:fetch -e oncalls -P 'since='2022-02-13 'until='2022-05-13 -t --sort escalation_policy.summary,-escalation_level -k start -k end -k escalation_policy.summary -k escalation_policy.id -k escalation_level -k schedule.summary -k schedule.id -k user.summary --output=csv > oncallexport1.csv
with the debug flag:
Mostly lots of rate limiting, and oddly enough this time it worked too...
Thanks for this info...
I think with the log command you showed, the filter just isn't matching... I see you're trying to match a space followed by 0[0-5], but the filter flag that the oclif framework provides is... sensitive. Can you try without the filter, or with a different filter, and see if that works? Sometimes I use -k created_at to get the raw RFC3339 timestamp because that's easier to use with regexes...
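For example, just dropping the filter and adding the raw timestamp column should show you exactly what the values look like before you write a regex against them. Something like this, reusing your flags from above (untested on my end, so treat it as a sketch):
pd log -O --since '2022-02-13' --until='2022-02-14' -k 'type' -k 'agent.id' -k 'incident.id' -k created_at --sort agent.id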
It definitely looks like you are encountering the 10,000-record limit: each call gets 25 records, so any time the progress line shows more than 400 calls you can expect to hit that limit. I should handle that better; I'll work on it...
It should handle the HTTP 429 rate limit fine though, so just seeing rate-limited requests in the log shouldn't be an indication that anything will fail...
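In the meantime, one workaround sketch (reusing your exact flags, just splitting the three-month range so no single query has to page past the 10,000-record offset; I haven't tested this exact date split) would be something like:
pd rest:fetch -e oncalls -P 'since='2022-02-13 'until='2022-03-28 -t --sort escalation_policy.summary,-escalation_level -k start -k end -k escalation_policy.summary -k escalation_policy.id -k escalation_level -k schedule.summary -k schedule.id -k user.summary --output=csv > oncallexport-part1.csv
pd rest:fetch -e oncalls -P 'since='2022-03-28 'until='2022-05-13 -t --sort escalation_policy.summary,-escalation_level -k start -k end -k escalation_policy.summary -k escalation_policy.id -k escalation_level -k schedule.summary -k schedule.id -k user.summary --output=csv > oncallexport-part2.csv
tail -n +2 oncallexport-part2.csv >> oncallexport-part1.csv
(the tail -n +2 just drops the repeated header row from the second file before appending it)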
Oh, one more thing I just realized: is it possible that your list of oncalls going back that far might include a deleted user? It might be trying to show you properties of a deleted user... I will check on that when I get a chance...
Yeah, the oncalls going back that far certainly have some deleted users. But it managed to work after the first few tries, so I'm not sure exactly what the initial issue was.
For the log command, we need logs between midnight and 5am daily, which is what I'm trying to filter on. Is this possible? Maybe with a different syntax?
I managed to make the error pop up twice more. I didn't see anything worth noting in the debug output though:
Fetching 797/1014 👍, 0/1014 👎... !
TypeError: Cannot read properties of undefined (reading 'status')
Ah actually there's a hint in the csv:
OK, so one of the problems with filtering on 'created' is that it accounts for your locale, meaning not only your time zone but also how you like to see times... for example, some locales will show times in a 24-hour format with leading zeros like 01:00:00, 02:00:00, etc., while others will show things like 2:00:00 AM... So what I might do is figure out which hours I'm interested in in UTC (for example, I'm in EST, so 00-05 here would be 05-10 UTC), then include the UTC time in ISO 8601 format in my table with -k created_at,
and then I would filter on that with a slightly different regex, enumerating the hours 05, 06, 07, 08, and 09:
pd log -O \
--since '2022-02-13' \
--until='2022-02-14' \
-k 'type' \
-k 'agent.id' \
-k 'incident.id' \
-k created_at \
--filter 'created_at=T(?:05|06|07|08|09)' \
--sort agent.id \
--output=csv > incidentactivity-test.csv
Or, if you're OK with it only matching when the locale is the same as your current locale, you could still filter on created, just make sure the regex matches whatever it's giving you... so for example, in my locale the times look like this:
Log Entry ID type Created Summary agent.id
────────────────────────── ───────────────── ────────────────────── ────────────────────────── ────────
R6KVG3WY3XQDBEKAI91SCKLUDG trigger_log_entry 2/13/2022, 7:00:11 PM Triggered through the API. PLV71SH
RR1XTBZX6CD2L9YKRLK1RUQ0MG trigger_log_entry 2/13/2022, 9:01:03 PM Triggered through the API. PLV71SH
RNF6JONU06FTKQMVO40O75REB9 resolve_log_entry 2/13/2022, 11:00:13 PM Resolved by timeout. P962AS8
ROOMZD317L2L53VBYUHJ1UH8Z9 resolve_log_entry 2/14/2022, 1:01:03 AM Resolved by timeout. P962AS8
R72KJZG5GSMYPIXRHA89SEK1DQ trigger_log_entry 2/14/2022, 4:07:58 AM Triggered through the API. PLV71SH
R0QMCCJNED503RGMHGV4HEAJ8D trigger_log_entry 2/14/2022, 7:59:04 AM Triggered through the API. PLV71SH
R963F8YGDKNZRPYUNL8ACFFBHG resolve_log_entry 2/14/2022, 8:07:59 AM Resolved by timeout. P962AS8
RN0TYHK2SDOSQPLGJ5IF5E5QGB trigger_log_entry 2/14/2022, 8:13:01 AM Triggered through the API. PLV71SH
ROS00YUKEVSN2IFQSY9732ZFV9 trigger_log_entry 2/14/2022, 8:28:54 AM Triggered through the API. PLV71SH
R91JJ1QSB9YEEAOTVEW8H8E5AK trigger_log_entry 2/14/2022, 8:53:54 AM Triggered through the API. PLV71SH
R7MC6CTPCK2YT6KPZKDH3BUMNM trigger_log_entry 2/14/2022, 9:56:23 AM Triggered through the API. PLV71SH
RQ6TTTBBLPU2ZFI3F3L5UV5PZH resolve_log_entry 2/14/2022, 11:59:04 AM Resolved by timeout. P962AS8
So if I wanted, say, 7-9am, I could do:
pd log -O \
--since '2022-02-13' \
--until='2022-02-14' \
-k 'type' \
-k 'agent.id' \
--filter 'created=, (?:7:|8:).* AM'
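And adapting that idea to your midnight-to-5am window, something like the filter below might work (this assumes your locale formats times the same way mine does, so it would match 12:xx AM through 4:xx AM; check your own table output first):
pd log -O \
--since '2022-02-13' \
--until='2022-02-14' \
-k 'type' \
-k 'agent.id' \
-k 'incident.id' \
--filter 'created=, (?:12|1|2|3|4):.* AM' \
--sort agent.id \
--output=csv > incidentactivity-test.csv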
This might actually be better because it would account for daylight saving time if you have that where you are. Anyway, maybe a long-winded answer, but it was interesting figuring this out, hope it helps...
Oh, and regarding that error, that is weird because (1) that's the error you would usually get if your DNS lookup failed, but the host entry should be cached, so why would just one of the calls fail? and (2) it definitely shouldn't show up in your CSV output... I will do some more looking; it would be nice if I could repro this, but I haven't managed to yet.
That new filter worked, thank you for the detailed explanation. And thanks for all the help and quick responses, really appreciate it! If you want to leave this open for the intermittent DNS issue feel free, but I think I'm all set now. 😄
Thanks, I'll leave it open for a couple of days in case you see it again?
I tried running the following commands recommended by PagerDuty support and ended up getting
TypeError: Cannot read properties of undefined (reading 'status')
each time.