martindstone / pagerduty-cli

A command line interface for PagerDuty
MIT License

Incident Counts differ when using limit param on the incidents endpoint #46

Closed: BillSidney closed this issue 1 year ago

BillSidney commented 1 year ago

Using the limit query param with the incidents endpoint produces a different incident count than running the same query without it.

In hindsight I realized that pd handles pagination itself and the limit param is not necessary, but I was surprised by the results.

# Build ISO 8601 timestamps for a one-day window (GNU date syntax)
start_date=`date +"%Y-%m-%dT%H:%M:%SZ" -d "2022-11-13"`
end_date=`date +"%Y-%m-%dT%H:%M:%SZ" -d "2022-11-14"`
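# Note: -d above is GNU date syntax; on BSD/macOS date, a rough
# equivalent (a sketch, untested here) would be:
#   start_date=$(date -j -f "%Y-%m-%d %H:%M:%S" "2022-11-13 00:00:00" +"%Y-%m-%dT%H:%M:%SZ")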

# Count incidents in the window, letting pd handle pagination
incidents=$(pd rest:fetch -e incidents \
    -k alert_counts.all \
    -k service.summary \
    -P "since=$start_date" \
    -P "until=$end_date" \
    -t \
    --output=csv \
    --no-header 2> /dev/null | wc -l)

# Same query, but also passing limit as a raw query param
incidents_with_limit=$(pd rest:fetch -e incidents \
    -k alert_counts.all \
    -k service.summary \
    -P "since=$start_date" \
    -P "until=$end_date" \
    -P "limit=10000" \
    -t \
    --output=csv \
    --no-header 2> /dev/null | wc -l)

echo "#incidents: $incidents"
echo "#incidents_with_limit": $incidents_with_limit

output:

#incidents: 604
#incidents_with_limit: 2266
martindstone commented 1 year ago

Hi @BillSidney, I believe your goal is to limit the total number of incidents that can be fetched overall? If so, you want to use --limit 1000, not -P limit=1000, which just passes limit as a query param to the underlying HTTPS GET used by the fetch command. My bad: I should not allow limit to be passed as a query param, since it messes up the pagination that fetch is doing. Can you try that and let me know what you observe? Thanks
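For reference, a sketch of the second command from the script above with the raw query param swapped for the flag martindstone describes, keeping the original script's value of 10000:

incidents_with_limit=$(pd rest:fetch -e incidents \
    -k alert_counts.all \
    -k service.summary \
    -P "since=$start_date" \
    -P "until=$end_date" \
    --limit 10000 \
    -t \
    --output=csv \
    --no-header 2> /dev/null | wc -l)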

BillSidney commented 1 year ago

--limit works as expected

#incidents: 604
#incidents_with_limit: 604
martindstone commented 1 year ago

Thanks @BillSidney, I will fix the validation on that flag

martindstone commented 1 year ago

Hi @BillSidney, I fixed that param validation in the just-uploaded RC 0.1.7; I will probably promote it to stable soon, so if it's ok with you I'll close out this issue... thanks again for reporting it