Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app:

| id | before | after | diff |
| --- | --- | --- | --- |
| apm | 3.2MB | 3.2MB | -1.0B |
The Canvas "shareable runtime" is a bundle produced to enable running Canvas workpads outside of Kibana. This bundle is included in third-party webpages that embed Canvas and therefore should be as slim as possible.
| id | before | after | diff |
| --- | --- | --- | --- |
| module count | - | 5407 | +5407 |
| total size | - | 8.8MB | +8.8MB |
Fixes https://github.com/elastic/kibana/issues/178491
Summary
The user receives a `too_many_buckets` exception when querying 90 days' worth of data, as well as many other longer time ranges. This is due to the date histogram within each service having time intervals that are too small.

Solution
Lowering `numBuckets` causes the time intervals to increase, because the algorithm divides the duration the user selects by this number (duration / numBuckets). The larger the time range, the more likely the algorithm is to choose a larger interval, resulting in fewer buckets per date histogram.

The exception can still be thrown for time ranges that aren't caught by the algorithm: for example, selecting 4 years or more will cause the error if a user has around the maximum number of dependencies (1500). This is because our largest time interval is 30 days, and even that interval becomes too small over such a large time range. In this case we can recommend increasing the max bucket size in Elasticsearch (the `search.max_buckets` cluster setting). There needs to be a balance between how hard we try to stay under the default bucket limit and letting the user raise that limit to get more data.
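To make the mechanism concrete, here is a minimal sketch of the interval selection described above. The interval set and the `pickInterval` helper are illustrative assumptions, not Kibana's actual `calculateAuto` implementation:

```ts
// Assumed set of candidate intervals, in seconds; the largest is 30 days.
// Kibana's real interval table may differ.
const INTERVALS_SECONDS = [
  60, 5 * 60, 10 * 60, 30 * 60, 3600, 3 * 3600, 12 * 3600,
  24 * 3600, 7 * 24 * 3600, 30 * 24 * 3600,
];

// Hypothetical stand-in for calculateAuto.near: pick the candidate interval
// closest to duration / numBuckets.
function pickInterval(durationSeconds: number, numBuckets: number): number {
  const target = durationSeconds / numBuckets;
  return INTERVALS_SECONDS.reduce((best, candidate) =>
    Math.abs(candidate - target) < Math.abs(best - target) ? candidate : best
  );
}
```

Lowering `numBuckets` raises the target `duration / numBuckets`, nudging the selection toward larger intervals and therefore fewer buckets per histogram.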
Scenarios of duration and numBuckets, and the resulting number of buckets with the maximum of 1500 dependencies, are sketched below.
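A few illustrative rows, computed with the `pickInterval` sketch above (so the exact numbers depend on the assumed interval set):

```ts
// Illustrative scenario math: buckets per histogram times 1500 dependencies.
for (const days of [90, 365, 1461 /* ~4 years */]) {
  const duration = days * 24 * 3600;
  const interval = pickInterval(duration, 8);
  const perHistogram = Math.ceil(duration / interval);
  const total = perHistogram * 1500;
  console.log({ days, intervalDays: interval / 86400, perHistogram, total });
}
// 90 days  -> 7d interval,  13 buckets per histogram -> 19,500 total
// 365 days -> 30d interval, 13 buckets per histogram -> 19,500 total
// 4 years  -> 30d interval, 49 buckets per histogram -> 73,500 total,
//             which can exceed the default search.max_buckets limit
```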
Changes

- Changes `numBuckets` to 8 when calling `calculateAuto.near` and `getBucketSize` (sketched below)
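As a hedged sketch of the change's shape (the real helper lives in the APM code and its signature may differ), a caller now passes 8 as `numBuckets`:

```ts
// Assumed shape of the getBucketSize helper, for illustration only.
function getBucketSize({ start, end, numBuckets }: { start: number; end: number; numBuckets: number }) {
  const durationSeconds = (end - start) / 1000;
  const bucketSize = pickInterval(durationSeconds, numBuckets); // sketch from above
  return { bucketSize, intervalString: `${bucketSize}s` };
}

const end = Date.now();
const start = end - 90 * 24 * 3600 * 1000; // a 90-day range
const { intervalString } = getBucketSize({ start, end, numBuckets: 8 });
```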
Testing

Tested locally by generating data with the synthtrace scenario `many_dependencies.ts`, configured with:

```ts
const NUMBER_OF_DEPENDENCIES_PER_SERVICE = 15;
const NUMBER_OF_SERVICES = 100;
```

and running:

```bash
node scripts/synthtrace many_dependencies.ts --live --clean
```