Closed gdey closed 3 years ago
@johngian which cache providers do you think I should make sure to implement first? And does the above make sense?
@gdey I wonder if we should adjust the `response_size_bytes` buckets a bit to include a 500 KB bucket, as 500 KB is a warning threshold for tile size. We could potentially drop the 5+ MB bucket and put anything over 1 MB into a single bucket.
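As a sketch of the proposed layout (the boundary values below other than 500 KB and 1 MB are assumptions, not tegola's actual defaults), the bucket lookup could look like this:

```go
package main

import (
	"fmt"
	"sort"
)

// Hypothetical bucket boundaries (in bytes) for the response_size_bytes
// histogram, following the suggestion above: a 500 KB warning-threshold
// bucket, with everything over 1 MB falling into the implicit +Inf bucket.
var sizeBuckets = []float64{
	64 * 1024,   // 64 KB
	128 * 1024,  // 128 KB
	256 * 1024,  // 256 KB
	500 * 1024,  // 500 KB warning threshold
	1024 * 1024, // 1 MB; anything larger lands in +Inf
}

// bucketFor returns the index of the first boundary >= size,
// or len(sizeBuckets) for the implicit +Inf bucket.
func bucketFor(size float64) int {
	return sort.SearchFloat64s(sizeBuckets, size)
}

func main() {
	fmt.Println(bucketFor(400 * 1024))      // a 400 KB tile falls in the 500 KB bucket (index 3)
	fmt.Println(bucketFor(2 * 1024 * 1024)) // a 2 MB tile lands in the +Inf bucket (index 5)
}
```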
@gdey I think a good start is to add support for the fs-based backend and S3.
@gdey Can you elaborate a bit on the `duration_seconds` metric? Is it the time for the cache get operation?
Some suggestions for the cache metrics:

* `tegola serve`
* `tegola cache seed`
* `tegola cache purge`
@johngian for the `tegola cache` subcommand I would assume we would need to set up a Prometheus push-type configuration?
@gdey that's a good point. Since the cache seed / purge commands don't have a server component, there's no /metrics endpoint available. For seeding / purging, maybe it makes more sense to analyze the log output?
Can't we have a configuration for that, and maybe allow exposing the /metrics endpoint for the cache subcommand as well?
@johngian we could. The thing is, the cache subcommand currently does not spin up a web server; it runs and then exits.
@gdey we could add a `push_url` option to the observer config to allow for ephemeral job reporting.
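A hypothetical sketch of what that could look like in the config (the `push_url` key is the proposal above, not an implemented option; the `[observer]` table and `prometheus` type follow tegola's existing observer config):

```toml
[observer]
type = "prometheus"
# Proposed: when set, metrics from ephemeral commands like
# `tegola cache seed` / `tegola cache purge` would be pushed to a
# Prometheus Pushgateway on exit, since there is no long-lived
# /metrics endpoint to scrape.
push_url = "http://pushgateway.example.com:9091"
```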
Different labels for cache operations on:

* `tegola serve`
* `tegola cache seed`
* `tegola cache purge`
wouldn't you set these labels in the Prometheus configs?
As long as it's configurable, that's fine. My point is mostly about being able to differentiate the metrics between the different use cases.
@gdey

> wouldn't you set these labels in the Prometheus configs?

Do you know how this would work for push? I have not checked into how Prometheus configs work for ephemeral jobs.
The cache metrics will be prefixed with `tegola_cache_`, and the following things will be observed. Each one will have a label of `type` whose value is the cache type name.
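To illustrate the naming scheme (the metric name `hit_total` and the cache types below are hypothetical examples, not tegola's actual metric set), here is a stdlib-only sketch of how the prefixed names with a `type` label would render in the Prometheus exposition format:

```go
package main

import "fmt"

const prefix = "tegola_cache_"

// metricLine renders one sample in the Prometheus text exposition format,
// applying the tegola_cache_ prefix and attaching the cache type name as
// the "type" label, as described above.
func metricLine(name, cacheType string, value float64) string {
	return fmt.Sprintf("%s%s{type=%q} %g", prefix, name, cacheType, value)
}

func main() {
	// "hit_total", "s3", and "file" are illustrative values only.
	fmt.Println(metricLine("hit_total", "s3", 42))
	fmt.Println(metricLine("hit_total", "file", 7))
}
```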