prometheus / prometheus

The Prometheus monitoring system and time series database.
https://prometheus.io/
Apache License 2.0

Expose more scrape_config parameters via relabeling #1176

Closed · fabxc closed this issue 8 years ago

fabxc commented 8 years ago

As suggested by @jimmidyson we should be able to set scrape_interval, scrape_timeout, and authentication parameters through relabeling.

Essentially, allowing full scrape configuration through service discovery mechanisms requires covering all of the scrape parameters. To be more adaptive to changes and extensions, we should find a more generic approach for this, as manually mapping each parameter is error-prone.
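A purely hypothetical sketch of how this could surface to users, assuming reserved target labels such as `__scrape_interval__` and `__scrape_timeout__` (which do not exist at this point) were honoured by the scrape loop, and using illustrative `__meta_example_*` SD labels:

```yaml
scrape_configs:
  - job_name: 'discovered-services'
    # The SD mechanism is elided; assume it attaches __meta_example_* labels.
    relabel_configs:
      # Hypothetical: copy a per-target interval advertised by SD into a
      # reserved label that the scrape loop would honour.
      - source_labels: [__meta_example_scrape_interval]
        regex: (.+)
        target_label: __scrape_interval__
      - source_labels: [__meta_example_scrape_timeout]
        regex: (.+)
        target_label: __scrape_timeout__
```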

brian-brazil commented 8 years ago

I wonder here if we're better pointing such use cases towards file_sd and procedurally generating config files. Something like auth is going to be hard to get right from a security standpoint.
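For reference, a minimal sketch of the `file_sd` approach; the job name and path are illustrative, and the listed files would be written by whatever external tool generates the per-target configuration:

```yaml
scrape_configs:
  - job_name: 'generated-targets'
    file_sd_configs:
      - files:
          # Each file holds a list of target groups ("targets" plus optional
          # "labels") and is re-read automatically when it changes.
          - '/etc/prometheus/targets/*.json'
```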

fabxc commented 8 years ago

If I can configure my whole scrape_config via my SD but still have to complete it manually in a prometheus.yml I won little. That's not some extraordinary use case we have to provide a fallback solution for, it's fundamental to all SD mechanisms.

brian-brazil commented 8 years ago

If I can configure my whole scrape_config via my SD but still have to complete it manually in a prometheus.yml I won little.

I'd disagree here. If you feel the need to have things with that level of dynamism, I'd argue that you've failed to establish standards across your organisation and are likely overthinking your monitoring. I think we should be encouraging standardisation, rather than, every time I come across a new job, having to determine what that team thought was a good name instead of /metrics. Or having to figure out what the scrape interval was, which is important information for debugging that you want to change rarely.

There is such a thing as having too generic a solution.

fabxc commented 8 years ago

Then I have to ask myself why we are supporting relabeling of, for example, the metrics path at all. The only way it differs from the scrape interval is that, while I actually do have a fixed path across my org, this is certainly not the case for my scrape interval.

brian-brazil commented 8 years ago

I wonder that about the metrics path myself, but see it as mostly harmless, and there are cases where you have one scrape config covering many jobs and a handful of them are exceptions.

Interval is more problematic, as depending on the setup you can violate implicit assumptions we or our users' expressions make about timestamps (see the recent rate discussion for example; we presume the scrape interval is a constant) or take out a Prometheus server through overload. Anyone doing this would have to take some care that they were sufficiently protected, and as this would all be opt-in that's kinda okay. It does open us up to things like on-the-fly or procedural changing of the interval and inconsistent intervals across jobs, which my instinct is to actively discourage, as I can see that being both alluring and causing a lot of complexity for little gain.

fabxc commented 8 years ago

The interval being constant is a wrong assumption in general already.

It would be good to recall why we added the scheme and metrics path in the first place then. I assume there was some exotic use case. Them changing within a job is fairly unusual and can be worked around otherwise. Scheme and paths are not in the general scope of service discovery either. So I would then change the proposal of this issue to remove those as relabelable parts.

I suppose the general idea was that you can configure multiple jobs within the same scrape config and want the flexibility there. But that's of no use if that doesn't carry over to authentication and scrape timings.

Having an indecisive approach just complicates operation. Generally users can easily adapt to however it is supposed to be done – as long as there's a single straightforward way.

brian-brazil commented 8 years ago

The interval being constant is a wrong assumption in general already.

That's a little worrying, as anyone changing intervals on the fly has likely greatly overengineered their monitoring.

I believe that a given Prometheus server should have one scrape interval, and on rare occasions also a slower scrape.

It would be good to recall why we added the schema and metrics path in the first place then. I assume there was some exotic use case.

For scheme https://github.com/prometheus/prometheus/pull/967 which points to https://github.com/prometheus/prometheus/issues/910#issuecomment-128305407 which implies because metrics path was relabelable.

Metrics path is relabelable due to https://github.com/prometheus/prometheus/pull/654, which appears to be an incidental change.

So no exotic use cases, I'd guess it was just the way you happened to implement that change.

I suppose the general idea was that you can configure multiple jobs within the same scrape config and want the flexibility there.

I've used metric_path that way, though I could also have used two scrape configs. We do say that if it can be done with configuration management then you should rather than adding features. There is an efficiency argument here, as having many duplicate SDs may be a resource hog - though if that's true then it's not a very good SD method.

Having an indecisive approach just complicates operation. Generally users can easily adapt to however it is supposed to be done – as long as there's a single straightforward way.

I'm strongly against scrape interval being relabelable, against auth on the basis that I don't think we can make it work in a secure way, params needs to stay and I'm mostly ambivalent on the rest.

So if we're going for consistency, then that means I'm for dropping scheme and metrics path and that the way to approach this is multiple scrape configs. This'd also reduce the complexity of relabelling a tad, which never hurts.

jimmidyson commented 8 years ago

If you take away relabelling of metrics path then you're enforcing a consistent metrics path on all discovered targets in a job. In the case of Kubernetes that would be a PITA as multiple services are deployed with different requirements & scraped using the same job, using Kubernetes annotations to configure stuff like metrics path & scheme.
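For illustration, a sketch of that annotation-driven pattern as it is commonly written today; the `prometheus.io/*` annotation names are a convention rather than anything built in, and the meta label names assume the Kubernetes pod role:

```yaml
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only scrape pods that opt in via annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Let the pod override the metrics path via annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        regex: (.+)
        target_label: __metrics_path__
      # Let the pod override the scheme (http/https) via annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
        action: replace
        regex: (https?)
        target_label: __scheme__
```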

fabxc commented 8 years ago

That's a little worrying, as anyone changing intervals on the fly has likely greatly overengineered their monitoring.

Not saying that it is a problem – but it doesn't become any more problematic when setting it through relabeling. How often this changes is up to the user either way.

So no exotic use cases, I'd guess it was just the way you happened to implement that change.

Thanks for digging up the relevant parts. It was implemented as a label more for internal reasons – back then there wasn't even relabeling. Then relabeling came into play. From IRC logs I infer a use case came up and I eventually pointed out that it is somewhat possible via relabeling.

I guess the whole story of relabeling internal labels started from there. And it's not bad in general – but doing it half-way is.

I've used metric_path that way, though I could also have used two scrape configs. We do say that if it can be done with configuration management then you should rather than adding features. There is an efficiency argument here, as having many duplicate SDs may be a resource hog - though if that's true then it's not a very good SD method.

I cannot think of any SD that would consume a notable amount of resources right now. Dropping support would result in requiring more than one scrape configuration. But that's also what you need for the other non-relabelable options right now.

Job names currently being required to be unique, while relabeling can make them non-unique, is a problem right now anyway. Unfortunately with some internal complications – those should disappear with #1064.

@jimmidyson yes, but one has to get a setup in place to handle this anyway, right? If targets require different authentication or scrape intervals that is currently not solvable. And based on @brian-brazil's arguments, it shouldn't be. With that the whole idea of having a single scrape configuration for an entire cluster would fall apart, which from my point of view, also makes __metrics_path__ and __scheme__ relatively obsolete.

As said, it's a 0 or 1 thing.

jimmidyson commented 8 years ago

If you take away the ability to relabel target config then you're pushing people to create their own ways to generate the dynamic config, with whatever config management tool they decide to use. I would personally introduce the ability for service discovery to set all target config, which gives Prometheus more flexibility without the need for users to each roll their own way of doing dynamic config.

In my dream (& we're very close to now with Kubernetes SD) I want users to just deploy Prometheus, expose metrics & profit. I would love Prometheus to be this simple to deploy & only introduce complexity when it's needed, such as sharding, federation, etc.

IMO we should expand SD capabilities via direct target config (or labels if you'd prefer), not remove functionality. Seeing as target config is a fixed set, I'd personally go with direct target config rather than the relabelling approach as I'm not sure what this gains us.

fabxc commented 8 years ago

What would such a direct target configuration look like, e.g. in the case of Kubernetes SD? Relabeling is very verbose, but relatively infrequently touched and very flexible. My initial ideas to handle it more explicitly were not promising at all (too rigid vs. high implementation cost for each SD), but maybe you have another angle here.

I agree with @brian-brazil on the problem of authentication parameters. Those are the most likely to be equal across services though.

My main point is that the whole idea falls apart if scrape intervals are not supported. That's where I don't see as much of a problem as Brian does. Yes, it should not change often – but that's up to the user, and I don't see it being more or less volatile in a Kubernetes service config (as an example) than in the Prometheus config.

jimmidyson commented 8 years ago

These settings (https://github.com/prometheus/prometheus/blob/master/config/config.go#L335-L356) would be what I would consider consistent properties of a target.
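For context, those settings correspond roughly to the per-scrape-config options below (values illustrative):

```yaml
scrape_configs:
  - job_name: 'example'
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: https
    honor_labels: false
    params:
      module: [http_2xx]        # extra URL parameters
    basic_auth:                 # authentication is fixed for the whole config
      username: prometheus
      password: changeme        # illustrative placeholder
    static_configs:
      - targets: ['example.com:443']
```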

Auth parameters need to be thought about more, agreed, but that doesn't mean that an SD can't securely provide auth parameters. That would be SD specific though & security should be highlighted as a possible problem. But just because it is a potential problem doesn't mean it shouldn't be enabled.

Also agreed that it makes no difference where stuff like scrape interval is changed: if it's changed manually in config or changed via SD then it's changed regardless & the problems noted above will be experienced.

brian-brazil commented 8 years ago

In my dream (& we're very close to now with Kubernetes SD) I want users to just deploy Prometheus, expose metrics & profit. I would love Prometheus to be this simple to deploy & only introduce complexity when it's needed, such as sharding, federation, etc.

I believe the same, there's two ways to approach the problem:

  1. Give users complete flexibility in how they configure exposition; complexity cascades through the system.
  2. Standardise on one way to do exposition within an organisation/team; complexity only for those not following the standard.

I don't see a big benefit in allowing/forcing users to have to provide all these settings via kubernetes labels. Having your standard libraries automatically provide exposition in the expected place means you'd get a lot of this for free, without users needing to think about any aspect of it.

brian-brazil commented 8 years ago

These settings (https://github.com/prometheus/prometheus/blob/master/config/config.go#L335-L356) would be what I would consider consistent properties of a target.

I'd consider those to be properties of the scrape config, rather than an individual target. They're all things that are going to be standard across a job or set of jobs. In the case of honor_labels for example that's only something that the Prometheus administrator should have control of, particularly in any form of shared Prometheus setup, so that one job can't accidentally take out another's metrics.

That would be SD specific though & security should be highlighted as a possible problem.

Anything available via SD is going to be available via the Prometheus web UI for everyone to see. Any protections around that are likely to be fragile (how do we know all the label names we need to protect if every team can choose their own?), so we can't realistically keep anything that's available to relabelling a secret. We also need to make sure that the SD mechanism is designed to pass around secrets. Usually authentication/identity and credential distribution are completely separate concerns from service discovery, so the required security and controls won't be present.

Without a solid security story that works for non-experts and keeps them on the right path we shouldn't offer security features. For example encouraging users to put credentials for monitoring into an insecure SD is probably okay as monitoring tends not to be sensitive, but it'd be better to have no authentication in that case rather than giving a false sense of security. In the worst (and indeed expected) case that'd lead to users thinking it's okay to put important credentials in the insecure SD.

I don't think security is an area Prometheus should be getting into; getting this sort of thing right is extremely difficult and an entire project unto itself. This is an area where I think our stance should be that if you want to do something like this then you need to write an automatic config generator.

if it's changed manually in config or changed via SD then it's changed regardless & the problems noted above will be experienced.

To be clear, I'm not worried about someone changing it once every few years as is the usual case. I'm worried about someone changing it continuously, on an hourly or daily basis. Considering how rarely scrape intervals tend to change, some friction here is okay (and arguably beneficial, if it prevents overengineering in choices of scrape intervals). Scrape interval is a Prometheus-level setting, not a scrape config or target level setting.

jimmidyson commented 8 years ago

Config has sane defaults, but is configurable via discovery. No one is forced to use Kubernetes labels; they are only needed if something differs from the defaults. I don't see a problem with that. Could you explain how "complexity cascades through the system" by being able to configure targets?

I can understand the desire to standardise things like metrics path, but if you're trying to plug in Prometheus exposition to an existing app that already has a metrics endpoint used by other systems then you need to be able to change the Prometheus path. If we can't do that via SD config then it has to be done directly in the config file, which is not optimal for a service you want to just deploy and have scrape all services.

jimmidyson commented 8 years ago

Security needs discussion. We've seen quite a few people asking for it & we provide the ability to configure the scraper now, but only via config file. This might suffice for my use case tbh - it would mean all endpoints would need to have a single security configuration but I think that's fine.

brian-brazil commented 8 years ago

Could you explain how "complexity cascades through the system" by being able to configure targets?

If something was fixed but now is configurable, then everything downstream needs to be able to handle that. Prometheus as we're discussing here is just one part of that, there will also be other automated systems looking for metrics. Humans need to care about this when debugging and when writing those automated systems, so that means training, documentation and libraries to be maintained.

if you're trying to plug in Prometheus exposition to an existing app that already has a metrics endpoint used by other systems then you need to be able to change the Prometheus path

It's my expectation that there's only going to be a handful of such paths (there were 2 in my previous job for example), so the config duplication won't be too bad. Metrics path is the one I consider to have a good standalone argument for being relabelable, as there may be systems where the existing conventions result in paths varying by application (though opening up a control port in those cases would probably be a good idea). The other settings are effectively binary, so config duplication can handle them.
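A sketch of the config-duplication approach being described, assuming two path conventions; the job names, paths, and targets are illustrative:

```yaml
scrape_configs:
  - job_name: 'apps-standard'
    metrics_path: /metrics          # the organisation-wide default
    static_configs:
      - targets: ['app1.example:8080', 'app2.example:8080']
  - job_name: 'apps-legacy'
    metrics_path: /admin/metrics    # the handful of exceptions
    static_configs:
      - targets: ['legacy.example:8080']
```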

This might suffice for my use case tbh - it would mean all endpoints would need to have a single security configuration but I think that's fine.

That's how I'd handle it by default, monitoring tends to be no more than commercially sensitive. The main risk is usually accidental DoS rather than a breach.

beorn7 commented 8 years ago

Sorry for chiming in late. Slowly working through my piled-up backlog...

It seems we have clarity that we don't want to revert the ability to change /metrics via relabeling. Also, we all seem to agree that we'd better not open the can of worms labeled “security”, so we won't allow changing any auth-related config parameter via relabeling.

That leaves us with the question of where to draw the line in between.

If I understand @jimmidyson 's Kubernetes use case correctly, you'd set up a single Kubernetes SD scrape config for all the jobs in your cluster (i.e. job would be relabeled appropriately). In that case, changing scrape_interval and scrape_timeout via relabeling is crucial, as some jobs need high-frequency scrapes while others are relaxed about that and might expose a lot of metrics, so that frequent scraping by default might not be an option. Also, for large scrapes, we might need a custom timeout. It would be good if Kubernetes users could set those parameters from within Kubernetes and wouldn't need to exercise a separate tool for propagation into Prometheus.

And BTW: I see the Kubernetes/Prometheus combo as a broadly used set-up in the near future. So the kubernetes use-case matters a lot.

fabxc commented 8 years ago

Metrics path and scheme can be relabeled and theoretically be provided per target. So can URL parameters. This is the furthest we can and should go to keep complexity manageable and prevent unintended usage.

It will never be possible to configure everything via SD and it most likely shouldn't.

discordianfish commented 8 years ago

What's the situation on auth parameters? In my case I simply want to use the snmp and blackbox exporters, which are behind basic auth, and configure them with relabeling as described in our docs. I don't think there is any way to pass in the credentials this way. Thoughts?
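For reference, a sketch of the documented blackbox-exporter relabeling pattern with the exporter itself behind basic auth; the credentials can only be given once per scrape config (host names and credentials illustrative), which is the limitation in question, since there is no relabelable equivalent of the URL's UserInfo:

```yaml
scrape_configs:
  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [http_2xx]
    basic_auth:                      # credentials for the exporter itself;
      username: prometheus           # fixed for the whole scrape config,
      password: changeme             # not settable via relabeling
    static_configs:
      - targets: ['https://example.com']   # the probe targets
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115   # where the exporter listens
```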

smarterclayton commented 7 years ago

As I commented in the other thread, I think this effectively limits our ability to bring centralized monitoring to a moderately security-conscious Kubernetes cluster. A shared secret is not sufficient for tenancy, simply because it would allow any tenant to go fishing for anyone else's metrics. It also prevents application authors targeting Helm and other out-of-the-box solutions from pre-wiring for metrics, because they can't anticipate a cluster-wide secret.

I'd really like to make Prometheus capable of providing out of the box metrics gathering for multi-tenant Kubernetes clusters. Without a way for the services themselves to declare their security, it's not really possible to have an out of the box story that isolates those services.

brian-brazil commented 7 years ago

Have you considered a Prometheus per tenant? Any other way is going to be difficult to secure given our security model if you don't want tenants to be able to access each others endpoints.

smarterclayton commented 7 years ago

I think we're seeing the separation of two types of Kubernetes clusters. The simple, sparse tenancy cluster (one or more teams), which can absolutely get away with approaches suggested here. And the other, which is hot dense clusters with thousands of nodes and potentially that many tenants, some of which have different security models, but all of which want to be able to easily expose and aggregate metrics.

The plan has been (up until now) to remove Heapster from Kubernetes, and provide a thin facade that would handle metrics tenancy in front of solutions like Prometheus or others. In that model, the query / UI side of Prometheus has tenancy enforced in front of it for the Kubernetes simple use cases (CPU / memory / disk + autoscaling), while potentially allowing admins / high-privilege users to access Prometheus directly.

For the dense cluster use cases, there's some tradeoff between efficiency and a limited security barrier. I agree that the level of security provided by something shared like this has limitations, but I also think it parallels a set of tradeoffs made by clusters where tenancy is somewhat loose. I.e., the goal for most users is to avoid accidental exposure of metrics that could be used to compromise other tenants (exposing personally identifying information, tenanted info, etc), but accepting that a determined attacker could potentially compromise some or all of those.

EDIT: to answer the question directly, for medium density tenancy and above (hundreds or thousands of Kubernetes namespaces, each with small numbers of services within those namespaces), running individual prometheii has good failure isolation properties, but poor aggregation and management properties. I tend to think that a few hundred of those namespaces per prometheus server is reasonable, but even in those hundred cases some level of further separation is useful.

smarterclayton commented 7 years ago

I'll note as well that there are other longer term efforts to bring centralized authorization and management across many services (things like SPIFFE and LOAS and network level isolation) where it becomes practical to move back to a central config. But in the absence of those, some level of glue is truly missing.

I'm not averse to carrying a patch for OpenShift / high-tenancy Kube clusters to do some extra level of separation if necessary - I respect the concerns raised above about separation of responsibility.

EDIT: Put another way - there's a lot of advantages to be had in Kubernetes to allowing the infrastructure admin (configuring prometheus) to be separate from the applications themselves - that's what the current annotation SD model enables, but only up to the point where some information in the metrics needs to be subdivided. Then the infrastructure admin has to dig in and coordinate with app authors.

brian-brazil commented 7 years ago

some of which have different security models, but all of which want to be able to easily expose and aggregate metrics.

They can do that by each running their own Prometheus.

exposing personally identifying information

Such information should never be exposed to Prometheus. We're not designed to provide the level of security or integrity such data requires.

in front of it for the Kubernetes simple use cases (CPU / memory / disk + autoscaling)

That would all come from ~cadvisor, which is part of the cluster itself and thus should require at most one set of credentials.

accepting that a determined attacker could potentially compromise some or all of those.

The security is either there or it isn't. There is no security for kubernetes annotations accessed from Prometheus, we'll return them to anyone who has access to the http port. If you're happy with this, then an unauthenticated metrics endpoint provides the same security.

I think you're conflating 3 different use cases here, each with different threat models. Each is solvable individually, but can't be solved with one single approach.

smarterclayton commented 7 years ago

There is no security for kubernetes annotations accessed from Prometheus, we'll return them to anyone who has access to the http port.

Our assumption is that in Kubernetes we would never expose prometheus like that to anyone except infrastructure admins. There are many proxies and other solutions for that.

That would all come from ~cadvisor, which is part of the cluster itself and thus should require at most one set of credentials.

Today, but cadvisor is just one small part of the infrastructure. There are edge proxies, local proxies, storage plugins, etc that all have metrics to contribute. Using a shared secret for all of those means that all of those have to have the same level of security, which is not the case.

Such information should never be exposed to Prometheus. We're not designed to provide the level of security or integrity such data requires.

That can be things like service names, pod names, or other labels that are in use. There is a spectrum of PII ranging from "not allowed to be stored outside of the EU" all the way down to "provides information that can be correlated with others". I tend to think of this as the 80% case vs the 20% case - 80% of the applications want their metrics exposed, but don't have strong guarantees. The 20% case has a number of levels, from "don't expose this to the public internet" to "don't expose this if an attacker gets loose on the network".

The security is either there or it isn't.

I think it's reasonable to say that Prometheus acting as a collector for an entire kubernetes cluster has a moderately complex config, a moderate set of roles / secrets that are separated, and that running multiple prometheus instances is totally reasonable for further isolation, and that even within that config there will have to be some separation of duties. Anyone wanting to subdivide what metrics are visible must do that in front of prometheus, no disagreement, and I don't think it's prometheus' job to enforce that as such.

Service discovery that allows collectors to opt in to collection without preconfiguration by an admin, but only if no security at all is imposed, makes the tradeoff much worse the larger the kubernetes cluster gets. I'm looking to have almost every component deployed on Kubernetes expose a scrape endpoint. But not every component is going to tolerate being wide open, which just means that the "drop in, it works" scenario degrades back to "the admin has to go change the config every time a new infrastructure component is gathered".

brian-brazil commented 7 years ago

There are edge proxies, local proxies, storage plugins, etc that all have metrics to contribute. Using a shared secret for all of those means that all of those have to have the same level of security, which is not the case.

I'd presume there's a limited number of those, such that specifying as many credentials as needed in the config is not completely unreasonable.

If you can use a common challenge response mechanism such as an ssl client cert, rather than a shared secret, then this problem goes away. I believe this is the direction things are generally going.
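A sketch of that direction, using the existing per-scrape-config TLS options with a single client certificate presented to every target (paths and targets illustrative):

```yaml
scrape_configs:
  - job_name: 'tenant-services'
    scheme: https
    tls_config:
      ca_file: /etc/prometheus/ca.crt          # CA used to verify the targets
      cert_file: /etc/prometheus/client.crt    # client cert/key presented to
      key_file: /etc/prometheus/client.key     # every target in this config
    static_configs:
      - targets: ['svc-a.example:8443', 'svc-b.example:8443']
```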

Service discovery that allows collectors to opt in to collection without preconfiguration by an admin, but only if no security at all is imposed,

You can have authentication, just not with credentials that aren't known in advance. If you need arbitrary on-the-fly credentials, you'd be looking at an approach similar to the CoreOS Prometheus operator to generate the config file.

makes the tradeoff much worse the larger the kubernetes cluster gets.

You'll run into scaling or social issues that require splitting out such a Prometheus as the cluster gets larger. Security is only one thing that could trigger this, and I'd guess one of the least common.

smarterclayton commented 7 years ago

Ok - so to capture:

  1. service discovery is only expected to be used in homogeneous / predefined security domains - discovery will not grow to allow abstractions to handle that, because security / authentication is better handled above the tool rather than inside of it
  2. dynamic reconfiguration of prometheus via config is normal and expected, and therefore a higher level component should manage dynamic security if it so chooses
  3. if a small N multitenant kubernetes cluster wants to allow secure self-registration, that coordination should be managed above
  4. very high N tenancy is not something prometheus believes it can handle well via service discovery since it's a limited use case and has other challenges

On point 4 explicitly, for very high N tenancy (which is something we deal with often in OpenShift), would additional scrape config abstractions be accepted to keep config complexity low? I.e. we could easily have 2-3k very small apps with limited metrics sets that have different auth boundaries - if we generate large config files, do you see Prometheus accepting limited concepts that reduce the need to generate large config files (or make them easy to parse)? This is something we deal with often with HAProxy, where we have multi-megabyte generated config files (tens of thousands of backends) and so have worked with them to optimize their config loading and find gaps.

brian-brazil commented 7 years ago

do you see Prometheus accepting limited concepts that reduce the need to generate large config files (or make them easy to parse)?

The general answer is that if you can in principle do it via configuration management, we aren't going to duplicate that functionality. At 2-3k apps, you're well past the point where you must already have configuration management (and possibly past what one Prometheus can handle).

This is something we deal with often with HAProxy, where we have multi-megabyte generated config files (tens of thousands of backends) and so have worked with them to optimize their config loading and find gaps.

Unlike HAProxy, we aim for a seamless in-process config reload. If you've ideas around making config reloads less disruptive, we're open to that.

discordianfish commented 7 years ago

@fabxc @brian-brazil @beorn7 I just ran into this again and reread this issue, and I really don't understand your concerns. Relabeling allows overwriting every part of the URL except the UserInfo, which would be required to set basic auth via relabeling. I understand that you don't want to open the 'security can of worms', but the required changes are trivial and all implications are straightforward. Yes, configuration management could solve that, but frankly that's not a good answer for an official cloud-native project. And looking at who all has an interest in commercial use of Prometheus, especially in enterprise settings where some authentication is strictly required, I'm worried that this technical purism will cause people to build heavier and heavier workarounds, if not fork/replace it outright (been there, you know).

discordianfish commented 7 years ago

It looks like there isn't even a workaround.

To be clear, maybe I'm missing something: if you want any minimal form of auth, you are required to introduce configuration management which discovers targets on its own, rendering the SD in Prometheus useless.

That cfg mgmt would then render one static_config for each service endpoint, with labels for that service endpoint. Maybe you can combine a few with common labels, but it might be as many static_configs as there are service endpoints, giving you a huge config file. Possibly too large to be human readable, let alone to display nicely on /config.
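A sketch of what such a generated file ends up looking like, assuming two endpoints with different credentials; because auth lives at the scrape-config level, the generator has to emit one scrape config per credential (names and secrets illustrative):

```yaml
scrape_configs:
  - job_name: 'svc-payments'
    basic_auth:
      username: payments
      password: secret-1              # illustrative
    static_configs:
      - targets: ['payments.example:9090']
        labels:
          team: billing
  - job_name: 'svc-frontend'
    basic_auth:
      username: frontend
      password: secret-2              # illustrative
    static_configs:
      - targets: ['frontend.example:9090']
        labels:
          team: web
```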

Now you need to figure out where and how to run your cfg mgmt, probably you would have a volume in your pod for the config and use @jimmidyson's config reload. But you also need to monitor this, because you can't use the prometheus_sd_* metrics to do so. So you might end up running the node-exporter in the pod too and use the textfile collector for that.

Finally you need to figure out how to update your cfg mgmt sidecar. Maybe that works with some pull-based system like Chef, or you end up gluing something together. Possibly several days of work for that.

I mean, really? You can argue you need config mgmt anyway, but that's a self-fulfilling prophecy. Prometheus is right now the only reason I need to think about it. Not my postgres databases, my gateways, my reverse proxies, nor my backend services. To be fair, instead of introducing configuration management I'll rework all my applications to have a separate listener for Prometheus metrics. But try telling that to new users of Prometheus with a straight face..

Rudd-O commented 7 years ago

Finally you need to figure out how to update your cfg mgmt sidecar.

We generally combine confd and confd-sidecar for this, but we aren't using Kubernetes.

jkroepke commented 6 years ago

Don't force an ideological approach that only works in a perfect world.

Let users choose what's best for themselves.

If you want/need this feature you may have to use the Red Hat fork, unless the Prometheus heroes realize what the community wants.

discordianfish commented 6 years ago

@jkroepke Is there a redhat maintained fork with this functionality?

simonpasquier commented 6 years ago

copying from https://github.com/openshift/origin/issues/17685#issuecomment-380386472

@discordianfish IIRC the patches were included in the openshift/prometheus image at some point but not anymore.

discordianfish commented 6 years ago

Well, I wouldn't be surprised if they fork Prometheus. It's not unheard of for Red Hat to fork popular projects and even introduce incompatible changes.

beorn7 commented 6 years ago

You could talk about that with one of their principal software engineers. ;o)

discordianfish commented 6 years ago

Still stand by my words ;)

MatthewLymer commented 5 years ago

It would be nice if you could tell Prometheus the bearer token on a per-scrape basis. It kinda sucks to have to re-use the same authentication information for every scrape.
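For what it's worth, the closest the current config gets is one scrape config per token, e.g. via `bearer_token_file`; a minimal sketch with illustrative paths and targets:

```yaml
# One scrape config per credential, since the bearer token is fixed per
# scrape config rather than settable per target.
scrape_configs:
  - job_name: 'service-a'
    bearer_token_file: /var/run/secrets/service-a.token
    static_configs:
      - targets: ['service-a.example:8080']
  - job_name: 'service-b'
    bearer_token_file: /var/run/secrets/service-b.token
    static_configs:
      - targets: ['service-b.example:8080']
```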