mesosphere / marathon

Deploy and manage containers (including Docker) on top of Apache Mesos at scale.
https://mesosphere.github.io/marathon/
Apache License 2.0

Allow instance number to be passed in as an environment variable #1242

Closed — SEJeff closed this issue 7 years ago

SEJeff commented 9 years ago

Say I have a docker container, e.g. kafka:0.8.2.0, and want to run it under mesos. In marathon terminology, for each app, I want 10 instances. I need an integer that is unique amongst all instances in that app, but only within that app.

Currently I've got a start script in python which does terrible black magic along the lines of:

import hashlib, os, random

# hash hostname + random salt, then scale into a pseudo-unique broker id
sha = hashlib.sha1((os.environ.get('HOSTNAME', '') + str(random.randint(1, 100000))).encode())
default_broker_id = 10000 * int(sha.hexdigest()[:5], 16) // int('10000', 16)

This gives me a unique integer I can pass in as a kafka broker. However, I'm having marathon start up 10 instances of said brokers. It would be super nice if the instance number was passed from marathon to the container. Then that above code could be more like:

default_broker_id = int(os.environ.get('INSTANCE_NUMBER'))

That way I get a per-app unique integer for each instance of said app. It seems like this wouldn't be super difficult to expose.
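
A minimal sketch of what the start script could shrink to, assuming a hypothetical INSTANCE_NUMBER variable (the Kafka config path is just an example):

import os

# Hypothetical variable; not provided by Marathon today.
broker_id = int(os.environ["INSTANCE_NUMBER"])
with open("/opt/kafka/config/server.properties", "a") as f:
    f.write("broker.id=%d\n" % broker_id)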

Thoughts?

kolloch commented 9 years ago

Hi @SEJeff,

thanks for your idea. That seems to be a pretty specific use case and I am not sure that I understand it well. Please elaborate if you think I am missing something crucial.

We do export the Mesos Task ID as the environment variable MESOS_TASK_ID. Of course, that is unique cluster-wide and not an integer.
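
For reference, a task can read that variable directly (the value is an opaque Marathon/Mesos task id string, not a small integer):

import os

# Unique per task across the whole cluster, but not stable across restarts.
task_id = os.environ.get("MESOS_TASK_ID")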

As of now, I would like to close the issue.

lusid commented 9 years ago

I need something like this for creating volumes in networked storage that are immediately available when a crashed instance comes back up on another server. Right now, if an instance comes up on a different server, the data isn't available. If I could identify individual instances of a single application, this would become insanely easy. At the moment, I have two options: no shared filesystem, or using the same shared path for all instances, which works for some applications but not most of them.

kolloch commented 9 years ago

Hi @lusid, can you elaborate, please? What do you mean by identifying the instances?

MESOS_TASK_ID uniquely identifies a task.

https://mesosphere.github.io/marathon/docs/task-environment-vars.html

lusid commented 9 years ago

Let's say I have 10 physical servers. I run an app that creates a Docker container with 5 instances that are constrained uniquely by hostname. I want each instance to attach to a volume on the physical server that includes a number of the instance in the scaling group that it represents.

Instance 1 = ID 1
Instance 2 = ID 2
... and so on

Now, let's say Instance 3 dies and is recreated on a completely different server. I want that newly created instance to be able to take over the storage volume it created originally before it died. If I used the MESOS_TASK_ID, then I get a completely unique ID that is in no way related to the previous task that died.

Because I have a networked file system between all servers in my cluster, this would basically solve the problem of not being able to locate data when a crashed instance returns on a completely different server, especially when the data stored by each instance must be stored in a different location to avoid data corruption.
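
As a sketch of what I mean (assuming a hypothetical INSTANCE_NUMBER variable and a shared mount at /mnt/shared), the container entrypoint could do something like:

import os

# Hypothetical: a stable per-app instance number survives task replacement,
# so the replacement task finds the same directory on the networked filesystem.
instance = os.environ["INSTANCE_NUMBER"]
data_dir = "/mnt/shared/myapp/instance-%s" % instance
os.makedirs(data_dir, exist_ok=True)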

kolloch commented 9 years ago

Let me summarize: You want all tasks to have some kind of sequentially assigned ID. If a task fails, you want its replacement task to get the same sequentially assigned ID. So that if you specify "instances": 10, you want to make sure that you always have tasks with the IDs 1-10 running somewhere. You assign network volumes using these IDs. Thus you always have one task per network volume.

@mhausenblas / @air What's our current best practice for dealing with persistence in cases like this?

lusid commented 9 years ago

Correct. It would be nice if it worked automatically with scaling to different sizes as well, but I can see where difficulties would start appearing in those instances.

I've been thinking about this a lot, and having an ID like this is the only thing I've been able to come up with for this use case. I have no idea how anyone runs long running processes on Marathon in its current state when they require persistence and when they aren't scaling to the full capacity of the cluster. If I could find a reliable alternative that worked in most cases, I would be happy. I would prefer to not have to constrain an app to X machines by hostname, and never be able to scale them up further than that.

I'm sure I'm missing something, but it is driving me crazy. As soon as I need to store persistent data, all the awesomeness of Marathon starts to turn into crazy tedious Bash hacking tricks, or constraining myself to one machine which defeats the purpose altogether.

kolloch commented 9 years ago

This has come up again a number of times. Maybe this idea has more applications than I originally thought.

air commented 9 years ago

Another use case where having a strong 'I am instance N of M' identity is useful: Cassandra nodes. e.g. instances 1 and 2 know that they are the leaders (their instance numbers are lowest) and configure themselves as seeds.
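
Roughly, assuming such a hypothetical variable existed:

import os

# Hypothetical variable: the two lowest-numbered instances elect themselves as seeds.
instance = int(os.environ["INSTANCE_NUMBER"])
is_seed = instance <= 2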

I'm not convinced Marathon is the right level to provide this level of guaranteed identity. It seems like something a minority of apps would benefit from - the implementation weight would be wasted on other apps that scale horizontally with true independence.

Marathon's current guarantee is, 'I'll run N of these for you and uniquely identify them' - but they are cattle, and the sense of 'being instance 5' is not carried over if #5 dies and is replaced. That feels like more of a pet.

Technically do we see difficulties? I wonder if - in the event of network partitions, restarts - we might run into issues where e.g. there are two 'instance 5s'.

cberry777 commented 9 years ago

I am +infinity for this feature. I can think of several places that having an instance number would make things much simpler.

Most prominent for me is with monitoring. Let’s say that I have 7 Foos. Typically I’d then want to see a graph with 7 Foo metrics (lines) that I can compare and contrast. The fact that they are ephemeral doesn’t really matter. Conceptually I have 7 Foos — that may move about. I don’t want to see disjointed (and likely different colored) lines and multiple instances on the legend of my graph. I want to see 7 lines. And if I spot an anomaly I want to be able to overlay “event bars” that show me when an instance moved. Something like: “Whoa, what happened to 7? Oh, it flipped onto that spotty server…”

And more important: a named instance (what we are asking for with “lasting instance numbers”) helps to keep the number of metric datasources from becoming ridiculously large. Rather than having a zillion instances in the history of a given metric, I can have 7. I.e. 7 datasources versus a brand new one every time that docker instance is redeployed.

In fact, we have had to create exactly this capability (instance numbers) on top of mesos/marathon, which is a real PITA. Make sense?

Honestly, I believe that most people think in terms of "instances of services”. It gives us something to hang our hat on. We say "Node 7 is acting up — what’s up with that." We don’t say "Node bfa1c68ce497 is acting up” — particularly when that name changes every time we redeploy a new version!

Yeah sure, maybe you don’t name your cattle like that. But really, I think we all kinda do. (I’ve never raised cattle. But I have raised chickens, and while they certainly weren’t pets, I could tell them apart. And it was the same chicken whether it was in the yard or in the coop :~)

Most of us don’t run 1000s of Foos. We run 10s or 100s. And clustered solutions (e.g. elasticsearch, cassandra, a bank of proxy servers, …) often want us to conceptually identify Nodes, so we can do things like traffic shaping (e.g. hot spots are routed to specific Nodes, etc). I don’t particularly care where Node 7 lives, but it is servicing only XYZ or is operating on this set of Shards.

I like to think of these things as workers — not cattle vs. pets. (Personally, I think the whole cattle/pets analogy misses the mark somewhat.) My workers should be relatively interchangeable — think check-out people at the big-box-store. But they do have names. And if Bill is working aisle 1 today and aisle 2 tomorrow, I don’t care. But I do care about Bill’s productivity, or whether he died last night. And it would be problematic if every time Bill worked at a different aisle, he had a different name…

Thanks, — Chris

jamesbouressa commented 9 years ago

Instances may live like cattle, but we need to treat them a little bit like pets when they get ill. Even real cattle are numbered.

Even if this were only to function as a way to make it easier for humans to keep instances straight in their heads for a few minutes, it would be worth the effort (as the DNS was, and for much the same reason). Human-friendly naming imposes no burden upon automation, and it eases the cognitive load on the humans involved.

air commented 9 years ago

Hey @BenWhitehead, do you have thoughts on this? We were discussing similar issues recently.

eyalfink commented 9 years ago

+1 While the 'put N identical replicas' pattern to increase the load capacity of your service is common, there is a no less common pattern of 'shard your data into N pieces and put an instance per shard'. In fact I'm quite sure that the latter is more common with services that deal with a lot of data/computation which needs to be served at low latency (e.g. search engines of various sorts).

Without this requested feature, is there a way to have an instance know its 'shard id' and load its own data when starting up?

kolloch commented 8 years ago

@air, @mwasn This should be reasonably easy to implement. Since there is only one Marathon instance that is currently leader and starting tasks, there should be no problems with network partitions (except of course those unrelated to this feature, e.g. that we don't restart tasks in that case).

Implementation Proposal:

sielaq commented 8 years ago

@kolloch does it mean that on restarting (or killing) an application, marathon will remember the INSTANCE_NUMBER? And in case of a restart, would both the new and the old instance have the same INSTANCE_NUMBER for a small amount of time?

BenWhitehead commented 8 years ago

Conceptually it's not difficult for marathon to pass a value as an environment variable. The complicated part is what that value should be and what is done with it during failure scenarios.

For example, what should the instance number be for new instances of an app that are being started as part of an app update? Once the new instances are healthy the old instances will be torn down; should those numbers be re-used, or is it safe to abandon them? If numbers are supposed to be re-used, what are the semantics around re-use?

Here is a more concrete example that exposes some questions: Imagine you're trying to run 5 kafka brokers via marathon. Each broker needs a unique id. Once an id is defined for a broker, that id has to stick around, since the data that has been written to disk is directly associated with that broker and its id. This means marathon would have to keep track of this new metadata and the corresponding association "Broker id 4 maps to mesos slave slave-a.mesos" (not something it currently does). Assuming it could, there are many more challenges that arise when dealing with failure cases. In addition to keeping track of the id for the slave, marathon now has to change its offer evaluation code to effectively constrain "restarting a lost task" to only restart on the slave it was previously running on.

Managing state of distributed systems is a very challenging thing to do well. Marathon (currently) is first and foremost a system for running stateless applications. If your application has a lot of complex state that needs to be managed/coordinated, it would be a good idea to look into what it would take to write a mesos framework, where you will have full control over managing the specific considerations of your app. This is why there are frameworks specific to Kafka, Cassandra, HDFS and other stateful apps.

If you're asking for anything more than "In the history of my app, what task number am I?" I don't think it's a good idea for marathon to support it. Marathon already creates a task id that is available as an environment variable MESOS_TASK_ID that can be used to identify tasks. This task id is a UUID so that it can be identified uniquely across the whole cluster and over its lifetime.

To the point about pets vs. cattle vs. Bill: from the standpoint of mesos the thing running here is a task. Attempting to further map the analogy to Mesos, Bill is a worker (mesos-slave) whose resources, when available (he's at work), are used to perform a task (checking out customers). This task has the same shape of work day-to-day but it is not the exact same every day. It could also be argued that Bill is a pretty stateless task that could easily be taken over by someone else if Bill was no longer able to perform his task for the day (sickness, break, etc).

eyalfink commented 8 years ago

If I understand your concern correctly, I think the problems you are raising can be overcome by leaving these things at the application level and not pulling them into the framework (Marathon) level - instead of supporting a "which task number am I" via the framework, just let the job creation API specify small variations between the replicas' args. For example:

{
    "id": "/product/service/myApp",
    "instances": 3,
    "cmd": "cp /path/to/remote/data/shard_$INSTANCE_NUMBER /local/data && run_my_service --data /local/data",
...

And let $INSTANCE_NUMBER be replaced with a running number for each instance. Now we've defined 3 mesos tasks which are similar but not the same, so if one dies it's clear what needs to be rerun. It's also clear that it's the application's responsibility to make sure it handles restarts, or the coexistence of a task replica due to loss of communication, correctly. In your broker example I would expect the application to deal with the association of the data with ID 4, by writing it to a network location and fetching it from a new task with ID 4, or by being able to create it if needed.
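
Until something like that exists, a rough client-side workaround (a sketch only, using a placeholder Marathon URL) is to create N near-identical single-instance apps through the REST API and bake the shard number into each one:

import requests

MARATHON = "http://marathon.example.com:8080"  # placeholder

for i in range(1, 4):
    app = {
        "id": "/product/service/myApp-%d" % i,
        "instances": 1,
        "cmd": "cp /path/to/remote/data/shard_%d /local/data && run_my_service --data /local/data" % i,
    }
    # POST /v2/apps creates one app per shard number
    resp = requests.post(MARATHON + "/v2/apps", json=app)
    resp.raise_for_status()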

kolloch commented 8 years ago

@sielaq: What I specified would actually not reuse the MARATHON_APP_INSTANCE_NUMBER of a running task. But that might be a problem. I guess after you have upgraded an app successfully, you would expect:

This would be achievable by updating the rules I provided above to only consider tasks with the same configuration.

It would NOT ensure that a task with a certain MARATHON_APP_INSTANCE_NUMBER is respawned on the same slave on upgrade or failure.

There are plans in Marathon to use "dynamic reservations" to allow sticky tasks that are restarted in-place on failure or upgrade. It would definitely be nice if the MARATHON_APP_INSTANCE_NUMBER were preserved in this case. But I would consider that a distinct issue.

xargstop commented 8 years ago

+1

drewrobb commented 8 years ago

+1, my use case is limited to monitoring as well. We just want a way of enumerating app tasks in a way that doesn't have duplicates but is otherwise as small as possible. I think it is also worth noting that when scaling down, this feature would mean that the highest-numbered tasks would need to be terminated first. That might make satisfying placement constraints difficult. Also, when deploying, if upgradeStrategy.maximumOverCapacity > 0 you have a problem. (I wouldn't actually care about these aspects of correctness, but I'd assume others would.)

cberry777 commented 8 years ago

I agree that “instance numbering” is not all that simple when one considers failure scenarios, but I don't think that that necessarily means it isn’t worth doing. In fact, I think that a “best effort” solution is completely adequate. I have enumerated some scenarios below.

For those apps that don’t care about instance-naming, they can simply ignore it altogether.

Also I believe that “host affinity” is a separate concern (although related by the common underlying use case); I do think this is another valuable addition to the ecosystem.

AFAICT, the different scenarios for instance numbering are as follows. Let’s assume the initial mapping: 1=>ab, 2=>bc, 3=>cd

A) we scale down: bc is destroyed (2 is now free). So 1=>ab, 3=>cd

B) we scale back up, adding de and ef. We reuse the free slot (2) and add a new one. So 1=>ab, 2=>de, 3=>cd, 4=>ef

C) de & ab die, and are replaced by aa, bb. So 1=>aa, 2=>bb, 3=>cd, 4=>ef

D) Version X --> Y
D1) Spin up Y: 1=>aa, 2=>bb, 3=>cd, 4=>ef, 5=>cc, 6=>dd, 7=>ee, 8=>ff
D2) Bring down X: 5=>cc, 6=>dd, 7=>ee, 8=>ff
That would result in 2X the buckets... the next roll would reuse 1,2,3,4, but if you "roll in place" then it would stay 1,2,3,4 always.

Optionally, if you have blue/green:
Blue: 1=>aa, 2=>bb, 3=>cd, 4=>ef
Green: 5=>cc, 6=>dd, 7=>ee, 8=>ff
and we "flip" Blue to Green with a "live" alias (as is often done in Elasticsearch).

About Mesos frameworks (per Ben's comment above)— my problem with them is that they are often not layered on top of Docker. They use whatever OS, JVM, etc that is already host resident. Docker’s promise and raison d’être is to bring repeatability all the way down to the OS level. We have all been bitten by an OS that has a different set of patches, or has swap turned on, etc. IMHO, when we step away from that vision, it is a step backwards.

Cheers, — Chris

rasputnik commented 8 years ago

It sounds like people are discussing two different use cases here. I'd also dearly love a way to get metrics consolidated for an app rather than at either the task or slave level.

But the re-using of instance numbers seems a bit wrong - taking the cattle/pet analogy, this is akin to renaming your new cat 'Mr Tiddles' because that was the old cat's name.

Doesn't anyone else think it might be confusing to operators to notice Mr Tiddles suddenly grew his leg back and lost 10 pounds?

memelet commented 8 years ago

I am looking for INSTANCE_NUMBER to be able to assign the correct Flocker volume. Maybe this will be handled some other way soon?

bydga commented 8 years ago

Hi, I think this would be a really nice feature and I have another use case:

we are logging app metrics (cpu, rss, event-loop hangs, etc...) into Graphite. Our app is usually a long running service with a stable instance count between 2 and 8 instances. The metrics mentioned definitely need to be logged per instance (= per task in Marathon terminology). And when one of the tasks fails/restarts/whatever, we want the line in graphite to continue. We definitely don't want to have hundreds of metrics in Graphite (it's difficult to read them and it takes too much disk space).

So this feature would be really helpful - one sequential number that gets recycled (if it's free) on a new task start.
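
For example (a sketch with placeholder names), reporting under a path built from such a number keeps the Graphite series stable across task restarts:

import os
import socket
import time

instance = os.environ.get("INSTANCE_NUMBER", "0")  # hypothetical variable
rss_bytes = 252 * 1024 * 1024

# Graphite plaintext protocol: "<path> <value> <timestamp>\n"
path = "myapp.instance%s.rss" % instance
sock = socket.create_connection(("graphite.example.com", 2003))
sock.sendall(("%s %d %d\n" % (path, rss_bytes, int(time.time()))).encode())
sock.close()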

pgkelley4 commented 8 years ago

We would also like to use this feature for monitoring, similar to @drewrobb, @cberry777, @bydga and @rasputnik. But I agree with @rasputnik that it doesn’t make sense for new tasks to get the same ID as another already running task. An INSTANCE_NUMBER should only be reused if the old task has been killed. So it sounds like we have two use cases described in this thread:

  1. Assign the lowest available instance number for new tasks, and it’s not a problem if during updates those numbers go higher than the instance count.
  2. Assign the lowest available instance number for new tasks, and new tasks should use the same numbers as old tasks, never going higher than the instance count.

I think we can cover both of these cases if we are careful about how we handle updates. @Kolloch, your initial proposal is close to what we want. To reiterate with some clarification:

Use case 2 can be met by constraining the number of tasks. This can already be done by setting maximumOverCapacity = 0 and minimumHealthCapacity < 1. This way it would tear down an existing task to make room for the new task and the INSTANCE_NUMBER would effectively be transferred to the new instance.
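
In app-definition terms that is roughly the following fragment (expressed as a Python dict for brevity; values are examples):

app_update = {
    "upgradeStrategy": {
        "minimumHealthCapacity": 0.5,  # allow half the tasks to be replaced at a time
        "maximumOverCapacity": 0,      # never exceed the configured instance count
    }
}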

air commented 8 years ago

The ask here is for Marathon to manage state on behalf of an app. The state is a map of instance numbers to running containers. Before now, good practice was that apps should use a coordination service like ZK/etcd for that shared state. This kept Marathon itself relatively state-free.

  1. It would be great to have a tutorial showing how you can achieve this with ZK/etcd (see the sketch after this list).
  2. Going forward I think we can satisfy this use case and keep complexity out of core Marathon by implementing this as an optional plugin.
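
A rough sketch of the ZK variant (using the kazoo client and a placeholder ensemble address): each task claims the lowest free slot with an ephemeral znode, so the slot is released automatically when the task dies and its session expires.

from kazoo.client import KazooClient
from kazoo.exceptions import NodeExistsError
import os

zk = KazooClient(hosts="zk.example.com:2181")  # placeholder ensemble
zk.start()
zk.ensure_path("/myapp/slots")

instance_number = None
for slot in range(1, 11):  # "instances": 10
    try:
        # The ephemeral node disappears when this task's session ends,
        # freeing the slot for the replacement task.
        zk.create("/myapp/slots/%d" % slot,
                  os.environ.get("MESOS_TASK_ID", "").encode(),
                  ephemeral=True)
        instance_number = slot
        break
    except NodeExistsError:
        continue
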
matiwinnetou commented 8 years ago

+1

fafuisme commented 8 years ago

ZK/etcd is too heavy in most situations, I think.

cberry777 commented 8 years ago

Marathon already manages state (# of Instances, etc.). This additional, very small bit of state would be extremely helpful for the several use cases described above. I think a simple, best-effort solution would meet everyone’s needs. As I said previously, we have already had to implement something outside of Marathon for Instance numbers — this is a real world problem after all — but it is clunky and bolted on. Instead, IMHO, this should be something that an Instance is self-aware of.

apognu commented 8 years ago

+1.

Was looking for something like this that would allow for mounting the same Ceph volume on a Docker container and its replacement.

Having something like INSTANCE_NUMBER would solve that use case.

rbjorklin commented 8 years ago

I think this might be related to #717 and #1899

SEJeff commented 8 years ago

@rbjorklin: not really, no. My original request has very little to do with either of those. I want a unique instance number, not the task name.

rbjorklin commented 8 years ago

I wrote a really hacky work-around if anyone is interested. Wouldn't recommend it for production use but would appreciate some feedback on how/if it's working for you. https://github.com/rbjorklin/marathon-instance-tracker

cberry777 commented 8 years ago

I would like to reiterate the case for Instance Numbers, since this issue seems to be getting nowhere.

When one uses a metrics collection mechanism such as statsd, collectd, graphite, dd_agent, etc. — running either on every host or centrally — there must be a way to uniquely and consistently identify a given instance of a service.

Let’s look at the problem with a specific use-case using the DataDog (DD) collector: dd_agent as the example

Say we have the application-environment combination (i.e. "App/Env"): serviceX/UAT — which produces the metric foo.bar on Host A. In dd_agent, this should become serviceX.foo.bar — with the tags ["env:UAT", "host:A"]. And within DD, this is a unique metric (a unique time-series).

If we have, say, serviceX/SIT on the same machine -- then we have {serviceX.foo.bar, ["env:SIT", "host:A"]}. Again a unique metric ("time-series") in DD. So everything is fine.

BUT if I want TWO or more serviceX/UAT on Host A — which, as we all know, is very likely in a container-driven world — then I MUST have a way to delineate them. This is because the dd_agent AGGREGATES metrics. So both serviceX/UAT foo.bar metrics would aggregate together on Host A (or within DD itself), and we would see a “super-instance” (i.e. a double counting), rather than 2 instances contributing to the whole.

So what we need is:
{serviceX.foo.bar, ["env:UAT", "host:A", "iid:01"]}
{serviceX.foo.bar, ["env:UAT", "host:A", "iid:02"]}

Which are again 2 unique time-series
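
With the dogstatsd client that would look roughly like this (the iid tag is the hypothetical instance number, the values are made up):

from datadog import statsd

# Two tasks of serviceX/UAT on Host A stay distinguishable via the "iid" tag.
statsd.gauge("serviceX.foo.bar", 42, tags=["env:UAT", "host:A", "iid:01"])
statsd.gauge("serviceX.foo.bar", 17, tags=["env:UAT", "host:A", "iid:02"])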

Yes. I know that we could use the Mesos TaskId to delineate Instances, but IMHO that is going to be a PITA over time.

If you have 2 instances of serviceX that you roll twice every week, then you will have 2x2x52 = 208 unique time series per year within DD, for the simplest example. That works, but I think it will be really awkward. You will have tags like “iid:ababababababababababab”, without any clear way to ask only for the collective “01”.

Instance Number is a LOT cleaner/clearer here. Instead of 208 labels in the legend, you'd see 2. After all, you are never running more than 2 Instances in our example.

This doesn’t make this a Pet. It’s just bookkeeping. And it helps the human better organize their information.

bydga commented 8 years ago

@cberry777 There's another thing you forgot to handle:

After all, you are never running more than 2 Instances in our example.

Actually yes, you are - because of upgradeStrategy. During a deploy/restart, you never want to stop all your instances first (have downtime) and then start the new ones.

That led us to another logging approach - we log per-task metrics (RSS, CPU, etc...) just for a short period of time (1 or 2 weeks) and then we aggregate them (avg, min, max) for the whole application over a long time range (months/years) - to have the "trends". You usually don't need to know (over the long term) exactly that your 4 instances ran at [40%, 30%, 30%, 30%] cpu and [252 MB, 260 MB, 228 MB, 240 MB] RAM. You should be just fine with avg stats like 32.5% and 245 MB RAM (plus we are saving min and max values as well).

sielaq commented 8 years ago

@bydga - not true. You are assuming that all mesos slaves are similar. If you consider that some hosts are VMs and others are bare metal, and that some have different kernel / docker / mesos / (whatever) versions, then it might be useful to know which instance was behaving abnormally, and when. Sure, the instance number is not going to help you spot it - but it is going to help you with storing data in any logging system.

sielaq commented 8 years ago

I think this task is easy to implement, but somehow has very low priority for the marathon development team.

the easiest solution - without logic

(but still helps with keeping logs)

  1. Pre-situation:
     instance a - INSTANCE_NUMBER=1 (healthy)
     instance b - INSTANCE_NUMBER=2 (healthy)
  2. Restart application requested (constraints host uniq):
     instance a - INSTANCE_NUMBER=1 (healthy)
     instance b - INSTANCE_NUMBER=2 (healthy)
     instance c - INSTANCE_NUMBER=3 (not-healthy-yet)
     instance d - INSTANCE_NUMBER=4 (not-healthy-yet)
  3. Post restart application task (kill a and b):
     instance c - INSTANCE_NUMBER=3 (healthy)
     instance d - INSTANCE_NUMBER=4 (healthy)

---- so the easiest solution: switching between 1,2 / 3,4 (or 1,2,3 / 4,5,6, etc.) - take the smallest free slot and use it

variant B - with logic:

  1. Pre-situation:
     instance a - INSTANCE_NUMBER=1 (healthy)
     instance b - INSTANCE_NUMBER=2 (healthy)
  2. Restart application requested (constraints host uniq):
     instance a - INSTANCE_NUMBER=1 (healthy)
     instance b - INSTANCE_NUMBER=2 (healthy)
     instance c - INSTANCE_NUMBER=1 (not-healthy-yet)
     instance d - INSTANCE_NUMBER=2 (not-healthy-yet)
  3. Post restart application task (kill a and b):
     instance c - INSTANCE_NUMBER=1 (healthy)
     instance d - INSTANCE_NUMBER=2 (healthy)

---- so the harder solution - you need to be more clever when scaling up / down or deploying more/fewer instances than before, knowing which one was killed / which slot is free, etc.
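
In either variant the core operation is just "take the smallest free slot"; a minimal sketch:

def next_instance_number(in_use):
    """Return the smallest positive integer not currently assigned."""
    n = 1
    while n in in_use:
        n += 1
    return n

# e.g. with tasks 1, 2 and 4 running, the next task gets slot 3
assert next_instance_number({1, 2, 4}) == 3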

wleese commented 8 years ago

+1. Needed for admitting graphite into the New World.

wleese commented 8 years ago

Found this workaround using terraform and terraform-provider-marathon:

resource "marathon_app" "app_template" {
  app_id = "/myapp/${count.index}"
  container {
    docker {
      image = "${var.registry}/${var.app_name}:${var.version}"
    }
  }
  env {
    INSTANCE_ID = "${count.index}"
  }
  cpus = 0.01
  instances = 1
  count = 2
}

This works for us because we don't use marathon to do instance scaling, but raise the count in terraform instead. That's the only downside I can think of at the moment.

marenzo commented 8 years ago

+1

ottoyiu commented 8 years ago

+1 for metrics gathering like graphite.

fxgsell commented 8 years ago

+1

raphtheb commented 8 years ago

+1'd as well.

Radek44 commented 8 years ago

Adding a scenario in line with this request. We have tasks deployed with Marathon that we want to scale, but with the caveat that when we scale a given process we need to make sure it talks to a given queue (without going into details, ordering of items in the queue is important, so only 1 process at a time should be consuming the queue). For example, let's say we have 2 queues:

We now deploy a task called queue.consumer

We want to scale queue.consumer using Marathon to 2 instances. But now we would want to make sure that queue.consumer-Instance1 talks to queue.01 and queue.consumer-Instance2 talks to queue.02

It would be great if there was a way in Marathon to either:

  1. Get the information on the task itself (from an env variable for example) on which instance number it is (1 or 2)
  2. Pass a dynamic env variable on scaling, for example by setting a script that sets ENV_QUEUE_TO_LISTEN to queue.{%i}, where {%i} is the number of the instance (see the sketch below)
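
For illustration, a minimal sketch of option 1 (INSTANCE_NUMBER is the hypothetical variable, and the queue names follow the scenario above):

import os

# Hypothetical instance number (1 or 2) maps each consumer to exactly one queue.
instance = int(os.environ["INSTANCE_NUMBER"])
queue_to_listen = "queue.%02d" % instance   # -> "queue.01" or "queue.02"
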
air commented 8 years ago

Also see Cardinal Service idea in Kubernetes https://github.com/kubernetes/kubernetes/issues/260#issuecomment-57660731

...which on further reading became the PetSet proposal https://github.com/smarterclayton/kubernetes/blob/petset/docs/proposals/petset.md

cherweg commented 8 years ago

+1

Krassi10 commented 8 years ago

+1

harpreet-sawhney commented 8 years ago

+1

samwiise commented 8 years ago

+1

krestjaninoff commented 8 years ago

+1

air commented 8 years ago

Good news everyone! This is officially on the radar and we'll look at prioritizing it. Thank you for all the excellent use case examples. Internal tracker https://mesosphere.atlassian.net/browse/MARATHON-983