Closed jmmills closed 7 years ago
@jmmills Can't you start your containers with something like "docker-compose scale worker=5", so that service would start with 5?
@aanm Yes, but I think that functionality should be mirrored as a default in the service definition. Maybe I have a minimal set of workers that should always be running and I want that to be clearly declared as a default.
@aanm Do you expect that a docker-compose up should take the scale parameter into account? The only problem I see with this is that we're introducing parameters into the declarative configuration that are NOT compatible with the underlying Docker / Docker API concepts but are very specific to Docker Compose.
If we were to do this going forward, I'd suggest something like:
worker:
  build: rqworker
  $scale: 5
  links:
    - redis
  command: rqworker -u tcp://redis
Where $<parameter> denotes a Docker Compose-specific thing.
We've gone back and forth on whether scale belongs in the YAML; there's been a PR to add it before (#630). I'm still of the opinion that most people shouldn't put scale numbers in their config, because it makes it less portable, but I understand the use cases for doing so.
Now that we've got a rudimentary way to augment Compose files with extends (and hopefully better ones soon - #1380, #758), the concerns I raised in https://github.com/docker/compose/pull/630#issuecomment-69210279 are perhaps less of an issue.
I'd like to set scale=0 in my yml for test-related services that I don't normally want started. I only want to create those services explicitly with docker-compose run or an explicit scale.
@jamshid I've often wanted that: a definition that sets up an environment but doesn't run by default. I've been relegated to creating a base image (which a zero/no-op scale would also help with) in which I run my unit tests (via docker run), and then my container composition consumes the base image.
Something like this seems pretty useful for dev configurations:
myproject:
  build: .
  command: nosetests
  scale: 0
  links:
    - redis
redis:
  image: redis
apiserver:
  image: myproject
  command: plackup
  links:
    - redis
workerserver:
  image: myproject
  command: rqworker
  links:
    - redis
@jamshid @jmmills What about an enabled parameter/key in the YAML file per service, such that you can disable/enable a service?
@prologic Why do that when a "scale" parameter would solve both needs?
If you want to imagine a running process/container as an instance of a class, one could even name it instances.
@jmmills I'm just trying to find a solution to your use case that doesn't involve breaking the current docker-compose as such. I do tend to think scale=0 doesn't seem that fitting, and I'm in two minds about whether scale=X should even be part of Compose itself.
In my opinion, scale (or the number of copies) is part of the composition of a service, and thus should be included in Compose.
Well, I think we either have a scale=0 or a disabled key.
:+1: on having the capability of setting a default scale size for an instance. And I agree: once scale is in, there is no need for a disabled key, as you'd simply set scale to 0.
+1
Also, another use case: what if I want to scale the number of containers but don't want to background all of the services, or have to jump over to another terminal (or process) and set my scale numbers... e.g.:
$ docker-compose up && docker-compose scale thing=4
Doesn't work, because up doesn't exit. But if my composition file sets the scale of my containers...
$ docker-compose up
Becomes DWIM.
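For what it's worth, a workaround along these lines is possible today by running up detached first (a sketch; the service name "thing" is the hypothetical one from the example above):

```shell
# Start the composition in the background, then set the desired scale.
docker-compose up -d
docker-compose scale thing=4

# Attach to the logs afterwards if foreground-style output is wanted.
docker-compose logs
```

This still requires two steps, which is exactly the ergonomic gap a scale key in the YAML would close.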
I'm not sure that I really like this; all of a sudden up takes on two capabilities: starting containers, and scaling them via the scale parameter. We're really abusing the "up" command now. "Scale" also takes on new meaning, in that it now does two things: setting how many containers run, and (with scale=0) disabling a service entirely.
Why would up bring up containers with a scale=0? Build would build images with a scale=0, thus facilitating the base-image need.
I could be wrong, but reading your last comment it kind of implied that :)
Let me elaborate:
base_thing:
  build: .
  scale: 0
thing:
  image: base_thing
  # scale: 1 implied by default
workers_for_thing:
  image: some_other_image
  scale: 4
  links:
    - thing
test_harness:
  image: base_thing
  command: nosetests --where=my_test_code_dir --all-modules
  scale: 0
Now, expected behavior: docker-compose build builds any services with "build" (does Compose pull in external images on build? I don't remember), docker-compose up would run anything with a positive scale (default is 1), and docker-compose run test_harness would build "base_thing" if needed and run my one command. Savvy? :)
Edit: docker-compose up would run 1 "thing" and 4 "workers_for_thing".
Okay :) Thanks; your example makes a bit more sense and is a bit clearer as to the intention of scale=0.
I think docker-compose "pulls" images on up / run.
I need to create a recipe that indicates the number of instances (scale) for test, production, QA, etc.
+1 for scale=X. This would be very helpful.
And +1 for @jmmills' comment with the configuration description and expected results.
Yay! for scale=x. Initializing a set of containers would definitely help to identify potential race conditions when setting up cluster configurations.
+1 for scale=x (including scale=0 to disable services for the initial docker-compose up).
+1 for scale=x.
x is NaN; I would propose -1 instead.
+1 for scale=x.
+1
+1
How about we stop with the +1's, please?
+1'ing is useful to see the level of interest for a feature.
@shofetim I know a better way to do just that: implement the feature in question and send out a pull request...
+1'ing is also a good way to see people agree on a proposed solution. It's pretty common behavior across GitHub. Clicking the unsubscribe button on notifications will turn these off if that's a problem.
Well, it looks like people like this. There is a similar item in the compose backlog (left over from Fig), I'm pretty sure I made a comment on it at some point. I'll try and follow up with it later tonight. I'm at PuppetCon most of this week, so hopefully that affords me some hack time - I'll see if I can write this.
Here is a workaround for the "scale=0" use case:
app:
  build: .
  environment:
    - DATABASE_URL=postgres://user:password@host:5432/dbname
  command: sleep 999999999999
app_web:
  extends:
    service: app
  ports:
    - "3000:3000"
  command:
  # intentionally blank to use Dockerfile's default RUN command
app_worker:
  extends:
    service: app
  command: rake jobs:work
@wkonkel yeah, I've done similar things in the past.
I'm currently working on familiarizing myself with the Compose codebase; once I know where all the things are, I'll hack up a PR for a scale configuration parameter. It doesn't seem like it's going to be too hard: there is a scale method on a project which is the backend for the CLI interface, so all I should really have to do is add "scale" to the field schema, make sure that if it's present it gets applied after container creation, and then make sure a container doesn't run if it's set to zero.
There is actually a really old PR open for this: #630.
The problem is that scale is an operational concern, it's not part of the application definition, so it doesn't really fit with the compose file.
It would be nice to support a configuration for a default scale level, but I don't think the compose file is the right place for it.
The case of scale: 0 should already be addressed by #1754. A "no-op" container can just have a command that exits right away (echo, true, etc.). The cases for wanting scale: 0 are usually one of two: data-volume containers, or "adhoc/admin tasks".
Pretty soon we shouldn't need data volume containers because volumes are getting new API endpoints, and we'll be able to define volumes without the need for a container.
Administrative tasks are better handled with #2051. You can define an admin.yml which extends the core docker-compose.yml and allows you to link your administrative tasks to the "core composition" without muddying the definition of each.
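As a sketch of that layout (the file name admin.yml is from the comment above; the service name, core service "app", and migrate command are hypothetical examples), the admin tasks live in their own file and extend a service defined in the core one:

```yaml
# admin.yml -- administrative one-off tasks, kept out of the core compose file.
migrate:
  extends:
    file: docker-compose.yml   # the core composition
    service: app               # hypothetical core service to inherit from
  command: rake db:migrate     # hypothetical admin task
```

You would then run such a task ad hoc with something like `docker-compose -f admin.yml run migrate`, leaving a plain `docker-compose up` untouched.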
Both of these changes are in master now and will be available in the 1.5.0 release (which is right around the corner).
So that only leaves the case of wanting to scale a service to > 1 by default. It's already pretty easy to script something like this, but being able to put it in a config file would still be nice. I think we're going to explore the idea of a separate config in #745. It is my opinion that this new config would be a good place for things that are not part of the application definition (project name, default network name, default scale, etc).
Respectfully, I disagree with scale being only an operational concern. Applications can care about the minimum count of services running.
As for the no-op container, it feels kludgey to actually run a container when the purpose of that container is to trigger a base image to be built, which other containers then use for their image field.
Applications can care about minimum count of services running.
Could you give an example of this case?
As for the no-op container, it feels kludgey to actually run a container when the purpose of that container is to trigger a base image to be built, which other containers then use for their image field
Is there a reason the base image needs to be part of the same build? As I call out in #1455, Compose is not primarily a "docker build" tool. Its goal is to provide a composition of containers at runtime. Trying to support every possible build scenario greatly increases the scope of Compose and takes the focus away from container composition. It would be difficult for even a tool designed around building images to support every one of these requests. I think a better direction is to keep as much of the build complexity out of Compose, and let users swap in the appropriate build tool in place of docker-compose build.
The use case I care about is scale=0 (perhaps abstract=true would be a better descriptor). I want to share images and environment variables amongst different commands. Specifically I want one web server running and one background jobs server running, both with the same code and both with the same environment variables.
@wkonkel using your example, I guess this would also work?
app_web:
  build: .
  ports:
    - "3000:3000"
  environment:
    - DATABASE_URL=postgres://user:password@host:5432/dbname
app_worker:
  extends:
    service: app_web
  command: rake jobs:work
  ports: []
You trade having an abstract service with a no-op override on command for no abstract service with a no-op override on ports. Does that sound right?
@dnephin Yes, that does work in my case.
Actually, I take that back... I just tried it with docker-compose 1.4.2, and it seems that "ports: []" doesn't override the parent, so app_worker fails to start with "port is already allocated".
@dnephin The thread maybe has a better explanation, but I'll attempt to articulate it here.
The first that comes to mind is job systems, where separate containers running the same code could be modeled like a parent->fork()->child pool type of service, in which the default application configuration wants a minimum number of workers for concurrency.
My inspiration for this came from an app that uses RQ workers attached to different queues, sharing a base image that contained my Python packages (worker code), but with multiple instances running for each queue. A minimum level of concurrency was a requirement of the application because of long-running jobs.
I just think that having a no-op command seems like a waste of resources just to get a shared base image built in the same way as the rest of one's application stack. You end up with a while/sleep loop just for a base image, which is a cool workaround, but doesn't seem like an intuitive way to accomplish this; not to mention it leaves an item in our process tree with no runtime function.
If docker-compose is truly not meant to cross over into the domain of codifying image build relationships, then maybe the build option should go away, and some other build definition system should be created, so that I can define a build target for a base image and then other images that consume that base with a few modified artifacts/configuration files, all in the correct order.
Or maybe I'm just being too opinionated and should wrap my docker-compose commands in shell scripts to start up my services with scale, and define all my image builds as make targets with dependencies.
I just think that having a no-op command seems like a waste of resources just to get a shared base image built in the same way as the rest of one's application stack.
With #1754 (in the next release) that isn't necessary anymore. A service can exit and it won't stop the rest of the services. So you can have the base just exit and not worry about it.
@dnephin Cool, so would you then provide a link to that base/intermediate container in order to make sure it gets built first?
@dnephin: I have a CI runner which does the equivalent of docker-compose up. I want my test environment to run with multiple instances of a service (i.e., scale). I could copy the whole configuration block, but this would involve repeating myself. In this case it isn't "just an operational concern"; it is something I need in my development environment while I develop a clustered application, which is currently fully described by a compose file. At the moment I would have to have some out-of-band scale configuration and then somehow invoke docker-compose scale, I suppose, but this doesn't seem ideal and introduces further opportunities for failure and racing.
In production you may want to start your services with a minimum scale. Say, for example, you are migrating from one cluster to another (and, to keep the example simple, let's set aside the hard parts of the migration, such as copying data out); you really need to handle some traffic from the start, so you need docker-compose up to deploy at least n instances of some service, say web.
Having scale under the service in the config file would really handle that. It's also a usability point, since the use case does in fact expose the requirement of having at least n instances of some service running from the start, not just one.
From my point of view, scale is in fact a defining parameter of a topology and of composing.
@dnephin
Could you give an example of this case?
Consul, MariaDB master-master, or any other distributed app on Swarm really needs to have at least 3 nodes in the cluster to be reliable. There are definitely use cases for the number of instances being set in the config file; I do not understand why you are so against it. Big :+1: from me here.
I'm not against a way to configure scale, I just don't think it belongs in the compose file because it is state that changes independently from the compose file.
Take this example, which assumes we add some scale config to the compose file:
1. I run docker-compose up -d, and my service scales to 3.
2. I run docker-compose scale service=4.
3. I run docker-compose up -d again... What happens? Does it down-scale to 3 again? Does it ignore scale entirely now?
Neither of these scenarios sounds good or appropriate, and this is just a trivial example which ignores instance failures (which make it even more complicated).
If it were to get added to the compose file, I think we'd want to remove the scale command, so they can't conflict.
The example you give is an example of an operational requirement. You don't need multiple masters to run the application; you need them for reliability (operation) of the service. And you can still accomplish that with scale db=x.
As a user of Compose (formerly Fig), I would like to be able to specify the number of nodes started for any given definition (a.k.a. scale) from inside the manifest (the YAML configuration file), so that I can ship my cluster definition with my service orchestration.
E.g. syntax:
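The comment breaks off at the example, but presumably the proposed syntax would look much like the per-service scale key floated earlier in the thread (service names and values here are hypothetical):

```yaml
# Hypothetical syntax: a default scale per service, applied by `docker-compose up`.
web:
  build: .
  scale: 3        # start at least 3 web instances
  links:
    - redis
worker:
  build: .
  scale: 5        # minimum worker pool for concurrency
redis:
  image: redis    # scale: 1 implied by default
```
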