Closed deitch closed 7 years ago
> Is the recommended pattern to configure `containerpilot.json` to be something like this:
Yes!
Alternately, you can use tags for the services and then just make sure your backend `onChange` handlers respect those tags. (We should probably include them in the check for changes, but this would be a Consul-only setting which wasn't available to us in previous versions.)
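For illustration, a minimal sketch of what the tag approach might look like in a ContainerPilot config of that era (field names follow the v2-style JSON config; the `serviceA` tag, health check, and intervals here are hypothetical):

```json
{
  "consul": "consul:8500",
  "services": [
    {
      "name": "mysql",
      "port": 3306,
      "tags": ["serviceA"],
      "health": "mysqladmin ping",
      "poll": 10,
      "ttl": 25
    }
  ]
}
```

An `onChange` handler watching the `mysql` backend would then filter the returned instances by tag before rewriting its own config.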
> Yes!
Thanks Tim.
> but this would be a Consul-only setting which wasn't available to us in previous versions
Yeah, I was wondering about that, but then the other proposed issue to eliminate the other backends, given Consul's strength with services beyond a basic KV store, clarified it.
OK, right now working on my own variant, which replaces `"mysql"` with `"{{ .SERVICE_NAME }}"` (well, with a fallback to `"mysql"`).

Yeah, I know, Manta is Joyent, and ContainerPilot is Joyent, and I still think Joyent engineers are among the smartest I have seen, but the real world out there has clients who are AWS-centric.

If you want a PR, let me know.
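One way to get that fallback behavior, sketched as a hypothetical entrypoint wrapper (the script and the ContainerPilot hand-off path are assumptions, not the actual blueprint code):

```shell
#!/bin/sh
# Hypothetical entrypoint sketch: default the service name to "mysql"
# when the operator doesn't override it with `docker run -e ...`.
export SERVICE_NAME="${SERVICE_NAME:-mysql}"
echo "registering as service: ${SERVICE_NAME}"
# exec /usr/local/bin/containerpilot mysqld   # then hand off to ContainerPilot
```

The `${VAR:-default}` expansion covers both unset and empty values, so an accidental `-e SERVICE_NAME=` still falls back to `mysql`.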
> Yeah, I know, Manta is Joyent, and ContainerPilot is Joyent, and I still think Joyent engineers are among the smartest I have seen, but the real world out there has clients who are AWS-centric.
We're trying to be as platform-agnostic as possible with ContainerPilot and the various Autopilot Pattern blueprints. They really should work anywhere so don't worry about hurting our feelings about that! 😀
> If you want a PR, let me know.
If you're talking about snapshots for autopilotpattern/mysql, you might want to take a look at https://github.com/autopilotpattern/mysql/issues/35. I want to get to that but it isn't on the top of my personal TODO list. Would love a PR on that.
> They really should work anywhere so don't worry about hurting our feelings about that
Heh, I know you guys, I am not worried at all about hurting your feelings. Great technologists don't get hurt about suggestions, even if they stomp on a toe or ten.
> Would love a PR on that.
So I have 2 sets of changes here.

The first replaces the hard-coded `"mysql"` service name with a configurable one. Tiny, but does the job.

The second adds a `BACKUP_DRIVER` setting for snapshot storage. Of course, it has to have the library installed, which means my version installs `boto` in addition to `manta` via pip. And NFS would mean adding more, and WebDAV, and SMB, and git (I created a simple mysql-backup image that stores nicely to git, seriously).

The right thing to do is have something fully pluggable at run time, as opposed to image build time, but if you are willing to tolerate it at build time for now, then happy to submit the PR. You tell me.

And, yeah, I know, this issue should now be on https://github.com/autopilotpattern/mysql/issues, but it was a general question at first. :-)
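To make the run-time-vs-build-time trade-off concrete, here is a rough dispatch sketch. The actual blueprint's backup code is Python using the Manta library; this shell sketch is purely illustrative, and the `snapshot_put` function, paths, and `mput`/`aws` invocations are all hypothetical:

```shell
#!/bin/sh
# Hypothetical sketch: choose the snapshot storage backend at run time
# via BACKUP_DRIVER instead of baking one driver in at image build time.
# It only echoes the command it would run; no upload tools are required.
snapshot_put() {
  file="$1"
  case "${BACKUP_DRIVER:-manta}" in
    manta) echo "would run: mput -f ${file} /backups/" ;;
    s3)    echo "would run: aws s3 cp ${file} s3://backups/" ;;
    *)     echo "unknown BACKUP_DRIVER: ${BACKUP_DRIVER}" >&2; return 1 ;;
  esac
}

BACKUP_DRIVER=s3 snapshot_put /tmp/snapshot.tar.gz
```

Each new backend still means another client library baked into the image, which is why a fully pluggable design points toward a separate storage-proxy container.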
One other piece, side note. I found it confusing figuring out the pattern between `consul:` in the config and the consul agent coprocess. I figured that the containerpilot binary has a consul client built in, which in turn can talk to an agent or the server (subject, of course, to the problems with the service catalog if you talk directly to a server and lose it, vs. talking to an agent which monitors locally).

So `consul:` tells the client "here is the `host:port` of the consul (agent or server) you should talk to", while the coprocess starts up an in-container agent.
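To make the two pieces concrete, a hedged sketch of a config that uses both (the `coprocesses` shape follows the ContainerPilot v2-era config; the consul agent flags and join address are assumptions):

```json
{
  "consul": "localhost:8500",
  "coprocesses": [
    {
      "command": ["consul", "agent",
                  "-data-dir=/data",
                  "-retry-join=consul.example.com"],
      "restarts": "unlimited"
    }
  ]
}
```

Here `consul:` points ContainerPilot's built-in client at the local agent, and the coprocess is what keeps that agent running inside the container.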
> One other piece, side note. I found it confusing figuring out the pattern between `consul:` in config and the consul agent coprocess.
Yeah, that's an unfortunate consequence of the no-host-local-services model you have w/ containers on Triton (or any PaaS). We're looking to come up with a less clunky solution in https://github.com/joyent/containerpilot/issues/246
> The right thing to do is have something fully pluggable at run time, as opposed to image build time, but if you are willing to tolerate it at build time for now, then happy to submit the PR.
I think the suggestion in https://github.com/autopilotpattern/mysql/issues/35 is to have a container that proxies these backends, in which case either build time or run time would be fine (and could be improved in a second go later on).
> Yeah, that's an unfortunate consequence of the no-host-local-services model
The idea of discovery is cool, makes sense. Kind of redundant, "service discovery to my service discovery"? :-)
I don't mind at all the running local, actually, although in another client project, we did registrator+consulagent+traefik on the host.
It was just the confusion of knowing why and what each was for.
> I think the suggestion in autopilotpattern/mysql#35 is to have a container that proxies these backends, in which case either build time or run time would be fine (and could be improved in a second go later on).
Oh, so you run 2 containers, and mysql talks to a "sidekick" that just exposes a standard storage API and talks whichever backend you want? I like it, but it isn't instant.
In any case, do you want either of these? Commits are on my fork at https://github.com/deitch/autopilotpattern-mysql/commits/pluggable-snapshot-storage and https://github.com/deitch/autopilotpattern-mysql/commits/variable-service-name
And, no, I haven't done proper tests yet... :-(
@deitch I may have missed this from the thread, but I'd like to be sure I understand one of the details here:
> I have 3 unique clusters of 3 containers each running mysql. If I use the autopilotpattern/mysql, the service name is baked in as `"mysql"`. That means that instead of 3 endpoints in the consul catalog named, e.g., `"serviceA-db"`, 3 named `"serviceB-db"`, 3 named `"serviceC-db"`, I will end up with 9 named `"mysql"`.
Are the three MySQL services part of the same application, and the same instance of the application? Or, are they part of different environments the application is running in?
**Example of multiple clusters in the same application:** Data in the application is sharded in three separate MySQL clusters. The application connects to all three clusters simultaneously.
**Example of multiple environments:** We have three separate MySQL clusters for dev, staging, and prod. The data set for the dev and test clusters is taken from backups of the prod instance (the backups may be sanitized). It is critical that nothing leak between environments, such as a production application connecting to the dev database or vice versa.
The discussion and proposed solutions I read through in here are very good for the first case, but I'd suggest they're less suitable for the second.
For that second case, I'd strongly recommend running separate Consul instances. That way you can isolate everything without needing to namespace it in the application. After all, a mistaken namespace in your application could cause leakage between environments that might be very bad.
I'd further recommend that you consider running those environments in different data centers or on different custom network fabrics to add even more isolation.
A similar question was raised in https://github.com/autopilotpattern/wordpress/issues/41.
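A sketch of that per-environment isolation as a compose fragment (the service names, image tags, and env var wiring here are hypothetical, not taken from the blueprints):

```yaml
# docker-compose.prod.yml -- hypothetical: each environment gets its
# own Consul, so services can never discover another environment's DBs.
version: '2'
services:
  consul:
    image: consul:latest
    command: agent -server -bootstrap -client 0.0.0.0
  mysql:
    image: autopilotpattern/mysql
    environment:
      - CONSUL=consul          # only this environment's Consul
```

A parallel `docker-compose.staging.yml` would point at its own Consul, so no application-level namespacing is needed to keep the environments apart.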
> **Example of multiple clusters in the same application:** Data in the application is sharded in three separate MySQL clusters. The application connects to all three clusters simultaneously.
Exactly. It might be one service connecting to three databases, or three microservices in the same environment each connecting to a database, but all are in the same environment. That is exactly the case I was describing.
> For that second case, I'd strongly recommend running separate Consul instances
Of course! As they say, "friends don't let friends drive production, staging, dev, qa and other environments drunk, I mean, together." :-)
So, to summarize, the right way is to extend (or PR) the mysql image so that the `SERVICE_NAME` is configurable at run time.
Glad I got it.
Circling back to this: the PR for https://github.com/deitch/autopilotpattern-mysql/commits/variable-service-name would be great as-is.
The work in https://github.com/deitch/autopilotpattern-mysql/commits/pluggable-snapshot-storage looks good in terms of code and gets my personal 👍 "nice work", but I think we want to have the official approach here be the sidecar proxy container.
> the PR for https://github.com/deitch/autopilotpattern-mysql/commits/variable-service-name would be great as-is.
Keep it simple, right? I am opening the PR right now...
Closing this issue as resolved and taking up the review of that PR. Thanks @deitch
As a general rule, `containerpilot.json` is baked into the container image, and then configured via env vars, usually in `docker-compose.yml` or `docker run -e ...`, but could be anywhere.

What is recommended practice for reusing an image for multiple services with unique names? E.g. I have 3 unique clusters of 3 containers each running mysql. If I use the autopilotpattern/mysql, the service name is baked in as `"mysql"`. That means that instead of 3 endpoints in the consul catalog named, e.g., `"serviceA-db"`, 3 named `"serviceB-db"`, 3 named `"serviceC-db"`, I will end up with 9 named `"mysql"`.

I could extend the autopilotpattern/mysql image to 3 distinct images, but that too seems silly and wasteful (although the final layer of the images will be very small, with just a tiny difference).

Is the recommended pattern to configure `containerpilot.json` to be something like this, and then set the env var at runtime via `-e SERVICE_NAME=serviceA-db` or in `docker-compose.yml`?
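The config being asked about would presumably look something like this sketch (assuming ContainerPilot's environment-variable template interpolation; the port and surrounding fields are illustrative only):

```json
{
  "consul": "consul:8500",
  "services": [
    {
      "name": "{{ .SERVICE_NAME }}",
      "port": 3306
    }
  ]
}
```

with `SERVICE_NAME=serviceA-db` supplied at run time so each cluster registers under its own name in the Consul catalog.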