itamarst opened 7 years ago
Another possible option is to add the proxy container to the docker-compose file, so it starts up in the network created for that Compose project. I'm not 100% sure whether that works the same way as --net=container:...
For what it's worth, I was able to get this running by doing the following:
```
$ telepresence --new-deployment telepresence --method=container --docker-run -ti bitnami/minideb bash
```
I keep that container running to keep the telepresence deployment and the (sshuttle?) proxying stuff around. Then I copied the command from the telepresence proxy container that the above command starts, and added a service to my docker-compose file:
```yaml
version: '2'
services:
  telepresence:
    privileged: true
    image: datawire/telepresence-local:0.63
    command: ["proxy", "{\"port\": 59968, \"cidrs\": [\"10.0.0.0/24\"], \"expose_ports\": [], \"ip\": \"198.18.0.254\"}"]
    ports:
      - 8080:8080
  ui:
    tty: true # Enables debugging capabilities when attached to this container.
    image: bitnami/angular:2.0.0
    ports:
      - 4200:4200
    volumes:
      - ./src/ui:/app
    command: "ng serve --host 0.0.0.0"
  api:
    tty: true
    build: ./dev_env/api
    volumes:
      - ./src/api:/go/src/github.com/kubernetes-helm/monocular/src/api
      # Config example file
      - ./docs/config.example.yaml:/root/monocular/config/monocular.yaml
    environment:
      - ENVIRONMENT=development
    network_mode: service:telepresence
```
The API container then uses the telepresence network stack via network_mode. I could have used the container telepresence started directly, with network_mode: container:<id>, but I saw this as an experiment in injecting the telepresence container into the Compose file.
The downside was that I had to move the ports to the telepresence service instead of the API service.
It should be possible to have the telepresence CLI take a docker-compose file, inject the telepresence service, move all port mappings to that service, and then start up. That way this could work with some Compose files out of the box. Does that sound about right?
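As a rough illustration, the injection step described above could look something like this sketch, operating on an already-parsed Compose mapping (the function name is hypothetical, and the proxy image/command would be copied from the container Telepresence starts; in practice you'd load and dump the file with PyYAML):

```python
# Hypothetical sketch: inject a Telepresence proxy service into a parsed
# docker-compose mapping, moving all port mappings onto it, since ports
# must be published by the service that owns the network namespace.

def inject_telepresence(compose, proxy_image, proxy_command):
    services = compose.setdefault("services", {})

    # Strip port mappings from every existing service and collect them,
    # then point each service at the proxy's network stack.
    moved_ports = []
    for name, svc in services.items():
        moved_ports.extend(svc.pop("ports", []))
        svc["network_mode"] = "service:telepresence"

    # Add the proxy service itself (assumes no service is already
    # named "telepresence").
    services["telepresence"] = {
        "privileged": True,
        "image": proxy_image,
        "command": proxy_command,
        "ports": moved_ports,
    }
    return compose
```

The result would then be written back out (e.g. with `yaml.safe_dump`) before running `docker-compose up`.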
Excellent! Roughly speaking, you've implemented a variant of option 1 by hand. I agree that your approach should work in many cases. I am going to experiment some more tomorrow and see if I can get a first pass implemented. Thank you for the detailed description; that will help a lot.
Hi @ark3 @prydonius I am really interested in using telepresence with docker-compose... could you provide me some pointers on how to use the command and the exposed ports in your file?
@matthyx starting a telepresence deployment with the container method:

```
$ telepresence --new-deployment telepresence --method=container --docker-run -ti bitnami/minideb bash
```

will start up a container. In the docker inspect output of that container you can copy the command it starts up with and add it to your Compose file. Does that help?
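For reference, that copy step can be scripted: `docker inspect` returns a JSON array, and the container's startup command lives in the `Config.Cmd` field. A small sketch (function names are mine, not part of Telepresence):

```python
import json
import subprocess


def command_from_inspect(inspect_output):
    """Extract Config.Cmd from raw `docker inspect` JSON
    (a one-element JSON array)."""
    return json.loads(inspect_output)[0]["Config"]["Cmd"]


def proxy_command(container):
    """Return the command a running container was started with,
    e.g. the Telepresence proxy container's ["proxy", "{...}"] arguments."""
    raw = subprocess.check_output(["docker", "inspect", container])
    return command_from_inspect(raw)
```

The returned list is what would go into the `command:` key of the injected Compose service.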
@prydonius thanks for your answer!
I have done some tests, and found it better to parse the docker-compose file, build the resulting docker run command, and start telepresence with --docker-run.
I can provide the python script doing that if anyone is interested...
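The core of such a script is a translation from one Compose service definition to the arguments passed after `telepresence --docker-run`. A minimal sketch of that idea (this is my illustration under stated assumptions, not @matthyx's actual script; it assumes the service has an `image:` key and handles only a few common fields):

```python
def compose_to_docker_run(name, svc):
    """Translate one parsed Compose service definition into the argument
    list that would follow `telepresence --method=container --docker-run`.
    Sketch only: build:, ports:, and other keys are not handled (exposed
    ports need Telepresence's own --expose flag rather than docker -p,
    since the container shares the proxy's network namespace)."""
    args = ["--name", name]
    for vol in svc.get("volumes", []):
        args += ["-v", vol]
    env = svc.get("environment", [])
    if isinstance(env, dict):  # Compose allows mapping or list form
        env = ["%s=%s" % (k, v) for k, v in env.items()]
    for e in env:
        args += ["-e", e]
    args.append(svc["image"])
    cmd = svc.get("command", [])
    if isinstance(cmd, str):
        cmd = cmd.split()  # naive split; Compose uses shell-style parsing
    return args + list(cmd)
```

The full invocation would then be roughly `telepresence --method=container --docker-run` followed by these arguments.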
@matthyx The python script would be great! (or better yet, would you be interested in a guest blog post?)
Sure, how does it work? We agree on the content, I write something and you review before publication? The reason why we have to rely on docker-compose.yml is that we use https://github.com/swissquote/carnotzet/ with a special plugin to deploy on Kubernetes... then telepresence allows us to debug one service when running on developer workstations. I will try to ask if it would make sense to publish simultaneously on your blog and https://medium.com/swissquote-engineering
@matthyx Yes, we're planning on revamping the tutorial section on the Telepresence website, so it could fit there. And yes, of course you could publish it on your blog! (Happy to have you publish there first, or second, it doesn't matter to us. We would also be happy to give you credit for it from the Telepresence website.)
I really like the idea of running telepresence inside a container leveraging docker-compose locally, but I have most of the dev env running inside k8s. The dev doesn't have to spin up everything locally, only the parts that he/she is changing.
Can't find the blog post. Was it ever written? Is the idea flawed, or superseded?
@xeor sorry, I kinda forgot about it... the script works, and we're still using it from time to time. Let me work a bit on it and I will share with some background here.
@matthyx is that script still floating around? This use case is exactly what I have been hoping to find.
Hello, and sorry for the delay. The script is now here; it was made for my own needs, so it's not feature complete. I'll let you try it, and you can open issues if you're missing something...
I am desperately searching for a method to connect a single container started by docker-compose on Docker for Desktop (Visual Studio 2017, dotnet 2.2 in debug) to a remote Kubernetes cluster.
The method mentioned above, using docker-compose's network_mode: service:telepresence, looks very promising.
Does/Did it work? If not, I would not bother to install WSL on Windows.
Any updates on that? :)
I don't see why this wouldn't work in v2 since it seems like there was a manual version running in tp1, but would require some validation to be sure.
Telepresence currently supports single containers. One potential user asked about Docker Compose support, which could be added.
Implementation notes:
By default, Compose creates a new shared network namespace for each Compose file (https://docs.docker.com/compose/networking/). This can be overridden, however (https://docs.docker.com/compose/compose-file/#network_mode is one way, and it matches the mechanism Telepresence uses for --docker-run). So we could either:
1. have Telepresence rewrite the Compose file to inject its proxy container and point the other services at it, or
2. attach to an already-running container and proxy its networking.
The latter option wouldn't provide env variables or volumes, but would be useful in general, since you could also attach to running Docker containers and start proxying their networking, rather than the current UX, which forces the user to run the container using telepresence --docker-run.