Closed — kellrott closed this 7 years ago
What's the motivating use case for this?
We are working on a TES-based JupyterHub launcher ( https://github.com/ohsu-computational-biology/tesnotebookspawner ).
@kellrott The docs in that repo don't really explain the why and how — could you provide more details? I could see myself arguing that something like this goes beyond what I'd personally view as the scope of TES, but that's difficult for me to confirm or deny at the moment ;)
In essence, we have a Jupyter launcher that can use TES to start up new Jupyter kernels (the actual process doing the compute for the code you are interacting with in the notebook server). The web server needs to be able to interact with the job it launched, and it does so by connecting to the kernel runner on the worker node and pushing the user's interaction requests to it.
I think this feature opens up some cool possibilities, and doesn't create huge issues for implementors. The main reasons for this are:
1) It doesn't do anything more than use the existing `-p` flag in the `docker` command, so conformance/implementation isn't onerous; we're simply uncovering a feature that was already there.
2) This doesn't require/enable the creation of a synthetic network, as Kubernetes or docker-compose would ( see https://github.com/coreos/flannel ). That kind of configuration is higher level and is best left to those systems. This just exposes a port on the host machine.
3) We can stipulate in the spec that if the request doesn't meet a deployment's requirements (maybe an implementation or a specific deployment disables this feature completely), the implementation is allowed to reject the task request with an error. In essence, it's an optional feature/extension of the spec.
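Concretely, point 1 maps onto Docker's existing `-p`/`--publish` flag. A sketch of the kind of invocation an implementation would issue (the image name and port numbers here are placeholders, not from the spec):

```shell
# Equivalent of a task asking for container port 8888 to be exposed
# on host port 10022 -- just Docker's existing publish flag.
docker run -p 10022:8888 jupyter/base-notebook

# Omitting the host port lets Docker pick a free ephemeral port,
# which sidesteps host-port conflicts on the worker.
docker run -p 8888 jupyter/base-notebook
```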
Gotcha, that makes sense. :+1:
The message type is `PortMapping` and the attribute name is `portBindings`; they should be the same. `hostIP` could just be `ip`, `containerPort` could be `container`, and `hostBinding` could be `host`.
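A sketch of what the message might look like with those renames applied, using snake_case field names per the protobuf style guide mentioned later in this thread (this is an illustration of the suggestions, not the final TES schema):

```protobuf
// Hypothetical shape after the review suggestions above; names are
// proposals from this thread, not the adopted spec.
message PortMapping {
  // Host IP address to bind to, e.g. "0.0.0.0" (was hostIP).
  string ip = 1;
  // Port inside the container (was containerPort).
  uint32 container = 2;
  // Port on the host machine (was hostBinding). Not required, since an
  // implementation may assign a free host port dynamically.
  uint32 host = 3;
}
```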
I don't agree that a task should fail if the port is already in use on the worker. The implementation should know not to schedule a task on a worker with a conflicting port. If no workers are available, the task should stay queued.
This is an important bit of semantics that can't be properly captured in a schema. We should think about where we want to document expected runtime behavior.
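To make the expected runtime behavior concrete, here is a minimal sketch of that scheduling rule: a task requesting host ports is only placed on a worker where none of those ports are in use, and otherwise stays queued rather than failing. The function and data shapes are hypothetical, not part of the TES spec.

```python
def pick_worker(workers, requested_host_ports):
    """Return the name of the first worker with no host-port conflict.

    workers: dict mapping worker name -> set of host ports already in use.
    Returning None means the task stays queued; it does not fail.
    """
    requested = set(requested_host_ports)
    for name, used_ports in workers.items():
        if not (requested & used_ports):
            return name
    return None  # no eligible worker right now: keep the task queued
```

For example, a task requesting host port 8888 skips a worker that already binds 8888 and lands on a free one; if every worker conflicts, `pick_worker` returns `None` and the task waits.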
`hostBinding` is documented as required, but it's not actually required.
Just as a note (and this probably needs to be fixed in other places too), we should probably follow the protobuf style guide: https://developers.google.com/protocol-buffers/docs/style
Thanks for the feedback. I pushed some changes to address the comments from @buchanae
Adds the ability to define port mappings for containers to be deployed. Also adds a field for the IP address of the job's host machine.