nipy / nipype

Workflows and interfaces for neuroimaging packages
https://nipype.readthedocs.org/en/latest/

Create a network of containers to run interfaces #2367

Open kaczmarj opened 6 years ago

kaczmarj commented 6 years ago

Nipype could be installed in a clean environment without other software (like FSL, FreeSurfer, SPM, etc.), and nipype would orchestrate other containers that include the software necessary to run nodes/workflows. One can imagine one or several versioned containers per interface, and potentially minified containers.
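
A minimal sketch of what that orchestration could look like, using the Docker SDK for Python (`pip install docker`); the image name, command, and paths are illustrative, not part of any actual nipype API:

```python
import docker

client = docker.from_env()

# Run one node's command line inside a versioned container that ships the
# needed software, mounting the working directory so inputs/outputs are shared.
logs = client.containers.run(
    image="nipype/fsl:6.0.1",  # hypothetical per-interface image
    command=["bet", "/work/T1w.nii.gz", "/work/T1w_brain.nii.gz"],
    volumes={"/tmp/workdir": {"bind": "/work", "mode": "rw"}},
    remove=True,
)
print(logs.decode())
```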

chrisgorgo commented 6 years ago

Each Interface would have a dedicated container (but many Interfaces could share one container). The container name would be a property of the Interface.

This would allow running workflows without installing any software other than nipype and Docker or Singularity.
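
As a rough sketch of the idea (not actual nipype API), the image could be a class-level attribute shared by all the interfaces that live in the same container:

```python
# Hypothetical attribute an execution plugin could inspect to decide
# which container to launch for each node.
class FSLInterface:
    container_image = "nipype/fsl:6.0.1"

class BET(FSLInterface):
    _cmd = "bet"

class FLIRT(FSLInterface):
    _cmd = "flirt"

# An execution plugin could then wrap each node's command line, e.g.:
#   docker run nipype/fsl:6.0.1 <node command>
```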

kaczmarj commented 6 years ago

Kubernetes might be useful for this.

kaczmarj commented 6 years ago

Related to #2071.

anibalsolon commented 6 years ago

A container spawning jobs on Kubernetes would do the trick beautifully.

minikube could be used to run Kubernetes locally, although it comes with its own installation requirements. That might be cumbersome for first-time users, but managed Kubernetes clusters from Google Cloud, Azure, IBM, and AWS (in preview) could provide usage traction for this feature.

After this initial minikube setup, a Nipype container with some cluster capabilities could use the Kubernetes Python client to spawn jobs from BIDS app images, sharing a persistent volume to provide the input data and retrieve the results.
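
A minimal sketch of that job-spawning step with the official Kubernetes Python client (`pip install kubernetes`); the Job name, image, command, and the `nipype-work` PersistentVolumeClaim are all hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="nipype-bet"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="bet",
                        image="bids/example:latest",  # hypothetical BIDS app image
                        command=["bet", "/work/T1w.nii.gz", "/work/T1w_brain.nii.gz"],
                        volume_mounts=[
                            client.V1VolumeMount(name="work", mount_path="/work")
                        ],
                    )
                ],
                # Shared volume providing input data and collecting results.
                volumes=[
                    client.V1Volume(
                        name="work",
                        persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                            claim_name="nipype-work"
                        ),
                    )
                ],
            )
        )
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```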

I believe this job scheduler could be a Nipype execution plugin that runs workflows seamlessly. :ok_hand: :shipit:

unikzforce commented 5 years ago

I suggest you take a look at Argo Workflows. It's a Kubernetes workflow engine that could be used as the execution engine behind the scenes. Its API is based on OpenAPI, so generating a Python client for it is straightforward.
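
Since Argo Workflows are plain Kubernetes custom resources, a rough sketch of submitting one from Python needs only the generic CustomObjectsApi; the workflow body below (image, command, names) is hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()

# A one-step Argo Workflow expressed as the custom resource it really is.
workflow = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Workflow",
    "metadata": {"generateName": "nipype-"},
    "spec": {
        "entrypoint": "bet",
        "templates": [
            {
                "name": "bet",
                "container": {
                    "image": "bids/example:latest",  # hypothetical image
                    "command": ["bet", "/work/T1w.nii.gz", "/work/T1w_brain.nii.gz"],
                },
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="argoproj.io",
    version="v1alpha1",
    namespace="default",
    plural="workflows",
    body=workflow,
)
```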