This is a long-term goal of this package; nothing in pyutilib.workflow currently supports it. The following topics, drawn from the call for an upcoming workshop on many-task computing, outline the relevant considerations:
Compute Resource Management
  - Scheduling
  - Job execution frameworks
  - Local resource manager extensions
  - Performance evaluation of resource managers in use on large scale systems
  - Dynamic resource provisioning
  - Techniques to manage many-core resources and/or GPUs
  - Challenges and opportunities in running many-task workloads on HPC systems
  - Challenges and opportunities in running many-task workloads on Cloud Computing infrastructure

Storage architectures and implementations
  - Distributed file systems
  - Parallel file systems
  - Distributed meta-data management
  - Content distribution systems for large data
  - Data caching frameworks and techniques
  - Data management within and across data centers
  - Data-aware scheduling
  - Data-intensive computing applications
  - Eventual-consistency storage usage and management

Programming models and tools
  - Map-reduce and its generalizations
  - Many-task computing middleware and applications
  - Parallel programming frameworks
  - Ensemble MPI techniques and frameworks
  - Service-oriented science applications

Large-Scale Workflow Systems
  - Workflow system performance and scalability analysis
  - Scalability of workflow systems
  - Workflow infrastructure and e-Science middleware
  - Programming paradigms and models

Large-Scale Many-Task Applications
  - High-throughput computing (HTC) applications
  - Data-intensive applications
  - Quasi-supercomputing applications, deployments, and experiences
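To make the "map-reduce and its generalizations" topic concrete, here is a minimal sketch of the pattern using only the Python standard library. This is illustrative of the kind of many-task execution pyutilib.workflow might eventually support; it does not use any pyutilib API, and the function names (word_lengths, merge) are invented for this example.

```python
# Illustrative map-reduce-style many-task pattern using the standard library.
# NOT pyutilib.workflow API -- the package does not yet support this.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def word_lengths(line):
    # "Map" step: an independent task run once per input chunk.
    return [len(word) for word in line.split()]

def merge(a, b):
    # "Reduce" step: combine two partial results into one.
    return a + b

lines = ["many task computing", "scalable workflow systems"]
with ThreadPoolExecutor(max_workers=4) as pool:
    # Each line is mapped as a separate task; results come back in order.
    partials = list(pool.map(word_lengths, lines))
total = reduce(merge, partials, [])
print(total)  # [4, 4, 9, 8, 8, 7]
```

A real many-task framework would add what the topics above describe: scheduling across distributed resources, data-aware task placement, and fault handling, rather than a single in-process thread pool.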