Closed: karlkfi closed this issue 5 years ago.
Hi karlkfi,
You can use the `--workers` and `--worker-size` flags on `concourse-up deploy` to configure the number and type of workers. Supported values for worker types can be found in the docs.
You are correct that a custom type set by hand would be overwritten by concourse-up the next time you deployed. We currently only support t2 and m4 instance types, but we would be interested to learn about use cases for other worker types.
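For reference, scaling with the supported flags looks roughly like this. The deployment name and worker size shown are illustrative assumptions; check the concourse-up docs for the sizes your version actually accepts:

```shell
# Deploy (or update) a Concourse with five workers of a larger supported size.
# "large" and "my-ci" are example values, not defaults.
concourse-up deploy --workers 5 --worker-size large my-ci
```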
The use case would be to exercise ML models on GPU instances, like g3 or p3, as part of a pipeline that includes other work on normal m4 instances, like compiling, release building, unit tests, etc.
This is not currently something on our roadmap. It is possible to modify the manifest/cloud config by hand to allow whatever VM types are required, but doing so removes the ability to use concourse-up to update/manage that Concourse going forward, since a redeploy would revert the manual changes.
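To make the manual route concrete, adding a VM type means editing the BOSH cloud config that concourse-up generates. A minimal sketch, with assumed names and instance types (and with the caveat above that concourse-up would revert this on its next deploy):

```yaml
# Hypothetical addition to the generated BOSH cloud config.
# "gpu-worker" and the instance_type are illustrative, not values
# concourse-up supports out of the box.
vm_types:
- name: gpu-worker
  cloud_properties:
    instance_type: p3.2xlarge
```

The manifest would then need a second worker instance group referencing `vm_type: gpu-worker`.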
I see there's only one type of Concourse worker defined in the bosh deployment. Is there any way to easily add more worker types?
If I weren't using concourse-up, I guess I would just modify the BOSH deployment to add another worker entry, but if I do that manually, concourse-up will probably kill it when it auto-updates or someone tries to scale with the CLI, right?
The primary use case here is to have some GPU nodes with a custom worker name so that they can be used for some, but not all, Concourse jobs. Network- or disk-optimized workers would be another desirable option.
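If such extra worker types did exist, Concourse's existing worker-tag mechanism would be the natural way to route only certain jobs to them: workers registered with a tag only receive steps that request that tag. A pipeline fragment might look like this (job, task, and tag names are illustrative assumptions):

```yaml
jobs:
- name: train-model
  plan:
  - task: train
    tags: [gpu]   # routed only to workers registered with a "gpu" tag
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      run: {path: ./train.sh}
- name: unit-tests
  plan:
  - task: test    # untagged: runs on any untagged worker
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      run: {path: ./test.sh}
```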