jkopriva opened this issue 7 years ago
Makes sense to me. #2 would share the volume with all slaves provisioned this way, right?
I do not think a Cinder volume can be attached to more than one slave.
@jkopriva, what would be the use-case for an exclusively used persistent drive?
@olivergondza It could be useful, for example, when you have one slave used for a Nexus repository: you do not have to delete the drive, but you sometimes want to restart the slave. Anyway, this is a special case; the first approach would be used much more often.
I have a related use case that could use #2, but since I also use the shared volume for file caching, it would be better to allow #3.
My main use case: I launch around 80 Jenkins jobs in parallel from a buildflow (I will migrate to pipeline later). Each launched job saves its test result files. After the buildflow completes the parallel execution, a report job reads the test results from each completed job, writes them to the shared disk volume, generates a web report, and sends an email. The test result files are in a proprietary format.
As @jkopriva mentioned, a Cinder volume cannot be attached to more than one slave, which I think is what this proposal aims to change. Or perhaps Cinder has already addressed multi-attach volumes, as mentioned here.
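For reference, a rough sketch of the OpenStack CLI steps for a multi-attach Cinder volume (not what the plugin does today; the type/volume/server names are placeholders, and whether multi-attach works at all depends on the Cinder backend driver). The helpers only build the commands, so this is a dry run:

```shell
# Sketch only: shell helpers that emit the OpenStack CLI commands needed
# for a multi-attach Cinder volume shared between two slaves.
# Names (multiattach, shared-results, slave-1/slave-2) are placeholders.

# Command to create a volume type flagged as multi-attach
# (requires support from the Cinder backend driver).
type_cmd() {
    printf 'openstack volume type create %s --property multiattach="<is> True"' "$1"
}

# Command to create a volume of that type: name, type, size in GB.
volume_cmd() {
    printf 'openstack volume create %s --type %s --size %s' "$1" "$2" "$3"
}

# Command to attach a volume to a server; repeat once per slave.
attach_cmd() {
    printf 'openstack server add volume %s %s' "$1" "$2"
}

# Dry run: print each command instead of executing it.
type_cmd multiattach; echo
volume_cmd shared-results multiattach 10; echo
attach_cmd slave-1 shared-results; echo
attach_cmd slave-2 shared-results; echo
```

To actually run this against a cloud you would execute the printed commands (e.g. pipe through `sh`) with valid credentials, and the slaves would still need to mount a cluster-aware filesystem on top, since a plain filesystem mounted on two hosts at once will corrupt.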
In summary, I really need #2 but could make use of #3 if it became an option, which is why I am adding this extra (optional #3) request to the issue (if it makes sense to include it).
+1. It would be useful when a Jenkins slave requires large storage.
I have this change in progress, though it was postponed weeks ago due to other work, coincidentally in the same plugin.
+1 What is the current state of this feature? Adding volumes would drastically speed up the Jenkins instances for many workloads.
This is currently the most demanded feature and I expect to start working on this in coming weeks.
Please provide options to set the storage backend, region, size, filesystem, and mountpoint, and the ability to mount several different volumes. It would also be nice to be able to mount existing volumes without destroying them afterwards.
Thanks!
How are we doing?
I have pushed a simple patch allowing boot from volume with images at #194.
Support for booting from volume (https://github.com/jenkinsci/openstack-cloud-plugin/pull/195) was released in 2.32.
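For anyone wanting to try the equivalent outside the plugin: newer openstackclient releases expose this directly on `server create`. A dry-run sketch (image, flavor, and server names are placeholders):

```shell
# Sketch only: helper that emits the openstack CLI command equivalent to
# boot-from-volume. Image/flavor/server names below are placeholders.
bfv_cmd() {
    # --boot-from-volume <size-in-GB> asks Nova to create a bootable
    # Cinder volume from the image instead of using ephemeral disk.
    printf 'openstack server create --image %s --flavor %s --boot-from-volume %s %s' \
        "$1" "$2" "$3" "$4"
}

# Dry run: print the command rather than executing it.
bfv_cmd ubuntu-20.04 m1.medium 40 jenkins-slave
echo
```

Note the volume created this way is deleted with the server by default, which matches the first use case here (plugin-created volumes), not the second (attaching a pre-existing user volume).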
Will anything more happen on this topic? #195 addressed only the first use case, in which the volume is created by the plugin, but not the second, in which the user specifies an existing volume that is simply attached to the instance.
@Ziomalon, the fact that only half of this was addressed is the reason I am keeping this open. Personally, I do not have the motivation to address that part. Any takers?
I would also like to have this second feature. It would be a really nice way to speed up building, as non-persistent volumes force a build from scratch, while this would allow incremental builds. Or is there another good workaround for getting incremental builds?
Please add support for Cinder volumes in templates. I would suggest two different behaviors: