We have created an issue in Pivotal Tracker to manage this. Unfortunately, the Pivotal Tracker project is private so you may be unable to view the contents of the story.
The labels on this GitHub issue will be updated when the story is started.
We are seeing the same issue. We are unable to use this pipeline at all: the export is ~12 GB, and the import causes the default 50 GB volume to fill up. We confirmed this is the actual issue by connecting to the instance and watching disk usage while the import was happening.
The bug here is that we didn't create an equivalent VM/disk with the new AMI.
The point of the upgrade pipeline is basically to duplicate the settings of the VM, but with a new AMI. I don't think we should try to make it configurable.
In that case, this actually sounds like a bug in pivotal-cf/cliaas for the replace-vm command.
Would it be appropriate to log an issue against cliaas then? I think if replace-vm preserved the settings of the Ops Manager VM, that would be sufficient. We can change the scope of this issue to focus on that. However, if the Ops Manager VM needed to be resized, would that remain a manual process?
My opinion is that, yes, reporting this on pivotal-cf/cliaas is probably the only way to get this fixed if it's not considered a bug in the pipeline itself, since cliaas doesn't even have this as a configurable option. It should already be doing the disk size work as part of replace-vm.
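For illustration only (this is not cliaas's actual implementation, and the function names are hypothetical), a minimal boto3 sketch of what "preserving the source VM's settings" could look like: read the instance type and root volume size from the old Ops Manager VM, then apply both when launching the replacement from the new AMI.

```python
# Hypothetical sketch (not cliaas's actual code): carry the source Ops Manager
# VM's instance type and root volume size over to the replacement instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

def describe_source(instance_id):
    """Return (instance_type, root_device, root_volume_size_gb) for the existing VM."""
    instance = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"][0]["Instances"][0]
    root_device = instance["RootDeviceName"]
    volume_id = next(
        m["Ebs"]["VolumeId"]
        for m in instance["BlockDeviceMappings"]
        if m["DeviceName"] == root_device
    )
    size_gb = ec2.describe_volumes(VolumeIds=[volume_id])["Volumes"][0]["Size"]
    return instance["InstanceType"], root_device, size_gb

def launch_replacement(new_ami_id, instance_type, root_device, size_gb, subnet_id):
    """Launch the replacement VM from the new AMI, re-using the old type and disk size."""
    return ec2.run_instances(
        ImageId=new_ami_id,
        InstanceType=instance_type,  # e.g. m3.large, copied from the old VM
        MinCount=1,
        MaxCount=1,
        SubnetId=subnet_id,
        BlockDeviceMappings=[{
            "DeviceName": root_device,
            # e.g. 100 GB from the old VM instead of the AMI's default 50 GB
            "Ebs": {"VolumeSize": size_gb, "VolumeType": "gp2"},
        }],
    )
```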
Yes, I'd say it's a cliaas issue.
I'll close this issue then, since we believe this can be addressed appropriately by fixing https://github.com/pivotal-cf/cliaas/issues/4. Please reopen if you think we need to address the issue of not being able to configure the initial Ops Manager attached disk size.
@ryanpei thanks for closing this issue. I think the only feature enhancement needed is an automated way to change the instance type and volume size. We will need a way to orchestrate rolling out new instance types as they become available, and we will also need the ability to orchestrate resizing of the Ops Manager persistent disk as a result of ever-growing tile usage.
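A rough sketch of what that orchestration could look like on AWS, assuming boto3 (this is not an existing pcf-pipelines task, and the function name is hypothetical): stop the VM, change its instance type, grow the EBS volume, and start it again.

```python
# Hypothetical sketch of the requested enhancement (not an existing pipeline task):
# change the Ops Manager instance type and grow its persistent disk in place.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

def resize_ops_manager(instance_id, volume_id, new_type="m4.large", new_size_gb=100):
    # The instance must be stopped before its type can be changed.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  InstanceType={"Value": new_type})

    # EBS volumes can only be grown, never shrunk; the filesystem still has to
    # be expanded inside the VM afterwards (e.g. growpart + resize2fs).
    ec2.modify_volume(VolumeId=volume_id, Size=new_size_gb)

    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```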
Implement the ability to specify the Ops Manager VM instance type and volume size. I recently ran into an issue when upgrading from 1.11.0 to 1.11.3. Here is the trace from the job running in Concourse:
The VM in AWS that I am upgrading from is of type m3.large with a 100 GB volume attached. The image created by the ops manager job is m3.large with a 50 GB volume attached, which I believe is the cause of the error above. Having the ability to declare the instance type and volume size would fix this issue. We are on pcf-pipelines v0.15.0, ERT 1.11.1, and Ops Manager 1.11.0.
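As a hypothetical way to confirm that mismatch before running the upgrade (not part of pcf-pipelines), one could compare the running VM's root volume size with the default size baked into the new AMI, for example with boto3:

```python
# Hypothetical check: compare the running Ops Manager VM's root volume size
# with the default root volume size defined in the new AMI.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

def root_volume_size_of_instance(instance_id):
    instance = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"][0]["Instances"][0]
    vol_id = instance["BlockDeviceMappings"][0]["Ebs"]["VolumeId"]
    return ec2.describe_volumes(VolumeIds=[vol_id])["Volumes"][0]["Size"]

def root_volume_size_of_ami(ami_id):
    image = ec2.describe_images(ImageIds=[ami_id])["Images"][0]
    return image["BlockDeviceMappings"][0]["Ebs"]["VolumeSize"]

# e.g. 100 (existing VM) vs. 50 (new 1.11.3 AMI) reproduces the mismatch described above.
```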