cloudfoundry / bosh-backup-and-restore

https://docs.cloudfoundry.org/bbr
Apache License 2.0

Container failed due to space #33

Closed vanillacandy closed 4 years ago

vanillacandy commented 4 years ago

Hi, we have extended the Concourse container to a bigger size, and the BBR backup worked a few times in the past, but it started failing after a few successful director backups. Is it possible to back up the director incrementally? Or are there other solutions for this?

Error 1: failed to create volume

Error 2:
Error streaming backup from remote instance. Error: ssh.Stream failed: stdout.Write failed: write XXX/bosh-0-blobstore.tar: no space left on device: ssh.Stream failed: stdout.Write failed: write XXX/bosh-0-blobstore.tar: no space left on device
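The "no space left on device" error means the filesystem bbr is streaming the artifact onto filled up mid-write. A quick generic check (not part of bbr itself) before kicking off a backup is to compare free space on the artifact directory against the size of the director's blobstore, since the `bosh-0-blobstore.tar` artifact can be roughly that large:

```shell
# Check free space on the directory where bbr will write its artifacts.
# The blobstore tarball can approach the size of the director's blobstore,
# so the destination needs at least that much headroom (plus slack for
# the other artifact files).
df -h .
```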

cf-gitbot commented 4 years ago

We have created an issue in Pivotal Tracker to manage this:

https://www.pivotaltracker.com/story/show/169685841

The labels on this github issue will be updated when the story is started.

alamages commented 4 years ago

Hello @vanillacandy

Hi, We have extended the concourse container to bigger size, and the BBR backup had worked for few times in the past, but it failed after few successful director backups. Is it possible to backup director incrementally? Or are there other solutions for this?

Unfortunately, bbr does not support incremental backups of the director. At the moment a full backup is the only way to back up the director.

How are you backing up the director in Concourse? Are you using our Concourse tasks: https://github.com/pivotal-cf/bbr-pcf-pipeline-tasks?
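For reference, a full (non-incremental) director backup with the bbr CLI looks roughly like the sketch below. The host, username, and key path are placeholders, and `--artifact-path` (which lets you write the artifact onto a roomier volume than the working directory) assumes a bbr version that supports that flag:

```shell
# Sketch of a full director backup; all values here are placeholders.
bbr director \
  --host 10.0.0.5 \
  --username bbr \
  --private-key-path ./bbr.pem \
  backup \
  --artifact-path /mnt/big-volume/backups   # write artifacts to a larger volume
```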

vanillacandy commented 4 years ago

Yes, I am using this one: https://github.com/pivotal-cf/bbr-pcf-pipeline-tasks

totherme commented 4 years ago

Hi @vanillacandy

If you're seeing a director disk usage that increases over time (eventually leading to a failed backup, because there isn't enough space left to manipulate the backup artifacts), then one option might be to periodically clean the director. In director versions v270.0 or later, you can do this with the command:

bosh clean-up --all

Beware though, this is a destructive command! It will remove all unused resources, including orphaned disks and unused releases. For example, if you have any on-demand service brokers deployed that do not currently have any service instances, then bosh clean-up --all will delete the BOSH releases needed to deploy new instances.
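Because the command is destructive, it can be worth inspecting what the director currently considers unused before running it. A hedged sketch using standard bosh CLI commands (run against your own director environment):

```shell
# Inspect what clean-up would touch before running it.
bosh releases            # currently-deployed release versions are marked with '*'
bosh stemcells           # same idea for stemcells
bosh disks --orphaned    # orphaned persistent disks that clean-up --all deletes

# Destructive: removes unused releases/stemcells and orphaned disks.
bosh clean-up --all
```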

If you are a closed-source user of Pivotal products such as OpsManager, I recommend contacting our wonderful support team, who might be able to help.

aclevername commented 4 years ago

Closing due to inactivity