I ran out of disk while testing one of the new data sets with a larger library, so I decided to just bump the base disk size. Hopefully 2x is sufficient.
An alternative strategy would be to define more disk sizes, giving the initial alevin processing a larger disk and the later (fry) steps smaller ones, but the simple bump seemed sufficient for now.
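As a rough sketch of the change (assuming a Nextflow-style configuration; the label name and sizes below are hypothetical, not taken from the actual pipeline), doubling the base disk allocation might look like:

```groovy
// Hypothetical Nextflow config sketch: double the base disk size so that
// alevin processing of larger libraries fits without running out of space.
process {
    withLabel: 'base_disk' {
        // Previously e.g. disk = 50.GB; bumped 2x as described above.
        disk = 100.GB
    }
}
```

The alternative mentioned above would instead define separate labels (e.g. a larger one applied only to the alevin step and a smaller one for the fry steps), at the cost of maintaining more resource tiers.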