pstadler opened this issue 7 years ago
I figured out that VC1S instances are limited to 50GB of total storage - that's probably why servers fail to start after additional volumes are attached to them using Terraform. This makes custom image sizes even more important.
Bump!
Please note that both volumes will be billed separately (as 2x50GB), leading to higher costs than expected.
Discussion: https://community.online.net/t/wrongful-volume-billing/4539
I also tried to resolve this with support, but haven't received a real answer so far.
The only reasonable solution at this point is not to use VC1S with more than a single attached volume; settle for VC1M in that case.
Hello there
Unfortunately, it's not possible to split the maximum disk size of a VC1S server into multiple disks.
Even though I managed to create a 30GB Ubuntu image and successfully attach another 20GB volume at boot using Terraform and the volume mapping described here, I can't make this image publicly available, nor transfer it between par1 and ams1. It was also terribly complicated to create the image on ams1 in the first place. This is what I hacked together to make it work.
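For reference, this is roughly what such a mapping looks like - a minimal sketch only, assuming the legacy Scaleway provider, with placeholder names and a placeholder image UUID standing in for the custom 30GB image:

```hcl
# Sketch: a VC1S built from a custom 30GB image plus an additional
# 20GB volume declared inline, 50GB in total, which stays within
# the VC1S storage limit.
resource "scaleway_server" "node" {
  name  = "node"
  image = "00000000-0000-0000-0000-000000000000" # custom 30GB Ubuntu image (placeholder UUID)
  type  = "VC1S"

  volume {
    size_in_gb = 20
    type       = "l_ssd"
  }
}
```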
Besides that, using a scaleway_volume_attachment with Terraform stops the server (to attach the volume) but doesn't start it again afterwards. I'm fully aware that this could have been reported to the Terraform repository instead, but I decided to start the discussion here.
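To illustrate, this is the kind of configuration I mean - again just a sketch with placeholder names, using the legacy provider's scaleway_volume and scaleway_volume_attachment resources together with the server from the sketch above:

```hcl
# Sketch: attaching a separately managed volume to an existing server.
resource "scaleway_volume" "data" {
  name       = "data"
  size_in_gb = 20
  type       = "l_ssd"
}

# Applying this stops the server in order to attach the volume,
# but the server is not powered back on afterwards - it has to be
# started again manually (via the console or the API).
resource "scaleway_volume_attachment" "data" {
  server = "${scaleway_server.node.id}"
  volume = "${scaleway_volume.data.id}"
}
```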
I'm embracing Scaleway for hosting Kubernetes clusters in hobby-kube/guide.