Describe the bug
Right now, the default disk space allocated is 10Gi. The original thought was to start small, since many people deploy testing environments, and of course to let them specify whatever size they want.
The problem with this is that because disks are shared infrastructure on the various cloud providers, throughput and IOPS scale with provisioned size, and you get awful performance at small disk sizes. See this table as an example of the issue: https://cloud.google.com/compute/docs/disks/performance#performance_by_disk_size
That same size/speed story repeats on all of the major cloud environments; it is not limited to GCP.
Now, to be fair, most people probably won't need large amounts of disk on their data volume. The over-allocation would be done purely to get better throughput to the disk, which is a critical performance factor for Neo4j.
The intent of this change is simply to present a better performance picture of Neo4j, and not to disadvantage testing workloads from the outset purely to save a few cents on disk.
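
In the meantime, anyone hitting this can already work around the default by overriding the volume size at install time. A minimal sketch, assuming the chart exposes the data volume size as a value; the key names below (e.g. `core.persistentVolume.size`) are an assumption and may differ between chart versions, so check the chart's own values.yaml:

```yaml
# custom-values.yaml -- hypothetical override; verify the key names against
# the chart's values.yaml before using.
core:
  persistentVolume:
    size: 100Gi        # well above the 10Gi default, purely to buy more baseline throughput/IOPS
    storageClass: ssd  # placeholder; pick a storage class appropriate to your cloud provider
```

Applied through the standard Helm flow, e.g. `helm install my-neo4j <chart> -f custom-values.yaml`, this pre-allocates a larger disk and gets the better baseline performance described above without waiting for the default to change.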