We want the storage size to be large enough to support current and future use cases of DataverseNO.
The total file volume in DataverseNO might in the future amount to several tens or hundreds of TBs, and the storage solution provided in the cloud deployment of DataverseNO has to account for this need. However, the maximum total blob size in MS Azure seems to be 8 TB; see Azure Storage Limits at a Glance. If this is correct, how do we handle file volumes larger than 8 TB? Will another blob be added automatically?
We might also want to consider configuring one storage account for each collection in DataverseNO, e.g. one for UiT, one for NTNU, etc. Would this help us track costs more efficiently? Would this kind of storage set-up in Azure be more expensive than a single account for the entire DataverseNO?
The 8 TB limit applies to page blobs. We are using block blobs, where a single blob can hold up to 50,000 blocks of 100 MiB each, i.e. roughly 4.75 TiB per blob, and a single storage account can hold up to 5 PB in total. Since each file is stored as its own blob, the per-blob limit only constrains individual file size. And if somehow that is not sufficient, more blobs can be created in a trivial manner.
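As a sanity check on these figures (assuming the documented block-blob limits of 50,000 blocks per blob at 100 MiB per block, and 5 PB per storage account), the arithmetic works out as follows:

```python
# Sanity check of the Azure block-blob limits discussed above.
# Assumed limits: 50,000 blocks per blob, 100 MiB per block, 5 PB per account.
MIB = 1024 ** 2
TIB = 1024 ** 4

blocks_per_blob = 50_000
block_size_bytes = 100 * MIB

# Maximum size of a single block blob (one file in DataverseNO).
max_blob_bytes = blocks_per_blob * block_size_bytes
print(f"Max block blob size: {max_blob_bytes / TIB:.2f} TiB")  # ~4.77 TiB

# Number of maximum-size blobs that fit in one 5 PB storage account.
account_capacity_bytes = 5 * 1000 ** 5
print(f"Max-size blobs per account: {account_capacity_bytes // max_blob_bytes}")
```

So a single account comfortably covers hundreds of TBs of total volume; only individual files are bounded by the ~4.75 TiB per-blob limit.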
This issue is related to issue #1.