SEG-Y data isn't designed for performant access at this scale. You can use the CLI converter to convert the data to NetCDF format, which allows lazy access of the data from disk. Alternatively, you could try the ZFP compression format or another fast-access format for cube data.
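For reference, a minimal sketch of that workflow using segysak's Python API rather than the CLI, assuming the `segy_converter` and `open_seisnc` functions; the file names and header byte locations below are placeholders and should be checked against your file (e.g. with `segy_header_scan`):

```python
from segysak import open_seisnc
from segysak.segy import segy_converter

# Convert the SEG-Y cube to NetCDF (seisnc) on disk without loading the
# whole volume into memory. The iline/xline/cdp byte locations are common
# defaults, not guaranteed to match your file.
segy_converter(
    "my_cube.sgy",
    "my_cube.seisnc",
    iline=189,
    xline=193,
    cdpx=181,
    cdpy=185,
)

# Open the NetCDF file lazily with dask chunks; trace data is only read
# from disk when a selection is actually computed.
ds = open_seisnc("my_cube.seisnc", chunks={"iline": 100})
inline_slice = ds["data"].isel(iline=0).compute()
```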
I have tried to convert it to NetCDF format, but it failed with the error message: "No space left on device"
This sounds like you don't have enough spare hard-drive space to create the NetCDF file. Can you move the file to a larger drive or free up space on your existing disk?
Use this to compress your seismic: https://github.com/equinor/seismic-zfp (pip install works on most platforms, but not on Mac M1 I'm afraid) and then use preload=True
when you create an SgzReader: https://github.com/equinor/seismic-zfp/blob/4fa8fdd15930d116a84faa2a6e4217d0bf38cbf7/seismic_zfp/read.py#L80
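A rough sketch of that route, assuming the `SegyConverter` and `SgzReader` classes described in the seismic-zfp README; the file names and the 4 bits-per-voxel setting are placeholders to adjust:

```python
from seismic_zfp.conversion import SegyConverter
from seismic_zfp.read import SgzReader

# Compress the SEG-Y cube to a .sgz file on disk. Higher bits_per_voxel
# means less compression but higher fidelity.
with SegyConverter("my_cube.sgy") as converter:
    converter.run("my_cube.sgz", bits_per_voxel=4)

# preload=True reads the whole compressed file into memory; because it stays
# compressed (roughly 8x smaller than float32 at 4 bits per voxel), a cube
# that would not fit in RAM uncompressed may still fit this way.
reader = SgzReader("my_cube.sgz", preload=True)
inline_slice = reader.read_inline(0)  # 0-based inline position
```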
SEGY-SAK v0.5 adds support for large SEG-Y via lazy loading of data. Alternatively, use a different file format such as ZFP, ZGY, or OpenVDS.
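For completeness, a sketch of the v0.5-style lazy loading via the Xarray backend registered by segysak; the `dim_byte_fields`/`extra_byte_fields` keywords and byte positions are assumptions based on the SEGY-SAK docs, so verify them against the current documentation and your file's headers:

```python
import xarray as xr

# With segysak >= 0.5 installed, .sgy files can be opened lazily: only the
# headers are scanned up front, and trace data is read from disk on demand.
ds = xr.open_dataset(
    "my_cube.sgy",
    dim_byte_fields={"iline": 189, "xline": 193},
    extra_byte_fields={"cdp_x": 181, "cdp_y": 185},
)

# Pull a single inline into memory instead of the full 42 GiB cube.
inline_slice = ds.isel(iline=0).compute()
```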
Hello! I am currently using segysak to work with seismic data in a Jupyter notebook. I get a memory error when I try to load my cube, which is nearly 24 GB, with the segy_loader function. Here is the error:
"MemoryError: Unable to allocate 42.0 GiB for an array with shape (2501, 2001, 2251) and data type float32"
How can I load such a huge volume of seismic data, please?