geodesymiami / insarmaps


increase insarmaps server memory and diskspace? #54

Closed: falkamelung closed this issue 2 years ago

falkamelung commented 2 years ago

Hi @stackTom , I repeatedly have big datasets for which the ingest script throws an error (see below). The reason seems to be running out of memory. Smaller files work fine.

It is time to move to a server with both more memory and larger disk space. I will ask Darren whether he can increase both on the existing VM, and/or give us a new VM. Is there anything else that comes to your mind that we have to consider?

Thank you and I hope all is well, Falk

Miami2SenAT48/minopy[1010] json_mbtiles2insarmaps.py ....
Uploading json chunks...
Clearing old dataset, if it is there
Creating index on S1_IW3_048_0081_0082_20190103_20211112
Inserted chunk_5.json to db
Inserted chunk_1.json to db
Inserted chunk_4.json to db
FAILURE:
Unable to open datasource `JSON/S1_IW3_048_0081_0082_20190103_20211112.mbtiles-journal' with the following drivers.
  -> `PCIDSK'
  -> `netCDF'
  -> `PDS4'
  -> `VICAR'
  -> `JP2OpenJPEG'
  -> `PDF'
  -> `MBTiles'
  -> `EEDA'
  -> `ESRI Shapefile'
  -> `MapInfo File'
  -> `UK .NTF'
  -> `OGR_SDTS'
  -> `S57'
  -> `DGN'
  -> `OGR_VRT'
  -> `REC'
  -> `Memory'
  -> `BNA'
  -> `CSV'
  -> `NAS'
  -> `GML'
  -> `GPX'
  -> `LIBKML'
  -> `KML'
  -> `GeoJSON'
  -> `GeoJSONSeq'
  -> `ESRIJSON'
  -> `TopoJSON'
  -> `Interlis 1'
  -> `Interlis 2'
  -> `OGR_GMT'
  -> `GPKG'
  -> `SQLite'
  -> `OGR_DODS'
  -> `WAsP'
  -> `PostgreSQL'
  -> `OpenFileGDB'
  -> `XPlane'
  -> `DXF'
  -> `CAD'
  -> `FlatGeobuf'
  -> `Geoconcept'
  -> `GeoRSS'
  -> `GPSTrackMaker'
  -> `VFK'
  -> `PGDUMP'
  -> `OSM'
  -> `GPSBabel'
  -> `SUA'
  -> `OpenAir'
  -> `OGR_PDS'
  -> `WFS'
  -> `OAPIF'
  -> `HTF'
  -> `AeronavFAA'
  -> `EDIGEO'
  -> `SVG'
  -> `CouchDB'
  -> `Cloudant'
  -> `Idrisi'
  -> `ARCGEN'
  -> `SEGUKOOA'
  -> `SEGY'
  -> `XLS'
  -> `ODS'
  -> `XLSX'
  -> `Elasticsearch'
  -> `Carto'
  -> `AmigoCloud'
  -> `SXF'
  -> `Selafin'
  -> `JML'
  -> `PLSCENES'
  -> `CSW'
  -> `VDV'
  -> `GMLAS'
  -> `MVT'
  -> `NGW'
  -> `MapML'
  -> `TIGER'
  -> `AVCBin'
  -> `AVCE00'
  -> `HTTP'
Error inserting into the database. This is most often due to running out of Memory (RAM), or incorrect database credentials... quitting
//c506-041/scratch/05861/tg851601/Miami2SenAT48/minopy
falkamelung commented 2 years ago
json_mbtiles2insarmaps.py ....

uploading json chunks...
Clearing old dataset, if it is there
Creating index on S1_IW1_128_0596_0597_20160605_XXXXXXXX
Inserted chunk_10.json to db
Inserted chunk_13.json to db
Inserted chunk_5.json to db
Inserted chunk_12.json to db
Inserted chunk_8.json to db
Inserted chunk_14.json to db
Inserted chunk_11.json to db
Inserted chunk_6.json to db
Inserted chunk_1.json to db
Inserted chunk_4.json to db
Inserted chunk_7.json to db
Inserted chunk_3.json to db
Inserted chunk_2.json to db
Inserted chunk_9.json to db
Uploading mbtiles...
Traceback (most recent call last):
  File "/work2/05861/tg851601/stampede2/code/rsmas_insar/sources/insarmaps_scripts/json_mbtiles2insarmaps.py", line 209, in <module>
    main()
  File "/work2/05861/tg851601/stampede2/code/rsmas_insar/sources/insarmaps_scripts/json_mbtiles2insarmaps.py", line 182, in main
    dbController.upload_mbtiles(parseArgs.mbtiles_file)
  File "/work2/05861/tg851601/stampede2/code/rsmas_insar/sources/insarmaps_scripts/add_attribute_insarmaps.py", line 258, in upload_mbtiles
    curl.perform()
pycurl.error: (26, '')
stackTom commented 2 years ago

I will have to investigate; I can take a look tomorrow. The first error is probably from running out of RAM on the server. The second error I am not sure about, but I suspect it is a similar issue.
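
(For reference, a quick generic way to check available RAM on the server before an ingest; this is a minimal sketch, not part of insarmaps_scripts.)

# Minimal sketch: report available RAM before starting an ingest.
# Reads /proc/meminfo directly, so it needs no extra packages (Linux only).
def available_ram_gb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                kb = int(line.split()[1])
                return kb / (1024 ** 2)
    raise RuntimeError("MemAvailable not found in /proc/meminfo")

if __name__ == "__main__":
    print(f"Available RAM: {available_ram_gb():.1f} GiB")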

falkamelung commented 2 years ago

Miraculously, it was working again the next day after I removed old data. Maybe at midnight permissions are fixed or temporary files are removed?

The error happened when I tried to ingest a big dataset (lots of pixels). Do you think the problem was caused by a memory limitation or by a disk space limitation? Unfortunately, I did not check whether disk space was at the limit. Now it is fine (66%, see below). I don't know, though, whether the percentage gets smaller when data is removed.

Darren said he can get us both more disk space and more memory. I was thinking of asking for more disk space. Would it be OK to just ask for a new partition, e.g. /data3 with 4 TB? I am unsure whether to ask for more memory (we have 8 GB). Darren looked at the log and did not see that memory was exceeded.

[insaradmin@insarweb ~]$ df 
Filesystem      1K-blocks      Used  Available Use% Mounted on
/dev/sda3        20511312   5790744   13655608  30% /
devtmpfs          3990932         0    3990932   0% /dev
tmpfs             4005564        88    4005476   1% /dev/shm
tmpfs             4005564      9252    3996312   1% /run
tmpfs             4005564         0    4005564   0% /sys/fs/cgroup
/dev/sdc1      1056757172     77848 1002975900   1% /data2
/dev/sdb1      1056752744 654977856  348071688  66% /data
/dev/sda5        20511312   2865176   16581176  15% /var
/dev/sda1          689128    234112     404840  37% /boot
/dev/sda2       110046720  14182124   95864596  13% /home
tmpfs              801116        16     801100   1% /run/user/42
tmpfs              801116         0     801116   0% /run/user/1423

[insaradmin@insarweb data]$ du -sh --total --block-size=M *
25999M  converting
134230M dump.sql
134230M dump.sql.bak
791M    HDFEOS
1M  lost+found
1M  OUT.txt
du: cannot read directory ‘postgresDBData’: Permission denied
1M  postgresDBData
26887M  tileserver
14778M  TILESERVERBACKUP
336911M total
[insaradmin@insarweb data]$ du -sh --total  *
26G converting
132G    dump.sql
132G    dump.sql.bak
791M    HDFEOS
16K lost+found
16K OUT.txt
du: cannot read directory ‘postgresDBData’: Permission denied
4.0K    postgresDBData
27G tileserver
15G TILESERVERBACKUP
330G    total
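
(Side note: df's Use% is derived from the used vs. available block counts, so it should drop once data is actually removed, provided no deleted files are still held open. A minimal Python sketch of the same calculation, assuming it is run on the server, is below.)

import os

# Minimal sketch: reproduce df-style usage for a mount point (path is an example).
def disk_usage_percent(path="/data"):
    st = os.statvfs(path)
    avail = st.f_bavail * st.f_frsize                # bytes available to non-root users
    used = (st.f_blocks - st.f_bfree) * st.f_frsize  # bytes in use
    return used / (used + avail) * 100               # roughly df's Use% formula

if __name__ == "__main__":
    print(f"/data is {disk_usage_percent():.0f}% full")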
falkamelung commented 2 years ago

Hi @stackTom , do you remember how to calculate the RAM requirement for the ingestion? Here is an example of a big file: there are 41 files of ~77 MB, 3.2 GB in total (a rough estimate sketch follows the listing below). (I interrupted the ingestion as I don't want to break it right now.)

hdfeos5_2json_mbtiles.py S1_IW3_048_0081_0082_20190103_20211112_PS.he5 ./JSON
....
....
reading displacement data from file: S1_IW3_048_0081_0082_20190103_20211112_PS.he5 ...
reading mask data from file: S1_IW3_048_0081_0082_20190103_20211112_PS.he5 ...
Masking displacement
columns: 4161
rows: 866

844621 features, 26007688 bytes of geometry, 256 bytes of separate metadata, 25362914 bytes of string pool
Choosing a base zoom of -B0 to keep 4 features in tile 0/0/0.
With gamma, effective base zoom of 14, effective drop rate of -nan
Internal error: 745 shards not a power of 2
time elapsed: 2279.751658897847
Miami2SenAT48/minopy[1036] dush JSON/*
77M JSON/chunk_10.json
77M JSON/chunk_11.json
77M JSON/chunk_12.json
77M JSON/chunk_13.json
77M JSON/chunk_14.json
77M JSON/chunk_15.json
77M JSON/chunk_16.json
77M JSON/chunk_17.json
77M JSON/chunk_18.json
77M JSON/chunk_19.json
74M JSON/chunk_1.json
77M JSON/chunk_20.json
77M JSON/chunk_21.json
77M JSON/chunk_22.json
77M JSON/chunk_23.json
77M JSON/chunk_24.json
77M JSON/chunk_25.json
77M JSON/chunk_26.json
77M JSON/chunk_27.json
77M JSON/chunk_28.json
77M JSON/chunk_29.json
74M JSON/chunk_2.json
77M JSON/chunk_30.json
77M JSON/chunk_31.json
77M JSON/chunk_32.json
77M JSON/chunk_33.json
78M JSON/chunk_34.json
77M JSON/chunk_35.json
77M JSON/chunk_36.json
77M JSON/chunk_37.json
77M JSON/chunk_38.json
77M JSON/chunk_39.json
75M JSON/chunk_3.json
77M JSON/chunk_40.json
77M JSON/chunk_41.json
76M JSON/chunk_42.json
18M JSON/chunk_43.json
76M JSON/chunk_4.json
76M JSON/chunk_5.json
77M JSON/chunk_6.json
77M JSON/chunk_7.json
77M JSON/chunk_8.json
77M JSON/chunk_9.json
12K JSON/metadata.pickle
20K JSON/S1_IW3_048_0081_0082_20190103_20211112_PS.mbtiles
8.0K    JSON/S1_IW3_048_0081_0082_20190103_20211112_PS.mbtiles-journal
3.2G    total

minopy_CGroveMiamiBeach_2019-2021
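
(A back-of-the-envelope sketch for the RAM question above, assuming the ingest parses one JSON chunk at a time; OVERHEAD_FACTOR is a guess for how much larger parsed Python objects are than the JSON on disk, not a measured value.)

import glob
import os

# Rough sketch: estimate peak RAM for ingesting the JSON chunks, assuming one
# chunk is parsed at a time. OVERHEAD_FACTOR is hypothetical; measure on a
# real chunk to refine it.
OVERHEAD_FACTOR = 5

def estimate_peak_ram_mb(json_folder="JSON"):
    sizes = [os.path.getsize(p)
             for p in glob.glob(os.path.join(json_folder, "chunk_*.json"))]
    if not sizes:
        return 0.0
    return max(sizes) / 1024 ** 2 * OVERHEAD_FACTOR

if __name__ == "__main__":
    print(f"Estimated peak RAM per chunk: ~{estimate_peak_ram_mb():.0f} MB")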

falkamelung commented 2 years ago

pycurl error for a 305 MB dataset:

hdfeos5_2json_mbtiles.py $hepsfile ./JSON
reading displacement data from file: S1_IW3_048_0081_0082_20190103_20211112_PS.he5 ...
reading mask data from file: S1_IW3_048_0081_0082_20190103_20211112_PS.he5 ...
Masking displacement
columns: 2250
rows: 781
converted chunk 1
...
62330 features, 1847435 bytes of geometry, 64 bytes of separate metadata, 1785062 bytes of string pool
Choosing a base zoom of -B0 to keep 2 features in tile 0/0/0.
With gamma, effective base zoom of 14, effective drop rate of -nan
  99.9%  14/4539/6978  
time elapsed: 23.849978987127542

//c506-021/scratch/05861/tg851601/Miami2SenAT48/minopy[1021] json_mbtiles2insarmaps.py ...

Uploading json chunks...
Clearing old dataset, if it is there
Creating index on S1_IW3_048_0081_0082_20190103_20211112_PS
Inserted chunk_1.json to db
Inserted chunk_4.json to db
Inserted chunk_3.json to db
Inserted chunk_2.json to db
Uploading mbtiles...
Traceback (most recent call last):
  File "/work2/05861/tg851601/stampede2/code/rsmas_insar/sources/insarmaps_scripts/json_mbtiles2insarmaps.py", line 209, in <module>
    main()
  File "/work2/05861/tg851601/stampede2/code/rsmas_insar/sources/insarmaps_scripts/json_mbtiles2insarmaps.py", line 182, in main
    dbController.upload_mbtiles(parseArgs.mbtiles_file)
  File "/work2/05861/tg851601/stampede2/code/rsmas_insar/sources/insarmaps_scripts/add_attribute_insarmaps.py", line 258, in upload_mbtiles
    curl.perform()
pycurl.error: (26, '')
//c506-021/scratch/05861/tg851601/Miami2SenAT48/minopy[1023] 
//c506-021/scratch/05861/tg851601/Miami2SenAT48/minopy[1023] dush S1_IW3_048_0081_0082_20190103_20211112_PS.he5 
305M    S1_IW3_048_0081_0082_20190103_20211112_PS.he5
305M    total

dush JSON/*
76M JSON/chunk_1.json
76M JSON/chunk_2.json
76M JSON/chunk_3.json
8.8M    JSON/chunk_4.json
12K JSON/metadata.pickle
2.4M    JSON/S1_IW3_048_0081_0082_20190103_20211112_PS.mbtiles
238M    total
falkamelung commented 2 years ago

Here are two example files to ingest (from jetstream):

Run cdop to set the environment.

/data/HDF5EOS/Miami2SenAT48/minopy_CGroveMiamiBeach_2019-2021
/data/HDF5EOS/Miami2SenAT48/minopy_CGrove_2016-2018

json_mbtiles2insarmaps.py .... ....   --json_folder JSON --mbtiles_file JSON/S1_IW3_048_0081_0082_20190103_20211112_PS.mbtiles &
stackTom commented 2 years ago

Please pull; I think it's fixed. I've added better error messages and fixed a small bug which could have led to this crash. Reopen if it's not fixed.
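
(For context, a minimal sketch of the kind of error surfacing that helps with a bare "pycurl.error: (26, '')"; the URL and form field name are placeholders, not the actual insarmaps upload endpoint or code.)

import pycurl

# Minimal sketch: upload an .mbtiles file and report pycurl errors instead of
# a bare "pycurl.error: (26, '')". curl error 26 (CURLE_READ_ERROR) usually
# means the local file could not be read/streamed.
def upload_mbtiles(path, url="https://insarmaps.example/upload"):  # placeholder URL
    curl = pycurl.Curl()
    curl.setopt(pycurl.URL, url)
    curl.setopt(pycurl.HTTPPOST, [("file", (pycurl.FORM_FILE, path))])
    try:
        curl.perform()
        status = curl.getinfo(pycurl.RESPONSE_CODE)
        if status >= 400:
            raise RuntimeError(f"Server rejected upload of {path}: HTTP {status}")
    except pycurl.error as exc:
        code = exc.args[0]
        msg = exc.args[1] if len(exc.args) > 1 else ""
        raise RuntimeError(f"curl error {code}: {msg or curl.errstr()} while uploading {path}") from exc
    finally:
        curl.close()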