GeoscienceAustralia / hazimp

Hazard impact assessment tool
https://hazimp.readthedocs.io/
GNU Affero General Public License v3.0

“Saving aggregated data” step fails for zero structures #16

Closed mahmudulhasanGA closed 3 years ago

mahmudulhasanGA commented 4 years ago

When the number of impacted structures is zero, HazImp fails in the “Saving aggregated data” step with an exception thrown by the GeoPandas package. This situation can occur when the cyclone track/extent lies entirely outside the mainland.

Expected behaviour: produce output with zero structures, or produce an empty file, without breaking.

The following details from the configuration file are important for reproducing the issue:

The file "zero-size.csv" should contain only the header line (no data rows), as follows:

```
"lid","LATITUDE","LONGITUDE","mb_code","mb_cat","SA1_CODE","sa2_code","sa2_name","sa3_code","sa3_name","sa4_code","sa4_name","ucl_code","ucl_name","gcc_code","gcc_name","lga_code","lga_name","suburb","postcode","year_built","construction_type","roof_type","wall_type","REPLACEMENT_VALUE","contents_value","wind_vulnerabilty_model_number","WIND_VULNERABILITY_FUNCTION_ID","as4055_class","wind_region_classifcation","wind_vulnerability_set"
```

The following is the output containing the error message:

```
INFO:hazimp.templates:Using wind_nc template
INFO:hazimp.pipeline:Executing LoadCsvExposure
INFO:hazimp.pipeline:Executing LoadRaster
INFO:hazimp.pipeline:Executing LoadXmlVulnerability
INFO:hazimp.pipeline:Executing SimpleLinker
INFO:hazimp.pipeline:Executing SelectVulnFunction
INFO:hazimp.pipeline:Executing LookUp
INFO:hazimp.pipeline:Executing MultipleDimensionMult
INFO:hazimp.pipeline:Executing SaveExposure
INFO:hazimp.pipeline:Executing Aggregate
INFO:hazimp.context:Saving aggregated data
Traceback (most recent call last):
  File "/opt/tcrm-processor/hazimp/hazimp/main.py", line 101, in <module>
    cli()
  File "/opt/tcrm-processor/hazimp/hazimp/main.py", line 98, in cli
    start(config_file=CMD_LINE_ARGS.config_file[0])
  File "/opt/tcrm-processor/hazimp/hazimp/main.py", line 39, in wrap
    res = f(*args, **kwargs)
  File "/opt/tcrm-processor/hazimp/hazimp/main.py", line 88, in start
    the_pipeline.run(cont_in)
  File "/opt/tcrm-processor/hazimp/hazimp/pipeline.py", line 72, in run
    job(context)
  File "/opt/tcrm-processor/hazimp/hazimp/workflow.py", line 54, in __call__
    self.job_instance(*args, **job_kwargs)
  File "/opt/tcrm-processor/hazimp/hazimp/jobs/jobs.py", line 774, in __call__
    use_parallel=use_parallel)
  File "/opt/tcrm-processor/hazimp/hazimp/context.py", line 304, in save_aggregation
    boundarycode, filename)
  File "/opt/tcrm-processor/hazimp/hazimp/misc.py", line 351, in choropleth
    result.to_file(filename, driver=driver)
  File "/opt/conda/envs/hazimp/lib/python3.6/site-packages/geopandas/geodataframe.py", line 724, in to_file
    _to_file(self, filename, driver, schema, index, **kwargs)
  File "/opt/conda/envs/hazimp/lib/python3.6/site-packages/geopandas/io/file.py", line 239, in _to_file
    schema = infer_schema(df)
  File "/opt/conda/envs/hazimp/lib/python3.6/site-packages/geopandas/io/file.py", line 295, in infer_schema
    raise ValueError("Cannot write empty DataFrame to file.")
ValueError: Cannot write empty DataFrame to file.
```
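One possible workaround is to guard the `to_file` call so an empty frame is skipped instead of crashing the pipeline. The sketch below is illustrative only: `safe_to_file` is a hypothetical helper, not part of HazImp, and it assumes the caller is happy with a log message and no output file for the zero-structure case.

```python
import logging

LOGGER = logging.getLogger(__name__)


def safe_to_file(gdf, filename, driver="ESRI Shapefile"):
    """Write a (Geo)DataFrame to file, skipping empty frames.

    GeoPandas raises ValueError("Cannot write empty DataFrame to file.")
    when asked to write a frame with no rows, so this hypothetical guard
    logs a message and returns False instead of letting the pipeline die.
    Returns True when the file was actually written.
    """
    if len(gdf) == 0:
        LOGGER.warning("No rows to write; skipping %s", filename)
        return False
    gdf.to_file(filename, driver=driver)
    return True
```

A caller such as `choropleth` could then branch on the return value, or an empty placeholder file could be written instead, depending on which of the two expected behaviours above is preferred.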

mahmudulhasanGA commented 3 years ago

This issue is fixed now.