GeoscienceAustralia / eqrm

Automatically exported from code.google.com/p/eqrm

execute_all_demos.py is failing #65

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. From /eqrm_core/demo/plot, execute execute_all_demos.py

What is the expected output?
Lots of pretty graphs

What do you see instead?
This error:
P0: do site 638 of 6305
P0: do site 639 of 6305
/nas/gemd/georisk_models/earthquake/sandpits/duncan/EQRM/trunk/eqrm_core/eqrm_code/capacity_spectrum_functions.py:132: RuntimeWarning: overflow encountered in exp
  Harea1=cc*x1+aa/bb*(1-exp(-bb*x1))
/nas/gemd/georisk_models/earthquake/sandpits/duncan/EQRM/trunk/eqrm_core/eqrm_code/capacity_spectrum_functions.py:56: RuntimeWarning: divide by zero encountered in divide
  BH = kappa*Harea/(2*pi*displacement*acceleration)

Traceback (most recent call last):
  File "execute_all_demos.py", line 29, in <module>
    create_demo_data()
  File "execute_all_demos.py", line 11, in create_demo_data
    run_scenarios(eqrm_filesystem.demo_plot_scenarios)
  File "/nas/gemd/georisk_models/earthquake/sandpits/duncan/EQRM/trunk/eqrm_core/eqrm_code/create_demo_plot_data.py", line 67, in run_scenarios
    analysis.main(os.path.join(plot_file))
  File "/nas/gemd/georisk_models/earthquake/sandpits/duncan/EQRM/trunk/eqrm_core/eqrm_code/analysis.py", line 436, in main
    damage) = sites.calc_total_loss(SA, eqrm_flags, overloaded_MW)
  File "/nas/gemd/georisk_models/earthquake/sandpits/duncan/EQRM/trunk/eqrm_core/eqrm_code/structures.py", line 265, in calc_total_loss
    loss_aus_contents=eqrm_flags.loss_aus_contents)
  File "/nas/gemd/georisk_models/earthquake/sandpits/duncan/EQRM/trunk/eqrm_core/eqrm_code/damage_model.py", line 183, in aggregated_building_loss
    self.building_loss(ci=ci, loss_aus_contents=loss_aus_contents)
  File "/nas/gemd/georisk_models/earthquake/sandpits/duncan/EQRM/trunk/eqrm_core/eqrm_code/damage_model.py", line 149, in building_loss
    damage_states = self.get_building_states()
  File "/nas/gemd/georisk_models/earthquake/sandpits/duncan/EQRM/trunk/eqrm_core/eqrm_code/damage_model.py", line 137, in get_building_states
    beta_nsd_a, SA)
  File "/nas/gemd/georisk_models/earthquake/sandpits/duncan/EQRM/trunk/eqrm_core/eqrm_code/damage_model.py", line 263, in state_probability
    p = cumulative_state_probability(threshold, beta, value)
  File "/nas/gemd/georisk_models/earthquake/sandpits/duncan/EQRM/trunk/eqrm_core/eqrm_code/damage_model.py", line 321, in cumulative_state_probability
    return norm.cdf(temp)
  File "/usr/local/python-2.7.2/lib/python2.7/site-packages/scipy/stats/distributions.py", line 1198, in cdf
    place(output,cond,self._cdf(*goodargs))
  File "/usr/local/python-2.7.2/lib/python2.7/site-packages/scipy/stats/distributions.py", line 1908, in _cdf
    return _norm_cdf(x)
  File "/usr/local/python-2.7.2/lib/python2.7/site-packages/scipy/stats/distributions.py", line 1895, in _norm_cdf
    return special.ndtr(x)
TypeError: ufunc 'ndtr' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule 'safe'
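The overflow warning above can be reproduced in isolation. A minimal NumPy-only sketch, using the bb and x1 values printed in a later comment (aa and cc are hypothetical placeholder coefficients, not the real capacity-curve values):

```python
import numpy as np

# Sketch of the overflow in Harea1 = cc*x1 + aa/bb*(1 - exp(-bb*x1))
# from capacity_spectrum_functions.py:132. bb and x1 are taken from the
# debug prints below; aa and cc are placeholders.
bb = np.array([-3.80778872])
x1 = np.array([473.03675947])
aa, cc = 1.0, 1.0

with np.errstate(over='ignore'):
    # -bb*x1 is about 1801.2, far beyond the float64 exp limit (~709.78),
    # so exp() returns inf and the inf propagates into Harea1
    Harea1 = cc * x1 + aa / bb * (1 - np.exp(-bb * x1))

print(Harea1)  # [inf]
```

Once inf (and, downstream, nan) enters the SA/loss arrays, later calls such as norm.cdf can fail in the way shown in the traceback.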

Original issue reported on code.google.com by duncan.g...@gmail.com on 8 Aug 2012 at 6:16

GoogleCodeExporter commented 9 years ago
At a guess the root cause is the new building variability.
I reverted to revision 1175 and it didn't fall over at this point in the code.
[I stopped it before it completed.]

Original comment by duncan.g...@gmail.com on 8 Aug 2012 at 6:42

GoogleCodeExporter commented 9 years ago
Here's the same run, on the head revision, with some print statements added.
It shows how the calculation -bb*x1 gets too large:

bb [[[ 0.6816922 ]
  [ 0.57944567]
  [-0.99316559]
  ..., 
  [-2.35933135]
  [-0.67337015]
  [-0.42201514]]]
x1 [[[  5.08136008]
  [ 98.88233282]
  [-20.43479414]
  ..., 
  [-20.43479414]
  [-20.43479414]
  [-20.43479414]]]
-bb*x1 [[[ -3.46392352]
  [-57.29693976]
  [-20.29513437]
  ..., 
  [-48.21245041]
  [-13.76018036]
  [ -8.62379247]]]
P0: do site 639 of 6305
bb [[[-3.80778872]
  [-3.80778872]
  [-3.80778872]
  ..., 
  [-3.80778872]
  [-3.80778872]
  [-3.80778872]]]
x1 [[[   0.69660483]
  [ 473.03675947]
  [   0.69660483]
  ..., 
  [ 140.10562641]
  [   0.69660483]
  [   0.69660483]]]
-bb*x1 [[[    2.65252402]
  [ 1801.22403731]
  [    2.65252402]
  ..., 
  [  533.492624  ]
  [    2.65252402]
  [    2.65252402]]]
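One common numerical guard for this pattern (not the fix ultimately adopted, which was a rollback in r1220) is to clip the exponent before calling exp, so extreme -bb*x1 values like the 1801.22 printed above saturate instead of overflowing. A hedged sketch, reusing the printed bb and x1 values:

```python
import numpy as np

# Clip the exponent below the float64 exp overflow point (~709.78) so
# exp() saturates at a large finite number instead of returning inf.
bb = np.array([-3.80778872, 0.6816922])
x1 = np.array([473.03675947, 5.08136008])

exponent = np.clip(-bb * x1, a_min=None, a_max=700.0)
safe_exp = np.exp(exponent)  # every element is finite

print(np.isfinite(safe_exp))  # [ True  True]
```

Clipping keeps the run alive but silently distorts results for the clipped sites, so it only masks the underlying problem of implausible parameter draws.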

Original comment by duncan.g...@gmail.com on 8 Aug 2012 at 6:58

GoogleCodeExporter commented 9 years ago
I've changed csm_use_variability to False for this demo.
It still crashes.

Original comment by duncan.g...@gmail.com on 8 Aug 2012 at 7:01

GoogleCodeExporter commented 9 years ago
Also, the demo 
python setdata_ScenRisk.py 
breaks if it isn't sub-sampling the sites.

Original comment by duncan.g...@gmail.com on 20 Aug 2012 at 4:17

GoogleCodeExporter commented 9 years ago
1. csm_variability_method should be set to None instead of False when no sampling is wanted.

2. A more appropriate way to avoid this error is to draw samples from a lognormal instead of a normal distribution (in capacity_spectrum_functions.py). With a normal there is always a chance of drawing a negative value for any of the parameters. For reference, the C value was -0.00175307 when the run crashed; it should always be positive.
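A minimal sketch of the suggestion in point 2. The parameter name and values here are hypothetical, not taken from capacity_spectrum_functions.py; the point is only that a lognormal draw can never go negative, while a normal draw can:

```python
import numpy as np

# Hypothetical capacity-curve parameter: median 0.2, lognormal sigma 0.3.
rng = np.random.default_rng(0)
C_median, sigma_lnC = 0.2, 0.3

# Normal sampling (the problematic approach): can produce C <= 0,
# like the C = -0.00175307 that crashed the run.
normal_samples = rng.normal(C_median, 0.1, size=10000)

# Lognormal sampling: exp() of a normal draw, so strictly positive.
lognormal_samples = C_median * np.exp(rng.normal(0.0, sigma_lnC, size=10000))

print(normal_samples.min() < 0)        # normal draws can dip below zero
print((lognormal_samples > 0).all())   # True: lognormal is always positive
```

The lognormal also preserves the median (0.2 here), so the central estimate is unchanged while negative draws become impossible.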

Original comment by dyna...@gmail.com on 21 Aug 2012 at 1:14

GoogleCodeExporter commented 9 years ago
Fixed by rolling back the capacity curve with variable parameters in revision 1220.

Original comment by duncan.g...@gmail.com on 21 Aug 2012 at 7:44