Closed jdha closed 2 years ago
ok.... I've now located the issue on bitbucket:
https://bitbucket.org/jdha/nrct/issues/42/slow-runtime
3 years later - I'm solving the same issues all over again....
@jdha This doesn't affect the tides work. How do I invoke a (benchmark?) test that uses this code? (namelist_remote.bdy doesn't run "out of the box", so I guess you use something else)
I have the benchmark data under /work/jdha/PyNEMO/inputs/benchmark
The remote data route hasn't worked since the JASMIN TDS was turned off
Got it. Just flick switches in namelist_local.bdy, e.g. ln_tra = .true.
There is something odd in the j_run and i_run ndarrays: they are printed as lists of integers with commas in them (see after 77):
Perhaps specifying the indices with the min and max would cut out nonsense like that...?
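For reference, the comma distinction usually signals a plain Python list (or an IDE "pretty" view) rather than an ndarray: NumPy prints arrays space-separated without commas, while `tolist()` output has them. A minimal check:

```python
import numpy as np

run = np.arange(60, 66)

# An ndarray prints space-separated, with no commas:
print(run)            # [60 61 62 63 64 65]

# Converting to a plain Python list adds commas, which is also what
# some IDE variable viewers do when displaying an array:
print(run.tolist())   # [60, 61, 62, 63, 64, 65]
```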
There is another possible issue. My set-up throws warnings:
/Users/jeff/GitHub/PyNEMO_NOC-MSM/pynemo/nemo_bdy_extr_tm3.py:721: RuntimeWarning: invalid value encountered in true_divide
dst_bdy = (np.nansum(sc_bdy[vn][:,:,:] * dist_wei, 2) /
/Users/jeff/GitHub/PyNEMO_NOC-MSM/pynemo/nemo_bdy_extr_tm3.py:752: RuntimeWarning: invalid value encountered in true_divide
dst_bdy = (np.nansum(dst_bdy.flatten('F')[id_121] *
But then it goes on to complete anyway:
Execution Time: 21.273840188980103
so this issue may go unnoticed but may be significant.
I had a poke and basically (in line 721) it is attempting a divide by zero, with dist_fac elements going to zero. Though I am not familiar enough with the code to know whether they should be. It is also not clear how this would interact with the i_run / j_run indexing issue. Hmmm
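A minimal sketch of what the warning is flagging (the array names echo the snippet above, but the values are invented): when every weight for a target point is zero, the normalisation becomes 0/0, and NumPy emits the invalid-value RuntimeWarning while returning nan.

```python
import numpy as np

# Toy data: the second target point has no valid source neighbours,
# so its weight row sums to zero (illustrative stand-ins for the
# sc_bdy / dist_wei arrays at line 721)
sc_bdy   = np.array([[1.0, 2.0], [3.0, 4.0]])
dist_wei = np.array([[0.5, 0.5], [0.0, 0.0]])   # second row all zero

with np.errstate(invalid="ignore"):             # silence the 0/0 warning
    dst_bdy = np.nansum(sc_bdy * dist_wei, 1) / np.nansum(dist_wei, 1)

print(dst_bdy)   # second entry is nan: 0/0 from the all-zero weights
```

Whether the nan entries are later masked out (in which case the warning is harmless) or propagate into the output is exactly the question raised above.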
good spot - I'll have a quick look
there is always a ticket (#76) open for this
I can't seem to reproduce this, maybe it's just formatting by pycharm...
[ 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77
78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113
114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131
132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149
150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167
168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185
186 187 188 189 190 191 192 193 194 195 196 197 198]
Addressing issue #89
Updated the indexing to use min/max
Not too sure what impact this will have - but it speeds up @vleguenn's processing. However, I can't see what the difference between that and the benchmark setup is (the benchmark domain is twice the size)
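A sketch of the min/max change (hypothetical array names, assuming the index runs are contiguous as in the printout above): replacing a fancy-index gather with a bounding slice returns a view instead of a copy, which is where the speed-up would come from.

```python
import numpy as np

field = np.random.rand(400, 400)   # stand-in for a source-grid field
i_run = np.arange(60, 199)         # contiguous index runs, as printed above
j_run = np.arange(60, 199)

# Before (illustrative): outer-product fancy indexing gathers a full copy
sub_fancy = field[np.ix_(j_run, i_run)]

# After: min/max bounds give an ordinary 2-D slice (a view, no gather)
sub_slice = field[j_run.min():j_run.max() + 1, i_run.min():i_run.max() + 1]

print(np.array_equal(sub_fancy, sub_slice))   # identical for contiguous runs
```

Note the slice is only equivalent when the runs really are contiguous; if the index arrays ever had gaps, min/max bounds would pull in extra rows and columns.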
Interestingly it looks like I put this code change in the old bitbucket code, see:
https://bitbucket.org/jdha/nrct/branch/ORCA0083#chg-Python/pynemo/nemo_bdy_extr_tm3.py
but haven't found any comments as to why!