Closed · JueLinYe closed this issue 2 years ago
It looks like you are using data with 1-minute sampling. Have you tried averaging it down to hourly? If the problem is memory usage, I would expect that to take care of it. (Sorry for the slow response to your question.)
That would do the job. Thank you!
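The hourly averaging suggested above can be sketched with pandas `resample`. Here `df_year` and the `'data'` column mirror the question below; the index values and series contents are illustrative stand-ins, not the poster's actual data:

```python
import numpy as np
import pandas as pd

# Stand-in for the 1-minute df_year from the question: a DatetimeIndex
# plus a 'data' column (values here are arbitrary).
idx = pd.date_range("2015-09-29 14:55:00", periods=180, freq="1min")
df_year = pd.DataFrame({"data": np.arange(180, dtype=float)}, index=idx)

# Average the 1-minute samples down to hourly bins; this shrinks the
# input handed to utide.solve by roughly a factor of 60.
hourly = df_year["data"].resample("1h").mean()
```

The resulting `hourly` series (and, e.g., `hourly.index` converted to the time convention `time1` uses) can then be passed to `utide.solve` in place of the minute-resolution arrays.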
Dear all,
Greetings! Thank you so much for your hard work maintaining this forum. I would appreciate any guidance on the following issue. My data and code look like this:
time1= [16707.62164352 16707.62233796 16707.62302083 ... 16800.99834491 16800.99903935 16800.9997338 ]
df_year =
                       data  quality_old  quality  RES
date
2015-09-29 14:55:10   689.0          1.0      1.0
2015-09-29 14:56:10   717.0          1.0      1.0
2015-09-29 14:57:09   746.0          1.0      1.0
2015-09-29 14:58:10   775.0          1.0      1.0
2015-09-29 14:59:10   803.0          1.0      1.0
...                     ...          ...      ...
2015-12-31 23:55:37  7550.0          1.0      1.0
2015-12-31 23:56:37  7538.0          1.0      1.0
2015-12-31 23:57:37  7511.0          1.0      1.0
2015-12-31 23:58:37  7523.0          1.0      1.0
2015-12-31 23:59:37  7519.0          1.0      1.0
code:

try:
    coef = utide.solve(
        time1,
        df_year['data'].values,
        lat=lat1,
        method='ols',
        phase='Greenwich',
        conf_int='linear',
        trend=False,   # booleans, not the strings 'False'/'True'
        constit=constit,
        nodal=True,
        Rayleigh_min=1,
        white=False,
        verbose=True,
    )
except Exception:
    pass
utide.solve and utide.reconstruct do not converge for every df_year, so I expected the try/except to handle those failures. However, for some inputs they consume so much memory that the process is killed outright, and the try/except never gets a chance to catch anything.
Is there any way to make utide.solve and utide.reconstruct stop (or fail cleanly) when this happens?
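One general way to make a runaway computation fail catchably (rather than having the OS kill the process) is to cap the process's address space, so oversized allocations raise a `MemoryError` that an ordinary `except` can handle. This is a sketch, not a utide feature: `run_with_memory_cap` is a hypothetical helper, the `resource` module is POSIX-only (Linux/macOS), and the byte limits below are arbitrary examples:

```python
import resource

def run_with_memory_cap(func, max_bytes):
    """Run func() under an address-space cap (POSIX only).

    If func() tries to allocate past max_bytes, the allocation fails and
    Python raises MemoryError, which we turn into a None return instead
    of letting the process be killed. The original limit is restored
    afterwards.
    """
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))
    try:
        return func()
    except MemoryError:
        return None
    finally:
        resource.setrlimit(resource.RLIMIT_AS, (soft, hard))

# Hypothetical usage with the snippet above (4 GiB cap, arbitrary):
# coef = run_with_memory_cap(
#     lambda: utide.solve(time1, df_year['data'].values, lat=lat1, ...),
#     4 * 1024**3,
# )
# if coef is None:
#     # this df_year blew the memory budget; skip it and move on
#     pass
```

Combined with downsampling to hourly means, this lets a batch loop over many `df_year` frames skip the problematic ones instead of crashing.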
Best regards,
Jue