lcarlaw opened this issue 3 months ago
Commit https://github.com/tjturnage/cloud-radar-server/commit/a18887aec47f609b46140a48c71c9c0f2afcf31f turned off the write/read cfradial steps in favor of using the original `pyart.io.read` object, given the substantial speed increase.
This is not an active bug or enhancement area; this issue simply maintains the information for future reference.
Writing and subsequently reading a cfradial version of the Py-ART radar object takes a very long time. In benchmark testing, this averaged around 35-40% of the total hodograph processing time.
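For anyone repeating the benchmark, this sort of measurement can be reproduced with a small timing helper. This is a minimal sketch; the commented Py-ART calls are illustrative only, and `radar_file` and `tmp.nc` are placeholders:

```python
import time

def timed(func, *args, **kwargs):
    """Call func with the given arguments and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    return result, time.perf_counter() - start

# Illustrative usage against Py-ART (requires a radar file on disk):
#   radar, t_read = timed(pyart.io.read, radar_file)
#   _, t_write = timed(pyart.io.write_cfradial, "tmp.nc", radar)
#   ncrad, t_cfread = timed(pyart.io.read_cfradial, "tmp.nc")
#   print(f"cfradial round trip: {t_write + t_cfread:.1f} s")
```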
The `pyart.retrieve.vad_browning` function can evidently be run using sweeps extracted from the raw/native Py-ART radar object (initial data format is MSG31 binary), simply by changing `radar_1sweep = ncrad.extract_sweeps([idx])` to `radar_1sweep = radar.extract_sweeps([idx])` and removing the `write_cfradial` and `read_cfradial` calls. A try/except block must be added, though, to handle sweeps in which insufficient data is available, which would otherwise result in a `ValueError`.
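The resulting per-sweep loop can be sketched as below. This is a hypothetical outline, not the repository's actual code: `compute_vad` stands in for the call to `pyart.retrieve.vad_browning`, and the radar argument is assumed to expose Py-ART's `nsweeps` attribute and `extract_sweeps` method:

```python
def vad_by_sweep(radar, compute_vad):
    """Run a VAD computation on each sweep of the native radar object,
    skipping the cfradial write/read round trip entirely. Sweeps with
    insufficient data raise ValueError and are simply skipped."""
    profiles = {}
    for idx in range(radar.nsweeps):
        radar_1sweep = radar.extract_sweeps([idx])
        try:
            profiles[idx] = compute_vad(radar_1sweep)
        except ValueError:
            continue  # not enough valid data in this sweep
    return profiles
```

In practice, `compute_vad` would wrap the `pyart.retrieve.vad_browning` call for the velocity field of interest.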
One minor item of note: somewhat inexplicably, there are subtle differences in the data when comparing what's in the original radar object with what's read back in by `pyart.io.read_cfradial`. These differences are clearly introduced by either `write_cfradial` or `read_cfradial`. Regardless, the differences are slight (see images below), and probably not enough to justify the significant time increase required to utilize cfradial data.

Example of differences caused entirely by `pyart.io.write_cfradial` and/or `pyart.io.read_cfradial`:
:Hodograph using cfradial data:
Hodograph using original pyart.io.read object:
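For future reference, the round-trip differences could also be quantified numerically rather than compared visually. A hedged sketch, assuming Py-ART-style radar objects whose `fields` attribute maps field names to dicts containing a `'data'` masked array (the helper name here is hypothetical, not Py-ART API):

```python
import numpy as np

def field_max_diff(radar_a, radar_b, field):
    """Maximum absolute difference in a named field between two radar
    objects, e.g. the original pyart.io.read object versus the object
    returned by pyart.io.read_cfradial. Masked gates become NaN and
    are excluded from the maximum."""
    a = np.ma.filled(radar_a.fields[field]["data"].astype(float), np.nan)
    b = np.ma.filled(radar_b.fields[field]["data"].astype(float), np.nan)
    return float(np.nanmax(np.abs(a - b)))
```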