ARM-DOE / pyart

The Python-ARM Radar Toolkit. A data model driven interactive toolkit for working with weather radar data.
https://arm-doe.github.io/pyart/

vad_michelson considerably lower than other methods #1488

Open nawendt opened 9 months ago

nawendt commented 9 months ago

Description

Similar to #992, there appears to be a fairly substantial low bias in the wind speeds calculated by vad_michelson. vad_browning and the direct Level 3 VAD data from a NEXRAD radar show more reasonable values and are more similar to one another. #1428 was suggested to have resolved this, but based on the comparisons I have done, that does not appear to be the case.
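
One way to quantify the bias, beyond the plot further down, is to compare the retrieved speeds against the Level 3 VAD speeds on a common height grid. A minimal sketch (mean_speed_bias is a hypothetical helper, not part of Py-ART; it assumes both profiles have already been interpolated to the same heights):

import numpy as np

def mean_speed_bias(u, v, l3_speed_kts):
    # Retrieved u/v components are in m/s; the Level 3 VAD speeds are in knots.
    retrieved_kts = np.sqrt(u**2 + v**2) * 1.94384
    return np.nanmean(retrieved_kts - l3_speed_kts)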

What I Did

Here is some example code to reproduce the problem:

import matplotlib.pyplot as plt
import numpy as np
import pyart

def calculate_vad(radar, vel_field, zlevels, percentile=50, method='browning'):
    """Compute a VAD profile per sweep and combine sweeps with a percentile."""
    if method not in ('michelson', 'browning'):
        raise ValueError("Not a valid method! Use 'browning' or 'michelson'")
    u_allsweeps = []
    v_allsweeps = []
    for idx in range(radar.nsweeps):
        radar_1sweep = radar.extract_sweeps([idx])
        try:
            if method == 'michelson':
                vad = pyart.retrieve.vad_michelson(
                    radar_1sweep, vel_field, z_want=zlevels
                )
            else:
                vad = pyart.retrieve.vad_browning(
                    radar_1sweep, vel_field, z_want=zlevels
                )
            u_allsweeps.append(vad.u_wind)
            v_allsweeps.append(vad.v_wind)
        except ValueError:
            # Skip sweeps where the retrieval fails (e.g. too few valid rays)
            continue
    # Combine U and V over all sweeps at the given percentile, then compute
    # magnitude and meteorological direction
    u_avg = np.nanpercentile(np.array(u_allsweeps), percentile, axis=0)
    v_avg = np.nanpercentile(np.array(v_allsweeps), percentile, axis=0)
    direction = np.rad2deg(np.arctan2(-u_avg, -v_avg)) % 360
    speed = np.sqrt(u_avg**2 + v_avg**2)
    return speed, direction

l3_vad = np.load('keox.npz')
l2_raw = pyart.io.read('KEOX20231121_203035_V06.ar2v')

gatefilter = pyart.filters.GateFilter(l2_raw)
gatefilter.exclude_transition()
gatefilter.exclude_invalid('velocity')
gatefilter.exclude_invalid('reflectivity')
gatefilter.exclude_outside('reflectivity', 0, 80)
cor_vel = pyart.correct.dealias_region_based(l2_raw, gatefilter=gatefilter)
l2_raw.add_field('corr_velocity', cor_vel, True)

# m = pyart.retrieve.vad_michelson(l2_raw, vel_field='corr_velocity', z_want=l3_vad['height'])
# b = pyart.retrieve.vad_browning(l2_raw, velocity='corr_velocity', z_want=l3_vad['height'])

# mspd = np.sqrt(m.u_wind**2 + m.v_wind**2)
# bspd = np.sqrt(b.u_wind**2 + b.v_wind**2)

mspd, _ = calculate_vad(l2_raw, 'corr_velocity', l3_vad['height'], 90, 'michelson')
bspd, _ = calculate_vad(l2_raw, 'corr_velocity', l3_vad['height'], 90, 'browning')

fig, ax = plt.subplots(1, 1)
ax.plot(l3_vad['speed'], l3_vad['height'], color='black', label='L3')
ax.plot(mspd * 1.94384, l3_vad['height'], color='firebrick', label='L2 (Michelson)')
ax.plot(bspd * 1.94384, l3_vad['height'], color='forestgreen', label='L2 (Browning)')
ax.set_xlim(0, 80)
ax.set_ylim(0, 8000)
ax.set_xticks(np.arange(0, 90, 10))
ax.set_yticks(np.arange(0, 9000, 1000))
ax.set_xlabel('Speed (kts)')
ax.set_ylabel('Height (m)')
ax.legend(loc=4)
ax.set_title('KEOX 2030Z 11/21/23')
plt.show()

This produces the comparison shown in the attached plot (keox_vad_comp).

Test data are attached here: test_data.zip

I also tried simply calling vad_michelson and vad_browning directly, with similar results. There were some minor differences in the vad_browning output when not using calculate_vad, but nothing overly concerning. As a general question, however, is the recommended approach to use a wrapper like calculate_vad, or to call the Py-ART VAD functions directly with a full volume? The direct calls I tried are sketched below.
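
For reference, the direct full-volume calls are essentially the commented-out lines above, with the velocity field passed positionally (the keyword name differs between the two functions):

# Direct full-volume retrievals on the dealiased field (no per-sweep loop).
m = pyart.retrieve.vad_michelson(l2_raw, 'corr_velocity', z_want=l3_vad['height'])
b = pyart.retrieve.vad_browning(l2_raw, 'corr_velocity', z_want=l3_vad['height'])

mspd_direct = np.sqrt(m.u_wind**2 + m.v_wind**2)
bspd_direct = np.sqrt(b.u_wind**2 + b.v_wind**2)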

mgrover1 commented 9 months ago

@nawendt which version of Py-ART are you using?

nawendt commented 9 months ago

1.16.1.post7. It was the latest available from the repo at the time.

mgrover1 commented 9 months ago

Thanks @nawendt - it looks like NaNs are not being handled properly. I am looking into the issue and will let you know when we have a fix that is ready to test with your dataset. Thank you for providing sample code to reproduce the issue!
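
As a standalone illustration of the kind of effect this can cause (a toy example, not Py-ART internals): if missing levels in a profile are treated as zeros instead of being ignored, the averaged speed comes out biased low.

import numpy as np

# A toy speed profile (kts) with missing levels marked as NaN.
speeds = np.array([20.0, 22.0, np.nan, 25.0, np.nan, 30.0])

print(np.mean(speeds))                 # nan: NaNs propagate
print(np.mean(np.nan_to_num(speeds)))  # ~16.2: zeros drag the average down
print(np.nanmean(speeds))              # 24.25: missing levels are ignored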