ADF run type
Model vs. Model
What happened?
When doing an ADF run using the 0.25-degree ERA5 observations, I found that the vector plots were keeping the same vector-arrow-to-grid-cell density and the same relative vector size, which results in significantly more and smaller vector arrows in the 0.25-degree plots than in, say, a 1-degree plot. As an example, compare U at 200 hPa against ERA5 here (which is 0.25-degree resolution):
With the same variable compared against another CAM run at 1-degree resolution:
Thus the vector-plotting script needs a way to adjust vector density and size based on the input data dimensions.
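One possible approach, sketched below: pick a target number of arrows per axis and derive the subsampling stride from the input grid shape, so a 0.25-degree grid gets a proportionally larger stride than a 1-degree grid. This is a minimal illustration, not the ADF's actual plotting code; the function name `quiver_stride` and the target of 25 arrows per axis are assumptions for the example.

```python
import numpy as np

def quiver_stride(nlat, nlon, target_arrows=25):
    """Return (row_stride, col_stride) so a quiver plot draws roughly
    `target_arrows` arrows along each axis, regardless of grid resolution."""
    return (max(1, round(nlat / target_arrows)),
            max(1, round(nlon / target_arrows)))

# Roughly a 1-degree CAM grid (192 x 288) vs the 0.25-degree ERA5 grid (721 x 1440):
print(quiver_stride(192, 288))    # small stride on the coarse grid
print(quiver_stride(721, 1440))   # much larger stride on the fine grid
```

The strides would then be applied when slicing the wind components, e.g. `ax.quiver(lon[::sx], lat[::sy], u[::sy, ::sx], v[::sy, ::sx])`, keeping arrow count (and hence arrow size) comparable across resolutions.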
ADF Hash you are using
b62cf68
What machine were you running the ADF on?
CISL machine
What python environment were you using?
NPL (CISL machines only), ADF-provided Conda env
Extra info
No response