Cuiyingzhe opened 6 months ago
I think this is due to the patch embedding and downsampling in the transformer model; the blocky artifacts line up with the transformer's tokens. The same problem shows up in many transformer-based models.
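To make that concrete, here is a minimal, hypothetical sketch (not this repository's actual architecture) of a ViT-style patch embedding with patch size `p`: the encoder produces one token per `p x p` patch, and a patch-wise decoder maps each token back to a `p x p` block, so any token-level error appears as a patch-aligned tile in the output.

```python
# Minimal sketch only; patch size, channel count, and grid size are made up.
import torch
import torch.nn as nn

p = 8                                               # hypothetical patch size
embed = nn.Conv2d(1, 96, kernel_size=p, stride=p)   # patch embedding: one token per p x p patch
decode = nn.Linear(96, p * p)                       # patch-wise decoder: token -> p x p block

x = torch.randn(1, 1, 64, 128)                      # toy (lat, lon) field
tokens = embed(x)                                   # (1, 96, 64/p, 128/p)
B, C, Hp, Wp = tokens.shape
out = decode(tokens.permute(0, 2, 3, 1))            # (1, Hp, Wp, p*p)
out = out.view(B, Hp, Wp, p, p).permute(0, 1, 3, 2, 4).reshape(B, Hp * p, Wp * p)

# Each p x p block of `out` comes from a single token, so any per-token
# error shows up as a blocky, patch-aligned structure in the output field.
print(out.shape)                                    # torch.Size([1, 64, 128])
```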
I really appreciate your efforts in sharing the pre-trained models and inference code.
My inference procedure directly follows the README.md, except that the `--lead_day` argument is set to 1. For clarity, the steps are:

```bash
conda activate pycdo
python inference.py --lead_day 1 --save_path output_data
```
The model was run on a single NVIDIA H800 GPU with CUDA 12.0.
I took a first look at the first layer of the ocean temperature (thetao) forecast stored in `20190101_thetao.nc`. The large-scale pattern seems reasonable (not shown), but some blocking artifacts emerge, particularly at high latitudes (see figure). Could you please help explain these artifacts? Thanks in advance.
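For reference, this is roughly how I inspect the field. It is only a sketch: the output path, variable name, and dimension order are assumptions based on the README defaults and may differ from the actual file layout.

```python
import xarray as xr
import matplotlib.pyplot as plt

# Path, variable name, and dimension order are assumptions; adjust to match
# the actual file (e.g. if the vertical axis is named "lev" or "depth").
ds = xr.open_dataset("output_data/20190101_thetao.nc")
da = ds["thetao"].squeeze()          # drop any singleton time dimension
surface = da.isel({da.dims[0]: 0})   # first vertical level (layer 0)

surface.plot(robust=True)            # robust color limits make the tiles easier to see
plt.title("thetao, layer 0, lead day 1")
plt.savefig("thetao_layer0.png", dpi=150)
```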