Hello!
As I understand it, the operational version of GraphCast is pretrained from scratch on ERA5 data with only 13 pressure levels, and then fine-tuned on IFS-HRES data. Other than that, is the training protocol the same as the one described in the paper? In particular, are pre-training and/or fine-tuning also done autoregressively over 12 time steps (3 days)?
Thanks a lot and kind regards,
Tobias