FACT: Autoencoder and Attention Conv-LSTM-Based Collaborative Framework for Cloud Cover Prediction
Pau, Giovanni
2024-01-01
Abstract
Cloud cover forecasting is an essential research area that supports applications in many fields, such as weather forecasting, agriculture, aviation, and climate modeling. However, the complexity of cloud dynamics degrades the prediction accuracy of existing approaches. Therefore, we propose FACT, a novel framework that uses an attention-based convolutional long short-term memory (ConvLSTM) architecture to predict the next frame of cloud cover from a time series of satellite images over a country. Furthermore, an autoencoder is employed to improve prediction performance by encoding the frames; this encoding reduces the computational complexity of the prediction model while maintaining high accuracy. Next, we apply post-processing to the predicted frames by thresholding pixel intensity values to produce sharper, clearer cloud images. The proposed model is evaluated and analyzed using several performance assessment metrics, including Mean Squared Error (MSE), Structural Similarity Index (SSIM), and Peak Signal-to-Noise Ratio (PSNR). A minimum MSE loss of 0.30 (30%) is achieved, showing that the proposed model outperforms the existing literature in the domain of cloud cover forecasting.
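The thresholding post-processing and two of the reported metrics can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the threshold value `tau` and the function names are assumptions, since the abstract does not specify them, and the SSIM computation (which typically requires a windowed comparison) is omitted for brevity.

```python
import math

def threshold_frame(frame, tau=0.5):
    """Binarize pixel intensities: mark pixels at or above tau as cloud (1.0),
    zero out the rest. tau=0.5 is a hypothetical value for illustration."""
    return [[1.0 if p >= tau else 0.0 for p in row] for row in frame]

def mse(a, b):
    """Mean squared error between two equally sized frames (nested lists)."""
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / n

def psnr(a, b, max_i=1.0):
    """Peak signal-to-noise ratio in dB for intensities in [0, max_i]."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10 * math.log10(max_i ** 2 / err)

# Toy 2x2 predicted frame vs. ground truth: thresholding sharpens the
# soft prediction into a binary cloud mask and reduces the MSE.
pred = [[0.2, 0.8], [0.6, 0.4]]
truth = [[0.0, 1.0], [1.0, 0.0]]
sharp = threshold_frame(pred)
```

In this toy case the thresholded frame matches the ground truth exactly, so its MSE drops to zero while the raw prediction's MSE is 0.1; in practice the improvement would be partial rather than exact.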