Climate Informatics 2023


Wednesday - Session 1 - 11:00-12:30
Wednesday - Session 3 - 15:00-16:30
Thursday - Session 4 - 09:00-10:30
Thursday - Session 5 - 11:00-12:30
Thursday - Session 6 - 13:30-15:00
Thursday - Session 7 - 15:30-18:00
Friday - Session 8 - 09:00-10:30
Friday - Session 9 - 11:00-12:30

Wednesday: Session 1
Chair: Emily Shuckburgh and Dominic Orchard
11:00–12:00 Generative Models for Climate Informatics (Keynote) Shakir Mohamed (DeepMind) Generative models are probabilistic models of data that allow easy sampling and exploration of a data distribution using a learned model. In fields like environment and climate, where access to samples allows for uncertainty representation, these models can provide a key tool to address the complex decision-making problems we face. I'll begin with a high-level view of generative models as a field and how it has evolved. I'll then focus on the role of generative models in weather forecasting, looking at medium-range forecasting and nowcasting, which can serve as the basis for climate-focussed applications. A key aspect of these applications lies in how quality is assessed, and this will be a key focus of our discussion. I'd also like to touch on the sociotechnical implications of machine learning in climate, inviting a discussion of what shape ethical and responsible climate informatics currently takes. Advances in generative models can add value in the myriad of climate efforts we undertake, and my aim overall is to open a discussion as to what that ambitious research agenda looks like for us to take on together.
12:00–12:15 IceNet - demonstrating data-driven climate science for real-world applications (slides) James Byrne (British Antarctic Survey)* British Antarctic Survey, collaborating with the Alan Turing Institute, has been a leader in the development of environmental AI with IceNet: a cutting-edge system for sea ice forecasting. Sea ice forecasts are critical to safe operations for marine industry and conservation, in both the Arctic and Antarctic. Significant value comes from predicting when and where freezing and thawing take place and how much sea ice will occur. IceNet provides this ability using deep learning to associate sea ice conditions with environmental conditions, allowing predictions to be made. This approach to environmental AI allows faster access to predictions than traditional modelling and is generalisable to predict many other environmental conditions.

IceNet comprises multiple tiers of infrastructure running in multiple locations and is under continual development. It is not simply a deep learning model, but implements all the additional infrastructure required for "MLOps" (machine learning operations), post-processing and API access, end-user application hosting and real-world use case integration. At Climate Informatics 2023 we'd like to demonstrate how deep learning, climate science and best-practice software engineering have been combined for IceNet, and hopefully demonstrate the value of building software sustainably to allow for future expansion in similar projects.
12:15–12:30 Ensemble-based 4DVarNet uncertainty quantification for the reconstruction of Sea Surface Height dynamics (slides) Maxime Beauchamp (IMT Atlantique)*; Quentin Febvre (IMT Atlantique); Ronan Fablet (IMT Atlantique) Uncertainty quantification (UQ) plays a crucial role in data assimilation (DA), since it impacts both the quality of the reconstruction and the near-future forecast. However, traditional UQ approaches are often limited in their ability to handle complex datasets and may have a large computational cost. In this paper, we present a new ensemble-based approach to extend the 4DVarNet framework, an end-to-end deep learning (DL) scheme built on a variational DA backbone and used to estimate the mean of the state along a given data assimilation window. We use conditional 4DVarNet simulations compliant with the available observations to estimate the 4DVarNet probability density function. Our approach combines the efficiency of 4DVarNet, in terms of computational cost and validation performance, with a fast, memory-saving Monte-Carlo-based post-processing of the reconstruction, leading to the so-called En4DVarNet estimation of the state pdf.
We demonstrate our approach on a case study involving Sea Surface Height (SSH): 4DVarNet is pretrained on an idealized Observing System Simulation Experiment (OSSE), then applied to a real-world dataset (an Observing System Experiment, OSE). Independent realizations of the state are sampled from the catalogue of model-based data used during training. To illustrate our approach, we use a nadir altimeter constellation in January 2017 and show how the uncertainties retrieved by combining 4DVarNet with the statistical properties of the training dataset provide, in most cases, a confidence interval consistent with the CryoSat-2 nadir along-track dataset kept for validation.
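The Monte-Carlo post-processing idea above can be sketched in a few lines of NumPy: each ensemble member conditions the trained network on the same observations plus an independently sampled state from the training catalogue, and the member spread provides the uncertainty estimate. The callable `fourdvarnet_reconstruct` and its signature are hypothetical stand-ins, not the authors' API.

```python
import numpy as np

def en4dvarnet_ensemble(obs, obs_mask, catalogue, fourdvarnet_reconstruct,
                        n_members=50, rng=None):
    """Monte-Carlo estimate of the reconstruction pdf (illustrative sketch).

    obs, obs_mask : observed SSH values and their availability mask
    catalogue     : array of model-based states used during training,
                    shape (n_states, ny, nx)
    fourdvarnet_reconstruct : callable mapping (obs, obs_mask, prior_state)
                              to a reconstructed field (hypothetical signature)
    """
    rng = rng or np.random.default_rng()
    members = []
    for _ in range(n_members):
        # Draw an independent realization of the state from the training catalogue.
        prior = catalogue[rng.integers(len(catalogue))]
        # Conditional reconstruction compliant with the available observations.
        members.append(fourdvarnet_reconstruct(obs, obs_mask, prior))
    members = np.stack(members)
    # Ensemble mean and spread approximate the state pdf.
    return members.mean(axis=0), members.std(axis=0)
```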
Wednesday: Session 3
Chair: Scott Hosking
15:00–15:15 Short-term forecasting of Typhoon rainfall with deep learning-based disaster monitoring model (slides) Doyi Kim (SIAnalytics)*; Yeji Choi (SI Analytics) Accurate and reliable disaster forecasting is vital for reducing the loss of life and property. Hence, effective disaster management is necessary to reduce the impact of natural disasters and accelerate recovery and reconstruction. Typhoons are one of the major disasters related to heavy rainfall in Korea. As typhoons develop far out over the ocean, satellite observations are the only means to monitor them. In this study, we propose a deep-learning-based disaster monitoring model for short-term forecasting of typhoon rainfall using satellite observations. We apply two deep learning models: a video frame prediction model, WR-Net, to predict future satellite observations, and an image-to-image translation model, Pix2PixCC, to generate rainfall maps from the predicted satellite images. Typhoon Hinnamnor, the worst typhoon case in 2022 in Korea, was selected as a target case for model verification. The results showed that the predicted satellite images could capture the structures and patterns of the typhoon, and the precipitation map generated from the Pix2PixCC model using predicted satellite images showed a correlation coefficient of 0.81 for the three-hour prediction and 0.56 for the seven-hour prediction. The proposed disaster monitoring model can give practical implications for disaster alerting systems and can be extended to flood monitoring systems.
15:15–15:30 Locally time-invariant metric for climate model ensemble predictions of extreme risk Mala Virdee (University of Cambridge); Markus Kaiser (University of Cambridge); Carl Henrik Ek (University of Cambridge); Emily Shuckburgh (University of Cambridge); Ieva Kazlauskaite (University of Cambridge)* Adaptation-relevant predictions of climate change are often derived by combining climate models in a multi-model ensemble. Model evaluation methods used in performance-based ensemble weighting schemes have limitations in the context of high-impact extreme events. We introduce a locally time-invariant model evaluation method with a focus on assessing the simulation of extremes. We explore the behaviour of the proposed method in predicting extreme heat days in Nairobi, and provide comparisons for eight additional cities.
15:30–15:45 Simulation-based learning of neural interpolation schemes for the mapping of real satellite-derived datasets Quentin Febvre (IMT Atlantique)*; Julien Le Sommer (CNRS); Clement Ubelmann (Datlas); Ronan Fablet (IMT Atlantique); Maxime Beauchamp (IMT Atlantique) One challenge when developing learning-based approaches for earth and ocean science is the availability of ground-truth training datasets. For instance, satellite altimetry involves a scarce and irregular sampling of the sea surface. This leads to very high missing data rates above 90%, which prevent a direct supervised learning of neural interpolation schemes from available altimeter-derived datasets. One way to bypass this problem is to exploit observing system simulation experiments (OSSE). By construction, OSSE datasets involve reference ocean states associated with observation data. They provide a testbed to benchmark data assimilation and mapping methods, including learning-based ones. In this context, an OSSE for a Gulf Stream region has stressed the relevance of neural interpolation schemes, especially 4DVarNet schemes. When combined with neural schemes, OSSEs naturally raise the question of generalization to real datasets.
In this study, we investigate this question and study whether OSSE-based learning strategies transfer to real data for altimeter-derived SSH fields. We specifically explore how the choice of the reference training SSH dataset affects the reconstruction performance on real altimetry data. We assess the impact of simulation grid resolution and observation data reanalysis on the performance of 4DVarNets. Our experiments support the generalization of neural interpolation from simulation-based training datasets to real data. We show a significant improvement in the scales resolved compared to the operational optimal-interpolation-based product (103 km vs. 153 km). Our study also highlights the performance drop when training on lower-resolution simulation datasets. Overall, our study supports the relevance of simulation-based training strategies for the application of learning schemes to real ocean applications, while highlighting the importance of the choice of the simulation data.
15:45–16:00 PYRAMID: A Platform for dynamic, hyper-resolution, near-real time flood risk assessment integrating repurposed and novel data sources Amy C Green (Newcastle University)*; Ben Smith (Newcastle University); Elizabeth Lewis (Newcastle University) It is essential that we work towards better preparation for flooding, as the associated impacts and risks increase with a changing climate. Standard methods for flood risk assessment are typically static, based on flood depths corresponding to return levels. In contrast, flood risk changes over time: the time of day and weather conditions drive the location and extent of potential debris (e.g. vehicles or trees that may cause blockages in culverts), affecting the associated risks. To this end, we aim to provide a platform for dynamic flood risk assessment, to better inform decision making and allow for improved flood preparation at a local level. Developed with stakeholder collaboration at a local level, a web-platform demonstrator is presented for the city of Newcastle upon Tyne (U.K.) and the wider catchment, providing interactive visualisations and dynamic flood risk maps.
To achieve this, near real-time updates are incorporated as part of a fully integrated workflow of models, with traditional datasets combined with novel, hidden data. More realistic high-resolution data, citizen science data and novel data sources are combined, making use of data scraping and APIs to obtain additional sensor data. Machine learning methods, including object detection, are used to generate more complex datasets, identifying potential debris information from satellites, LIDAR point clouds and trash screen images. The model framework involves hyper-resolution hydrodynamic modelling (HIPIMS) with a hydrological catchment model (SHETRAN), working towards a digital twin.
16:00–16:15 Near-Term Forecasting of Water Reservoir Storage Capacities Using Long Short-Term Memory (slides) Eric Rohli (Trabus Technologies); Nicholas Woolsey (Trabus Technologies); David Sathiaraj (Trabus Technologies)* Predicting reservoir storage capacities is an important planning activity for effective conservation and water release practices. Climatic events such as drought and precipitation impact water storage capacities in reservoirs. Predictive insights on reservoir storage levels can be beneficial for water planners and stakeholders in effective water resource management. A deep learning (DL) neural network (NN) based reservoir storage prediction approach is proposed that learns from climate and hydrological information from the reservoir's associated watershed and historical reservoir storage capacities. These DL models are trained and evaluated for 17 reservoirs in Texas, USA. Using the trained models, reservoir storage predictions were validated with a test data set spanning 2 years. The reported results indicate high levels of predictive accuracy and show promise for longer term water planning decisions.
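As an illustration of the kind of model described above, the following PyTorch sketch maps a window of daily climate and hydrology features to the next reservoir storage value; the architecture, feature count and window length are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class ReservoirLSTM(nn.Module):
    """Sketch of an LSTM mapping a window of daily climate/hydrology
    features to the next reservoir storage value (illustrative only)."""

    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # storage at the next step

# Toy usage with random data: 90-day windows of 5 features per sample.
model = ReservoirLSTM(n_features=5)
x = torch.randn(8, 90, 5)
y_hat = model(x)                      # (8, 1) predicted storage
```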
16:15–16:30 Huge Ensembles of Weather Extremes using the Fourier Forecasting Neural Network William Collins (Lawrence Berkeley National Laboratory)*; Michael Pritchard (NVIDIA); Noah D Brenowitz (NVIDIA); Alex Charn (Indiana University); Yair Cohen (NVIDIA); Amanda Dufek (Lawrence Berkeley National Laboratory); David M. Hall (NVIDIA); Peter Harrington (Lawrence Berkeley National Laboratory (Berkeley Lab)); Karthik Kashinath (NVIDIA); Travis A. O'Brien (Indiana University); Jaideep Pathak (NVIDIA Corporation); Shashank Subramanian (Lawrence Berkeley National Laboratory); Michael Wehner (Lawrence Berkeley National Laboratory) Studying low-likelihood high-impact extreme weather and climate events in a warming world requires massive ensembles to capture long tails of multi-variate distributions. At present, it is simply impossible to generate massive ensembles, of say 1000 members, using traditional numerical simulations of climate models at high resolution.
 
We describe how to bring the power of machine learning (ML) to replace traditional numerical simulations for short, week-long hindcasts of massive ensembles, where ML has proven to be successful in terms of accuracy and fidelity, at five orders of magnitude lower computational cost than numerical methods. Because the ensembles are reproducible to machine precision, ML also provides a data compression mechanism that avoids storing the data produced from massive ensembles.
 
The machine learning algorithm FourCastNet is based on Fourier Neural Operators (FNO) and Transformers, which have proven to be efficient and powerful in modeling a wide range of chaotic dynamical systems, including turbulent flows and atmospheric dynamics. FourCastNet has already been proven to be highly scalable on NVIDIA-GPU HPC systems.
 
Until now, generating 1,000- or 10,000-member ensembles of hindcasts was simply impossible because of prohibitive compute and data storage costs. For the first time, we can now generate such massive ensembles using ML at five orders of magnitude less compute than traditional numerical simulations.
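A schematic of how such a massive, reproducible ensemble can be generated is sketched below: each member perturbs the analysis state with noise drawn from a per-member seed, so any trajectory can be regenerated on demand instead of being stored. The `step_fn` callable stands in for a trained forecast network such as FourCastNet; the perturbation recipe and noise scale are illustrative assumptions.

```python
import numpy as np

def run_ensemble(step_fn, analysis_state, n_members, n_steps,
                 noise_scale=1e-3, base_seed=0):
    """Generate a reproducible ensemble of ML hindcasts.

    step_fn        : callable advancing the state by one model step
                     (stand-in for a trained forecast network)
    analysis_state : initial condition, e.g. an analysis snapshot (numpy array)
    """
    forecasts = []
    for member in range(n_members):
        # Seeding each member makes the whole ensemble reproducible,
        # so trajectories can be regenerated instead of stored.
        rng = np.random.default_rng(base_seed + member)
        state = analysis_state + noise_scale * rng.standard_normal(analysis_state.shape)
        trajectory = [state]
        for _ in range(n_steps):
            state = step_fn(state)
            trajectory.append(state)
        forecasts.append(np.stack(trajectory))
    return np.stack(forecasts)  # (members, steps + 1, *state.shape)
```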
Thursday: Session 4
Chair: Tom Beucler
09:00–09:15 Precipitation downscaling using a super-resolution deconvolution neural network with step orography Jyoteeshkumar reddy Papari (CSIRO)*; Richard Matear (CSIRO); John Taylor (CSIRO); Marcus Thatcher (CSIRO); Michael Grose (CSIRO) Coarse spatial resolution in gridded precipitation datasets, reanalyses and climate model outputs restricts their ability to characterise localised extreme rain events and limits their use for local- to regional-scale climate management strategies. Deep learning models have recently been developed to rapidly downscale coarse-resolution precipitation to high local-scale resolution at far less cost than dynamical downscaling. However, these existing super-resolution deep learning studies have not rigorously evaluated the models' skill in producing fine-scale spatial variability, particularly over topographic features. These current deep-learning models also struggle to predict the complex spatial structure of extreme events. Here, we develop a super-resolution deconvolution neural network (SRDN) based model to downscale hourly precipitation and evaluate the predictions. We apply three versions of the SRDN model: 1) SRDN (no orography), 2) SRDN-O (orography at only the final resolution enhancement), and 3) SRDN-SO (orography at each step of resolution enhancement). We assess the SRDN-based models' ability to reproduce fine-scale spatial variability and compare it to a previously used deep learning model (DeepSD). All the models are trained and tested using Conformal Cubic Atmospheric Model (CCAM) data to perform 100 km to 12.5 km downscaling of hourly precipitation over the Australian region. We found that SRDN-based models including orography deliver better fine-scale spatial structure of both climatology and extremes. The inclusion of orography significantly improved the ability of the deep-learning downscaling to recover fine-scale spatial features. The SRDN-SO model was the best super-resolution deep learning model, both qualitatively and quantitatively, in reconstructing the fine-scale spatial variability of climatology and rainfall extremes over complex orographic regions.
09:15–09:30 Post-processing East African precipitation forecasts using a generative machine learning model (slides) Bobby Antonio (University of Bristol)*; Andrew McRae (University of Oxford); Dave MacLeod (University of Cardiff); Fenwick Cooper (University of Oxford); John Marsham (University of Leeds); Laurence Aitchison (University of Bristol); Tim Palmer (University of Oxford); Peter Watson (Bristol) Existing weather models are known to have poor skill over Africa, where there are regular threats of drought and floods that present significant risks to people's lives and livelihoods. Improved precipitation forecasts could help mitigate the negative effects of these extreme weather events, as well as providing significant financial benefits to the region. Building on work that successfully applied a state-of-the-art machine learning method (a conditional Generative Adversarial Network, cGAN) to postprocess precipitation forecasts in the UK, we present a novel way to improve precipitation forecasts in East Africa. We address the challenge of realistically representing tropical convective rainfall in this region, which is poorly simulated in conventional forecast models. We use a cGAN to postprocess ECMWF high resolution forecasts, using the IMERG observations as ground truth, and demonstrate how this model can correct bias and create samples of rainfall with realistic spatial structure. This has the potential to enable cost effective improvements to early warning systems in the affected areas.
09:30–09:45 Physics-Constrained Deep Learning for Downscaling Paula Harder (Fraunhofer ITWM)*; Venkatesh Ramesh (Mila); Alex Hernandez-Garcia (Mila - Quebec AI Institute); Qidong Yang (New York University); Prasanna Sattigeri (IBM Research); Daniela Szwarcman (IBM Research); Campbell D Watson (IBM Research); David Rolnick (McGill University, Mila) The availability of reliable, high-resolution climate and weather data is important to inform long-term decisions on climate adaptation and mitigation and to guide rapid responses to extreme events. Forecasting models are limited by computational costs and, therefore, often generate coarse-resolution predictions. Statistical downscaling, including super-resolution methods from deep learning, can provide an efficient method of upsampling low-resolution data. However, despite achieving visually compelling results in some cases, such models frequently violate conservation laws when predicting physical variables. In order to conserve physical quantities, we develop a method that guarantees physical constraints are satisfied by a deep learning downscaling model while also improving its performance according to traditional metrics.
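One simple way to guarantee a conservation constraint of this kind is to renormalise each high-resolution patch so that it averages back to the corresponding low-resolution value. The NumPy sketch below illustrates that multiplicative correction for an upsampling factor k; it is a generic illustration of the principle, not the authors' constraint layer.

```python
import numpy as np

def enforce_mass_conservation(hi_res, lo_res, k):
    """Rescale each k x k block of `hi_res` so its mean equals the
    corresponding `lo_res` cell (multiplicative renormalisation sketch)."""
    ny, nx = lo_res.shape
    out = hi_res.copy()
    for j in range(ny):
        for i in range(nx):
            block = out[j * k:(j + 1) * k, i * k:(i + 1) * k]
            block_mean = block.mean()
            if block_mean > 0:
                # Conserve the coarse-cell mean exactly.
                block *= lo_res[j, i] / block_mean
    return out

# Example: a random 4x-upsampled field corrected against its coarse field.
lo = np.random.rand(8, 8)
hi = np.random.rand(32, 32)
hi_fixed = enforce_mass_conservation(hi, lo, k=4)
```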
09:45–10:00 Clim-recal: an open repository of method comparison for bias correction of UKCP18 products Ruth C E Bowyer (The Alan Turing Institute & King's College London)* Researchers wishing to use publicly available regional climate models (RCMs), such as the suite of products offered by the UK Met Office (MO), need to consider a range of bias “correction” methods (sometimes referred to as bias adjustment or recalibration). Bias correction methods offer a means of adjusting the outputs of an RCM in a manner that might better reflect future climate change signals whilst preserving the natural/internal variability of climate [2]. Methods span from simple ‘scaling-factor’ approaches to quantile mapping techniques, with recent additions including Bayesian methods and neural networks, amongst others [3,4,5]. Whilst there is extensive literature and there are software packages available to implement these methods, the breadth of options means that this can be inaccessible for researchers from different disciplines who would like to make use of climate projections data for their own applications. Similarly, public-sector stakeholders (such as policy-makers, councils, public health officials) require robust data localised to their area but may not have the capacity to test and apply a range of bias correction methods for their own application.

Under the name ‘clim-recal’, the project therefore aims:

- To provide researchers with a ‘taxonomy of methods’ for clear identification of the advantages/disadvantages and use of different bias correction methods
- To provide researchers and practitioners (public and private sector) with a collated set of resources for how to technically apply the bias correction methods to UK Climate Projections 2018 (UKCP18) data via an open repository of code and documentation
- To create accessible information on bias correction methods for non-quantitative researchers and lay-audience stakeholders
Here we present the preliminary results for aim 1 and our work and plans for aims 2 and 3.
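To illustrate the simpler end of the taxonomy referred to in aim 1, the sketch below implements empirical quantile mapping in NumPy: values from the model's historical distribution are mapped onto the observed distribution at matching quantiles, and future model values are corrected through the same transfer function. This is the textbook form of the method, not the clim-recal code.

```python
import numpy as np

def empirical_quantile_mapping(model_hist, obs_hist, model_future):
    """Bias-correct `model_future` by mapping each value through the
    quantile it occupies in the historical model distribution onto the
    observed historical distribution."""
    quantiles = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)
    # Locate each future value in model quantile space, then read off
    # the corresponding observed value.
    ranks = np.interp(model_future, model_q, quantiles)
    return np.interp(ranks, quantiles, obs_q)
```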
10:00–10:15 Observations-based machine learning model constrains uncertainty in future regional warming projections Sophie R Wilkinson (University of East Anglia)*; Peer Nowack (Karlsruhe Institute of Technology); Manoj Joshi (University of East Anglia) Knowledge about future global and regional warming is essential for effective adaptation planning. Although global climate models can provide projections of temperature under a range of future emissions scenarios, there are still discrepancies in the magnitude of the projected response[1].

Here we develop a novel method[2,3] for constraining uncertainty in future regional temperature projections based on the predictions of an observationally trained machine learning algorithm, Ridge-ERA5. Ridge-ERA5 - a Ridge regression model[4] - learns coefficients to represent observed relationships between daily temperature anomalies and a selection of variables in the ECMWF Re-Analysis (ERA) 5 dataset[5]. Climate-invariance of the Ridge relationships is demonstrated in a perfect model framework: we train a set of 23 Ridge-CMIP models on historical data of the Coupled Model Intercomparison Project (CMIP) phase 6[6] in order to emulate these models and then evaluate their predictions using future scenario data from the most extreme future emissions pathway, SSP 5-8.5, which represents the most extreme extrapolation challenge for the Ridge models.

Combining the historically constrained Ridge-ERA5 coefficients with normalised inputs from CMIP6 future climate change simulations forms the basis of a new methodology to derive observational constraints on regional climate change. For daily, regional (2.5°x2.5°), summer temperatures across the Northern Hemisphere, the Ridge-ERA5 observations-based constraint implies, for example, that a group of higher sensitivity CMIP6 models is inconsistent with observational evidence (including in Eastern, West & Central, and Northern Europe), see Figure 1, potentially suggesting that the sensitivity of these models is indeed too high[7,8]. A key advantage of our new method is the ability to constrain regional projections at very high – daily – temporal resolution which includes extreme events such as heatwaves. 
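A minimal scikit-learn sketch of the underlying regression step, assuming standardised predictor arrays (the variable names and data here are random stand-ins, not ERA5 or CMIP6 fields): a ridge model is fitted on reanalysis-era predictors and then applied to normalised scenario inputs.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.preprocessing import StandardScaler

# X: (n_days, n_predictors) reanalysis-derived predictors for one grid cell;
# y: (n_days,) daily temperature anomaly at that cell. Random stand-ins here.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 20))
y = 0.5 * X[:, 0] + 0.1 * rng.standard_normal(5000)

scaler = StandardScaler().fit(X)
ridge = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(scaler.transform(X), y)

# The learned coefficients encode the (assumed climate-invariant) relationships;
# applying them to normalised scenario inputs gives the constrained projection.
X_scenario = rng.standard_normal((1000, 20))   # stand-in for future-scenario inputs
constrained = ridge.predict(scaler.transform(X_scenario))
```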
10:15–10:30 Machine learning emulation of a local-scale UK climate model (slides) Henry Addison (University of Bristol)*; Elizabeth Kendon (Met Office Hadley Centre); Suman Ravuri (); Laurence Aitchison (University of Bristol); Peter Watson (Bristol) Understanding rainfall at a local scale is important for better adapting to its future changes. Using physical simulations, however, to produce such high resolution projections is expensive. For the first time we use diffusion models (a state-of-the-art machine learning (ML) method for generative modelling) to emulate a high-resolution, convection-permitting model (CPM) by downscaling general circulation model (GCM) outputs. We apply this method to model high-resolution UK rainfall where climate change is predicted to cause intensification of heavy rainfall extremes. The ML model can complement existing expensive CPM output with cheaper samples and also enable generating high-resolution samples from other climate model datasets. The samples have realistic spatial structure, which previous statistical approaches struggle to achieve.

We will discuss the challenges of selecting and applying the model trained on coarsened CPM variables to GCM variables and present results about the method’s ability to reproduce the spatial and temporal behaviour of rainfall and extreme events that are better represented in the CPM than the GCM due to the CPM’s ability to model atmospheric convection.
Thursday: Session 5
Chair: Andrew Hyde
11:00–11:15 Machine learning applications for weather and climate need greater focus on extremes Peter Watson (Bristol)* Multiple studies have now demonstrated that machine learning (ML) can give improved skill for predicting or simulating fairly typical weather events, for tasks such as short-term weather forecasting, downscaling simulations to higher resolution, and emulating and speeding up expensive model parameterisations. Many of these used ML methods with very high numbers of parameters, such as neural networks, which are the focus of the discussion here. Much less attention has been given to the performance of such ML models for extreme event severities relevant to many critical weather and climate prediction applications, with return periods of more than a few years. This leaves a lot of uncertainty about the usefulness of these methods, particularly for general-purpose prediction systems that must perform reliably in extreme situations. ML models may be expected to struggle to predict extremes due to there usually being few samples of such events. Studies indicate both that ML models can have reasonable skill for extreme weather, with it not being hopeless to use them in situations requiring extrapolation, and also that they can sometimes fail. This article argues that this is an area worth researching more. Ways to get a better understanding of how well ML models perform at predicting extreme weather events are discussed.

This submission is based on Watson, 2022, Env. Res. Lett., 17, 111004, https://doi.org/10.1088/1748-9326/ac9d4e. It includes updates based on more recent work.
11:15–11:30 Improving trustworthiness: Introducing eXplainable AI evaluation to climate science Philine L Bommer (TU Berlin)*; Marlene Kretschmer (University of Reading); Anna Hedström (Technische Universität Berlin); Dilyara Bareeva (Technical University of Berlin); Marina M.-C. Höhne (TU Berlin) Explainable artificial intelligence (XAI) methods shed light on the predictions of deep neural networks (DNNs) and have been successfully applied in climate science. However, the analysis and comparison of XAI methods for a given climate task is challenging due to the lack of quantitative evaluation metrics, leading to uninformed method choices, which can yield misleading information about the network decision. In this extended abstract, we introduce XAI evaluation in the context of climate research to enable a well-founded application of explanation methods. We apply XAI evaluation to compare multiple explanation methods for a multi-layer perceptron (MLP) and a convolutional neural network (CNN). Both the MLP and CNN assign temperature maps to classes based on their decade. We assess the respective explanation methods based on robustness, faithfulness, randomization, complexity and localization, and evaluate the performance of the XAI methods in each property for both MLP and CNN explanations. Our experiments demonstrate that XAI evaluation can be applied to different network tasks and offers more detailed information about different properties of explanation methods than previous research. Using XAI evaluation allows us to tackle the challenge of choosing an explanation method.
Thursday: Session 6
Chair: Colm-cille Caulfield
13:30–13:45 Systematically Generating Hierarchies of Machine-Learning Models, from Equation Discovery to Deep Neural Networks (slides) Tom Beucler (University of Lausanne)*; Arthur Grundner (DLR); Sara Shamekh (Columbia University); Ryan Lagerquist (CIRA and NOAA/ESRL/GSL) While the added value of machine learning (ML) for weather and climate applications is measurable, it remains challenging to explain, especially for large deep learning models. Inspired by climate model hierarchies, which use dynamical models of increasing complexity to help connect our fundamental understanding of the Earth system with operational predictions, we ask: given a climate process for which we have reliable data, how can we systematically generate a hierarchy of ML models, from simple analytic equations to complex neural networks?
To address this question, we choose two atmospheric science problems for which we have physically-based, analytic models with just a few tunable parameters, and deep learning algorithms whose performance was already established in previous work: cloud cover parameterization and shortwave radiative transfer emulation. In each case, we formalize the ML-based hierarchy by working in a well-defined, two-dimensional plane: complexity versus performance. We choose the number of trainable parameters as a simple metric for complexity, while performance is defined using a single regression metric (e.g., the mean-squared error) calculated for the same outputs on a common validation dataset.
During this presentation, we will demonstrate how to use our data-driven hierarchies for two purposes: (1) data-driven model development; and (2) process understanding. First, each ML model of the hierarchy occupies a well-defined (complexity, performance) position, as all models use the same performance metric. Models that maximize performance for a given complexity unambiguously define a Pareto frontier in (complexity) × (performance) space and can be deemed optimal. Second, optimal models on the Pareto frontier can be compared to reveal which added process/nonlinearity/regime/connectivity/etc. leads to the biggest increase in performance for a given complexity, which facilitates process understanding. For example, using sequential feature selection on simple polynomial fits, we underline the nonlinear relationship between condensate mixing ratios and cloud cover. Using a specialized type of convolutional neural network (U-net++) to emulate shortwave radiative heating, we can mostly overcome the biases of simpler models of shortwave radiation (one-stream, linear, multilayer perceptron, convolutional neural network), notably in the presence of one or more cloud layers.
To show its versatility, we apply our framework to the data-driven discovery of analytic models, which are interpretable by construction. Applying sequential feature selection to neural network models of cloud cover, we identify the five most informative features and use them as inputs to genetic algorithms. These genetic algorithms automatically generate hundreds of candidate equations, which can be filtered using physical constraints and ranked using our (complexity) × (performance) space. Our best candidate is interpretable, achieving a coefficient of determination close to 0.95 with only 13 trainable parameters. It beats all neural networks using three features or fewer, the widely-used Sundqvist scheme by capturing how cloud condensate mixing ratio nonlinearly affects cloud fraction, and the Xu-Randall scheme by describing how temperature decreases cloud cover.
In summary, we can systematically build hierarchies of Pareto-optimal ML models to better understand their added value. By cleanly comparing these ML models to existing schemes, and promoting process understanding by hierarchically unveiling system complexity, we hope to improve the trustworthiness of ML models for weather and climate applications.
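Once each model's trainable-parameter count and validation error are known, the Pareto frontier in the (complexity, performance) plane can be extracted with a few lines of generic Python (the numbers below are illustrative placeholders, not results from the study).

```python
def pareto_frontier(models):
    """Return the models not dominated in (complexity, error) space.

    `models` is a list of (name, n_params, error) tuples; a model is
    Pareto-optimal if no other model is both simpler and more accurate.
    """
    frontier = []
    for name, n_params, error in sorted(models, key=lambda m: (m[1], m[2])):
        if not frontier or error < frontier[-1][2]:
            frontier.append((name, n_params, error))
    return frontier

# Illustrative placeholder numbers only.
models = [("Sundqvist", 3, 0.21), ("polynomial fit", 13, 0.05),
          ("MLP", 5_000, 0.04), ("U-net++", 2_000_000, 0.02)]
print(pareto_frontier(models))
```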
13:45–14:00 An iterative data-driven emulator of an ocean general circulation model Rachel Furner (University of Cambridge)*; Peter Haynes (University of Cambridge); Dave Munday (British Antarctic Survey); Daniel Jones (British Antarctic Survey); Emily Shuckburgh (University of Cambridge) Data-driven models are becoming increasingly competent at tasks fundamental to weather and climate prediction. Relative to machine learning (ML) based atmospheric models, which have shown promise in short-term forecasting, ML-based ocean forecasting remains somewhat unexplored. In this work, we present a data-driven emulator of an ocean GCM and show that performance over a single predictive step is skilful across all variables under consideration.
While the network performs well over a single prediction step, iterating such data-driven models poses additional challenges, with many models suffering from over-smoothing of fields or instabilities in the predictions. We show preliminary results comparing a variety of methods for iterating our data-driven emulator and assess them by looking at how well they agree with the underlying GCM in the very short term and how realistic the fields remain for longer-term forecasts.
Due to the chaotic nature of the system being forecast, we would not expect any model to agree with the GCM accurately over long time periods, but instead we expect fields to continue to exhibit physically realistic behaviour at ever increasing lead times. Specifically, we expect well-represented fields to remain stable whilst also maintaining the presence and sharpness of features seen in both reality and in GCM predictions, with reduced emphasis on accurately representing the location and timing of these features. This nuanced and temporally changing definition of what constitutes a ‘good’ forecast at increasing lead times generates questions over both (1) how one defines suitable metrics for assessing data-driven models, and perhaps more importantly, (2) identifying the most promising loss functions to use to optimise these models.
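The iteration and short-term comparison described above can be expressed schematically as follows, with `emulator_step` a stand-in for the trained network and the RMSE computed only over the first few steps, where agreement with the GCM is meaningful.

```python
import numpy as np

def rollout(emulator_step, initial_state, n_steps):
    """Iterate a single-step data-driven emulator to produce a forecast
    trajectory (whose long-term stability is what the study assesses)."""
    states = [initial_state]
    for _ in range(n_steps):
        states.append(emulator_step(states[-1]))
    return np.stack(states)

def short_term_rmse(emulator_traj, gcm_traj, n_compare=10):
    """Per-step RMSE over the first few steps, where agreement with the
    underlying GCM is a meaningful target."""
    diff = emulator_traj[:n_compare] - gcm_traj[:n_compare]
    return np.sqrt(np.mean(diff ** 2, axis=tuple(range(1, diff.ndim))))
```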
14:00–15:00 Transforming climate modeling with AI: Hype or Reality? (Keynote) Laure Zanna (New York University) Climate simulations remain one of the best tools to understand and predict global and regional climate change. Yet, these simulations demand immense computing power to resolve all relevant scales in the climate system, from meters to thousands of kilometers. Due to the limited computing resources, many aspects of the physics are missing from climate simulations - e.g., ocean mixing or clouds' effects on temperature and currents. I will describe some of the key challenges in climate modeling and how machine learning tools can help accelerate progress toward accurate climate simulations and reliable climate projections. I will focus on our work capturing ocean turbulence with a range of machine learning techniques that we have adapted for fluid flows. This will include deep learning with embedded physics constraints, uncertainty quantification for ocean turbulence, and equation discovery of multiscale physics with genetic programming. Some of our work suggests that machine learning could open the door to discovering new physics from data and enhance climate predictions. Yet, many questions remain unanswered, making the next decade exciting and challenging for ML + climate modeling for robust and actionable climate projections.
Thursday: Session 7
Chair: Dominic Orchard
15:30–15:45 Neural style transfer between observed and simulated cloud images to improve the detection performance of tropical cyclone precursors Daisuke Matsuoka (JAMSTEC)*; Steve Easterbrook (University of Toronto) A common observation in the field of pattern recognition for atmospheric phenomena using supervised machine learning is that recognition performance decreases for events with few observed cases, such as extreme weather events. Here, we aimed to mitigate this issue by using both numerical simulation and satellite observation data for training. However, as simulation and observed data possess distinct characteristics, we employed neural style transfer to transform the simulation data to more closely resemble the observational data. The resulting transformed cloud images of the simulation data were found to possess physical features comparable to those of the observational data. By utilizing the transformed data for training, we successfully improved the classification performance for cloud images of tropical cyclone precursors 7, 5, and 3 days before their formation by 40.5%, 90.3%, and 41.3%, respectively.
15:45–16:00 Reducing the overhead of coupled ML models between Python and Fortran: an application to Gravity Wave Parameterizations Jack Atkinson (University of Cambridge); Simon Clifford (University of Cambridge); David Connelly (New York University); Chris Edsall (University of Cambridge); Athena Elafrou (NVIDIA); Edwin Gerber (New York University); Laura Mansfield (Stanford University); Dominic Orchard (University of Cambridge)*; Aditi Sheshadri (Stanford University); Y. Qiang Sun (Rice University); Minah Yang (New York University) Machine learning (ML) has recently been demonstrated as a viable approach to achieving one or both of higher computational performance and higher predictive performance in climate models. One notable use is for subgrid models (so-called “parameterizations”), where a neural network is trained against observational data or against a higher-resolution dynamical model. This approach typically leads to a technical software engineering challenge of language interoperation: Python is the most popular language for building ML models, due to libraries such as TensorFlow, PyTorch, and scikit-learn, whilst Fortran remains the language du jour for GCMs and other intermediate models. We present approaches for TensorFlow and PyTorch that avoid the use of slower coupling libraries and show that higher performance can be achieved via more direct couplings between machine learning libraries and Fortran. We demonstrate our technique in the context of modelling atmospheric gravity waves. Initial results show that our approach can lead to a 3x speedup over traditional coupling techniques.
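One common route to this kind of direct coupling is to export the trained PyTorch network to TorchScript so that it can be loaded from compiled code via libtorch, with only a thin C/Fortran wrapper in between. The Python-side export is sketched below with a stand-in network; the Fortran bindings themselves, and the actual gravity-wave emulator, are omitted.

```python
import torch
import torch.nn as nn

# Stand-in for a trained gravity-wave-drag emulator: maps a column of
# model fields to a profile of parameterized tendencies.
net = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 40))
net.eval()

# Tracing freezes the network into a TorchScript module that libtorch can
# load from compiled (C/C++/Fortran-wrapped) code without a Python interpreter.
example_input = torch.randn(1, 40)
scripted = torch.jit.trace(net, example_input)
scripted.save("gw_emulator.pt")  # loaded on the Fortran side via C bindings
```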
16:00–16:15 A Novel Workflow for Streamflow Prediction in the Presence of Missing Gauge Observations Rendani Mbuvha (Queen Mary University of London)*; Peniel J. Y. Adounkpe (WASCAL); Wilson Mongwe (University of Johannesburg); Zahir Nikraftar (Queen Mary University of London); Mandela Houngnibo (Agence Nationale de la Météorologie du Bénin); Nathaniel Newlands (Agriculture and Agri Food Canada); Tshilidzi Marwala (University of Johannesburg) Streamflow predictions are a vital tool for detecting flood and drought events. Such predictions are even more critical to Sub-Saharan African regions that are vulnerable to the increasing frequency and intensity of such events. These regions are sparsely gauged, with few available gauging stations that are often plagued with missing data due to various causes, such as harsh environmental conditions and constrained operational resources. This work presents a novel workflow for predicting streamflow in the presence of missing gauge observations. We leverage bias correction of the GEOGloWS ECMWF streamflow service (GESS) forecasts for missing data imputation and predict future streamflow using the state-of-the-art Temporal Fusion transformers at ten river gauging stations in the Benin Republic. We show by simulating missingness in a testing period that GESS forecasts have a significant bias that results in poor imputation performance over the ten Beninese stations. Our findings suggest that overall bias correction by Elastic Net and Gaussian Process regression achieves superior performance relative to traditional imputation by established methods. We also show that the Temporal Fusion Transformer yields high predictive skill and further provides explanations for predictions through the weights of its attention mechanism. The findings of this work provide a basis for integrating Global streamflow prediction model data and state-of-the-art machine learning models into operational early-warning decision-making systems in resource-constrained countries vulnerable to drought and flooding due to extreme weather events.
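A minimal sketch of the imputation step, assuming a single gauge with aligned daily GESS forecasts: an Elastic Net is fitted where observations exist and used to fill the gaps. The array names and the univariate setup are illustrative simplifications of the workflow described above.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

def fill_gaps_with_corrected_forecast(gess, gauge):
    """Impute missing gauge values with a bias-corrected global forecast.

    gess  : (n_days,) modelled streamflow at the gauge location
    gauge : (n_days,) observed streamflow, with np.nan where missing
    """
    observed = ~np.isnan(gauge)
    # Learn the gauge-vs-forecast relationship where observations exist.
    model = ElasticNetCV(cv=5).fit(gess[observed, None], gauge[observed])
    filled = gauge.copy()
    # Fill the gaps with the bias-corrected forecast.
    filled[~observed] = model.predict(gess[~observed, None])
    return filled
```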
16:15–16:30 Environmental Sensor Placement with Convolutional Gaussian Neural Processes (slides) Tom R Andersson (British Antarctic Survey)*; Wessel P Bruinsma (University of Cambridge and Invenia Labs); Stratis Markou (University of Cambridge); James R Requeima (University of Cambridge); Alejandro Coca-Castro (The Alan Turing Institute); Anna Vaughan (University of Cambridge); Anna-Louise Ellis (Met Office); Matthew Lazzara (University of Wisconsin-Madison); Daniel Jones (British Antarctic Survey); Scott Hosking (British Antarctic Survey); Richard E. Turner (University of Cambridge) Deploying environmental measurement stations can be a costly and time-consuming procedure, especially in remote regions that are difficult to access, such as Antarctica. Therefore, it is crucial that sensors are placed as efficiently as possible, maximising the informativeness of their measurements. Informative sensor placements can be identified by fitting a probabilistic model to data and finding placements that would maximally reduce the model’s uncertainty. The models most widely used for this purpose are Gaussian processes (GPs). However, capturing complex non-stationary behaviour with GPs is challenging, and it is also difficult to scale them to large datasets. In this work, we explore the use of a convolutional Gaussian neural process (ConvGNP) to address these issues. A ConvGNP is a meta-learning model that uses neural networks to parameterise a joint Gaussian distribution over target locations. Using simulated surface air temperature anomaly fields over Antarctica as ground truth, we show that the ConvGNP learns spatial and seasonal non-stationarities from the data. Notably, the enhanced flexibility of the ConvGNP enables it to make substantially better probabilistic predictions than a non-stationary GP baseline. In a simulated sensor placement experiment, the ConvGNP proposes sensor placements that significantly outperform those generated from GPs and a simple heuristic placement method in terms of uncertainty reduction metrics. The goal of this study is to facilitate future work that deploys the ConvGNP on observational data to make operational sensor placement recommendations.
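The placement criterion itself can be illustrated independently of the ConvGNP: greedily choose the candidate site whose addition most reduces the mean predictive uncertainty over the target locations. The sketch below uses a scikit-learn Gaussian process with fixed hyperparameters as the probabilistic model; in the study this role is played by the ConvGNP.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def greedy_placements(X_obs, y_obs, X_candidates, X_targets, n_new=3):
    """Greedily pick candidate sites whose addition most reduces the mean
    predictive standard deviation over the target locations."""
    # With optimizer=None the kernel hyperparameters stay fixed, so the
    # predictive variance depends only on where sensors are, not on the
    # placeholder values assigned to hypothetical new sensors.
    X, y = X_obs.copy(), list(y_obs)
    remaining, chosen = list(range(len(X_candidates))), []
    for _ in range(n_new):
        scores = []
        for idx in remaining:
            gp = GaussianProcessRegressor(optimizer=None).fit(
                np.vstack([X, X_candidates[idx:idx + 1]]), y + [0.0])
            _, std = gp.predict(X_targets, return_std=True)
            scores.append(std.mean())
        best = remaining[int(np.argmin(scores))]
        chosen.append(best)
        remaining.remove(best)
        X = np.vstack([X, X_candidates[best:best + 1]])
        y = y + [0.0]
    return chosen
```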
16:30–16:45 Pollution Tracker: Finding industrial sources of aerosol emission in satellite imagery Peter Manshausen (University of Oxford)*; Duncan Watson-Parris (University of California San Diego); Lena Wagner (GAF AG); Pirmin Maier (GAF AG); Sybrand Jacobus Muller (GAF AG); Gernot Ramminger (GAF AG); Philip Stier (University of Oxford) The effects of anthropogenic aerosol, solid or liquid particles suspended in the air, are the biggest contributor to uncertainty in current climate perturbations. Heavy industry sites, such as coal power plants and steel manufacturers, emit large amounts of aerosol in a small area. This makes them ideal places to study aerosol interactions with radiation and clouds. However, existing data sets of heavy industry locations are either not public, or suffer from reporting gaps. Here, we develop a deep learning algorithm to detect missing sites in high-resolution satellite data. For the pipeline to be viable at global scale, we employ a two-step approach. The first step uses 10m resolution data, which is scanned for potential industry sites, before using 1.2m resolution images to confirm or reject detections. On held-back test data, the models perform well, with the lower resolution one reaching up to 94% accuracy. Deployed to a large test region, the first stage model yields many false positive detections. The second stage, higher resolution model shows promising results at filtering these out, while keeping the true positives. In the deployment area, we find five new heavy industry sites which were not in the training data set. This demonstrates that the approach can be used to complement data sets of heavy industry sites.
16:45–17:00 Data-driven gap-filling of multivariate climate observations over land Verena Bessenbacher (ETH Zuerich)*; Martin Hirschi (ETH Zuerich); Dominik L. Schumacher (ETH Zuerich); Sonia I. Seneviratne (ETH Zuerich); Lukas Gudmundsson (ETH Zurich) The volume of Earth system observations from space and ground has massively grown in recent decades. Despite this increasing abundance, multivariate or multi-source analyses at the interface of atmosphere and land are still hampered by the sparsity of ground measurements and a large number of missing values in satellite observations. Here we use CLIMFILL (CLIMate data gap-FILL), a recently developed multivariate gap-filling procedure. It combines state-of-the-art spatial interpolation with an iterative approach designed to account for the dependence across multiple incomplete variables. CLIMFILL is applied to a set of remotely sensed and in-situ observations over land that are central to observing land-atmosphere interactions and extreme events. The resulting gridded monthly time series spans the years 1995–2020 globally at 0.5-degree resolution with gap-free estimates of nine variables that are based on the satellite products: ESA CCI surface layer soil moisture, MODIS land surface temperature, diurnal temperature range, GPM precipitation, GRACE terrestrial water storage, ESA CCI burned area, ESA CCI snow cover fraction as well as two-meter temperature and precipitation from gridded in-situ observations. Internal validation shows that the generated dataset can recover time series of anomalies better than state-of-the-art interpolation methods, indicating the importance of multivariate dependencies. The CLIMFILL gap-fill shows high correlations with respective variables of ERA5-Land, and soil moisture estimates compare favorably to in-situ observations from the ISMN network.
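CLIMFILL itself is a dedicated framework, but its core idea, cyclically regressing each incomplete variable on current estimates of the others, can be illustrated with scikit-learn's IterativeImputer on stand-in data:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Rows = grid cells / months, columns = incomplete variables
# (e.g. soil moisture, LST, precipitation, ...). Random stand-in data here.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))
X[rng.random(X.shape) < 0.3] = np.nan  # 30% missing at random

# Each variable is iteratively regressed on the current estimates of the
# others, so cross-variable dependencies inform the gap-filling.
imputer = IterativeImputer(max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X)
```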
Friday: Session 8
Chair: Emma Boland
09:00–09:15 Understanding cirrus clouds using explainable machine learning Kai Jeggle (ETH Zurich)*; David Neubauer (ETH Zurich); Gustau Camps-Valls (Universitat de València); Ulrike Lohmann (ETH Zurich) Cirrus clouds are key modulators of Earth’s climate. Their dependencies on meteorological and aerosol conditions are among the largest uncertainties in global climate models. This work uses three years of satellite data and reanalysis data to study the link between cirrus drivers and cloud properties. We use a gradient-boosted machine learning model and a Long Short-Term Memory (LSTM) network with an attention layer to predict ice water content and ice crystal number concentration. The models show that meteorological and aerosol conditions can predict cirrus properties with R² = 0.49. Feature attributions are calculated with SHapley Additive exPlanations (SHAP) to quantify the link between meteorological and aerosol conditions and cirrus properties. For instance, the minimum concentration of supermicron-sized dust particles required to cause a decrease in ice crystal number concentration predictions is 2 × 10⁻⁴ mg m⁻³. The last 15 hours before the observation are the most relevant for predicting all cirrus properties.
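A hedged sketch of the attribution step: a gradient-boosted model is fitted on stand-in predictor data and SHAP values are computed with a tree explainer. The feature set, model and data below are illustrative, not the authors' pipeline.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Columns might be temperature, vertical velocity, dust concentration, ...;
# the target a cirrus property such as ice crystal number concentration.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 8))
y = 2.0 * X[:, 0] - np.maximum(X[:, 3], 0) + 0.1 * rng.standard_normal(2000)

model = GradientBoostingRegressor().fit(X, y)

# TreeExplainer gives per-sample, per-feature attributions whose sign and
# magnitude quantify how each driver pushes the prediction up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])
print(shap_values.shape)  # (100, 8)
```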
09:15–09:30 Clustering of causal graphs to explore drivers of river discharge Wiebke Günther (German Aerospace Center)*; Peter Miersch (Helmholtz Centre for Environmental Research); Urmi Ninad (Technische Universität Berlin); Jakob Runge (Institute of Data Science, German Aerospace Center (DLR)) This work aims to classify catchments through the lens of causal inference and cluster analysis. In particular, it uses causal effects (CE) of meteorological variables on river discharge while only relying on easily obtainable observational data. The proposed method combines time series causal discovery with CE estimation to develop features for a subsequent clustering step. Several ways to customize and adapt the features to the problem at hand are discussed. In an application example, the method is evaluated on 358 European river catchments. The found clusters are analyzed using the causal mechanisms that drive them and their environmental attributes.
09:30–09:45 Selecting Robust Features for Machine Learning Applications using Multidata Causal Discovery Saranya Ganesh S (University of Lausanne)*; Tom Beucler (University of Lausanne); Frederick Iat-Hin Tam (University of Lausanne); Andreas Gerhardus (German Aerospace Center (DLR), Institute of Data Science); Jakob Runge (Institute of Data Science, German Aerospace Center (DLR)) Robust feature selection is vital for creating reliable and interpretable Machine Learning (ML) models. When designing statistical prediction models in cases where domain knowledge is limited and underlying interactions are unknown, it is often difficult to choose the optimal set of drivers. To mitigate this issue, we introduce a Multidata (M) causal feature selection approach that can simultaneously process an ensemble of time series datasets and produce a single set of causal drivers. This approach uses the causal discovery algorithms PC1 or PCMCI that are implemented in the tigramite Python package. Both of these algorithms utilize conditional independence tests to infer parts of the causal graph. Our causal feature selection approach filters out causally-spurious links before passing the remaining causal features as inputs to a multivariate linear regression that predicts the targets. We apply our framework to the statistical prediction of West Pacific tropical cyclone intensity, for which the choice of accurate drivers and dimensionality reduction (time lags, vertical levels, and area-averaging) is often difficult. Using more stringent significance thresholds in the conditional independence tests helps eliminate spurious causal relationships, thus helping the ML model generalize better to unseen tropical cyclone cases. M-PC1 yields the best performance with a reduced number of inputs compared to other feature selection methods such as lagged correlation, random forest-based, M-PCMCI, or random feature selection. The optimal causal graphs obtained from our causal feature selection help improve our understanding of underlying relationships and suggest new potential drivers of TC intensification.
09:45–10:00 Statistical constraints on climate model parameters using a scalable cloud-based inference framework James Carzon (Carnegie Mellon University); Bruno Abreu (University of Illinois Urbana-Champaign); Leighton Regayre (University of Leeds); Kenneth Carslaw (University of Leeds); Lucia Deaconu (University of Oxford); Philip Stier (University of Oxford); Hamish Gordon (Carnegie Mellon University); Mikael Kuusela (Carnegie Mellon University) Atmospheric aerosols influence the Earth's climate, primarily by affecting cloud formation and scattering visible radiation. However, aerosol-related physical processes in climate simulations are highly uncertain. Constraining these processes could help improve model-based climate predictions. We propose a scalable statistical framework for constraining parameters in expensive climate models by comparing model outputs with observations. Using the C3.ai Suite, a cloud computing platform, we use a perturbed parameter ensemble of the UKESM1 climate model to efficiently train a surrogate model. A data-driven means of estimating model discrepancy is derived. The strict bounds method is applied to quantify parametric uncertainty in a principled way. We demonstrate the scalability of this framework with two weeks' worth of simulated aerosol optical depth data over the South Atlantic and Central African region, written from the model every three hours and matched in time to twice-daily MODIS observations. When constraining the model using real satellite observations, we establish constraints on combinations of three model parameters using much higher time-resolution outputs from the climate model than previous studies. This result suggests that potentially very powerful constraints may be achieved when our framework is scaled to the analysis of more observations and for longer time periods.
10:00–10:15 A flexible data and knowledge-driven method for identifying climate drivers to predict summer conditions in China’s Northeast Farming Region Edward Pope (Met Office); Nathan Creaser (Met Office); Kevin Donkers (Met Office)*; Samantha V. Adams (Met Office) Northeast China is a globally important food production region. Working with the China Meteorological Administration, the Met Office have co-developed an annual climate service forecasting maize yield in the region, aimed at helping regional planners manage risks to food security. As the current seasonal forecast models have limited skill in this region, the maize yield predictions are currently based on observed June-August temperature and precipitation, which has limited operational utility. Therefore, we have developed a flexible and powerful method for generating skilful statistical forecasts of June-August temperature and precipitation, increasing the time available for decision makers to take appropriate action in response to adverse conditions. There are 5 main steps to our approach: 1) collecting a large set of potential predictors at monthly temporal resolution (e.g. ENSO, IOD, NAO indices) from the KNMI data explorer or calculating them from ERA5 reanalysis; 2) Random Forest regression for feature selection; 3) testing the physical plausibility of these links by mapping correlations between selected features and global meteorological variables; 4) Bayesian Networks to identify the subset of selected features that have the most explanatory power and to explore potential causal relationships between predictors and regional temperature and precipitation in Northeast China, incorporating knowledge of teleconnections; 5) out-of-sample predictions using these optimal subsets of predictors in Linear Regression models to forecast temperature and precipitation in each region. The resulting statistical models were trained and tested on data from 1980-2016, and are shown to correlate much more closely with observations (Pearson R ~ 0.6) than the dynamical models (R ~ 0.3) for Northeast China. This method is applicable anywhere in the world, and can provide insight into large-scale drivers of natural climate variability and generate skilful forecasts.
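Steps 2 and 5 of this pipeline can be sketched with scikit-learn as follows, using random stand-in data in place of the climate indices and regional observations: a random forest ranks the candidate predictors, the top few are retained, and a linear regression provides the out-of-sample forecast.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_years, n_indices = 37, 30               # e.g. 1980-2016, many candidate indices
X = rng.standard_normal((n_years, n_indices))
y = 0.8 * X[:, 2] - 0.5 * X[:, 7] + 0.2 * rng.standard_normal(n_years)

# Step 2: random-forest importances to shortlist predictors.
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]

# Step 5: simple linear regression on the shortlisted predictors,
# trained on early years and tested out of sample on later ones.
train, test = slice(0, 30), slice(30, n_years)
lin = LinearRegression().fit(X[train][:, top], y[train])
r = np.corrcoef(lin.predict(X[test][:, top]), y[test])[0, 1]
print(f"out-of-sample Pearson r = {r:.2f}")
```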
10:15–10:30 An Interpretable, Trustworthy Machine Learning Framework to Identify Spatiotemporal Patterns Favoring Tropical Cyclone Intensification Frederick Iat-Hin Tam (University of Lausanne)*; Tom Beucler (University of Lausanne); James Ruppert Jr. (University of Oklahoma) A quantifiable assessment of how environmental heterogeneity and convective asymmetries impact tropical cyclone (TC) strength is paramount in improving our understanding of TC intensification. This assessment can be made via Eulerian budgets or by linearizing the equations of motion. However, existing frameworks make implicit assumptions that preclude discussions on how spatially heterogeneous or transient forcing may affect TC intensity.

To address this gap, we design a data-driven model based on Sparse Principal Component Regression to learn the mapping between thermodynamic forcing and kinematic changes over different forecast windows. The model is built to be inherently interpretable and learns to combine different meteorological fields into physically-interpretable “optimal features” for predictions. Applied to WRF simulations of Hurricane Maria (2017) and Typhoon Haiyan (2013), the best models reach an R² of 0.7 for the intensification of the three-dimensional TC primary circulation.

We apply explainable AI techniques to draw new physical insights from interpretable ML models. Feature importance shows that the best models use longwave radiation (LW) the most to predict the early phase of TC intensification. The latent optimal structure for LW contains a wavenumber-1 asymmetry signature, which coincides with dry anomalies near the TCs. We will discuss several strategies to improve the trustworthiness of the ML framework, including forward feature selection to reduce the uncertainty in feature ranking and using WRF sensitivity experiments to verify that the latent optimal structures can control TC intensification in numerical models.

To conclude, our modelling framework can learn the mapping between thermodynamic forcing and kinematic changes that are physically reasonable without relying on axisymmetric assumptions. This opens the door to systematically discovering the leading causes of TC intensification from data.
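The sketch below gives the general flavour of a sparse principal component regression of this kind: sparse spatial components are learned from flattened thermodynamic fields, and a linear model maps the component scores to an intensity change. It is not the authors' model; the field and intensity arrays are hypothetical placeholders for the WRF-derived inputs.

```python
# Minimal sketch (not the authors' model) of Sparse Principal Component
# Regression: sparse components are learned from flattened thermodynamic
# fields and a linear model maps the component scores to intensity change.
# `fields` and `dV` are hypothetical stand-ins for the WRF-derived data.
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
fields = rng.normal(size=(200, 40 * 40))   # 200 samples of a flattened 40x40 LW field
dV = rng.normal(size=200)                  # kinematic (intensity) change per window

spca = SparsePCA(n_components=10, alpha=1.0, random_state=0)
scores = spca.fit_transform(fields)        # "optimal features": sparse spatial patterns
reg = LinearRegression().fit(scores, dV)

print("in-sample R2:", r2_score(dV, reg.predict(scores)))
# spca.components_.reshape(10, 40, 40) recovers the spatial patterns, which can
# then be inspected, e.g. for a wavenumber-1 longwave asymmetry as discussed above.
```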
Friday: Session 9(Back to top)
Chair: Eniko Szekely 
11:00–11:15 Climate Model Driven Seasonal Forecasting Approach with Deep Learning Alper Unal (Istanbul Technical University)*; Busra Asan (Istanbul Technical University); Ismail Sezen (Istanbul Technical University); Bugra Yesilkaynak (Istanbul Technical University); Yusuf Aydin (Istanbul Technical University); Mehmet Ilicak (Istanbul Technical University); Gozde Unal (Istanbul Technical University) Understanding seasonal climatic conditions is critical for better management of resources such as water, energy and agriculture. Recently, there has been great interest in utilizing the power of artificial intelligence methods in climate studies. This paper presents an advanced deep learning model (UNet++) trained on state-of-the-art global CMIP6 models to forecast global temperatures a month ahead. The ERA5 reanalysis dataset was used for finetuning as well as for performance analysis on the validation dataset. Three different setups (CMIP6; CMIP6 + elevation; CMIP6 + elevation + ERA5 finetuning) were used with both UNet and UNet++ algorithms, resulting in six different models. For each model, 14 different sequential and non-sequential temporal settings were used. The Mean Absolute Error (MAE) analysis revealed that the UNet++ model with CMIP6, elevation, and ERA5 finetuning in the “Year 3 Month 2” temporal case provided the best outcome, with an MAE of 0.7. Regression analysis over the validation dataset between the ERA5 data values and the corresponding AI model predictions revealed slope and R2 values close to 1, suggesting very good agreement. The AI model predicts significantly better than the mean CMIP6 ensemble between 2016 and 2021. Both models predict the summer months more accurately than the winter months.
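A minimal sketch of the pretrain-then-finetune setup, under assumptions, is shown below: a model is first trained on CMIP6 fields and then finetuned on ERA5 with MAE as the loss. A plain convolutional network stands in for UNet++, and `cmip6_loader` and `era5_loader` are assumed PyTorch DataLoaders yielding (input months, next-month temperature) pairs; none of these names come from the paper.

```python
# Minimal sketch, not the authors' code: pretrain a convolutional model on
# CMIP6 fields, then finetune on ERA5, using MAE as the loss. A plain CNN
# stands in for UNet++; `cmip6_loader` and `era5_loader` are assumed
# DataLoaders of (input_months, next_month_temperature) pairs.
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in for UNet++ (3 input months)
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
mae = nn.L1Loss()

def run_epoch(loader, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for x, y in loader:                     # x: [B, months, H, W], y: [B, 1, H, W]
        opt.zero_grad()
        loss = mae(model(x), y)
        loss.backward()
        opt.step()

# run_epoch(cmip6_loader, lr=1e-3)          # pretraining on CMIP6 simulations
# run_epoch(era5_loader, lr=1e-4)           # finetuning on ERA5 reanalysis
```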
11:15–11:30 Quantifying causal teleconnections to drought and fire risks in Indonesian Borneo (slides) Timothy Lam (University of Exeter)*; Jennifer Catto (University of Exeter); Rosa Barciela (Met Office); Anna Harper (University of Exeter); Peter Challenor (University of Exeter); Alberto Arribas (Microsoft) Fires occurring over the peatlands in Indonesian Borneo, accompanied by droughts, have had devastating impacts on human health, livelihoods, the economy and the natural environment, and their prevention requires a comprehensive understanding of climate-associated risk. We aim to strengthen early warning triggers of drought, which is a strong predictor of the prevalence of fires, and to evaluate the climate risk relevant to the formulation of long-term policies to eliminate fires. Although it is widely known that the droughts are often associated with El Niño events, the onset process of El Niño, and thus the drought precursors and their possible changes under the future climate, are not clearly understood. Here we use a causal network approach to quantify the strength of teleconnections to droughts at a seasonal timescale in (1) observational and reanalysis data, (2) CMIP6 models and (3) seasonal hindcasts. The observational and reanalysis data show that the droughts are strongly linked to ENSO variability, with drier years corresponding to El Niño conditions, and that droughts can be predicted with a lead time of three months based on their associations with Pacific SST, with higher SST preceding drier conditions. Under the SSP585 scenario, the CMIP6 multi-model ensembles show a significant increase in both the maximum number of consecutive dry days in the Indonesian Borneo region in JJA (p = 0.006) and its linear association with Pacific SST in MAM (p = 0.001) for 2061–2100 compared with the historical baseline. On the other hand, the seasonal hindcast models (1) overestimate the variability of the maximum number of consecutive dry days, (2) show varied skill in simulating the mean rainfall and drought indicators, and (3) underestimate the teleconnections to Borneo droughts, making it difficult to assess the likelihood of unprecedented drought and fire risk under El Niño conditions.
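A much simplified stand-in for one link of such an analysis is sketched below: the lagged association between a MAM Pacific SST index and JJA consecutive dry days. This is plain lagged correlation, not the causal-network method of the abstract, and both yearly series are synthetic placeholders.

```python
# Simplified illustration, not the full causal-network analysis: quantify the
# lagged association between a MAM Pacific SST index and JJA drought severity
# (maximum consecutive dry days) over Indonesian Borneo. Both series are
# hypothetical yearly arrays, aligned so SST leads the drought index by one season.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
sst_mam = rng.normal(size=40)                              # Pacific SST index, MAM, 40 years
cdd_jja = 0.6 * sst_mam + rng.normal(scale=0.8, size=40)   # consecutive dry days, JJA

r, p = pearsonr(sst_mam, cdd_jja)
print(f"3-month-lead link strength r = {r:.2f} (p = {p:.3f})")
# A causal-network method would additionally condition each link on other
# drivers and lags before accepting it as a teleconnection.
```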
11:30–11:45 Utilizing Bayesian History Matching for calibrating an internal gravity wave parameterization Robert C King (Stanford University)*; Laura Mansfield (Stanford University); Aditi Sheshadri (Stanford University) Modern Global Circulation Models (GCMs) utilize various parameterizations to model phenomena they cannot explicitly resolve. One such parameterization is the Alexander and Dunkerton parameterization for non-orographic gravity wave drag, henceforth referred to as AD99, which can be used to estimate the drag imparted on the mean flow. The AD99 parameterization utilizes a distribution of momentum flux as a function of phase speed defined at the source level of the waves. In this case, a Gaussian distribution is used, with the maximum momentum flux defined as Bt and the half-width-at-half-maximum phase speed defined as cw. These parameters are user-specifiable, and their calibration was the focus of this work. More specifically, Bayesian History Matching was utilized across multiple waves to reduce the size of the non-implausible space of both parameters, using radiosonde measurements of the Quasi-Biennial Oscillation (QBO) to calculate the implausibility of each parameter configuration. The results from this Bayesian History Matching calibration were then compared with a previous approach conducted using Ensemble Kalman Inversion (EKI).
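One wave of history matching essentially screens the (Bt, cw) space with an implausibility measure. The sketch below shows that step under stated assumptions: the emulator mean and variance, the observed QBO metric, and the discrepancy term are all illustrative placeholders, not values from the study.

```python
# Minimal sketch, under assumptions, of one wave of history matching: compute
# the implausibility of each (Bt, cw) candidate and keep the non-implausible set.
# The emulator predictions and the observed QBO metric are hypothetical.
import numpy as np

def implausibility(emulator_mean, emulator_var, obs, obs_var, discrepancy_var=0.0):
    """I(x) = |obs - E[f(x)]| / sqrt(Var_emulator + Var_obs + Var_discrepancy)."""
    return np.abs(obs - emulator_mean) / np.sqrt(emulator_var + obs_var + discrepancy_var)

# Candidate parameter grid and (placeholder) emulator output for a QBO metric,
# e.g. the simulated QBO period in months.
Bt, cw = np.meshgrid(np.linspace(0.001, 0.01, 50), np.linspace(10, 50, 50))
emu_mean = 20 + 800 * Bt + 0.1 * cw          # placeholder emulator mean
emu_var = np.full_like(emu_mean, 1.0)        # placeholder emulator variance

obs_period, obs_var = 28.0, 0.5              # illustrative radiosonde-derived QBO period
I = implausibility(emu_mean, emu_var, obs_period, obs_var)
non_implausible = I < 3.0                    # common 3-sigma cutoff
print("fraction of parameter space retained:", non_implausible.mean())
```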
11:45–12:00 A heuristic method for detecting overfit in unsupervised classification of climate model data Emma Boland (British Antarctic Survey)*; Erin Atkinson (University of Toronto); Daniel Jones (British Antarctic Survey) Unsupervised classification is becoming an increasingly common method to objectively identify coherent structures within both observed and modelled climate data. However, the user must choose the number of classes to fit in advance. Typically, a combination of statistical methods and expertise is used to choose the appropriate number of classes for a given study; however, it may not be possible to identify a single 'optimal' number of classes. In this work we present a heuristic method, the Ensemble Difference Criterion, for unambiguously determining the maximum number of classes for modelled data where more than one ensemble member is available. This method requires robustness in the class definition between simulated ensembles of the system of interest. For demonstration, we apply this to the clustering of Southern Ocean potential temperatures in a CMIP6 climate model, and show that the data supports between four and seven classes of a Gaussian Mixture Model.
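The ingredients of such a robustness check can be sketched roughly as below: Gaussian Mixture Models with increasing numbers of classes are fitted to two ensemble members and the agreement of the labelings they imply is compared. This is not the Ensemble Difference Criterion itself; the adjusted Rand index and the synthetic profile arrays are stand-ins used only for illustration.

```python
# Illustrative sketch only: fit Gaussian Mixture Models with an increasing
# number of classes to two ensemble members and measure how consistently they
# label the same profiles. The paper's Ensemble Difference Criterion formalizes
# when the disagreement becomes too large; adjusted Rand index is a stand-in here.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(3)
member_a = rng.normal(size=(1000, 5))     # hypothetical temperature profiles, member 1
member_b = rng.normal(size=(1000, 5))     # hypothetical temperature profiles, member 2

for k in range(2, 9):
    gm_a = GaussianMixture(n_components=k, random_state=0).fit(member_a)
    gm_b = GaussianMixture(n_components=k, random_state=0).fit(member_b)
    # label the same reference data with both fits and measure agreement
    agreement = adjusted_rand_score(gm_a.predict(member_a), gm_b.predict(member_a))
    print(k, round(agreement, 2))
```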
12:00–12:15 Statistical Learning to Construct Probabilistic Subseasonal Precipitation Forecasting over California Nachiketa Acharya (CIRES, University of Colorado Boulder and NOAA-Physical Sciences Laboratory, Boulder, CO)*; Kyle Hall (CIRES/NOAA-PSL) Sub-Seasonal (S2S) climate forecasts suffer from a significant lack of prediction skill beyond week-two lead times. While the statistical bias correction of global coupled ocean-atmosphere circulation models (GCMs) offers some measure of skill, there is significant interest in the capacity of machine learning-based (ML) forecasting approaches to improve S2S forecasts at these time scales. The large size of S2S datasets unfortunately makes traditional ML approaches computationally expensive, and therefore mostly inaccessible to those without access to institutional computing resources. In order to address this problem, Extreme Learning Machine (ELM) can be used as a fast alternative to traditional neural-network-based ML forecasting approaches. ELM is a randomly initialized neural network approach which, instead of adjusting hidden layer neuron weights through backpropagation, leaves them unchanged and solves its output layer with the generalized Moore-Penrose inverse. However, since the traditional ELM network only produces a deterministic outcome, we use a modified version of ELM called Probabilistic Output Extreme Learning Machine (PO-ELM). PO-ELM uses sigmoid additive neurons and slightly different linear programming to make probabilistic predictions. In order to accommodate probabilistic subseasonal forecasting, we further modified PO-ELM to produce relative tercile probabilities, applying a normalization rule so that the probabilities are mutually exclusive. We used XCast, a Python library previously designed by the authors, to implement multivariate PO-ELM-based probabilistic forecasts of precipitation using the outputs of ECMWF’s S2S forecast over California. In this talk, the skill and interpretability of the proposed method for subseasonal precipitation forecasting over California will be discussed.
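The core ELM idea described above (random, untrained hidden layer; output weights solved in one step with the Moore-Penrose pseudoinverse) can be sketched as follows. This is a minimal deterministic illustration only, not PO-ELM or XCast; the predictor and predictand arrays are synthetic placeholders.

```python
# Minimal sketch of the core ELM idea, not of PO-ELM or XCast themselves:
# a randomly initialized hidden layer is left untrained and the output weights
# are solved in one step with the Moore-Penrose pseudoinverse.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 30))        # hypothetical predictors (e.g. S2S model output)
y = rng.normal(size=(500, 1))         # hypothetical predictand (precipitation anomaly)

n_hidden = 100
W = rng.normal(size=(X.shape[1], n_hidden))     # random input weights (never updated)
b = rng.normal(size=n_hidden)                   # random biases
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # sigmoid hidden-layer activations
beta = np.linalg.pinv(H) @ y                    # output weights via pseudoinverse

y_hat = H @ beta                                # deterministic ELM prediction
# PO-ELM extends this to probabilistic output; tercile probabilities would then
# be normalized so that the three categories sum to one.
```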
