We need you to contribute!
The ACCESS-Hive is a community resource that is a work in progress. Can you help to add content to this page? We’d love to receive your contribution. See our contributing guidelines for details of how to provide content. You can also open an issue highlighting any content you’d like us to provide but aren’t able to contribute yourself.
ESMValTool
Documentation | Tutorial | Source Code
ESMValTool is a community-developed climate model diagnostics and evaluation software package, driven by computational performance as well as scientific accuracy and reproducibility. ESMValTool is open to both users and developers, encouraging open exchange of diagnostic source code and evaluation results from the Coupled Model Intercomparison Project (CMIP) ensemble. For a comprehensive introduction to ESMValTool, visit its documentation page.
METplus
METplus is a verification framework that spans a wide range of temporal (warn-on-forecast to climate) and spatial (storm to global) scales. It is intended to be extensible through additional capability developed by the community. The core components of the framework include MET, the associated database and display systems called METviewer and METexpress, and a suite of Python wrappers to provide low-level automation and examples, also called use-cases. METplus will be a component of NOAA's Unified Forecast System (UFS) cross-cutting infrastructure as well as NCAR's System for Integrated Modeling of the Atmosphere (SIMA).
METplus is being actively developed by NCAR/Research Applications Laboratory (RAL), NOAA/Earth Systems Research Laboratories (ESRL), NOAA/Environmental Modeling Center (EMC), and is open to community contributions.
Links to the code repository and documentation for each METplus component are provided below:
- METplus Wrappers: sources | docs
- MET: sources | docs
- METviewer: sources | docs
- METexpress: sources | docs
- METplotpy: sources | docs
- METcalcpy: sources | docs
- METdatadb: sources | docs
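MET's core output includes categorical verification statistics derived from a contingency table of forecast and observed events. As a rough illustration of the kind of statistics involved (this is a toy sketch, not MET's API or implementation; function and variable names are invented here):

```python
# Toy illustration of contingency-table verification statistics of the
# kind MET computes. Not MET's actual code or interface.

def contingency_counts(forecasts, observations, threshold):
    """Count hits, misses, false alarms, and correct negatives for an
    event defined as value >= threshold."""
    hits = misses = false_alarms = correct_negatives = 0
    for f, o in zip(forecasts, observations):
        f_event, o_event = f >= threshold, o >= threshold
        if f_event and o_event:
            hits += 1
        elif not f_event and o_event:
            misses += 1
        elif f_event and not o_event:
            false_alarms += 1
        else:
            correct_negatives += 1
    return hits, misses, false_alarms, correct_negatives

def pod(hits, misses):
    """Probability of detection: fraction of observed events that were forecast."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False alarm ratio: fraction of forecast events that were not observed."""
    return false_alarms / (hits + false_alarms)

h, m, fa, cn = contingency_counts([1, 5, 7, 0], [2, 6, 1, 0], threshold=4)
print(pod(h, m), far(h, fa))  # 1.0 0.5
```

In practice MET computes these (and many more robust scores, with confidence intervals) directly from gridded or point data; the sketch only shows the underlying bookkeeping.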
PCMDI Metrics Package (PMP)
The PMP is used to provide “quick-look” objective comparisons of Earth System Models (ESMs) with one another and with available observations. Results are produced in the context of all model simulations contributed to CMIP6 and earlier CMIP phases. Currently, the comparisons emphasize metrics of the large- to global-scale annual cycle and of both tropical and extra-tropical modes of variability. Ongoing work in v1.x development branches includes established statistics for ENSO, MJO, regional monsoons, and high-frequency characteristics of simulated precipitation.
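An annual-cycle metric of the kind PMP reports can be sketched as a simple error statistic between a model's monthly climatology and a reference climatology. The function name and formula choice below are illustrative only, not PMP's implementation:

```python
# Hedged sketch of an annual-cycle comparison metric; PMP's actual
# metrics are computed on gridded climatologies with area weighting.
import math

def annual_cycle_rmse(model_monthly, ref_monthly):
    """RMSE between two 12-value monthly climatologies."""
    assert len(model_monthly) == len(ref_monthly) == 12
    return math.sqrt(
        sum((m - r) ** 2 for m, r in zip(model_monthly, ref_monthly)) / 12
    )

model = [i + 0.5 for i in range(12)]  # toy model climatology
ref = [float(i) for i in range(12)]   # toy reference climatology
print(annual_cycle_rmse(model, ref))  # 0.5
```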
Free Evaluation System Framework (FREVA)
Freva, the free evaluation system framework, is a data search and analysis platform developed by the atmospheric science community for the atmospheric science community. With the help of Freva, researchers can:
- quickly and intuitively search for data stored at typical data centers that host many datasets.
- create a common interface for user defined data analysis tools.
- apply data analysis tools in a reproducible manner.
Toolkit for Extremes Climate Analysis (TECA)
Documentation | Tutorials | Sources
TECA is a general-purpose tool for detecting discrete events in climate model output. It leverages a map-reduce framework for efficient parallelization at large scales (order 10K+ cores). Currently, TECA contains detection algorithms for tropical cyclones, atmospheric rivers, and extratropical cyclones, and plans are underway to implement algorithms for mesoscale convective complexes, African easterly waves, atmospheric blocks, and fronts.
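The map-reduce pattern TECA parallelizes can be illustrated in a few lines: each "map" task detects candidate events in one time step independently, and a "reduce" step merges the per-step results. This is a toy sketch of the pattern only (TECA itself is C++/MPI, and the detector below is invented):

```python
# Illustrative map-reduce event detection. In TECA, the map stage runs
# in parallel across many cores; here it is a plain serial map().
from functools import reduce

def detect_events(timestep):
    """Map: flag grid cells exceeding a threshold in one time step (toy detector)."""
    step_index, field = timestep
    return [(step_index, i) for i, v in enumerate(field) if v > 0.9]

def merge(a, b):
    """Reduce: combine per-timestep event lists into one catalogue."""
    return a + b

timesteps = [(0, [0.2, 0.95, 0.1]), (1, [0.99, 0.3, 0.92])]
events = reduce(merge, map(detect_events, timesteps))
print(events)  # [(0, 1), (1, 0), (1, 2)]
```

Because each time step is processed independently, the map stage scales out naturally, which is what makes the approach effective at the core counts quoted above.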
Model and ObservatioN Evaluation Toolkit (MONET)
Documentation | Tutorial | Source Code | Paper 2
MONET is an open source project and Python package that aims to create a common platform for atmospheric composition data analysis for weather and air quality models.
MONET was developed to evaluate the Community Multiscale Air Quality Model (CMAQ) for the NOAA National Air Quality Forecast Capability (NAQFC) modeling system. MONET is designed to be a modularized Python package for (1) pairing model output to observational data in space and time; (2) leveraging the Pandas Python package for easy searching and grouping; and (3) analyzing and visualizing data. This process introduces a convenient method for evaluating model output.
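The pairing-and-grouping workflow described above can be sketched with plain pandas. The column names, variables, and merge keys below are invented for illustration and are not MONET's API:

```python
# Sketch of the pairing idea MONET builds on: align model output with
# observations on shared keys, then use pandas grouping to summarize.
import pandas as pd

model = pd.DataFrame({
    "time": pd.to_datetime(["2023-01-01 00:00", "2023-01-01 01:00"]),
    "site": ["A", "A"],
    "model_o3": [30.0, 34.0],   # hypothetical ozone values
})
obs = pd.DataFrame({
    "time": pd.to_datetime(["2023-01-01 00:00", "2023-01-01 01:00"]),
    "site": ["A", "A"],
    "obs_o3": [28.0, 35.0],
})

# (1) Pair model output with observations in time and space.
paired = model.merge(obs, on=["time", "site"])
paired["bias"] = paired["model_o3"] - paired["obs_o3"]

# (2)-(3) Use pandas grouping to summarize for analysis and plotting.
print(paired.groupby("site")["bias"].mean())
```

MONET handles the hard parts this sketch skips: interpolating gridded model fields to station locations and matching in time with tolerances.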
Climpred
Documentation | Tutorial | Source Code | Paper 3
Climpred aims to offer a comprehensive set of analysis tools for assessing the quality of dynamical forecasts relative to verification products (e.g., observations, reanalysis products, control simulations). Climpred supports a broad range of temporal scales of prediction, spanning the weather, subseasonal-to-seasonal (S2S), and seasonal-to-decadal (S2D) communities.
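A core quantity in forecast verification of this kind is a skill score of the forecast relative to a reference (such as climatology). The pure-Python sketch below illustrates an MSE-based skill score; it is an illustration of the concept, not climpred's xarray-based API:

```python
# Hedged sketch of a mean-squared-error skill score:
# 1 = perfect, 0 = no better than the reference, negative = worse.

def mse(pred, truth):
    """Mean squared error between two equal-length sequences."""
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

def skill_score(forecast, reference, verification):
    """MSE skill of `forecast` relative to `reference`, both verified
    against the same `verification` data."""
    return 1 - mse(forecast, verification) / mse(reference, verification)

verif = [1.0, 2.0, 3.0]       # toy verification product
fcst = [1.1, 2.1, 2.9]        # toy dynamical forecast
clim = [2.0, 2.0, 2.0]        # climatology as the reference forecast
print(skill_score(fcst, clim, verif))
```

Climpred wraps this kind of computation in labeled, dask-aware xarray operations across initializations, lead times, and ensemble members.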
Barbara Brown, Tara Jensen, John Halley Gotway, Randy Bullock, Eric Gilleland, Tressa Fowler, Kathryn Newman, Dan Adriaansen, Lindsay Blank, Tatiana Burek, and others. The Model Evaluation Tools (MET): More than a decade of community-supported forecast verification. Bulletin of the American Meteorological Society, 102(4):E782–E807, 2021. doi:10.1175/BAMS-D-19-0093.1.
Barry Baker and Li Pan. Overview of the model and observation evaluation toolkit (MONET) version 1.0 for evaluating atmospheric transport models. Atmosphere, 8(11):210, 2017. doi:10.3390/atmos8110210.
Riley X. Brady and Aaron Spring. Climpred: Verification of weather and climate forecasts. Journal of Open Source Software, 6(59):2781, 2021. doi:10.21105/joss.02781.
Created: May 31, 2023