Using 'open virtual dataset' capability to work with TEMPO Level 3 data¶
Summary¶
In this tutorial, we will use the earthaccess.open_virtual_mfdataset() function to open a week's worth of granules from the Nitrogen Dioxide (NO2) Level-3 data collection of the TEMPO air quality mission.
About TEMPO: The Tropospheric Emissions: Monitoring of Pollution (TEMPO) instrument is a geostationary satellite mission that provides hourly daytime measurements of air quality over North America. It measures key pollutants including nitrogen dioxide (NO2), formaldehyde, and ozone at high spatial resolution (~2 by 4.75 km at the center of its field of regard).
We will calculate temporal and spatial means for a subset of the data and visualize the results. This approach demonstrates cloud-optimized data access patterns that can scale from days to years of data.
Learn more: For comprehensive documentation on the earthaccess package, visit the earthaccess documentation.
Note that this same approach can be used for a date range of any length, within the mission's duration. Running this notebook for a year's worth of TEMPO Level-3 data took approximately 15 minutes.
Prerequisites¶
AWS US-West-2 Environment: This tutorial is designed to run in an AWS cloud compute instance in AWS region us-west-2. It will also work from your laptop or workstation, just without the speed benefits of in-cloud access.
Earthdata Account: A (free!) Earthdata Login account is required to access data from the NASA Earthdata system. Before requesting TEMPO data, we first need to set up our Earthdata Login authentication, as described in the Earthdata Cookbook's earthaccess tutorial (link).
Packages:
cartopy
dask
earthaccess version 0.14.0 or greater
matplotlib
numpy
xarray
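If any of these packages are missing, one way to install them from within a notebook is shown below (a minimal sketch; adapt to your environment, e.g. conda):
# Sketch: install the required packages from within the notebook.
%pip install "earthaccess>=0.14.0" cartopy dask matplotlib numpy xarray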
Setup¶
import cartopy.crs as ccrs
import earthaccess
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
from matplotlib import rcParams
%config InlineBackend.figure_format = 'jpeg'
rcParams["figure.dpi"] = (
80 # Reduce figure resolution to keep the saved size of this notebook low.
)
Log in using Earthdata Login¶
auth = earthaccess.login()
if not auth.authenticated:
# Ask for credentials and persist them in a .netrc file.
auth.login(strategy="interactive", persist=True)
print(earthaccess.__version__)
0.15.0
Search for data granules¶
We search for TEMPO Nitrogen Dioxide (NO2) data for a week-long period between January 11th and 18th, 2024 (note: times are in UTC). We limit the results to eight granules so that the notebook renders quickly.
results = earthaccess.search_data(
# TEMPO NO₂ Level-3 product
short_name="TEMPO_NO2_L3",
# Version 3 of the data product
version="V03",
# Time period: One week in January 2024 (times are in UTC)
temporal=("2024-01-11 12:00", "2024-01-18 12:00"),
count=8,
)
print(f"Number of granules found: {len(results)}")
Number of granules found: 8
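Optionally, we can peek at a granule's metadata before opening anything. For example, the following sketch lists the first granule's cloud-storage links using the data_links method:
# Optional: show where the first granule's data live in cloud storage.
print(results[0].data_links(access="direct"))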
Opening Virtual Multifile Datasets¶
Understanding Virtual Datasets¶
Virtual datasets allow us to work with multiple files as if they were a single dataset without downloading all the data to local storage. This is achieved through:
- Kerchunk: Creates lightweight reference files that point to data chunks in cloud storage
- Virtualizarr: Combines multiple reference files into a single virtual dataset
- Lazy Loading: Data is only accessed when needed for computations
For TEMPO data, we need to handle the hierarchical netCDF4 structure by opening each group separately, then merging them.
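If you want to confirm this hierarchy yourself, one option is to inspect a single granule's group structure before opening each group. The sketch below assumes a recent xarray version that provides open_datatree:
# Sketch: peek at one granule's netCDF4 group hierarchy
# (assumes xarray provides open_datatree; adjust the engine if needed).
fileset = earthaccess.open(results[:1])
tree = xr.open_datatree(fileset[0])
print(list(tree.groups))  # expect the root plus 'product' and 'geolocation'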
First we set the argument options to be used by earthaccess.open_virtual_mfdataset.
load argument considerations:
- load=True works. Within earthaccess.open_virtual_mfdataset, a temporary virtual reference file (a "virtual dataset") is created and then immediately loaded with kerchunk. This is because the function assumes the user is making this request for the first time and the combined manifest file needs to be generated first. In the future, however, earthaccess.open_virtual_mfdataset may provide a way to save the combined manifest file, at which point you could avoid repeating these steps and proceed directly to loading with kerchunk/virtualizarr.
- load=False results in KeyError: "no index found for coordinate 'longitude'" because it creates ManifestArrays without indexes (see the earthaccess documentation here (link)).
open_options = {
"access": "indirect", # access to cloud data (faster in AWS with "direct")
"load": True, # Load metadata immediately (required for indexing)
"concat_dim": "time", # Concatenate files along the time dimension
"data_vars": "minimal", # Only load data variables that include the concat_dim
"coords": "minimal", # Only load coordinate variables that include the concat_dim
"compat": "override", # Avoid coordinate conflicts by picking the first
"combine_attrs": "override", # Avoid attribute conflicts by picking the first
}
Because TEMPO data are processed and archived in a netCDF4 format using a group hierarchy, we open each group – i.e., 'root', 'product', and 'geolocation' – and then merge them together.
%%time
result_root = earthaccess.open_virtual_mfdataset(granules=results, **open_options)
result_product = earthaccess.open_virtual_mfdataset(
granules=results, group="product", **open_options
)
result_geolocation = earthaccess.open_virtual_mfdataset(
granules=results, group="geolocation", **open_options
)
# merge
result_merged = xr.merge([result_root, result_product, result_geolocation])
result_merged
CPU times: user 601 ms, sys: 79.2 ms, total: 680 ms
Wall time: 1min 27s
<xarray.Dataset> Size: 7GB
Dimensions:                                  (latitude: 2950, longitude: 7750, time: 8)
Coordinates:
  * longitude                                (longitude) float32 31kB -168.0 ...
  * latitude                                 (latitude) float32 12kB 14.01 ...
  * time                                     (time) datetime64[ns] 64B 2024-0...
Data variables:
    weight                                   (latitude, longitude) float32 91MB ...
    main_data_quality_flag                   (time, latitude, longitude) float32 732MB ...
    vertical_column_troposphere              (time, latitude, longitude) float64 1GB ...
    vertical_column_troposphere_uncertainty  (time, latitude, longitude) float64 1GB ...
    vertical_column_stratosphere             (time, latitude, longitude) float64 1GB ...
    solar_zenith_angle                       (time, latitude, longitude) float32 732MB ...
    viewing_zenith_angle                     (time, latitude, longitude) float32 732MB ...
    relative_azimuth_angle                   (time, latitude, longitude) float32 732MB ...
Attributes: (12/40)
    history:                           2024-08-10T19:20:11Z: L2_regrid -v /tem...
    scan_num:                          2
    time_coverage_start:               2024-01-11T12:56:25Z
    time_coverage_end:                 2024-01-11T13:36:11Z
    time_coverage_start_since_epoch:   1389013003.147965
    time_coverage_end_since_epoch:     1389015389.7366676
    ...                                ...
    title:                             TEMPO Level 3 nitrogen dioxide product
    collection_shortname:              TEMPO_NO2_L3
    collection_version:                1
    keywords:                          EARTH SCIENCE>ATMOSPHERE>AIR QUALITY>NI...
    summary:                           Nitrogen dioxide Level 3 files provide ...
    coremetadata:                      \nGROUP = INVENTORYMET...
Understanding the Data¶
- vertical_column_troposphere: Total column amount of NO₂ in the troposphere (units: molecules/cm²)
- main_data_quality_flag: Quality indicator (0 = good quality data)
- Geographic region: We'll focus on the Mid-Atlantic region (Washington DC area)
- Longitude: -78° to -74° W
- Latitude: 35° to 39° N
# Define our region of interest (Mid-Atlantic/Washington DC area)
lon_bounds = (-78, -74) # Western to Eastern longitude
lat_bounds = (35, 39) # Southern to Northern latitude
print(
f"Analyzing region: {lat_bounds[0]}°N to {lat_bounds[1]}°N, {abs(lon_bounds[0])}°W to {abs(lon_bounds[1])}°W"
)
Analyzing region: 35°N to 39°N, 78°W to 74°W
Temporal mean - a map of averages over the time period¶
# Define temporal mean (average over time) calculation
temporal_mean_ds = (
result_merged.sel(
{
"longitude": slice(lon_bounds[0], lon_bounds[1]),
"latitude": slice(lat_bounds[0], lat_bounds[1]),
}
)
.where(result_merged["main_data_quality_flag"] == 0) # Filter for good quality data
.mean(dim="time")
)
print(f"Dataset shape after subsetting: {temporal_mean_ds.dims}")
temporal_mean_ds
Dataset shape after subsetting: FrozenMappingWarningOnValuesAccess({'latitude': 200, 'longitude': 200})
<xarray.Dataset> Size: 2MB
Dimensions:                                  (latitude: 200, longitude: 200)
Coordinates:
  * longitude                                (longitude) float32 800B -77.99 ...
  * latitude                                 (latitude) float32 800B 35.01 ...
Data variables:
    weight                                   (latitude, longitude) float32 160kB dask.array<chunksize=(130, 150), meta=np.ndarray>
    main_data_quality_flag                   (latitude, longitude) float32 160kB dask.array<chunksize=(200, 200), meta=np.ndarray>
    vertical_column_troposphere              (latitude, longitude) float64 320kB dask.array<chunksize=(200, 200), meta=np.ndarray>
    vertical_column_troposphere_uncertainty  (latitude, longitude) float64 320kB dask.array<chunksize=(200, 200), meta=np.ndarray>
    vertical_column_stratosphere             (latitude, longitude) float64 320kB dask.array<chunksize=(200, 200), meta=np.ndarray>
    solar_zenith_angle                       (latitude, longitude) float32 160kB dask.array<chunksize=(200, 200), meta=np.ndarray>
    viewing_zenith_angle                     (latitude, longitude) float32 160kB dask.array<chunksize=(200, 200), meta=np.ndarray>
    relative_azimuth_angle                   (latitude, longitude) float32 160kB dask.array<chunksize=(200, 200), meta=np.ndarray>
%%time
# Compute the temporal mean
mean_vertical_column_trop = temporal_mean_ds["vertical_column_troposphere"].compute()
CPU times: user 560 ms, sys: 184 ms, total: 744 ms
Wall time: 10.3 s
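As a quick sanity check (values are in molecules/cm²), we can print the range of the computed field; this step is an optional sketch:
# Optional sanity check: range of the computed temporal mean.
print(f"min: {float(mean_vertical_column_trop.min()):.3e}")
print(f"max: {float(mean_vertical_column_trop.max()):.3e}")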
fig, ax = plt.subplots(subplot_kw={"projection": ccrs.PlateCarree()})
mean_vertical_column_trop.squeeze().plot.contourf(ax=ax)
# Add geographic features
ax.coastlines()
ax.gridlines(
draw_labels=True,
dms=True,
x_inline=False,
y_inline=False,
)
plt.show()
Spatial mean - a time series of area averages¶
# Define spatial mean (average over longitude/latitude) calculation
spatial_mean_ds = (
result_merged.sel(
{
"longitude": slice(lon_bounds[0], lon_bounds[1]),
"latitude": slice(lat_bounds[0], lat_bounds[1]),
}
)
.where(result_merged["main_data_quality_flag"] == 0) # Filter for good quality data
.mean(dim=("longitude", "latitude"))
)
spatial_mean_ds
<xarray.Dataset> Size: 416B
Dimensions:                                  (time: 8)
Coordinates:
  * time                                     (time) datetime64[ns] 64B 2024-0...
Data variables:
    weight                                   (time) float32 32B dask.array<chunksize=(1,), meta=np.ndarray>
    main_data_quality_flag                   (time) float32 32B dask.array<chunksize=(1,), meta=np.ndarray>
    vertical_column_troposphere              (time) float64 64B dask.array<chunksize=(1,), meta=np.ndarray>
    vertical_column_troposphere_uncertainty  (time) float64 64B dask.array<chunksize=(1,), meta=np.ndarray>
    vertical_column_stratosphere             (time) float64 64B dask.array<chunksize=(1,), meta=np.ndarray>
    solar_zenith_angle                       (time) float32 32B dask.array<chunksize=(1,), meta=np.ndarray>
    viewing_zenith_angle                     (time) float32 32B dask.array<chunksize=(1,), meta=np.ndarray>
    relative_azimuth_angle                   (time) float32 32B dask.array<chunksize=(1,), meta=np.ndarray>
%%time
# Compute the spatial mean
spatial_mean_vertical_column_trop = spatial_mean_ds[
"vertical_column_troposphere"
].compute()
CPU times: user 552 ms, sys: 166 ms, total: 718 ms
Wall time: 8.41 s
spatial_mean_vertical_column_trop.plot()
plt.show()
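For longer date ranges, the same time series can be aggregated further. As a sketch, xarray's resample can compute daily means from the hourly scans:
# Sketch: aggregate the hourly area averages to daily means
# (most useful when the search window spans many days).
daily_mean = spatial_mean_vertical_column_trop.resample(time="1D").mean()
daily_mean.plot(marker="o")
plt.show()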
Single scan subset¶
# Select a single scan time for detailed analysis
scan_time_start = np.datetime64("2024-01-11T13:00:00") # 1 PM UTC
scan_time_end = np.datetime64("2024-01-11T14:00:00") # 2 PM UTC
print(f"Analyzing single scan: {scan_time_start} to {scan_time_end} UTC")
print("Note: This corresponds to ~8-9 AM local time on the US East Coast")
subset_ds = result_merged.sel(
{
"longitude": slice(lon_bounds[0], lon_bounds[1]),
"latitude": slice(lat_bounds[0], lat_bounds[1]),
"time": slice(scan_time_start, scan_time_end),
}
).where(result_merged["main_data_quality_flag"] == 0)
subset_ds
Analyzing single scan: 2024-01-11T13:00:00 to 2024-01-11T14:00:00 UTC
Note: This corresponds to ~8-9 AM local time on the US East Coast
<xarray.Dataset> Size: 2MB
Dimensions:                                  (latitude: 200, longitude: 200, time: 1)
Coordinates:
  * longitude                                (longitude) float32 800B -77.99 ...
  * latitude                                 (latitude) float32 800B 35.01 ...
  * time                                     (time) datetime64[ns] 8B 2024-01...
Data variables:
    weight                                   (latitude, longitude, time) float32 160kB dask.array<chunksize=(130, 150, 1), meta=np.ndarray>
    main_data_quality_flag                   (time, latitude, longitude) float32 160kB dask.array<chunksize=(1, 200, 200), meta=np.ndarray>
    vertical_column_troposphere              (time, latitude, longitude) float64 320kB dask.array<chunksize=(1, 200, 200), meta=np.ndarray>
    vertical_column_troposphere_uncertainty  (time, latitude, longitude) float64 320kB dask.array<chunksize=(1, 200, 200), meta=np.ndarray>
    vertical_column_stratosphere             (time, latitude, longitude) float64 320kB dask.array<chunksize=(1, 200, 200), meta=np.ndarray>
    solar_zenith_angle                       (time, latitude, longitude) float32 160kB dask.array<chunksize=(1, 200, 200), meta=np.ndarray>
    viewing_zenith_angle                     (time, latitude, longitude) float32 160kB dask.array<chunksize=(1, 200, 200), meta=np.ndarray>
    relative_azimuth_angle                   (time, latitude, longitude) float32 160kB dask.array<chunksize=(1, 200, 200), meta=np.ndarray>
Attributes: (12/40)
    history:                           2024-08-10T19:20:11Z: L2_regrid -v /tem...
    scan_num:                          2
    time_coverage_start:               2024-01-11T12:56:25Z
    time_coverage_end:                 2024-01-11T13:36:11Z
    time_coverage_start_since_epoch:   1389013003.147965
    time_coverage_end_since_epoch:     1389015389.7366676
    ...                                ...
    title:                             TEMPO Level 3 nitrogen dioxide product
    collection_shortname:              TEMPO_NO2_L3
    collection_version:                1
    keywords:                          EARTH SCIENCE>ATMOSPHERE>AIR QUALITY>NI...
    summary:                           Nitrogen dioxide Level 3 files provide ...
    coremetadata:                      \nGROUP = INVENTORYMET...
%%time
# Compute the single scan's values
subset_vertical_column_trop = subset_ds["vertical_column_troposphere"].compute()
CPU times: user 73.1 ms, sys: 20.6 ms, total: 93.7 ms
Wall time: 1.04 s
fig, ax = plt.subplots(subplot_kw={"projection": ccrs.PlateCarree()})
subset_vertical_column_trop.squeeze().plot.contourf(ax=ax)
# Add geographic features
ax.coastlines()
ax.gridlines(
draw_labels=True,
dms=True,
x_inline=False,
y_inline=False,
)
plt.show()
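As noted in the Summary, this same workflow scales to longer date ranges within the mission's duration; only the search window changes. For example (a sketch; expect longer runtimes, roughly 15 minutes for a year when running in-cloud):
# Sketch: the identical workflow over a full year; only the search changes.
results_year = earthaccess.search_data(
    short_name="TEMPO_NO2_L3",
    version="V03",
    temporal=("2024-01-01", "2024-12-31"),
)
print(f"Number of granules found: {len(results_year)}")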