Time series data#
A major use case for xarray is multi-dimensional time-series data. Accordingly, we've copied to xarray many of the features that make working with time-series data in pandas such a joy. In most cases, we rely on pandas for the core functionality.
Creating datetime64 data#
Xarray uses the numpy dtypes numpy.datetime64 and numpy.timedelta64
with specified units (one of "s", "ms", "us" and "ns") to represent datetime
data, which offer vectorized operations with numpy and smooth integration with pandas.
To convert to or create regular arrays of numpy.datetime64 data, we recommend
using pandas.to_datetime(), pandas.DatetimeIndex, or xarray.date_range():
pd.to_datetime(["2000-01-01", "2000-02-02"])
DatetimeIndex(['2000-01-01', '2000-02-02'], dtype='datetime64[ns]', freq=None)
pd.DatetimeIndex(
["2000-01-01 00:00:00", "2000-02-02 00:00:00"], dtype="datetime64[s]"
)
DatetimeIndex(['2000-01-01', '2000-02-02'], dtype='datetime64[s]', freq=None)
xr.date_range("2000-01-01", periods=365)
DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04',
'2000-01-05', '2000-01-06', '2000-01-07', '2000-01-08',
'2000-01-09', '2000-01-10',
...
'2000-12-21', '2000-12-22', '2000-12-23', '2000-12-24',
'2000-12-25', '2000-12-26', '2000-12-27', '2000-12-28',
'2000-12-29', '2000-12-30'],
dtype='datetime64[ns]', length=365, freq='D')
xr.date_range("2000-01-01", periods=365, unit="s")
DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04',
'2000-01-05', '2000-01-06', '2000-01-07', '2000-01-08',
'2000-01-09', '2000-01-10',
...
'2000-12-21', '2000-12-22', '2000-12-23', '2000-12-24',
'2000-12-25', '2000-12-26', '2000-12-27', '2000-12-28',
'2000-12-29', '2000-12-30'],
dtype='datetime64[s]', length=365, freq='D')
Note
Care has to be taken to create the output with the desired resolution.
For pandas.date_range() the unit keyword argument has to be specified,
and pandas.to_datetime() does not allow selecting the resolution at all;
in that case pandas.DatetimeIndex can be used directly. There is more
in-depth information in the section Time Coding.
Alternatively, you can supply arrays of Python datetime objects. These get
converted automatically when used as arguments in xarray objects (with us-resolution):
import datetime
xr.Dataset({"time": datetime.datetime(2000, 1, 1)})
<xarray.Dataset> Size: 8B
Dimensions: ()
Data variables:
time datetime64[us] 8B 2000-01-01
When reading or writing netCDF files, xarray automatically decodes datetime and
timedelta arrays using CF conventions (that is, by using a units
attribute like 'days since 2000-01-01').
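As a minimal sketch of this round trip (assuming a netCDF backend such as netCDF4 or h5netcdf is installed; the file name example.nc and the variable name ds_times are purely illustrative), times are encoded to CF units on write and decoded back to datetime64 on read:
# Write a dataset with a datetime coordinate; xarray encodes it using CF conventions
ds_times = xr.Dataset({"foo": ("time", [1.0, 2.0]), "time": pd.date_range("2000-01-01", periods=2)})
ds_times.to_netcdf("example.nc")
# On read, the "time" variable is decoded back to datetime64 automatically
xr.open_dataset("example.nc")["time"]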
Note
When decoding/encoding datetimes for non-standard calendars or for dates
before 1582-10-15, xarray uses the cftime library by default.
It was previously packaged with the netcdf4-python package under the
name netcdftime but is now distributed separately. cftime is an
optional dependency of xarray.
You can manually decode arrays in this form by passing a dataset to
decode_cf():
attrs = {"units": "hours since 2000-01-01"}
ds = xr.Dataset({"time": ("time", [0, 1, 2, 3], attrs)})
# Default decoding to 'ns'-resolution
xr.decode_cf(ds)
<xarray.Dataset> Size: 32B
Dimensions: (time: 4)
Coordinates:
* time (time) datetime64[ns] 32B 2000-01-01 ... 2000-01-01T03:00:00
Data variables:
*empty*
# Decoding to 's'-resolution
coder = xr.coders.CFDatetimeCoder(time_unit="s")
xr.decode_cf(ds, decode_times=coder)
<xarray.Dataset> Size: 32B
Dimensions: (time: 4)
Coordinates:
* time (time) datetime64[s] 32B 2000-01-01 ... 2000-01-01T03:00:00
Data variables:
*empty*
From xarray 2025.01.2 the resolution of the dates can be one of "s", "ms", "us" or "ns". One limitation of using datetime64[ns] is that it limits the native representation of dates to those that fall between the years 1678 and 2262, a range that widens significantly at coarser resolutions. When a store contains dates outside of these bounds (or dates < 1582-10-15 with a Gregorian, also known as standard, calendar), dates will be returned as arrays of cftime.datetime objects and a CFTimeIndex will be used for indexing.
CFTimeIndex enables most of the indexing functionality of a pandas.DatetimeIndex.
See Non-standard calendars and dates outside the precision range for more information.
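As a minimal sketch (assuming cftime is installed), requesting a non-standard calendar from xarray.date_range() yields cftime.datetime values backed by a CFTimeIndex:
# A "noleap" calendar cannot be represented with numpy.datetime64,
# so xarray falls back to cftime.datetime objects and a CFTimeIndex
xr.date_range("2000-01-01", periods=3, calendar="noleap", use_cftime=True)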
Datetime indexing#
Xarray borrows powerful indexing machinery from pandas (see Indexing and selecting data).
This allows for several useful and succinct forms of indexing, particularly for
datetime64 data. For example, we support indexing with strings for single
items and with the slice object:
time = pd.date_range("2000-01-01", freq="h", periods=365 * 24)
ds = xr.Dataset({"foo": ("time", np.arange(365 * 24)), "time": time})
ds.sel(time="2000-01")
<xarray.Dataset> Size: 12kB
Dimensions: (time: 744)
Coordinates:
* time (time) datetime64[ns] 6kB 2000-01-01 ... 2000-01-31T23:00:00
Data variables:
foo (time) int64 6kB 0 1 2 3 4 5 6 7 ... 737 738 739 740 741 742 743
ds.sel(time=slice("2000-06-01", "2000-06-10"))
<xarray.Dataset> Size: 4kB
Dimensions: (time: 240)
Coordinates:
* time (time) datetime64[ns] 2kB 2000-06-01 ... 2000-06-10T23:00:00
Data variables:
foo (time) int64 2kB 3648 3649 3650 3651 3652 ... 3884 3885 3886 3887
You can also select a particular time by indexing with a
datetime.time object:
ds.sel(time=datetime.time(12))
<xarray.Dataset> Size: 6kB
Dimensions: (time: 365)
Coordinates:
* time (time) datetime64[ns] 3kB 2000-01-01T12:00:00 ... 2000-12-30T12:...
Data variables:
foo (time) int64 3kB 12 36 60 84 108 132 ... 8652 8676 8700 8724 8748
For more details, read the pandas documentation and the section on Indexing Using Datetime Components (i.e. using the .dt accessor).
Datetime components#
Similar to pandas accessors, the components of datetime objects contained in a
given DataArray can be quickly computed using a special .dt accessor.
time = pd.date_range("2000-01-01", freq="6h", periods=365 * 4)
ds = xr.Dataset({"foo": ("time", np.arange(365 * 4)), "time": time})
ds.time.dt.hour
<xarray.DataArray 'hour' (time: 1460)> Size: 12kB
array([ 0, 6, 12, ..., 6, 12, 18], shape=(1460,))
Coordinates:
* time (time) datetime64[ns] 12kB 2000-01-01 ... 2000-12-30T18:00:00
ds.time.dt.dayofweek
<xarray.DataArray 'dayofweek' (time: 1460)> Size: 12kB
array([5, 5, 5, ..., 5, 5, 5], shape=(1460,))
Coordinates:
* time (time) datetime64[ns] 12kB 2000-01-01 ... 2000-12-30T18:00:00
The .dt accessor works on both coordinate dimensions as well as
multi-dimensional data.
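For instance, here is a minimal sketch (not part of the original example set) applying .dt to a two-dimensional array of timestamps built from the time index above:
# .dt works element-wise on multi-dimensional datetime64 data as well
times_2d = xr.DataArray(np.array(time[:4]).reshape(2, 2), dims=("x", "y"))
times_2d.dt.hour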
Xarray also supports a notion of “virtual” or “derived” coordinates for datetime components implemented by pandas, including “year”, “month”, “day”, “hour”, “minute”, “second”, “dayofyear”, “week”, “dayofweek”, “weekday” and “quarter”:
ds["time.month"]
<xarray.DataArray 'month' (time: 1460)> Size: 12kB
array([ 1, 1, 1, ..., 12, 12, 12], shape=(1460,))
Coordinates:
* time (time) datetime64[ns] 12kB 2000-01-01 ... 2000-12-30T18:00:00
ds["time.dayofyear"]
<xarray.DataArray 'dayofyear' (time: 1460)> Size: 12kB
array([ 1, 1, 1, ..., 365, 365, 365], shape=(1460,))
Coordinates:
* time (time) datetime64[ns] 12kB 2000-01-01 ... 2000-12-30T18:00:00
For use as a derived coordinate, xarray adds 'season' to the list of
datetime components supported by pandas:
ds["time.season"]
<xarray.DataArray 'season' (time: 1460)> Size: 18kB
array(['DJF', 'DJF', 'DJF', ..., 'DJF', 'DJF', 'DJF'],
shape=(1460,), dtype='<U3')
Coordinates:
* time (time) datetime64[ns] 12kB 2000-01-01 ... 2000-12-30T18:00:00
ds["time"].dt.season
<xarray.DataArray 'season' (time: 1460)> Size: 18kB
array(['DJF', 'DJF', 'DJF', ..., 'DJF', 'DJF', 'DJF'],
shape=(1460,), dtype='<U3')
Coordinates:
* time (time) datetime64[ns] 12kB 2000-01-01 ... 2000-12-30T18:00:00
The set of valid seasons consists of ‘DJF’, ‘MAM’, ‘JJA’ and ‘SON’, labeled by the first letters of the corresponding months.
You can use these shortcuts with both Datasets and DataArray coordinates.
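For example, a minimal sketch of the same shortcut applied to a DataArray that carries the time coordinate:
# Virtual datetime components are also available when indexing a DataArray
ds["foo"]["time.season"]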
In addition, xarray supports rounding operations floor, ceil, and round. These operations require that you supply a rounding frequency as a string argument.
ds["time"].dt.floor("D")
<xarray.DataArray 'floor' (time: 1460)> Size: 12kB
array(['2000-01-01T00:00:00.000000000', '2000-01-01T00:00:00.000000000',
'2000-01-01T00:00:00.000000000', ...,
'2000-12-30T00:00:00.000000000', '2000-12-30T00:00:00.000000000',
'2000-12-30T00:00:00.000000000'],
shape=(1460,), dtype='datetime64[ns]')
Coordinates:
* time (time) datetime64[ns] 12kB 2000-01-01 ... 2000-12-30T18:00:00
The .dt accessor can also be used to generate formatted datetime strings
for arrays utilising the same formatting as the standard datetime.strftime.
ds["time"].dt.strftime("%a, %b %d %H:%M")
<xarray.DataArray 'strftime' (time: 1460)> Size: 12kB
array(['Sat, Jan 01 00:00', 'Sat, Jan 01 06:00', 'Sat, Jan 01 12:00', ...,
'Sat, Dec 30 06:00', 'Sat, Dec 30 12:00', 'Sat, Dec 30 18:00'],
shape=(1460,), dtype=object)
Coordinates:
* time (time) datetime64[ns] 12kB 2000-01-01 ... 2000-12-30T18:00:00
Indexing Using Datetime Components#
You can use the .dt accessor when subsetting your data as well. For example, we can subset for the month of January using the following:
ds.isel(time=(ds.time.dt.month == 1))
<xarray.Dataset> Size: 2kB
Dimensions: (time: 124)
Coordinates:
* time (time) datetime64[ns] 992B 2000-01-01 ... 2000-01-31T18:00:00
Data variables:
foo (time) int64 992B 0 1 2 3 4 5 6 7 ... 117 118 119 120 121 122 123
You can also search for multiple months (in this case January through March), using isin:
ds.isel(time=ds.time.dt.month.isin([1, 2, 3]))
<xarray.Dataset> Size: 6kB
Dimensions: (time: 364)
Coordinates:
* time (time) datetime64[ns] 3kB 2000-01-01 ... 2000-03-31T18:00:00
Data variables:
foo (time) int64 3kB 0 1 2 3 4 5 6 7 ... 357 358 359 360 361 362 363
Resampling and grouped operations#
See also
For more generic documentation on grouping, see GroupBy: Group and Bin Data.
Datetime components couple particularly well with grouped operations for analyzing features that repeat over time. Here’s how to calculate the mean by time of day:
ds.groupby("time.hour").mean()
<xarray.Dataset> Size: 64B
Dimensions: (hour: 4)
Coordinates:
* hour (hour) int64 32B 0 6 12 18
Data variables:
foo (hour) float64 32B 728.0 729.0 730.0 731.0
For upsampling or downsampling temporal resolutions, xarray offers a
Dataset.resample() method building on the core functionality
offered by the pandas method of the same name. Resample uses essentially the
same API as pandas.DataFrame.resample().
For example, we can downsample our dataset from hourly to 6-hourly:
ds.resample(time="6h")
<DatasetResample, grouped over 1 grouper(s), 1460 groups in total:
'__resample_dim__': TimeResampler('__resample_dim__'), 1460/1460 groups with labels 2000-01-01, ..., 2000-12-30T1...>
This will create a specialized DatasetResample or DataArrayResample
object which saves information necessary for resampling. All of the reduction methods which work with
Dataset or DataArray objects can also be used for resampling:
ds.resample(time="6h").mean()
<xarray.Dataset> Size: 23kB
Dimensions: (time: 1460)
Coordinates:
* time (time) datetime64[ns] 12kB 2000-01-01 ... 2000-12-30T18:00:00
Data variables:
foo (time) float64 12kB 0.0 1.0 2.0 ... 1.457e+03 1.458e+03 1.459e+03
You can also supply an arbitrary reduction function to aggregate over each resampling group:
ds.resample(time="6h").reduce(np.mean)
<xarray.Dataset> Size: 23kB
Dimensions: (time: 1460)
Coordinates:
* time (time) datetime64[ns] 12kB 2000-01-01 ... 2000-12-30T18:00:00
Data variables:
foo (time) float64 12kB 0.0 1.0 2.0 ... 1.457e+03 1.458e+03 1.459e+03
You can also resample on the time dimension while reducing along other dimensions at the same time
by specifying the dim keyword argument:
ds.resample(time="6h").mean(dim=["time", "latitude", "longitude"])
For upsampling, xarray provides six methods: asfreq, ffill, bfill, pad,
nearest and interpolate. interpolate extends scipy.interpolate.interp1d()
and supports all of its schemes. All of these resampling operations work on both
Dataset and DataArray objects with an arbitrary number of dimensions.
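As a minimal sketch, upsampling the 6-hourly dataset above to hourly resolution with linear interpolation looks like this:
# Upsample from 6-hourly to hourly values via linear interpolation
ds.resample(time="1h").interpolate("linear")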
To limit the scope of the methods ffill, bfill, pad and
nearest, the tolerance argument can be set in coordinate units.
Data with indices outside of the given tolerance is set to NaN.
ds.resample(time="1h").nearest(tolerance="1h")
<xarray.Dataset> Size: 140kB
Dimensions: (time: 8755)
Coordinates:
* time (time) datetime64[ns] 70kB 2000-01-01 ... 2000-12-30T18:00:00
Data variables:
foo (time) float64 70kB 0.0 0.0 nan nan ... nan nan 1.459e+03 1.459e+03
It is often desirable to center the time values after a resampling operation.
That can be accomplished by updating the resampled dataset time coordinate values
using time offset arithmetic via the pandas.tseries.frequencies.to_offset() function.
resampled_ds = ds.resample(time="6h").mean()
offset = pd.tseries.frequencies.to_offset("6h") / 2
resampled_ds["time"] = resampled_ds.get_index("time") + offset
resampled_ds
<xarray.Dataset> Size: 23kB
Dimensions: (time: 1460)
Coordinates:
* time (time) datetime64[ns] 12kB 2000-01-01T03:00:00 ... 2000-12-30T21...
Data variables:
foo (time) float64 12kB 0.0 1.0 2.0 ... 1.457e+03 1.458e+03 1.459e+03
See also
For more examples of using grouped operations on a time dimension, see Toy weather data.
Handling Seasons#
Two extremely common time series operations are to group by seasons, and resample to a seasonal frequency.
Xarray has historically supported some simple versions of these computations.
For example, .groupby("time.season") (where the seasons are DJF, MAM, JJA, SON)
and resampling to a seasonal frequency using Pandas syntax: .resample(time="QS-DEC").
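For reference, a minimal sketch of that classic pandas-style seasonal resample applied to the 6-hourly dataset defined above:
# Quarterly means with quarters anchored in December (DJF, MAM, JJA, SON)
ds.resample(time="QS-DEC").mean()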
Quite commonly one wants more flexibility in defining seasons. For these use-cases, Xarray provides
groupers.SeasonGrouper and groupers.SeasonResampler.
from xarray.groupers import SeasonGrouper
ds.groupby(time=SeasonGrouper(["DJF", "MAM", "JJA", "SON"])).mean()
<xarray.Dataset> Size: 64B
Dimensions: (season: 4)
Coordinates:
* season (season) object 32B 'DJF' 'MAM' 'JJA' 'SON'
Data variables:
foo (season) float64 32B 546.2 423.5 791.5 1.158e+03
Note how the seasons are in the specified order, unlike .groupby("time.season") where the
seasons are sorted alphabetically.
ds.groupby("time.season").mean()
<xarray.Dataset> Size: 64B
Dimensions: (season: 4)
Coordinates:
* season (season) object 32B 'DJF' 'JJA' 'MAM' 'SON'
Data variables:
foo (season) float64 32B 546.2 791.5 423.5 1.158e+03
SeasonGrouper supports overlapping seasons:
ds.groupby(time=SeasonGrouper(["DJFM", "MAMJ", "JJAS", "SOND"])).mean()
<xarray.Dataset> Size: 64B
Dimensions: (season: 4)
Coordinates:
* season (season) object 32B 'DJFM' 'MAMJ' 'JJAS' 'SOND'
Data variables:
foo (season) float64 32B 483.5 483.5 851.5 1.218e+03
Skipping months is allowed:
ds.groupby(time=SeasonGrouper(["JJAS"])).mean()
<xarray.Dataset> Size: 16B
Dimensions: (season: 1)
Coordinates:
* season (season) object 8B 'JJAS'
Data variables:
foo (season) float64 8B 851.5
Use SeasonResampler to specify custom seasons:
from xarray.groupers import SeasonResampler
ds.resample(time=SeasonResampler(["DJF", "MAM", "JJA", "SON"])).mean()
<xarray.Dataset> Size: 48B
Dimensions: (time: 3)
Coordinates:
* time (time) datetime64[ns] 24B 2000-03-01 2000-06-01 2000-09-01
Data variables:
foo (time) float64 24B 423.5 791.5 1.158e+03
SeasonResampler is smart enough to correctly handle years for seasons that
span the end of the year (e.g. DJF). By default SeasonResampler will skip any
season that is incomplete (e.g. the first DJF season for a time series that starts in Jan).
Pass the drop_incomplete=False kwarg to SeasonResampler to disable this behaviour.
from xarray.groupers import SeasonResampler
ds.resample(
time=SeasonResampler(["DJF", "MAM", "JJA", "SON"], drop_incomplete=False)
).mean()
<xarray.Dataset> Size: 80B
Dimensions: (time: 5)
Coordinates:
* time (time) datetime64[ns] 40B 1999-12-01 2000-03-01 ... 2000-12-01
Data variables:
foo (time) float64 40B 119.5 423.5 791.5 1.158e+03 1.4e+03
Seasons need not be of the same length:
ds.resample(time=SeasonResampler(["JF", "MAM", "JJAS", "OND"])).mean()
<xarray.Dataset> Size: 64B
Dimensions: (time: 4)
Coordinates:
* time (time) datetime64[ns] 32B 2000-01-01 2000-03-01 ... 2000-10-01
Data variables:
foo (time) float64 32B 119.5 423.5 851.5 1.278e+03