Pyleoclim User API

Pyleoclim, like many other Python packages, follows an object-oriented design. It sounds fancy, but it really is quite simple. What this means for you is that we’ve gone through the trouble of coding up a lot of timeseries analysis methods that apply in various situations - so you don’t have to worry about that. These situations are described in classes, the beauty of which is called “inheritance” (see link above). Basically, inheritance allows us to define methods that will automatically apply to your dataset, as long as you put your data within one of those classes. A major advantage of object-oriented design is that you, the user, can harness the power of Pyleoclim methods in very few lines of code through the user API, without ever having to get your hands dirty with our code (unless you want to, of course). The flipside is that any user would do well to understand Pyleoclim classes, what they are intended for, and what methods they support.

The Pyleoclim User API. Credit: Feng Zhu

The following describes the various classes that undergird the Pyleoclim edifice.

Series (pyleoclim.Series)

class pyleoclim.core.series.Series(time, value, time_unit=None, time_name=None, value_name=None, value_unit=None, label=None, importedFrom=None, archiveType=None, control_archiveType=False, log=None, keep_log=False, sort_ts='ascending', dropna=True, verbose=True, clean_ts=False, auto_time_params=None)[source]

The Series class describes the most basic objects in Pyleoclim. A Series is a simple dictionary that contains 3 things:

  • value, an array of real-valued numbers;

  • time, a coordinate axis at which those values were obtained;

  • optionally, some metadata about both axes, like units, labels and origin.

How to create, manipulate and use such objects is described in PyleoTutorials.

Parameters:
  • time (list or numpy.array) – time axis (prograde or retrograde)

  • value (list or numpy.array) – values of the dependent variable (y)

  • time_unit (string) – Units for the time vector (e.g., ‘ky BP’). Default is None, in which case ‘years CE’ is assumed

  • time_name (string) – Name of the time vector (e.g., ‘Time’,’Age’). Default is None. This is used to label the time axis on plots

  • value_name (string) – Name of the value vector (e.g., ‘temperature’) Default is None

  • value_unit (string) – Units for the value vector (e.g., ‘deg C’) Default is None

  • label (string) – Name of the time series (e.g., ‘NINO 3.4’) Default is None

  • log (dict) – Dictionary of tuples documenting the various transformations applied to the object

  • keep_log (bool) – Whether to keep a log of applied transformations. False by default

  • importedFrom (string) – source of the dataset. If it came from a LiPD file, this could be the datasetID property

  • archiveType (string) – climate archive, one of ‘Borehole’, ‘Coral’, ‘FluvialSediment’, ‘GlacierIce’, ‘GroundIce’, ‘LakeSediment’, ‘MarineSediment’, ‘Midden’, ‘MolluskShell’, ‘Peat’, ‘Sclerosponge’, ‘Shoreline’, ‘Speleothem’, ‘TerrestrialSediment’, ‘Wood’ Reference: https://lipdverse.org/vocabulary/archivetype/

  • control_archiveType ([True, False]) – Whether to standardize the name of the archiveType against the vocabulary from: https://lipdverse.org/vocabulary/paleodata_proxy/. If set to True, will only allow for these terms and automatically convert known synonyms to the standardized name. Only standardized variable names will be automatically assigned a color scheme. Default is False.

  • dropna (bool) – Whether to drop NaNs from the series, to prevent downstream functions from choking on them. Defaults to True

  • sort_ts (str) – Direction of sorting over the time coordinate; ‘ascending’ or ‘descending’ Defaults to ‘ascending’

  • verbose (bool) – If True, will print warning messages if there are any

  • clean_ts (boolean flag) – set to True to remove the NaNs and make the time axis strictly prograde, with duplicated timestamps reduced by averaging the values. Default is None (marked for deprecation)

  • auto_time_params (bool,) – If True, uses tsbase.disambiguate_time_metadata to ensure that time_name and time_unit are usable by Pyleoclim. This may override the provided metadata. If False, the provided time_name and time_unit are used. This may break some functionalities (e.g. common_time and convert_time_unit), so use at your own risk. If not provided, code will set to True for internal consistency.

Examples

Import the Southern Oscillation Index (SOI) and display a quick synopsis:

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
soi.view()
SOI [mb]
Time [year C.E.]
1951.000000 1.5
1951.083333 0.9
1951.166667 -0.1
1951.250000 -0.3
1951.333333 -0.7
... ...
2019.583333 -0.1
2019.666667 -1.2
2019.750000 -0.4
2019.833333 -0.8
2019.916667 -0.6

828 rows × 1 columns

Attributes:
datetime_index

Convert time to pandas DatetimeIndex.

metadata

Methods

bin([keep_log])

Bin values in a time series

causality(target_series[, method, timespan, ...])

Perform causality analysis with the target timeseries. Specifically, assess whether there is information in the target series that influenced the original series.

center([timespan, keep_log])

Centers the series (i.e., removes its estimated mean)

clean([verbose, keep_log])

Clean up the timeseries by removing NaNs and sort with increasing time points

convert_time_unit([time_unit, keep_log])

Convert the time units of the Series object

copy()

Make a copy of the Series object

correlation(target_series[, alpha, ...])

Estimates the correlation and its associated significance between two time series (not necessarily IID).

detrend([method, keep_log, preserve_mean])

Detrend Series object

equals(ts[, index_tol, value_tol])

Test whether two objects contain the same elements (values and datetime_index) A printout is returned if metadata are different, but the statement is considered True as long as data match.

fill_na([timespan, dt, keep_log])

Fill NaNs into the timespan

filter([cutoff_freq, cutoff_scale, method, ...])

Filtering methods for Series objects using four possible methods:

flip([axis, keep_log])

Flips the Series along one or both axes

from_csv(path)

Read in Series object from CSV file.

from_json(path)

Creates a pyleoclim.Series from a JSON file

gaussianize([keep_log])

Gaussianizes the timeseries (i.e., maps its values to a standard normal)

gkernel([step_style, keep_log, step_type])

Coarse-grain a Series object via a Gaussian kernel.

histplot([figsize, title, savefig_settings, ...])

Plot the distribution of the timeseries values

interp([method, keep_log])

Interpolate a Series object onto a new time axis

is_evenly_spaced([tol])

Check if the Series time axis is evenly-spaced, within tolerance

make_labels()

Initialization of plot labels based on Series metadata

outliers([method, remove, settings, ...])

Remove outliers from timeseries data.

plot([figsize, marker, markersize, color, ...])

Plot the timeseries

resample(rule[, keep_log])

Run analogue to pandas.Series.resample.

resolution()

Generate a resolution object

segment([factor, verbose])

Gap detection

sel([value, time, tolerance])

Slice Series based on 'value' or 'time'.

slice(timespan)

Slicing the timeseries with a timespan (tuple or list)

sort([verbose, ascending, keep_log])

Ensure timeseries is set to a monotonically increasing axis.

spectral([method, freq_method, freq_kwargs, ...])

Perform spectral analysis on the timeseries

ssa([M, nMC, f, trunc, var_thresh, online])

Singular Spectrum Analysis

standardize([keep_log, scale])

Standardizes the series (i.e., removes its estimated mean and divides by its estimated standard deviation)

stats()

Compute basic statistics from a Series

stripes([figsize, cmap, ref_period, sat, ...])

Represents the Series as an Ed Hawkins "stripes" pattern

summary_plot(psd, scalogram[, figsize, ...])

Produce summary plot of timeseries.

surrogates([method, number, length, seed, ...])

Generate surrogates of the Series object according to "method"

to_csv([metadata_header, path])

Export Series to csv

to_json([path])

Export the pyleoclim.Series object to a json file

to_pandas([paleo_style])

Export to pandas Series

view()

Generates a DataFrame version of the Series object, suitable for viewing in a Jupyter Notebook

wavelet([method, settings, freq_method, ...])

Perform wavelet analysis on a timeseries

wavelet_coherence(target_series[, method, ...])

Performs wavelet coherence analysis with the target timeseries

from_pandas

pandas_method

bin(keep_log=False, **kwargs)[source]

Bin values in a time series

Parameters:
  • keep_log (Boolean) – if True, adds this step and its parameters to the series log.

  • kwargs – Arguments for binning function. See pyleoclim.utils.tsutils.bin for details

Returns:

new – A binned Series object

Return type:

Series

See also

pyleoclim.utils.tsutils.bin

bin the series values into evenly-spaced time bins
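Examples

A minimal sketch (not part of the original docstring; default binning settings are assumed), using the EDC deuterium record that ships with Pyleoclim:

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('EDC-dD')   # an unevenly-spaced ice core record
ts_bin = ts.bin()                         # bin values onto evenly-spaced time bins
fig, ax = ts.plot()
ts_bin.plot(ax=ax, label='binned')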

causality(target_series, method='liang', timespan=None, settings=None, common_time_kwargs=None)[source]
Perform causality analysis with the target timeseries. Specifically, assess whether there is information in the target series that influenced the original series.

If the two series have different time axes, they are first placed on a common timescale (in ascending order).

Parameters:
  • target_series (Series) – A pyleoclim Series object on which to compute causality

  • method ({'liang', 'granger'}) – The causality method to use.

  • timespan (tuple) – The time interval over which to perform the calculation

  • settings (dict) – Parameters associated with the causality methods. Note that each method has different parameters. See individual methods for details

  • common_time_kwargs (dict) – Parameters for the method MultipleSeries.common_time(). Will use interpolation by default.

Returns:

res – Dictionary containing the results of the causality analysis. See individual methods for details

Return type:

dict

Examples

Liang causality

import pyleoclim as pyleo
ts_nino=pyleo.utils.load_dataset('NINO3')
ts_air=pyleo.utils.load_dataset('AIR')

We use the specific params below to lighten computations; you may drop settings for real work

liang_N2A = ts_air.causality(ts_nino, settings={'nsim': 20, 'signif_test': 'isopersist'})
print(liang_N2A)
liang_A2N = ts_nino.causality(ts_air, settings={'nsim': 20, 'signif_test': 'isopersist'})
print(liang_A2N)

liang_N2A['T21']/liang_A2N['T21']
{'T21': 0.01644548028647295, 'tau21': 0.011968992003953967, 'Z': 1.3740071244963796, 'dH1_star': -0.6359251528278479, 'dH1_noise': 0.3521058551681981, 'signif_qs': [0.005, 0.025, 0.05, 0.95, 0.975, 0.995], 'T21_noise': array([-9.84643639e-05, -9.84643639e-05, -6.55363879e-05,  1.78004343e-03,
        2.08564347e-03,  2.08564347e-03]), 'tau21_noise': array([-7.07610654e-05, -7.07610654e-05, -4.73114672e-05,  1.30222211e-03,
        1.53734026e-03,  1.53734026e-03])}
{'T21': 0.005840218794917537, 'tau21': 0.047318261599206914, 'Z': 0.12342420447279118, 'dH1_star': -0.5094709112672596, 'dH1_noise': 0.4432108271335334, 'signif_qs': [0.005, 0.025, 0.05, 0.95, 0.975, 0.995], 'T21_noise': array([-0.00099452, -0.00099452, -0.00093254,  0.0001269 ,  0.0001331 ,
        0.0001331 ]), 'tau21_noise': array([-0.01029959, -0.01029959, -0.00879044,  0.00105281,  0.00107306,
        0.00107306])}
2.815901400951736

Both information flows (T21) are positive, but the flow from NINO3 to AIR is about 3x as large as the reverse, suggesting that NINO3 influences AIR much more than the other way around, which conforms to physical intuition.

To implement Granger causality, simply specify the method:

granger_A2N = ts_nino.causality(ts_air, method='granger')
granger_N2A = ts_air.causality(ts_nino, method='granger')

Granger Causality
number of lags (no zero) 1
ssr based F test:         F=20.8492 , p=0.0000  , df_denom=1592, df_num=1
ssr based chi2 test:   chi2=20.8885 , p=0.0000  , df=1
likelihood ratio test: chi2=20.7529 , p=0.0000  , df=1
parameter F test:         F=20.8492 , p=0.0000  , df_denom=1592, df_num=1

Granger Causality
number of lags (no zero) 1
ssr based F test:         F=18.6927 , p=0.0000  , df_denom=1592, df_num=1
ssr based chi2 test:   chi2=18.7280 , p=0.0000  , df=1
likelihood ratio test: chi2=18.6189 , p=0.0000  , df=1
parameter F test:         F=18.6927 , p=0.0000  , df_denom=1592, df_num=1

Note that the output is fundamentally different for the two methods. Granger causality cannot discriminate between NINO3 -> AIR or AIR -> NINO3, in this case. This is not unusual, and one reason why it is no longer in wide use.

center(timespan=None, keep_log=False)[source]

Centers the series (i.e., removes its estimated mean)

Parameters:
  • timespan (tuple or list) – The timespan over which the mean must be estimated. In the form [a, b], where a, b are two points along the series’ time axis.

  • keep_log (Boolean) – if True, adds the previous mean and method parameters to the series log.

Returns:

new – The centered series object

Return type:

Series
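Examples

A minimal sketch (not part of the original docstring):

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
ts_c = ts.center()            # subtract the mean estimated over the whole series
print(ts_c.value.mean())      # expected to be approximately zero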

clean(verbose=False, keep_log=False)[source]

Clean up the timeseries by removing NaNs and sort with increasing time points

Parameters:
  • verbose (bool) – If True, will print warning messages if there is any

  • keep_log (Boolean) – if True, adds this step and its parameters to the series log.

Returns:

new – Series object with removed NaNs and sorting

Return type:

Series

convert_time_unit(time_unit='ky BP', keep_log=False)[source]

Convert the time units of the Series object

Parameters:
  • time_unit (str) –

    the target time unit, possible input: {

    ’year’, ‘years’, ‘yr’, ‘yrs’, ‘CE’, ‘AD’, ‘y BP’, ‘yr BP’, ‘yrs BP’, ‘year BP’, ‘years BP’, ‘ky BP’, ‘kyr BP’, ‘kyrs BP’, ‘ka BP’, ‘ka’, ‘my BP’, ‘myr BP’, ‘myrs BP’, ‘ma BP’, ‘ma’,

    }

  • keep_log (Boolean) – if True, adds this step and its parameter to the series log.

Examples

ts = pyleo.utils.load_dataset('SOI')
tsBP = ts.convert_time_unit(time_unit='yrs BP')
print('Original timeseries:')
print('time unit:', ts.time_unit)
print('time:', ts.time[:10])
print()
print('Converted timeseries:')
print('time unit:', tsBP.time_unit)
print('time:', tsBP.time[:10])
Original timeseries:
time unit: year C.E.
time: [1951.       1951.083333 1951.166667 1951.25     1951.333333 1951.416667
 1951.5      1951.583333 1951.666667 1951.75    ]

Converted timeseries:
time unit: yrs BP
time: [-69.91471656 -69.83138256 -69.74804957 -69.66471654 -69.58138257
 -69.49804955 -69.41471656 -69.33138255 -69.24804956 -69.16471654]
copy()[source]

Make a copy of the Series object

Returns:

Series – A copy of the Series object

Return type:

Series
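Examples

A minimal sketch (not part of the original docstring), showing that modifying the copy leaves the original untouched:

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
ts2 = ts.copy()
ts2.label = 'SOI (copy)'      # change the copy's metadata
print(ts.label)               # the original label is unchanged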

correlation(target_series, alpha=0.05, statistic='pearsonr', method='phaseran', timespan=None, settings=None, common_time_kwargs=None, seed=None, mute_pbar=False)[source]

Estimates the correlation and its associated significance between two time series (not necessarily IID).

The significance of the correlation is assessed using one of the following methods:

  1. ‘ttest’: T-test adjusted for effective sample size.

  2. ‘isopersistent’: AR(1) modeling of x and y.

  3. ‘isospectral’: phase randomization of original inputs. (default)

The T-test is a parametric test, hence computationally cheap, but can only be performed in ideal circumstances. The others are non-parametric, but their computational requirements scale with the number of simulations.

The choice of significance test and the associated number of Monte-Carlo simulations are passed through the settings parameter.

Parameters:
  • target_series (Series) – A pyleoclim Series object

  • alpha (float) – The significance level (default: 0.05)

  • statistic (str) –

    statistic being evaluated. Can use any of the SciPy-supported ones:

    https://docs.scipy.org/doc/scipy/reference/stats.html#association-correlation-tests Currently supported: [‘pearsonr’,’spearmanr’,’pointbiserialr’,’kendalltau’,’weightedtau’]

    Default: ‘pearsonr’.

  • method (str, {'ttest','built-in','ar1sim','phaseran'}) – method for significance testing. Default is ‘phaseran’. ‘ttest’ implements the T-test with degrees of freedom adjusted for autocorrelation, as done in [1]. ‘built-in’ uses the p-value that ships with the statistic. The old options ‘isopersistent’ and ‘isospectral’ still work, but trigger a deprecation warning. Note that ‘weightedtau’ does not have a known distribution, so the ‘built-in’ method returns an error in that case.

  • timespan (tuple) – The time interval over which to perform the calculation

  • settings (dict) –

    Parameters for the correlation function, including:

    nsim (int): the number of simulations (default: 1000)

    method (str, {‘ttest’,’ar1sim’,’phaseran’ (default)}): method for significance testing

    surr_settings (dict): Parameters for surrogate generator. See individual methods for details.

  • common_time_kwargs (dict) – Parameters for the method MultipleSeries.common_time(). Will use interpolation by default.

  • seed (float or int) – random seed for the isopersistent and isospectral methods

  • mute_pbar (bool, optional) – Mute the progress bar. The default is False.

Returns:

corr – the result object, containing

  • r (float) – correlation coefficient

  • p (float) – the p-value

  • signif (bool) – true if significant; false otherwise. Note that signif = True if and only if p <= alpha.

  • alpha (float) – the significance level

Return type:

pyleoclim.Corr

See also

pyleoclim.utils.correlation.corr_sig

Correlation function (marked for deprecation)

pyleoclim.utils.correlation.association

SciPy measures of association between variables

pyleoclim.series.surrogates

parametric and non-parametric surrogates of any Series object

pyleoclim.multipleseries.common_time

Aligning time axes

References

[1] Hu, J., J. Emile-Geay, and J. Partin (2017), Correlation-based interpretations of paleoclimate data – where statistics meet past climates, Earth and Planetary Science Letters, 459, 362–371, doi:10.1016/j.epsl.2016.11.048.

Examples

Correlation between the Nino3.4 index and the Deseasonalized All India Rainfall Index

import pyleoclim as pyleo
ts_air = pyleo.utils.load_dataset('AIR')
ts_nino = pyleo.utils.load_dataset('NINO3')

# with `nsim=20` and default `method='phaseran'`
# set an arbitrary random seed to fix the result
corr_res = ts_nino.correlation(ts_air, settings={'nsim': 20}, seed=2333)
print(corr_res)

# changing the statistic
corr_res = ts_nino.correlation(ts_air, statistic='kendalltau')
print(corr_res)

# using a simple t-test with DOFs adjusted for autocorrelation
# set an arbitrary random seed to fix the result
corr_res = ts_nino.correlation(ts_air, method='ttest')
print(corr_res)

# using  "isopersistent" surrogates (AR(1) simulation)
# set an arbitrary random seed to fix the result
corr_res = ts_nino.correlation(ts_air, method = 'ar1sim', settings={'nsim': 20}, seed=2333)
print(corr_res)
  correlation  p-value      signif. (α: 0.05)
-------------  ---------  -------------------
    -0.152394  < 1e-6                       1

  correlation  p-value      signif. (α: 0.05)
-------------  ---------  -------------------
   -0.0626788  < 1e-2                       1

  correlation  p-value    signif. (α: 0.05)
-------------  ---------  -------------------
    -0.152394  < 1e-7     True

  correlation  p-value      signif. (α: 0.05)
-------------  ---------  -------------------
    -0.152394  < 1e-6                       1

property datetime_index

Convert time to pandas DatetimeIndex.

Note: conversion will happen using time_unit, and will assume:

detrend(method='emd', keep_log=False, preserve_mean=False, **kwargs)[source]

Detrend Series object

Parameters:
  • method (str, optional) –

    The method for detrending. The default is ‘emd’. Options include:

    • ”linear”: the result of an ordinary least-squares straight-line fit to y is subtracted.

    • ”constant”: only the mean of data is subtracted.

    • ”savitzky-golay”: y is filtered using the Savitzky-Golay filter and the resulting filtered series is subtracted from y.

    • ”emd” (default): Empirical mode decomposition. The last mode is assumed to be the trend and removed from the series

  • keep_log (boolean) – if True, adds the removed trend and method parameters to the series log.

  • preserve_mean (boolean) – if True, ensures that the mean of the series is preserved despite the detrending

  • kwargs (dict) – Relevant arguments for each of the methods.

Returns:

new – Detrended Series object in “value”, with new field “trend” added

Return type:

Series

See also

pyleoclim.utils.tsutils.detrend

detrending wrapper functions

Examples

lr04 = pyleo.utils.load_dataset('LR04')
fig, ax = lr04.plot(invert_yaxis=True)
ts_emd = lr04.detrend(method='emd',preserve_mean=True)
ts_emd.plot(label=lr04.label+', EMD detrend',ax=ax)
<Axes: xlabel='Age [ky BP]', ylabel='$\\delta^{18} \\mathrm{O}$ [‰]'>
equals(ts, index_tol=5, value_tol=1e-05)[source]

Test whether two objects contain the same elements (values and datetime_index) A printout is returned if metadata are different, but the statement is considered True as long as data match.

Parameters:
  • ts (Series object) – The target series for the comparison

  • index_tol (int, default 5) – tolerance on difference in datetime indices (in dtype units, which are seconds by default)

  • value_tol (float, default 1e-5) – tolerance on difference in values (in %)

Returns:

  • same_data (bool) – Truth value of the proposition “the two series have the same data”.

  • same_metadata (bool) – Truth value of the proposition “the two series have the same metadata”.

Examples

import pyleoclim as pyleo

soi = pyleo.utils.load_dataset('SOI')
NINO3 = pyleo.utils.load_dataset('NINO3')
soi.equals(NINO3)
The two series have different lengths, left: 828 vs right: 1596
Metadata are different:
value_unit property -- left: mb, right: $^{\circ}$C
value_name property -- left: SOI, right: NINO3
label property -- left: Southern Oscillation Index, right: NINO3 SST
(False, False)
fill_na(timespan=None, dt=1, keep_log=False)[source]

Fill NaNs into the timespan

Parameters:
  • timespan (tuple or list) – The list of time points for slicing, whose length must be 2. For example, if timespan = [a, b], then the sliced output includes one segment [a, b]. If None, will use the start point and end point of the original timeseries

  • dt (float) – The time spacing to fill the NaNs; default is 1.

  • keep_log (Boolean) – if True, adds this step and its parameters to the series log.

Returns:

new – The Series object with NaNs filled in over the requested timespan.

Return type:

Series
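Examples

A minimal sketch (not part of the original docstring): carve an artificial gap out of the SOI record with slice(), then pad that gap with NaNs at monthly spacing:

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
ts_gappy = ts.slice([1951, 1960, 1970, 1980])   # two segments separated by a gap
ts_filled = ts_gappy.fill_na(dt=1/12)           # insert NaNs in the gap at ~monthly spacing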

filter(cutoff_freq=None, cutoff_scale=None, method='butterworth', keep_log=False, **kwargs)[source]
Filtering methods for Series objects using four possible methods:

By default, this method implements a lowpass filter, though it can easily be turned into a bandpass or high-pass filter (see examples below).

Parameters:
  • method (str, {'savitzky-golay', 'butterworth', 'firwin', 'lanczos'}) – the filtering method - ‘butterworth’: a Butterworth filter (default = 3rd order) - ‘savitzky-golay’: Savitzky-Golay filter - ‘firwin’: finite impulse response filter design using the window method, with default window as Hamming - ‘lanczos’: Lanczos zero-phase filter

  • cutoff_freq (float or list) – The cutoff frequency only works with the Butterworth method. If a float, it is interpreted as a low-frequency cutoff (lowpass). If a list, it is interpreted as a frequency band (f1, f2), with f1 < f2 (bandpass). Note that only the Butterworth option (default) currently supports bandpass filtering.

  • cutoff_scale (float or list) – cutoff_freq = 1 / cutoff_scale The cutoff scale only works with the Butterworth method and when cutoff_freq is None. If a float, it is interpreted as a low-frequency (high-scale) cutoff (lowpass). If a list, it is interpreted as a frequency band (f1, f2), with f1 < f2 (bandpass).

  • keep_log (Boolean) – if True, adds this step and its parameters to the series log.

  • kwargs (dict) – a dictionary of the keyword arguments for the filtering method, see pyleoclim.utils.filter.savitzky_golay, pyleoclim.utils.filter.butterworth, pyleoclim.utils.filter.lanczos and pyleoclim.utils.filter.firwin for the details

Returns:

new

Return type:

Series

See also

pyleoclim.utils.filter.butterworth

Butterworth method

pyleoclim.utils.filter.savitzky_golay

Savitzky-Golay method

pyleoclim.utils.filter.firwin

FIR filter design using the window method

pyleoclim.utils.filter.lanczos

lowpass filter via Lanczos resampling

Examples

In the example below, we generate a signal as the sum of two signals with frequency 10 Hz and 20 Hz, respectively. Then we apply a low-pass filter with a cutoff frequency at 15 Hz, and compare the output to the signal of 10 Hz. After that, we apply a band-pass filter with the band 15-25 Hz, and compare the outcome to the signal of 20 Hz.

  • Generating the test data

import pyleoclim as pyleo
import numpy as np

t = np.linspace(0, 1, 1000)
sig1 = np.sin(2*np.pi*10*t)
sig2 = np.sin(2*np.pi*20*t)
sig = sig1 + sig2
ts1 = pyleo.Series(time=t, value=sig1)
ts2 = pyleo.Series(time=t, value=sig2)
ts = pyleo.Series(time=t, value=sig)

fig, ax = ts.plot(label='mix')
ts1.plot(ax=ax, label='10 Hz')
ts2.plot(ax=ax, label='20 Hz')
ax.legend(loc='upper left', bbox_to_anchor=(0, 1.1), ncol=3)
Time axis values sorted in ascending order
Time axis values sorted in ascending order
Time axis values sorted in ascending order
<matplotlib.legend.Legend at 0x7f278cb4eb50>
  • Applying a low-pass filter

fig, ax = ts.plot(label='mix')
ts.filter(cutoff_freq=15).plot(ax=ax, label='After 15 Hz low-pass filter')
ts1.plot(ax=ax, label='10 Hz')
ax.legend(loc='upper left', bbox_to_anchor=(0, 1.1), ncol=3)
<matplotlib.legend.Legend at 0x7f27880b7dd0>
  • Applying a band-pass filter

fig, ax = ts.plot(label='mix')
ts.filter(cutoff_freq=[15, 25]).plot(ax=ax, label='After 15-25 Hz band-pass filter')
ts2.plot(ax=ax, label='20 Hz')
ax.legend(loc='upper left', bbox_to_anchor=(0, 1.1), ncol=3)
<matplotlib.legend.Legend at 0x7f278276cad0>

The above uses the default Butterworth filter. Using FIR filtering with a window such as Hanning is also simple:

fig, ax = ts.plot(label='mix')
ts.filter(cutoff_freq=[15, 25], method='firwin', window='hanning').plot(ax=ax, label='After 15-25 Hz band-pass filter')
ts2.plot(ax=ax, label='20 Hz')
ax.legend(loc='upper left', bbox_to_anchor=(0, 1.1), ncol=3)
<matplotlib.legend.Legend at 0x7f2788151e50>
  • Applying a high-pass filter

fig, ax = ts.plot(label='mix')
ts_low  = ts.filter(cutoff_freq=15)
ts_high = ts.copy()
ts_high.value = ts.value - ts_low.value # subtract low-pass filtered series from original one
ts_high.plot(label='High-pass filter @ 15Hz',ax=ax)
ax.legend(loc='upper left', bbox_to_anchor=(0, 1.1), ncol=3)
<matplotlib.legend.Legend at 0x7f2782638c50>
flip(axis='value', keep_log=False)[source]

Flips the Series along one or both axes

Parameters:
  • axis (str, optional) – The axis along which the Series will be flipped. The default is ‘value’. Other acceptable options are ‘time’ or ‘both’. TODO: enable time flipping after paleopandas is released

  • keep_log (Boolean) – if True, adds this transformation to the series log.

Returns:

new – The flipped series object

Return type:

Series

Examples

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
tsf = ts.flip(keep_log=True)

fig, ax = tsf.plot()
tsf.log
({0: 'flip', 'applied': True, 'axis': 'value'},)
classmethod from_csv(path)[source]

Read in Series object from CSV file. Expects a metadata header delineated by ‘###’ lines, as written by the Series.to_csv() method.

Parameters:
  • filename (str) – name of the file, e.g. ‘myrecord.csv’

  • path (str) – directory of the file. Default: current directory, ‘.’

Returns:

pyleoclim Series object containing data and metadata.

Return type:

Series

See also

pyleoclim.Series.to_csv
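Examples

A minimal sketch (not part of the original docstring; the file name is hypothetical) of a round trip through to_csv() and from_csv():

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
ts.to_csv(path='soi.csv')                 # writes the data plus a '###'-delimited metadata header
ts2 = pyleo.Series.from_csv('soi.csv')    # reads data and metadata back
same_data, same_metadata = ts2.equals(ts)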

classmethod from_json(path)[source]

Creates a pyleoclim.Series from a JSON file

The keys in the JSON file must correspond to the parameters associated with a Series object

Parameters:

path (str) – Path to the JSON file

Returns:

ts – A Pyleoclim Series object.

Return type:

pyleoclim.core.series.Series

gaussianize(keep_log=False)[source]

Gaussianizes the timeseries (i.e., maps its values to a standard normal)

Parameters:
  • keep_log (Boolean) – if True, adds this transformation to the series log.

Returns:

new – The Gaussianized series object

Return type:

Series

References

Emile-Geay, J., and M. Tingley (2016), Inferring climate variability from nonlinear proxies: application to palaeo-ENSO studies, Climate of the Past, 12 (1), 31–50, doi:10.5194/cp-12-31-2016.
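Examples

A minimal sketch (not part of the original docstring), mapping a skewed record to a standard normal and inspecting the result:

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('LR04')
ts_g = ts.gaussianize()
fig, ax = ts_g.histplot()     # the mapped values should look approximately standard normal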

gkernel(step_style='max', keep_log=False, step_type=None, **kwargs)[source]

Coarse-grain a Series object via a Gaussian kernel.

Like .bin(), this technique is conservative and uses the max space between points as the default spacing. Unlike .bin(), gkernel() uses a Gaussian kernel to calculate the weighted average of the time series over these intervals.

Note that if the series being examined has very low resolution sections with few points, you may need to tune the parameter for the kernel e-folding scale (h).

Parameters:
  • step_style (str) – type of timestep: ‘mean’, ‘median’, or ‘max’ of the time increments

  • keep_log (Boolean) – if True, adds the step type and its keyword arguments to the series log.

  • kwargs – Arguments for kernel function. See pyleoclim.utils.tsutils.gkernel for details

Returns:

new – The coarse-grained Series object

Return type:

Series

See also

pyleoclim.utils.tsutils.gkernel

application of a Gaussian kernel
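Examples

A minimal sketch (not part of the original docstring), comparing a record to its kernel-smoothed, coarse-grained version:

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('EDC-dD')
ts_g = ts.gkernel(step_style='median')    # coarse-grain onto a grid set by the median time increment
fig, ax = ts.plot()
ts_g.plot(ax=ax, label='gkernel, median step')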

histplot(figsize=[10, 4], title=None, savefig_settings=None, ax=None, ylabel='KDE', vertical=False, edgecolor='w', **plot_kwargs)[source]

Plot the distribution of the timeseries values

Parameters:
  • figsize (list) – a list of two integers indicating the figure size

  • title (str) – the title for the figure

  • savefig_settings (dict) –

    the dictionary of arguments for plt.savefig(); some notes below:
    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • ax (matplotlib.axis, optional) – A matplotlib axis

  • ylabel (str) – Label for the count axis

  • vertical ({True,False}) – Whether to flip the plot vertically

  • edgecolor (matplotlib.color) – The color of the edges of the bar

  • plot_kwargs (dict) – Plotting arguments for seaborn histplot: https://seaborn.pydata.org/generated/seaborn.histplot.html

See also

pyleoclim.utils.plotting.savefig

saving figure in Pyleoclim

Examples

Distribution of the SOI record

import pyleoclim as pyleo
ts = pyleo.utils.load_dataset('SOI')
fig, ax = ts.plot()

fig, ax = ts.histplot()
interp(method='linear', keep_log=False, **kwargs)[source]

Interpolate a Series object onto a new time axis

Parameters:
  • method ({‘linear’, ‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘previous’, ‘next’}) – where ‘zero’, ‘slinear’, ‘quadratic’ and ‘cubic’ refer to a spline interpolation of zeroth, first, second or third order; ‘previous’ and ‘next’ simply return the previous or next value of the point) or as an integer specifying the order of the spline interpolator to use. Default is ‘linear’.

  • keep_log (Boolean) – if True, adds the method name and its parameters to the series log.

  • kwargs – Arguments specific to each interpolation function. See pyleoclim.utils.tsutils.interp for details

Returns:

new – An interpolated Series object

Return type:

Series

See also

pyleoclim.utils.tsutils.interp

interpolation function
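Examples

A minimal sketch (not part of the original docstring), comparing linear and cubic-spline interpolation onto an evenly-spaced axis:

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('EDC-dD')
ts_lin = ts.interp()                      # linear interpolation (default)
ts_cub = ts.interp(method='cubic')        # cubic spline
fig, ax = ts_lin.plot(label='linear')
ts_cub.plot(ax=ax, label='cubic')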

is_evenly_spaced(tol=0.001)[source]

Check if the Series time axis is evenly-spaced, within tolerance

Parameters:

tol (float) – tolerance. If time increments are all within tolerance, the series is declared evenly-spaced. default = 1e-3

Returns:

res

Return type:

bool
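Examples

A minimal sketch (not part of the original docstring):

import pyleoclim as pyleo

soi = pyleo.utils.load_dataset('SOI')     # monthly record, hence evenly spaced
edc = pyleo.utils.load_dataset('EDC-dD')  # ice core record with variable resolution
print(soi.is_evenly_spaced())             # expected: True
print(edc.is_evenly_spaced())             # expected: False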

make_labels()[source]

Initialization of plot labels based on Series metadata

Returns:

  • time_header (str) – Label for the time axis

  • value_header (str) – Label for the value axis
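Examples

A minimal sketch (not part of the original docstring):

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
time_header, value_header = ts.make_labels()
print(time_header)     # e.g. 'Time [year C.E.]'
print(value_header)    # e.g. 'SOI [mb]'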

outliers(method='kmeans', remove=True, settings=None, fig_outliers=True, figsize_outliers=[10, 4], plotoutliers_kwargs=None, savefigoutliers_settings=None, fig_clusters=True, figsize_clusters=[10, 4], plotclusters_kwargs=None, savefigclusters_settings=None, keep_log=False)[source]

Remove outliers from timeseries data. The method employs clustering to identify clusters in the data, using the k-means and DBSCAN algorithms from scikit-learn. Points falling a certain distance from the cluster (either away from the centroid for k-means or in an area of low density for DBSCAN) are considered outliers. The silhouette score is used to optimize parameter values.

A tutorial explaining how to use this method and set the parameters is available at https://github.com/LinkedEarth/PyleoTutorials/blob/main/notebooks/L2_outliers_detection.ipynb.

Parameters:
  • method (str, {'kmeans','DBSCAN'}, optional) – The clustering method to use. The default is ‘kmeans’.

  • remove (bool, optional) – If True, removes the outliers. The default is True.

  • settings (dict, optional) – Specific arguments for the clustering functions. The default is None.

  • fig_outliers (bool, optional) – Whether to display the timeseries showing the outliers. The default is True.

  • figsize_outliers (list, optional) – The dimensions of the outliers figure. The default is [10,4].

  • plotoutliers_kwargs (dict, optional) – Arguments for the plot displaying the outliers. The default is None.

  • savefigoutliers_settings (dict, optional) –

    Saving options for the outlier plot. The default is None.
    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • fig_clusters (bool, optional) – Whether to display the clusters. The default is True.

  • figsize_clusters (list, optional) – The dimensions of the cluster figures. The default is [10,4].

  • plotclusters_kwargs (dict, optional) – Arguments for the cluster plot. The default is None.

  • savefigclusters_settings (dict, optional) –

    Saving options for the cluster plot. The default is None.
    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • keep_log (Boolean) – if True, adds the previous method parameters to the series log.

Returns:

ts – A new Series object without outliers if remove is True. Otherwise, returns the original timeseries

Return type:

Series

See also

pyleoclim.utils.tsutils.detect_outliers_DBSCAN

Outlier detection using the DBSCAN method

pyleoclim.utils.tsutils.detect_outliers_kmeans

Outlier detection using the kmeans method

pyleoclim.utils.tsutils.remove_outliers

Remove outliers from the series

Examples

import pyleoclim as pyleo
LR04 = pyleo.utils.load_dataset('LR04')
LR_out = LR04.detrend().standardize().outliers(method='kmeans')

To set the number of clusters:

LR_out = LR04.detrend().standardize().outliers(method='kmeans', settings={'nbr_clusters':2})

The log contains diagnostic information, to access it, set the keep_log parameter to True:

LR_out = LR04.detrend().standardize().outliers(method='kmeans', settings={'nbr_clusters':2}, keep_log=True)
plot(figsize=[10, 4], marker=None, markersize=None, color=None, linestyle=None, linewidth=None, xlim=None, ylim=None, label=None, xlabel=None, ylabel=None, title=None, zorder=None, legend=True, plot_kwargs=None, lgd_kwargs=None, alpha=None, savefig_settings=None, ax=None, invert_xaxis=False, invert_yaxis=False)[source]

Plot the timeseries

Parameters:
  • figsize (list) – a list of two integers indicating the figure size

  • marker (str) – e.g., ‘o’ for dots See [matplotlib.markers](https://matplotlib.org/stable/api/markers_api.html) for details

  • markersize (float) – the size of the marker

  • color (str, list) – the color for the line plot e.g., ‘r’ for red See [matplotlib colors](https://matplotlib.org/stable/gallery/color/color_demo.html) for details

  • linestyle (str) – e.g., ‘–’ for dashed line See [matplotlib.linestyles](https://matplotlib.org/stable/gallery/lines_bars_and_markers/linestyles.html) for details

  • linewidth (float) – the width of the line

  • label (str) – the label for the line

  • xlabel (str) – the label for the x-axis

  • ylabel (str) – the label for the y-axis

  • title (str) – the title for the figure

  • zorder (int) – The default drawing order for all lines on the plot

  • legend ({True, False}) – plot legend or not

  • invert_xaxis (bool, optional) – if True, the x-axis of the plot will be inverted

  • invert_yaxis (bool, optional) – same for the y-axis

  • plot_kwargs (dict) – the dictionary of keyword arguments for ax.plot() See [matplotlib.pyplot.plot](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html) for details

  • lgd_kwargs (dict) – the dictionary of keyword arguments for ax.legend() See [matplotlib.pyplot.legend](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.legend.html) for details

  • alpha (float) – Transparency setting

  • savefig_settings (dict) –

    the dictionary of arguments for plt.savefig(); some notes below:
    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • ax (matplotlib.axis, optional) – the axis object from matplotlib See [matplotlib.axes](https://matplotlib.org/api/axes_api.html) for details.

Returns:

Notes

When ax is passed, the return will be ax only; otherwise, both fig and ax will be returned.

See also

pyleoclim.utils.plotting.savefig

saving a figure in Pyleoclim

Examples

Plot the SOI record

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
fig, ax = ts.plot()

Change the line color

fig, ax = ts.plot(color='r')
Save the figure. Two options available, only one is needed:
  • Within the plotting command

  • After the figure has been generated

fig, ax = ts.plot(color='k', savefig_settings={'path': 'ts_plot3.png'}); pyleo.closefig(fig)
pyleo.savefig(fig,path='ts_plot3.png')
Figure saved at: "ts_plot3.png"
Figure saved at: "ts_plot3.png"
resample(rule, keep_log=False, **kwargs)[source]

Run analogue to pandas.Series.resample.

This is a convenience method: doing

ser.resample('AS').mean()

will do the same thing as

ser.pandas_method(lambda x: x.resample('AS').mean())

but will also accept some extra resampling rules, such as ‘Ga’ (see below).

Parameters:
  • rule (str) –

    The offset string or object representing target conversion. Can also accept pyleoclim units, such as ‘ka’ (1000 years), ‘Ma’ (1 million years), and ‘Ga’ (1 billion years).

    Check the [pandas resample docs](https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.resample.html) for more details.

  • kwargs (dict) – Any other arguments which will be passed to pandas.Series.resample.

Returns:

Resampler object, not meant to be used directly. Instead, an aggregation should be called on it; see examples below.

Return type:

SeriesResampler

Examples

ts = pyleo.utils.load_dataset('LR04')
ts5k = ts.resample('5ka').mean()
fig, ax = ts.plot(invert_yaxis='True',xlim=[0, 1000])
ts5k.plot(ax=ax,color='C1')
<Axes: xlabel='Age [ky BP]', ylabel='$\\delta^{18} \\mathrm{O}$ [‰]'>
resolution()[source]

Generate a resolution object

Increments are assigned to the preceding time value. E.g., for time_axis = [0,1,3], resolution.resolution = [1,2] and resolution.time = [0,1]

Returns:

resolution – Resolution object

Return type:

Resolution

See also

pyleoclim.core.resolutions.Resolution

Examples

To create a resolution object, apply the .resolution() method to a Series object

ts = pyleo.utils.load_dataset('EDC-dD')
resolution = ts.resolution()

Several methods are then available:

Summary statistics can be obtained via .describe()

resolution.describe()
{'nobs': 5784,
 'minmax': (8.244210000000002, 1364.0),
 'mean': 138.5932963710235,
 'variance': 29806.73648249974,
 'skewness': 2.661861461835658,
 'kurtosis': 8.705801510819656,
 'median': 58.132250000006024}

A simple plot can be created using .plot()

resolution.plot()
(<Figure size 1000x400 with 1 Axes>,
 <Axes: xlabel='Age [y BP]', ylabel='resolution [y BP]'>)

The distribution of resolution

resolution.histplot()
(<Figure size 1000x400 with 1 Axes>,
 <Axes: xlabel='resolution [y BP]', ylabel='KDE'>)

Or a dashboard combining plot() and histplot() side by side:

resolution.dashboard()
(<Figure size 1100x800 with 2 Axes>,
 {'res': <Axes: xlabel='Age [y BP]', ylabel='resolution [y BP]'>,
  'res_hist': <Axes: xlabel='Counts'>})
segment(factor=10, verbose=False)[source]

Gap detection

This function segments a timeseries into a number of parts following a gap detection algorithm. The rule of gap detection is very simple: we define the intervals between time points as dts; if dts[i] is larger than factor * dts[i-1], the change in dts (i.e., the gradient) is deemed too large, and the series is split into two segments at that point.

Parameters:
  • factor (float) – The factor that adjusts the threshold for gap detection

  • verbose (bool) – If True, will print warning messages if there is any

Returns:

res – If gaps were detected, returns the segments in a MultipleSeries object, else, returns the original timeseries.

Return type:

MultipleSeries or Series
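Examples

A minimal sketch (not part of the original docstring; the series_list attribute of MultipleSeries is assumed here): create an artificial gap with slice(), then let segment() split the record at that gap:

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
ts_gappy = ts.slice([1951, 1960, 1980, 2019])    # two segments separated by a 20-year gap
res = ts_gappy.segment(factor=10)
if isinstance(res, pyleo.MultipleSeries):        # gaps detected
    print(len(res.series_list), 'segments found')
else:                                            # no gaps: the original Series is returned
    print('no gaps detected')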

sel(value=None, time=None, tolerance=0)[source]

Slice Series based on ‘value’ or ‘time’.

Parameters:
  • value (int, float, slice) – If int/float, then the Series will be sliced so that self.value is equal to value (+/- tolerance). If slice, then the Series will be sliced so self.value is between slice.start and slice.stop (+/- tolerance).

  • time (int, float, slice) – If int/float, then the Series will be sliced so that self.time is equal to time. (+/- tolerance) If slice of int/float, then the Series will be sliced so that self.time is between slice.start and slice.stop. If slice of datetime (or str containing datetime, such as ‘2020-01-01’), then the Series will be sliced so that self.datetime_index is between time.start and time.stop (+/- tolerance, which needs to be a timedelta).

  • tolerance (int, float, default 0.) – Used by value and time, see above.

Return type:

Copy of self, sliced according to value and time.

Examples

>>> ts = pyleo.Series(
...     time=np.array([1, 1.1, 2, 3]), value=np.array([4, .9, 6, 1]), time_unit='years BP'
... )
>>> ts.sel(value=1)
{'log': ({0: 'clean_ts', 'applied': True, 'verbose': False},
        {2: 'clean_ts', 'applied': True, 'verbose': False})}

None
time [years BP]
3.0    1.0
Name: value, dtype: float64

If you also want to include the value 0.9, you could set tolerance to .1:

>>> ts.sel(value=1, tolerance=.1)
{'log': ({0: 'clean_ts', 'applied': True, 'verbose': False},
        {2: 'clean_ts', 'applied': True, 'verbose': False})}

None
time [years BP]
1.1    0.9
3.0    1.0
Name: value, dtype: float64

You can also pass a slice to select a range of values:

>>> ts.sel(value=slice(4, 6))
{'log': ({0: 'clean_ts', 'applied': True, 'verbose': False},
        {2: 'clean_ts', 'applied': True, 'verbose': False})}

None
time [years BP]
1.0    4.0
2.0    6.0
Name: value, dtype: float64

>>> ts.sel(value=slice(4, None))
{'log': ({0: 'clean_ts', 'applied': True, 'verbose': False},
        {2: 'clean_ts', 'applied': True, 'verbose': False})}

None
time [years BP]
1.0    4.0
2.0    6.0
Name: value, dtype: float64

>>> ts.sel(value=slice(None, 4))
{'log': ({0: 'clean_ts', 'applied': True, 'verbose': False},
        {2: 'clean_ts', 'applied': True, 'verbose': False})}

None
time [years BP]
1.0    4.0
1.1    0.9
3.0    1.0
Name: value, dtype: float64

Similarly, you can filter using time instead of value.

slice(timespan)[source]

Slicing the timeseries with a timespan (tuple or list)

Parameters:

timespan (tuple or list) – The list of time points for slicing, whose length must be even. When there are n time points, the output Series includes n/2 segments. For example, if timespan = [a, b], then the sliced output includes one segment [a, b]; if timespan = [a, b, c, d], then the sliced output includes segment [a, b] and segment [c, d].

Returns:

new – The sliced Series object.

Return type:

Series

Examples

Slice the SOI from 1972 to 1998

import pyleoclim as pyleo
ts = pyleo.utils.load_dataset('SOI')
ts_slice = ts.slice([1972, 1998])
print("New time bounds:",ts_slice.time.min(),ts_slice.time.max())
New time bounds: 1972.0 1998.0
sort(verbose=False, ascending=True, keep_log=False)[source]
Ensure timeseries is set to a monotonically increasing axis.

If the time axis is prograde to begin with, no transformation is applied.

Parameters:
  • verbose (bool) – If True, will print warning messages if there is any

  • keep_log (Boolean) – if True, adds this step and its parameter to the series log.

Returns:

new – Series object with removed NaNs and sorting

Return type:

Series
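Examples

A minimal sketch (not part of the original docstring; it assumes ascending=False yields a descending axis), sorting a deliberately reversed time axis back to ascending order:

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
ts_desc = ts.sort(ascending=False)    # descending (retrograde) time axis
ts_asc = ts_desc.sort()               # back to a monotonically increasing axis
print(ts_asc.time[0], ts_asc.time[-1])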

spectral(method='lomb_scargle', freq_method='log', freq_kwargs=None, settings=None, label=None, scalogram=None, verbose=False)[source]

Perform spectral analysis on the timeseries

Parameters:
  • method (str;) – {‘wwz’, ‘mtm’, ‘lomb_scargle’, ‘welch’, ‘periodogram’, ‘cwt’}

  • freq_method (str) – {‘log’,’scale’, ‘nfft’, ‘lomb_scargle’, ‘welch’}

  • freq_kwargs (dict) – Arguments for frequency vector

  • settings (dict) – Arguments for the specific spectral method

  • label (str) – Label for the PSD object

  • scalogram (pyleoclim.core.series.Series.Scalogram) – The return of the wavelet analysis; effective only when the method is ‘wwz’ or ‘cwt’

  • verbose (bool) – If True, will print warning messages if there is any

Returns:

psd – A PSD object

Return type:

PSD

See also

pyleoclim.utils.spectral.mtm

Spectral analysis using the Multitaper approach

pyleoclim.utils.spectral.lomb_scargle

Spectral analysis using the Lomb-Scargle method

pyleoclim.utils.spectral.welch

Spectral analysis using the Welch segment approach

pyleoclim.utils.spectral.periodogram

Spectral analysis using the basic Fourier transform

pyleoclim.utils.spectral.wwz_psd

Spectral analysis using the Wavelet Weighted Z transform

pyleoclim.utils.spectral.cwt_psd

Spectral analysis using the continuous Wavelet Transform as implemented by Torrence and Compo

pyleoclim.utils.spectral.make_freq_vector

Functions to create the frequency vector

pyleoclim.utils.tsutils.detrend

Detrending function

pyleoclim.core.psds.PSD

PSD object

pyleoclim.core.psds.MultiplePSD

Multiple PSD object

Examples

Calculate the spectrum of SOI using the various methods and compute significance

import pyleoclim as pyleo
ts = pyleo.utils.load_dataset('SOI')
ts_std = ts.standardize()
  • Lomb-Scargle

psd_ls = ts_std.spectral(method='lomb_scargle')
psd_ls_signif = psd_ls.signif_test(number=20) #in practice, need more AR1 simulations
fig, ax = psd_ls_signif.plot(title='PSD using Lomb-Scargle method')

We may pass in method-specific arguments via “settings”, which is a dictionary. For instance, to adjust the number of overlapping segment for Lomb-Scargle, we may specify the method-specific argument “n50”; to adjust the frequency vector, we may modify the “freq_method” or modify the method-specific argument “freq”.

import numpy as np
psd_LS_n50 = ts_std.spectral(method='lomb_scargle', settings={'n50': 4})  # use 4 overlapping segments instead of the default 3
psd_LS_freq = ts_std.spectral(method='lomb_scargle', settings={'freq': np.linspace(1/20, 1/0.2, 51)})
psd_LS_LS = ts_std.spectral(method='lomb_scargle', freq_method='lomb_scargle')  # with frequency vector generated using REDFIT method
fig, ax = psd_LS_n50.plot(
    title='PSD using Lomb-Scargle method with 4 overlapping segments',
    label='settings={"n50": 4}')
psd_ls.plot(ax=ax, label='settings={"n50": 3}', marker='o')

fig, ax = psd_LS_freq.plot(
    title='PSD using Lomb-Scargle method with different frequency vectors',
    label='freq=np.linspace(1/20, 1/0.2, 51)', marker='o')
psd_ls.plot(ax=ax, label='freq_method="log"', marker='o')
<Axes: title={'center': 'PSD using Lomb-Scargle method with different frequency vectors'}, xlabel='Period [year]', ylabel='PSD'>

You may notice the differences in the PSD curves regarding smoothness and the locations of the analyzed period points.

For other method-specific arguments, please look up the specific methods in the “See also” section.

  • WWZ

psd_wwz = ts_std.spectral(method='wwz')  # wwz is the default method
psd_wwz_signif = psd_wwz.signif_test(number=1)  # significance test; for real work, should use number=200 or even larger
fig, ax = psd_wwz_signif.plot(title='PSD using WWZ method')

We may take advantage of a pre-calculated scalogram using WWZ to accelerate the spectral analysis (although note that the default parameters for spectral and wavelet analysis using WWZ are different):

scal_wwz = ts_std.wavelet(method='wwz')  # wwz is the default method
psd_wwz_fast = ts_std.spectral(method='wwz', scalogram=scal_wwz)
fig, ax = psd_wwz_fast.plot(title='PSD using WWZ method w/ pre-calculated scalogram')
  • Periodogram

ts_interp = ts_std.interp()
psd_perio = ts_interp.spectral(method='periodogram')
psd_perio_signif = psd_perio.signif_test(number=20, method='ar1sim') #in practice, need more AR1 simulations
fig, ax = psd_perio_signif.plot(title='PSD using Periodogram method')
  • Welch

psd_welch = ts_interp.spectral(method='welch')
psd_welch_signif = psd_welch.signif_test(number=20, method='ar1sim') #in practice, need more AR1 simulations
fig, ax = psd_welch_signif.plot(title='PSD using Welch method')
  • MTM

psd_mtm = ts_interp.spectral(method='mtm', label='MTM, NW=4')
psd_mtm_signif = psd_mtm.signif_test(number=20, method='ar1sim') #in practice, need more AR1 simulations
fig, ax = psd_mtm_signif.plot(title='PSD using the multitaper method')

By default, MTM uses a half-bandwidth of 4 times the fundamental (Rayleigh) frequency, i.e. NW = 4, which is the most conservative choice. NW runs from 2 to 4 in multiples of 1/2, and can be adjusted like so (note the sharper peaks and higher overall variance, which may not be desirable):

psd_mtm2 = ts_interp.spectral(method='mtm', settings={'NW':2}, label='MTM, NW=2')
fig, ax = psd_mtm2.plot(title='MTM with NW=2')
  • Continuous Wavelet Transform

ts_interp = ts_std.interp()
psd_cwt = ts_interp.spectral(method='cwt')
psd_cwt_signif = psd_cwt.signif_test(number=20)
fig, ax = psd_cwt_signif.plot(title='PSD using the CWT method')
ssa(M=None, nMC=0, f=0.3, trunc=None, var_thresh=80, online=True)[source]

Singular Spectrum Analysis

Nonparametric, orthogonal decomposition of timeseries into constituent oscillations. This implementation uses the method of [1], with applications presented in [2]. Optionally (nMC>0), the significance of eigenvalues is assessed by Monte-Carlo simulations of an AR(1) model fit to X, using [3]. The method expects regular spacing, but is tolerant to missing values, up to a fraction 0<f<1 (see [4]).

Parameters:
  • M (int, optional) – window size. The default is None (10% of the length of the series).

  • nMC (int, optional) – Number of iterations in the Monte-Carlo process. The default is 0.

  • f (float, optional) – maximum allowable fraction of missing values. The default is 0.3.

  • trunc (str) –

    if present, truncates the expansion to a level K < M owing to one of 4 criteria:
    1. ’kaiser’: variant of the Kaiser-Guttman rule, retaining eigenvalues larger than the median

    2. ’mcssa’: Monte-Carlo SSA (use modes above the 95% quantile from an AR(1) process)

    3. ’var’: first K modes that explain at least var_thresh % of the variance.

    4. ’knee’: wherever the “knee” of the screeplot occurs. Recommended as a first pass at identifying significant modes, as it tends to be more robust than ‘kaiser’ or ‘var’, and faster than ‘mcssa’. While no truncation method is imposed by default, if the goal is to enhance the S/N ratio and reconstruct a smooth version of the attractor’s skeleton, then the knee-finding method is a good compromise between objectivity and efficiency. See kneed’s documentation for more details on the knee-finding algorithm.

    Default is None, which bypasses truncation (K = M).

  • var_thresh (float) – variance threshold for reconstruction (only impactful if trunc is set to ‘var’)

  • online (bool; {True,False}) –

    Whether or not to conduct knee finding analysis online or offline. Only called when trunc = ‘knee’. Default is True See kneed’s documentation for details.

Returns:

  • res (object of the SsaRes class containing:)

  • eigvals ((M, ) array of eigenvalues)

  • eigvecs ((M, M) Matrix of temporal eigenvectors (T-EOFs))

  • PC ((N - M + 1, M) array of principal components (T-PCs))

  • RCmat ((N, M) array of reconstructed components)

  • RCseries ((N,) reconstructed series, with mean and variance restored (same type as original))

  • pctvar ((M, ) array of the fraction of variance (%) associated with each mode)

  • eigvals_q ((M, 2) array containing the 5% and 95% quantiles of the Monte-Carlo eigenvalue spectrum [ if nMC >0 ])

References

[1] Vautard, R., and M. Ghil (1989), Singular spectrum analysis in nonlinear dynamics, with applications to paleoclimatic time series, Physica D, 35, 395–424.

[2] Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, M. E. Mann, A. Robertson, A. Saunders, Y. Tian, F. Varadi, and P. Yiou (2002), Advanced spectral methods for climatic time series, Rev. Geophys., 40(1), 1003–1052, doi:10.1029/2000RG000092.

[3] Allen, M. R., and L. A. Smith (1996), Monte Carlo SSA: Detecting irregular oscillations in the presence of coloured noise, J. Clim., 9, 3373–3404.

[4] Schoellhamer, D. H. (2001), Singular spectrum analysis for time series with missing data, Geophysical Research Letters, 28(16), 3187–3190, doi:10.1029/2000GL012698.

See also

pyleoclim.core.utils.decomposition.ssa

Singular Spectrum Analysis utility

pyleoclim.core.ssares.SsaRes.modeplot

plot SSA modes

pyleoclim.core.ssares.SsaRes.screeplot

plot SSA eigenvalue spectrum

Examples

SSA with SOI

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
fig, ax = ts.plot()

nino_ssa = ts.ssa(M=60)

Let us now see how to make use of all these arrays. The first step is to inspect the eigenvalue spectrum (“scree plot”) to identify remarkable modes. Let us restrict ourselves to the first 40, so we can see something:

fig, ax = nino_ssa.screeplot()
This highlights a few common phenomena with SSA:
  • the eigenvalues are in descending order

  • their uncertainties are proportional to the eigenvalues themselves

  • the eigenvalues tend to come in pairs: (1,2) and (3,4) are clustered within uncertainties; (5,6) looks like another doublet

  • around i=15, the eigenvalues appear to reach a floor, and all subsequent eigenvalues explain a very small amount of variance.

So, summing the variance of the first 14 modes, we get:

print(nino_ssa.pctvar[:14].sum())
71.61676734218962

That is a typical result for a (paleo)climate timeseries; a few modes do the vast majority of the work. That means we can focus our attention on these modes and capture most of the interesting behavior. To see this, let’s use the reconstructed components (RCs), and sum the RC matrix over the first 14 columns:

RCmat = nino_ssa.RCmat[:,:14]
RCk = (RCmat-RCmat.mean()).sum(axis=1) + ts.value.mean()
fig, ax = ts.plot(title='SOI')
ax.plot(nino_ssa.orig.time,RCk,label='SSA reconstruction, 14 modes',color='orange')
ax.legend()
<matplotlib.legend.Legend at 0x7f278237e150>
Indeed, these first few modes capture the vast majority of the low-frequency behavior, including all the El Niño/La Niña events. What is left (the blue wiggles not captured in the orange curve) are high-frequency oscillations that might be considered “noise” from the standpoint of ENSO dynamics. This illustrates how SSA might be used for filtering a timeseries. One must be careful however:
  • there was not much rhyme or reason for picking 14 modes. Why not 5, or 39? All we have seen so far is that they gather about 72% of the variance, which is by no means a magic number.

  • there is no guarantee that the first few modes will filter out high-frequency behavior, or at what frequency cutoff they will do so. If you need to cut out specific frequencies, you are better off doing it with a classical filter, like the Butterworth filter implemented in Pyleoclim (see the sketch after this list). However, in many instances the choice of a cutoff frequency is itself rather arbitrary. In such cases, SSA provides a principled alternative for generating a version of a timeseries that preserves some features and excludes others (i.e., a filter).

  • as with all orthogonal decompositions, summing over all RCs will recover the original signal within numerical precision.
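For reference, a classical low-pass filter (second caveat above) might look like the following minimal sketch; the Butterworth method and the cutoff frequency of 0.5 yr⁻¹ (keeping periods longer than ~2 years) are illustrative choices, not defaults:

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
ts_lp = ts.filter(cutoff_freq=0.5, method='butterworth')  # keep periods longer than ~2 years
fig, ax = ts.plot()
ts_lp.plot(ax=ax, label='Butterworth low-pass')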

Monte-Carlo SSA

Selecting meaningful modes in eigenproblems (e.g. EOF analysis) is more art than science. However, one technique stands out: Monte Carlo SSA, introduced by Allen & Smith (1996) to identify SSA modes that rise above what one would expect from “red noise” (specifically, an AR(1) process). To run it, simply provide the parameter nMC, ideally with a number of iterations sufficient to get decent statistics. Here let’s use nMC = 1000. The result will be stored in the eigvals_q array, which has the same length as eigvals, and whose two columns contain the 5% and 95% quantiles of the ensemble of MC-SSA eigenvalues.

nino_mcssa = ts.ssa(M = 60, nMC=1000)

Now let’s look at the result:

fig, ax = nino_mcssa.screeplot()
print('Indices of modes retained: '+ str(nino_mcssa.mode_idx))
Indices of modes retained: [ 0  1  2  3  4 14 20 25 27 28 29 30]
../_images/api_43_1.png

This suggests that modes 1-5 fall above the red noise benchmark, but so do a few others. To inspect mode 1 (index 0), just type:

fig, ax = nino_mcssa.modeplot(index=0)
../_images/api_44_0.png

To inspect the reconstructed series, simply do:

fig, ax = ts.plot()
nino_mcssa.RCseries.plot(ax=ax)
<Axes: xlabel='Time [year C.E.]', ylabel='SOI [mb]'>
../_images/api_45_1.png

For other truncation methods, see http://linked.earth/PyleoTutorials/notebooks/L2_singular_spectrum_analysis.html
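As a quick illustration, the ‘knee’ truncation mentioned above can be requested directly; a minimal sketch reusing the SOI series from earlier in this section:

nino_knee = ts.ssa(M=60, trunc='knee')
fig, ax = ts.plot()
nino_knee.RCseries.plot(ax=ax, label='SSA reconstruction (knee truncation)')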

standardize(keep_log=False, scale=1)[source]

Standardizes the series (i.e. removes its estimated mean and divides by its estimated standard deviation)

Parameters:

  • keep_log (Boolean) – if True, adds the previous mean, standard deviation and method parameters to the series log.

Returns:

new (Series) – The standardized series object
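Examples

A minimal sketch (the printed values are expected to be close to 0 and 1, respectively):

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
ts_std = ts.standardize()
print(ts_std.value.mean(), ts_std.value.std())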

stats()[source]

Compute basic statistics from a Series

Computes the mean, median, min, max, standard deviation, and interquartile range of the Series values, ignoring NaNs.

Returns:

res – Contains the mean, median, minimum value, maximum value, standard deviation, and interquartile range for the Series.

Return type:

dictionary

Examples

Compute basic statistics for the SOI series

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
ts.stats()
{'mean': 0.11992753623188407,
 'median': 0.1,
 'min': -3.6,
 'max': 2.9,
 'std': 0.9380195472790024,
 'IQR': 1.3}
stripes(figsize=[8, 1], cmap='RdBu_r', ref_period=None, sat=1.0, top_label=None, bottom_label=None, label_color='gray', label_size=None, xlim=None, xlabel=None, savefig_settings=None, ax=None, invert_xaxis=False, show_xaxis=False, x_offset=0.03)[source]

Represents the Series as an Ed Hawkins “stripes” pattern

Credit: https://matplotlib.org/matplotblog/posts/warming-stripes/

Parameters:
  • ref_period (array-like (2-elements)) – dates of the reference period, in the form “(first, last)”

  • figsize (list) – a list of two integers indicating the figure size (in inches)

  • cmap (str) – colormap name (https://matplotlib.org/stable/tutorials/colors/colormaps.html)

  • sat (float > 0) – Controls the saturation of the colormap normalization by scaling vmin, vmax in https://matplotlib.org/stable/tutorials/colors/colormapnorms.html. Default is 1.0.

  • xlim (list) – time axis limits

  • top_label (str) – the “title” label for the stripe

  • bottom_label (str) – the “ylabel” explaining which variable is being plotted

  • invert_xaxis (bool, optional) – if True, the x-axis of the plot will be inverted

  • x_offset (float) – value controlling the horizontal offset between stripes and labels (default = 0.03)

  • show_xaxis (bool) – flag indicating whether or not the x-axis should be shown (default = False)

  • savefig_settings (dict) –

    the dictionary of arguments for plt.savefig(); some notes below: - “path” must be specified; it can be any existed or non-existed path,

    with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • ax (matplotlib.axis, optional) – the axis object from matplotlib See [matplotlib.axes](https://matplotlib.org/api/axes_api.html) for details.

Returns:

fig, ax – the figure and axis objects (see Notes)

Notes

When ax is passed, the return will be ax only; otherwise, both fig and ax will be returned.

See also

pyleoclim.utils.plotting.stripes

stripes representation of a timeseries

pyleoclim.utils.plotting.savefig

saving a figure in Pyleoclim

Examples

Plot the HadCRUT5 Global Mean Surface Temperature

gmst = pyleo.utils.load_dataset('HadCRUT5')
fig, ax = gmst.stripes(ref_period=(1971,2000))
../_images/api_47_0.png

For a more pastel tone, dial down saturation:

fig, ax = gmst.stripes(ref_period=(1971,2000), sat = 0.8)
../_images/api_48_0.png

To change the colormap:

fig, ax = gmst.stripes(ref_period=(1971,2000), cmap='Spectral_r')
fig, ax = gmst.stripes(ref_period=(1971,2000), cmap='magma_r')
../_images/api_49_0.png ../_images/api_49_1.png

To show the time axis:

fig, ax = gmst.stripes(ref_period=(1971,2000), show_xaxis=True)
../_images/api_50_0.png
summary_plot(psd, scalogram, figsize=[8, 10], title=None, time_lim=None, value_lim=None, period_lim=None, psd_lim=None, time_label=None, value_label=None, period_label=None, psd_label=None, ts_plot_kwargs=None, wavelet_plot_kwargs=None, psd_plot_kwargs=None, gridspec_kwargs=None, y_label_loc=None, legend=None, savefig_settings=None)[source]

Produce summary plot of timeseries.

Generate cohesive plot of timeseries alongside results of wavelet analysis and spectral analysis on said timeseries. Requires wavelet and spectral analysis to be conducted outside of plotting function, psd and scalogram must be passed as arguments.

Parameters:
  • psd (PSD) – the PSD object of a Series.

  • scalogram (Scalogram) – the Scalogram object of a Series. If the passed scalogram object contains stored signif_scals these will be plotted.

  • figsize (list) – a list of two integers indicating the figure size

  • title (str) – the title for the figure

  • time_lim (list or tuple) – the limitation of the time axis. This is for display purposes only, the scalogram and psd will still be calculated using the full time series.

  • value_lim (list or tuple) – the limitation of the value axis of the timeseries. This is for display purposes only, the scalogram and psd will still be calculated using the full time series.

  • period_lim (list or tuple) – the limitation of the period axis

  • psd_lim (list or tuple) – the limitation of the psd axis

  • time_label (str) – the label for the time axis

  • value_label (str) – the label for the value axis of the timeseries

  • period_label (str) – the label for the period axis

  • psd_label (str) – the label for the amplitude axis of the PSD

  • legend (bool) – if set to True, a legend will be added to the open space above the psd plot

  • ts_plot_kwargs (dict) – arguments to be passed to the timeseries subplot, see Series.plot for details

  • wavelet_plot_kwargs (dict) – arguments to be passed to the scalogram plot, see pyleoclim.Scalogram.plot for details

  • psd_plot_kwargs (dict) –

    arguments to be passed to the psd plot, see PSD.plot for details. Certain psd plot settings are required by summary plot formatting. These include:

    • ylabel

    • legend

    • tick parameters

    These will be overridden by summary plot to prevent formatting errors

  • gridspec_kwargs (dict) –

    arguments used to build the specifications for gridspec configuration. The plot is constructed with six slots:

    • slot [0] contains a subgridspec containing the timeseries and scalogram (shared x axis)

    • slot [1] contains a subgridspec containing an empty slot and the PSD plot (shared y axis with scalogram)

    • slot [2] and slot [3] are empty to allow ample room for xlabels for the scalogram and PSD plots

    • slot [4] contains the scalogram color bar

    • slot [5] is empty

    It is possible to tune the size and spacing of the various slots:

    • ’width_ratios’: list of two values describing the relative widths of the column containing the timeseries/scalogram/colorbar and the column containing the PSD plot (default: [6, 1])

    • ’height_ratios’: list of three values describing the relative heights of the timeseries, scalogram and colorbar slots (default: [2, 7, .35])

    • ’hspace’: vertical space between timeseries and scalogram (default: 0; however, if either the scalogram xlabel or the PSD xlabel contain ‘\n’, .05)

    • ’wspace’: lateral space between scalogram and psd plot (default: 0)

    • ’cbspace’: vertical space between the scalogram and colorbar

  • y_label_loc (float) – Plot parameter to adjust horizontal location of y labels to avoid conflict with axis labels, default value is -0.15

  • savefig_settings (dict) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • “path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

See also

pyleoclim.core.series.Series.spectral

Spectral analysis for a timeseries

pyleoclim.core.series.Series.wavelet

Wavelet analysis for a timeseries

pyleoclim.utils.plotting.savefig

saving figure in Pyleoclim

pyleoclim.core.psds.PSD

PSD object

pyleoclim.core.psds.MultiplePSD

Multiple PSD object

Examples

Summary_plot with pre-generated psd and scalogram objects. Note that if the scalogram contains saved noise realizations these will be flexibly reused. See pyleo.Scalogram.signif_test() for details

import pyleoclim as pyleo
series = pyleo.utils.load_dataset('SOI')
psd = series.spectral(freq_method = 'welch')
scalogram = series.wavelet(freq_method = 'welch')

fig, ax = series.summary_plot(psd = psd,scalogram = scalogram)
../_images/api_51_0.png

Summary_plot with pre-generated psd and scalogram objects from before and some plot modification arguments passed. Note that if the scalogram contains saved noise realizations these will be flexibly reused. See pyleo.Scalogram.signif_test() for details

import pyleoclim as pyleo
series = pyleo.utils.load_dataset('SOI')
psd = series.spectral(freq_method = 'welch')
scalogram = series.wavelet(freq_method = 'welch')

fig, ax = series.summary_plot(psd = psd,scalogram = scalogram, period_lim = [5,0], ts_plot_kwargs = {'color':'red','linewidth':.5}, psd_plot_kwargs = {'color':'red','linewidth':.5})
../_images/api_52_0.png
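The gridspec_kwargs parameter described above can be used to rebalance the layout; a minimal sketch reusing the objects from the previous example (the ratio and spacing values are purely illustrative):

fig, ax = series.summary_plot(psd = psd, scalogram = scalogram,
                              gridspec_kwargs = {'width_ratios': [5, 1], 'hspace': 0.05})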
surrogates(method='ar1sim', number=1, length=None, seed=None, settings=None)[source]

Generate surrogates of the Series object according to “method”

For now, this assumes uniform spacing and an increasing time axis

Parameters:
  • method ({ar1sim, phaseran}) –

    The method used to generate surrogates of the timeseries

    Note that phaseran assumes an odd number of samples. If the series has even length, the last point is dropped to satisfy this requirement

  • number (int) – The number of surrogates to generate

  • length (int) – Length of the series

  • seed (int) – Control seed option for reproducibility

  • settings (dict) – Parameters for surrogate generator. See individual methods for details.

Returns:

surr

Return type:

SurrogateSeries

See also

pyleoclim.utils.tsmodel.ar1_sim

AR(1) simulator

pyleoclim.utils.tsutils.phaseran

phase randomization
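Examples

A minimal sketch generating a small AR(1) surrogate ensemble of the SOI series (the number of surrogates and the seed are illustrative choices):

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
surr = ts.surrogates(method='ar1sim', number=10, seed=42)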

to_csv(metadata_header=True, path=None)[source]

Export Series to csv

Parameters:
  • metadata_header (boolean, optional) – whether to include the Series metadata as a header in the CSV file. The default is True.

  • path (str, optional) – system path to save the file. Default is ‘.’

Return type:

None

See also

pyleoclim.Series.from_csv

Examples

import pyleoclim as pyleo

LR04 = pyleo.utils.load_dataset('LR04')
LR04.to_csv()
lr04 = pyleo.Series.from_csv('LR04_benthic_stack.csv')
LR04.equals(lr04)
Series exported to LR04_benthic_stack.csv
Time axis values sorted in ascending order
(True, True)
to_json(path=None)[source]

Export the pyleoclim.Series object to a json file

Parameters:

path (string, optional) – The path to the file. The default is None, resulting in a file saved in the current working directory using the label for the dataset as filename if available or ‘series.json’ if label is not provided.

Return type:

None.

Examples

import pyleoclim as pyleo
ts = pyleo.utils.load_dataset('SOI')
ts.to_json('soi.json')
to_pandas(paleo_style=False)[source]

Export to pandas Series

Parameters:

paleo_style (boolean, optional) – If True, will replace datetime with time and label columns with units. The default is False.

Returns:

ser

Return type:

pd.Series representation of the pyleo.Series object
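Examples

A minimal sketch exporting the SOI record to a pandas Series:

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
ser = ts.to_pandas()
ser.head()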

view()[source]

Generates a DataFrame version of the Series object, suitable for viewing in a Jupyter Notebook

Return type:

pd.DataFrame

Examples

Plot the HadCRUT5 Global Mean Surface Temperature

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('HadCRUT5')
ts.view()
GMST [$^{\circ}$C]
Time [year C.E.]
1850.0 -0.417659
1851.0 -0.233350
1852.0 -0.229399
1853.0 -0.270354
1854.0 -0.291630
... ...
2018.0 0.762654
2019.0 0.891073
2020.0 0.922794
2021.0 0.761856
2022.0 0.801242

173 rows × 1 columns

wavelet(method='cwt', settings=None, freq_method='log', freq_kwargs=None, verbose=False)[source]

Perform wavelet analysis on a timeseries

Parameters:
  • method (str {wwz, cwt}) –

    cwt - the continuous wavelet transform [1], appropriate for evenly-spaced series.

    wwz - the weighted wavelet Z-transform [2], appropriate for unevenly-spaced series.

    Default is cwt, which raises an error if the Series is unevenly-spaced.

  • freq_method (str) – {‘log’, ‘scale’, ‘nfft’, ‘lomb_scargle’, ‘welch’}

  • freq_kwargs (dict) – Arguments for the frequency vector

  • settings (dict) – Arguments for the specific wavelet method

  • verbose (bool) – If True, will print warning messages if there are any

Returns:

scal

Return type:

Scalogram object

See also

pyleoclim.utils.wavelet.wwz

wwz function

pyleoclim.utils.wavelet.cwt

cwt function

pyleoclim.utils.spectral.make_freq_vector

Functions to create the frequency vector

pyleoclim.utils.tsutils.detrend

Detrending function

pyleoclim.core.series.Series.spectral

spectral analysis tools

pyleoclim.core.scalograms.Scalogram

Scalogram object

pyleoclim.core.scalograms.MultipleScalogram

Multiple Scalogram object

References

[1] Torrence, C. and G. P. Compo, 1998: A Practical Guide to Wavelet Analysis. Bull. Amer. Meteor. Soc., 79, 61-78. Python routines available at http://paos.colorado.edu/research/wavelets/

[2] Foster, G., 1996: Wavelets for period analysis of unevenly sampled time series. The Astronomical Journal, 112, 1709.

Examples

Wavelet analysis on the evenly-spaced SOI record. The CWT method will be applied by default.

import pyleoclim as pyleo
ts = pyleo.utils.load_dataset('SOI')

scal1 = ts.wavelet()
scal_signif = scal1.signif_test(number=20)  # for research-grade work, use number=200 or larger
fig, ax = scal_signif.plot()
../_images/api_56_4.png

If you wanted to invoke the WWZ method instead (here with no significance testing, to lower computational cost):

scal2 = ts.wavelet(method='wwz')
fig, ax = scal2.plot()
../_images/api_57_0.png

Notice that the two scalograms have different amplitudes, which are relative. Method-specific arguments may be passed via settings. For instance, if you wanted to change the default mother wavelet (‘MORLET’) to a derivative of a Gaussian (DOG), with degree 2 by default (“Mexican Hat wavelet”):

scal3 = ts.wavelet(settings = {'mother':'DOG'})
fig, ax = scal3.plot(title='CWT scalogram with DOG mother wavelet')
../_images/api_58_0.png

As for WWZ, note that, for computational efficiency, the time axis is coarse-grained by default to 50 time points, which explains in part the difference with the CWT scalogram.

If you need a custom axis, it (and other method-specific parameters) can also be passed via the settings dictionary:

import numpy as np
tau = np.linspace(np.min(ts.time), np.max(ts.time), 60)
scal4 = ts.wavelet(method='wwz', settings={'tau':tau})
fig, ax = scal4.plot(title='WWZ scalogram with finer time axis')
../_images/api_59_0.png
wavelet_coherence(target_series, method='cwt', settings=None, freq_method='log', freq_kwargs=None, verbose=False, common_time_kwargs=None)[source]

Performs wavelet coherence analysis with the target timeseries

Parameters:
  • target_series (Series) – A pyleoclim Series object on which to perform the coherence analysis

  • method (str) – Possible methods {‘wwz’,’cwt’}. Default is ‘cwt’, which only works if the series share the same evenly-spaced time axis. ‘wwz’ is designed for unevenly-spaced data, but is far slower.

  • freq_method (str) – {‘log’,’scale’, ‘nfft’, ‘lomb_scargle’, ‘welch’}

  • freq_kwargs (dict) – Arguments for frequency vector

  • common_time_kwargs (dict) – Parameters for the method MultipleSeries.common_time(). Will use interpolation by default.

  • settings (dict) – Arguments for the specific wavelet method (e.g. decay constant for WWZ, mother wavelet for CWT) and common properties like standardize, detrend, gaussianize, pad, etc.

  • verbose (bool) – If True, will print warning messages, if any

Returns:

coh

Return type:

pyleoclim.core.coherence.Coherence

References

Grinsted, A., Moore, J. C. & Jevrejeva, S. Application of the cross wavelet transform and wavelet coherence to geophysical time series. Nonlin. Processes Geophys. 11, 561–566 (2004).

See also

pyleoclim.utils.spectral.make_freq_vector

Functions to create the frequency vector

pyleoclim.utils.tsutils.detrend

Detrending function

pyleoclim.core.multipleseries.MultipleSeries.common_time

put timeseries on common time axis

pyleoclim.core.series.Series.wavelet

wavelet analysis

pyleoclim.utils.wavelet.wwz_coherence

coherence using the wwz method

pyleoclim.utils.wavelet.cwt_coherence

coherence using the cwt method

Examples

Calculate the wavelet coherence of NINO3 and All India Rainfall with default arguments:

import pyleoclim as pyleo

ts_air = pyleo.utils.load_dataset('AIR')
ts_nino = pyleo.utils.load_dataset('NINO3')

coh = ts_air.wavelet_coherence(ts_nino)
coh.plot()
(<Figure size 1000x800 with 2 Axes>,
 <Axes: xlabel='Time [year C.E.]', ylabel='Scale [yrs]'>)
../_images/api_60_1.png

Note that in this example both timeseries are already on a common, evenly-spaced time axis. If they are not (either because the data are unevenly spaced, or because the time axes are different in some other way), an error will be raised. To circumvent this error, you can either put the series on a common time axis (e.g. using common_time()) prior to applying CWT, or you can use the Weighted Wavelet Z-transform (WWZ) instead, as it is designed for unevenly-spaced data. However, it is usually far slower:

coh_wwz = ts_air.wavelet_coherence(ts_nino, method = 'wwz')
coh_wwz.plot()
(<Figure size 1000x800 with 2 Axes>,
 <Axes: xlabel='Time [year C.E.]', ylabel='Scale [yrs]'>)
../_images/api_61_1.png

As with wavelet analysis, both CWT and WWZ admit optional arguments through settings. For instance, one can adjust the resolution of the time axis on which coherence is evaluated:

coh_wwz = ts_air.wavelet_coherence(ts_nino, method = 'wwz', settings = {'ntau':20})
coh_wwz.plot()
(<Figure size 1000x800 with 2 Axes>,
 <Axes: xlabel='Time [year C.E.]', ylabel='Scale [yrs]'>)
../_images/api_62_1.png

The frequency (scale) axis can also be customized, e.g. to focus on scales from 1 to 20y, with 24 scales:

coh = ts_air.wavelet_coherence(ts_nino, freq_kwargs={'fmin':1/20,'fmax':1,'nf':24})
coh.plot()
(<Figure size 1000x800 with 2 Axes>,
 <Axes: xlabel='Time [year C.E.]', ylabel='Scale [yrs]'>)
../_images/api_63_1.png

Significance is assessed similarly to PSD or Scalogram objects:

cwt_sig = coh.signif_test(number=20, qs=[.9,.95]) # specifying 2 significance thresholds does not take any more time.
# by default, the plot function will look for the closest quantile to 0.95, but it is easy to adjust:
cwt_sig.plot(signif_thresh = 0.9)
(<Figure size 1000x800 with 2 Axes>,
 <Axes: title={'center': 'Lines:90% threshold'}, xlabel='Time [year C.E.]', ylabel='Scale [yrs]'>)
../_images/api_64_8.png

Another plotting option, dashboard, allows one to visualize both timeseries as well as the wavelet transform coherency (WTC), which quantifies where two timeseries exhibit similar behavior in time-frequency space, and the cross-wavelet transform (XWT), which indicates regions of high common power.

cwt_sig.dashboard()
(<Figure size 900x1200 with 6 Axes>,
 {'ts1': <Axes: ylabel='AIR [mm/month]'>,
  'ts2': <Axes: xlabel='Time [year C.E.]', ylabel='NINO3 [$^{\\circ}$C]'>,
  'wtc': <Axes: ylabel='Scale [yrs]'>,
  'xwt': <Axes: xlabel='Time [year C.E.]', ylabel='Scale [yrs]'>})
../_images/api_65_1.png

Note: this design balances many considerations, and is not easily customizable.

GeoSeries (pyleoclim.GeoSeries)

class pyleoclim.core.geoseries.GeoSeries(time, value, lat, lon, elevation=None, time_unit=None, time_name=None, value_name=None, value_unit=None, label=None, importedFrom=None, archiveType=None, control_archiveType=False, sensorType=None, observationType=None, log=None, keep_log=False, verbose=True, depth=None, depth_name=None, depth_unit=None, sort_ts='ascending', dropna=True, clean_ts=False, auto_time_params=None)[source]

The GeoSeries class is a child of the Series class, and requires geolocation information (latitude, longitude). Elevation is optional, but can be used in mapping, if present. The class also allows for ancillary data and metadata, detailed below.

Parameters:
  • time (list or numpy.array) – independent variable (t)

  • value (list of numpy.array) – values of the dependent variable (y)

  • lat (float) – latitude N in decimal degrees. Must be in the range [-90;+90]

  • lon (float) – longitude East in decimal degrees. Must be in the range [-180;+360] No conversion is applied as mapping utilities convert to [-180,+180] internally

  • elevation (float) – elevation of the sample, in meters above sea level. Negative numbers therefore indicate depth below global mean sea level.

  • time_unit (string) – Units for the time vector (e.g., ‘years’). Default is None

  • time_name (string) – Name of the time vector (e.g., ‘Time’,’Age’). Default is None. This is used to label the time axis on plots

  • value_name (string) – Name of the value vector (e.g., ‘temperature’) Default is None

  • value_unit (string) – Units for the value vector (e.g., ‘deg C’) Default is None

  • label (string) – Name of the time series (e.g., ‘Nino 3.4’) Default is None

  • log (dict) – Dictionary of tuples documenting the various transformations applied to the object

  • keep_log (bool) – Whether to keep a log of applied transformations. False by default

  • importedFrom (string) – source of the dataset. If it came from a LiPD file, this could be the datasetID property

  • archiveType (string) – climate archive, one of ‘Borehole’, ‘Coral’, ‘FluvialSediment’, ‘GlacierIce’, ‘GroundIce’, ‘LakeSediment’, ‘MarineSediment’, ‘Midden’, ‘MolluskShell’, ‘Peat’, ‘Sclerosponge’, ‘Shoreline’, ‘Speleothem’, ‘TerrestrialSediment’, ‘Wood’ Reference: https://lipdverse.org/vocabulary/archivetype/

  • control_archiveType ([True, False]) – Whether to standardize the name of the archiveType against the vocabulary from: https://lipdverse.org/vocabulary/paleodata_proxy/. If set to True, will only allow for these terms and automatically convert known synonyms to the standardized name. Only standardized variable names will be automatically assigned a color scheme. Default is False.

  • sensorType (string) – sensor, e.g. a paleoclimate proxy sensor. This property can be used to differentiate between species of foraminifera

  • observationType (string) – observation type, e.g. a proxy observation. See https://lipdverse.org/vocabulary/paleodata_proxy/. Note: this is preferred terminology but not enforced

  • depth (array) – depth at which the values were collected

  • depth_name (string) – name of the field, e.g. ‘mid-depth’, ‘top-depth’, etc

  • depth_unit (string) – units of the depth axis, e.g. ‘cm’

  • dropna (bool) – Whether to drop NaNs from the series to prevent downstream functions from choking on them defaults to True

  • sort_ts (str) – Direction of sorting over the time coordinate; ‘ascending’ or ‘descending’ Defaults to ‘ascending’

  • verbose (bool) – If True, will print warning messages if there are any

  • clean_ts (boolean flag) – set to True to remove NaNs and make the time axis strictly prograde, with duplicated timestamps reduced by averaging the values. Default is False (marked for deprecation)

  • auto_time_params (bool,) – If True, uses tsbase.disambiguate_time_metadata to ensure that time_name and time_unit are usable by Pyleoclim. This may override the provided metadata. If False, the provided time_name and time_unit are used. This may break some functionalities (e.g. common_time and convert_time_unit), so use at your own risk. If not provided, code will set to True for internal consistency.

Examples

Import the EPICA Dome C deuterium record and display a quick synopsis:

import pyleoclim as pyleo
ts = pyleo.utils.datasets.load_dataset('EDC-dD')
ts_interp = ts.convert_time_unit('kyr BP').interp(step=.5) # interpolate for a faster result
fig, ax = ts_interp.dashboard()
../_images/api_66_203.png
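A GeoSeries can also be constructed directly from arrays; a minimal sketch with synthetic data (all values below are made up for illustration):

import numpy as np
import pyleoclim as pyleo

time = np.arange(0, 100, 1.0)  # synthetic ages
value = np.sin(2 * np.pi * time / 20) + 0.1 * np.random.randn(time.size)
gs = pyleo.GeoSeries(time=time, value=value, lat=-75.1, lon=123.35,
                     time_name='Age', time_unit='kyr BP',
                     value_name='dD', value_unit='permil',
                     archiveType='GlacierIce', verbose=False)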
Attributes:
datetime_index

Convert time to pandas DatetimeIndex.

metadata

Methods

bin([keep_log])

Bin values in a time series

causality(target_series[, method, timespan, ...])

Perform causality analysis with the target timeseries. Specifically, assesses whether there is information in the target series that influenced the original series.

center([timespan, keep_log])

Centers the series (i.e. removes its estimated mean)

clean([verbose, keep_log])

Clean up the timeseries by removing NaNs and sort with increasing time points

convert_time_unit([time_unit, keep_log])

Convert the time units of the Series object

copy()

Make a copy of the Series object

correlation(target_series[, alpha, ...])

Estimates the correlation and its associated significance between two time series (not necessarily IID).

dashboard([figsize, gs, plt_kwargs, ...])

Dashboard that plots the timeseries, its distribution, location map, and power spectrum.

detrend([method, keep_log, preserve_mean])

Detrend Series object

equals(ts[, index_tol, value_tol])

Test whether two objects contain the same elements (values and datetime_index). A printout is returned if metadata are different, but the statement is considered True as long as data match.

fill_na([timespan, dt, keep_log])

Fill NaNs into the timespan

filter([cutoff_freq, cutoff_scale, method, ...])

Filtering methods for Series objects, using one of four possible methods.

flip([axis, keep_log])

Flips the Series along one or both axes

from_csv(path)

Read in Series object from CSV file.

from_json(path)

Creates a pyleoclim.Series from a JSON file

gaussianize([keep_log])

Gaussianizes the timeseries (i.e. maps its values to a standard Gaussian distribution)

gkernel([step_style, keep_log, step_type])

Coarse-grain a Series object via a Gaussian kernel.

histplot([figsize, title, savefig_settings, ...])

Plot the distribution of the timeseries values

interp([method, keep_log])

Interpolate a Series object onto a new time axis

is_evenly_spaced([tol])

Check if the Series time axis is evenly-spaced, within tolerance

make_labels()

Initialization of plot labels based on Series metadata

map([projection, proj_default, background, ...])

Map the location of the record

map_neighbors(mgs[, radius, projection, ...])

Map all records within a given radius of the object

outliers([method, remove, settings, ...])

Remove outliers from timeseries data.

plot([figsize, marker, markersize, color, ...])

Plot the timeseries

resample(rule[, keep_log])

Run analogue to pandas.Series.resample.

resolution()

Generate a resolution object

segment([factor, verbose])

Gap detection

sel([value, time, tolerance])

Slice Series based on 'value' or 'time'.

slice(timespan)

Slicing the timeseries with a timespan (tuple or list)

sort([verbose, ascending, keep_log])

Ensure timeseries is set to a monotonically increasing axis.

spectral([method, freq_method, freq_kwargs, ...])

Perform spectral analysis on the timeseries

ssa([M, nMC, f, trunc, var_thresh, online])

Singular Spectrum Analysis

standardize([keep_log, scale])

Standardizes the series (i.e. removes its estimated mean and divides by its estimated standard deviation)

stats()

Compute basic statistics from a Series

stripes([figsize, cmap, ref_period, sat, ...])

Represents the Series as an Ed Hawkins "stripes" pattern

summary_plot(psd, scalogram[, figsize, ...])

Produce summary plot of timeseries.

surrogates([method, number, length, seed, ...])

Generate surrogates of the Series object according to "method"

to_csv([metadata_header, path])

Export Series to csv

to_json([path])

Export the pyleoclim.Series object to a json file

to_pandas([paleo_style])

Export to pandas Series

view()

Generates a DataFrame version of the Series object, suitable for viewing in a Jupyter Notebook

wavelet([method, settings, freq_method, ...])

Perform wavelet analysis on a timeseries

wavelet_coherence(target_series[, method, ...])

Performs wavelet coherence analysis with the target timeseries

from_Series

from_pandas

pandas_method

dashboard(figsize=[11, 8], gs=None, plt_kwargs=None, histplt_kwargs=None, spectral_kwargs=None, spectralsignif_kwargs=None, spectralfig_kwargs=None, map_kwargs=None, hue='archiveType', marker='archiveType', size=None, scatter_kwargs=None, gridspec_kwargs=None, savefig_settings=None)[source]
Parameters:
  • figsize (list or tuple, optional) – Figure size. The default is [11,8].

  • gs (matplotlib.gridspec object, optional) – Requires at least two rows and 4 columns. - top row, left: timeseries - top row, right: histogram - bottom left: map - bottom right: PSD See [matplotlib.gridspec.GridSpec](https://matplotlib.org/stable/tutorials/intermediate/gridspec.html) for details.

  • plt_kwargs (dict, optional) – Optional arguments for the timeseries plot. See Series.plot() or EnsembleSeries.plot_envelope(). The default is None.

  • histplt_kwargs (dict, optional) – Optional arguments for the distribution plot. See Series.histplot() or EnsembleSeries.plot_distplot(). The default is None.

  • spectral_kwargs (dict, optional) – Optional arguments for the spectral method. Default is to use the Lomb-Scargle method. See Series.spectral() or EnsembleSeries.spectral(). The default is None.

  • spectralsignif_kwargs (dict, optional) – Optional arguments to estimate the significance of the power spectrum. See PSD.signif_test. Note that we currently do not support significance testing for ensembles. The default is None.

  • spectralfig_kwargs (dict, optional) – Optional arguments for the power spectrum figure. See PSD.plot() or MultiplePSD.plot_envelope(). The default is None.

  • map_kwargs (dict, optional) –

    Optional arguments for map configuration:

    • projection: str; Optional value for map projection. Default ‘auto’.

    • proj_default: bool

    • lakes, land, ocean, rivers, borders, coastline, background: bool or dict

    • lgd_kwargs: dict; Optional values for how the map legend is configured

    • gridspec_kwargs: dict; Optional values for adjusting the arrangement of the colorbar, map and legend in the map subplot

    • legend: bool; Whether to draw a legend on the figure. Default is True

    • colorbar: bool; Whether to draw a colorbar on the figure if the data associated with hue are numeric. Default is True

    The default is None.

  • hue (str, optional) – Variable associated with color coding for points plotted on map. May correspond to a continuous or categorical variable. The default is ‘archiveType’.

  • size (str, optional) – Variable associated with size. Must correspond to a continuous numeric variable. The default is None.

  • marker (string, optional) – Grouping variable that will produce points with different markers. Can have a numeric dtype but will always be treated as categorical. The default is ‘archiveType’.

  • scatter_kwargs (dict, optional) – Optional arguments configuring how data are plotted on a map. See description of scatter_kwargs in pyleoclim.utils.mapping.scatter_map

  • gridspec_kwargs (dict, optional) – Optional dictionary for configuring dashboard layout using gridspec. For information about Gridspec configuration, refer to the Matplotlib documentation (https://matplotlib.org/3.5.0/api/_as_gen/matplotlib.gridspec.GridSpec.html#matplotlib.gridspec.GridSpec). The default is None.

  • savefig_settings (dict, optional) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • “path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • “format” can be one of {“pdf”, “eps”, “png”, “ps”}

    The default is None.

Returns:

  • fig (matplotlib.figure) – The figure

  • ax (dict) – dictionary of matplotlib ax

See also

pyleoclim.core.series.Series.plot

plot a timeseries

pyleoclim.core.ensembleseries.EnsembleSeries.plot_envelope

Envelope plots for an ensemble

pyleoclim.core.series.Series.histplot

plot a distribution of the timeseries

pyleoclim.core.ensembleseries.EnsembleSeries.histplot

plot a distribution of the timeseries across ensembles

pyleoclim.core.series.Series.spectral

spectral analysis method.

pyleoclim.core.multipleseries.MultipleSeries.spectral

spectral analysis method for multiple series.

pyleoclim.core.psds.PSD.signif_test

significance test for timeseries analysis

pyleoclim.core.psds.PSD.plot

plot power spectrum

pyleoclim.core.psds.MultiplePSD.plot

plot envelope of power spectrum

pyleoclim.core.geoseries.GeoSeries.map

map location of dataset

pyleoclim.utils.mapping.scatter_map

Underlying mapping function for Pyleoclim

Examples

import pyleoclim as pyleo
ts = pyleo.utils.datasets.load_dataset('EDC-dD')
ts_interp = ts.convert_time_unit('kyr BP').interp(step=.5) # interpolate for a faster result
fig, ax = ts_interp.dashboard()
../_images/api_67_203.png
classmethod from_json(path)[source]

Creates a pyleoclim.Series from a JSON file

The keys in the JSON file must correspond to the parameters associated with a GeoSeries object

Parameters:

path (str) – Path to the JSON file

Returns:

ts – A Pyleoclim Series object.

Return type:

pyleoclim.core.series.Series
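Examples

A minimal round-trip sketch, assuming the exported JSON keys match the GeoSeries parameters (the filename is arbitrary):

import pyleoclim as pyleo

ts = pyleo.utils.datasets.load_dataset('EDC-dD')
ts.to_json('edc.json')
ts2 = pyleo.GeoSeries.from_json('edc.json')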

map(projection='Orthographic', proj_default=True, background=True, borders=False, coastline=True, rivers=False, lakes=False, ocean=True, land=True, fig=None, gridspec_slot=None, figsize=None, marker='archiveType', hue='archiveType', size=None, edgecolor='w', markersize=None, scatter_kwargs=None, cmap=None, colorbar=False, gridspec_kwargs=None, legend=True, lgd_kwargs=None, savefig_settings=None)[source]

Map the location of the record

Parameters:
  • projection (str, optional) – The projection to use. The default is ‘Orthographic’.

  • proj_default (bool; {True, False}, optional) – Whether to use the Pyleoclim defaults for each projection type. The default is True.

  • background (bool, optional) – If True, uses a shaded relief background (only one available in Cartopy) Default is on (True).

  • borders (bool or dict, optional) – Draws the countries border. If a dictionary of formatting arguments is supplied (e.g. linewidth, alpha), will draw according to specifications. Defaults is off (False).

  • coastline (bool or dict, optional) – Draws the coastline. If a dictionary of formatting arguments is supplied (e.g. linewidth, alpha), will draw according to specifications. Defaults is on (True).

  • land (bool or dict, optional) – Colors land masses. If a dictionary of formatting arguments is supplied (e.g. color, alpha), will draw according to specifications. Default is on (True). Overridden if background=True.

  • ocean (bool or dict, optional) – Colors oceans. If a dictionary of formatting arguments is supplied (e.g. color, alpha), will draw according to specifications. Default is on (True). Overridden if background=True.

  • rivers (bool or dict, optional) – Draws major rivers. If a dictionary of formatting arguments is supplied (e.g. linewidth, alpha), will draw according to specifications. Default is off (False).

  • lakes (bool or dict, optional) – Draws major lakes. If a dictionary of formatting arguments is supplied (e.g. color, alpha), will draw according to specifications. Default is off (False).

  • figsize (list or tuple, optional) – The size of the figure. The default is None.

  • marker (str, optional) – The marker type for each archive. The default is None. Uses plot_default

  • hue (str, optional) – Variable associated with color coding. The default is None. Uses plot_default.

  • markersize (float, optional) – Size of the marker. The default is None.

  • scatter_kwargs (dict, optional) – Parameters for the scatter plot. The default is None.

  • legend (bool; {True, False}, optional) – Whether to plot the legend. The default is True.

  • lgd_kwargs (dict, optional) – Arguments for the legend. The default is None.

  • savefig_settings (dict, optional) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • “path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}. The default is None.

Returns:

res

Return type:

fig,ax_d

See also

pyleoclim.utils.mapping.scatter_map

Underlying mapping function for Pyleoclim

Examples

import pyleoclim as pyleo
ts = pyleo.utils.datasets.load_dataset('EDC-dD')
fig, ax = ts.map()
../_images/api_68_0.png
map_neighbors(mgs, radius=3000, projection='Orthographic', proj_default=True, background=True, borders=False, rivers=False, lakes=False, ocean=True, land=True, fig=None, gridspec_slot=None, figsize=None, marker='archiveType', hue='archiveType', size=None, edgecolor='w', markersize=None, scatter_kwargs=None, cmap=None, colorbar=False, gridspec_kwargs=None, legend=True, lgd_kwargs=None, savefig_settings=None)[source]

Map all records within a given radius of the object

Parameters:
  • mgs (MultipleGeoSeries) – object containing the series to be considered as neighbors

  • radius (float) – search radius for the record, in km. Default is 3000.

  • projection (str, optional) – The projection to use. The default is ‘Orthographic’.

  • proj_default (bool; {True, False}, optional) – Whether to use the Pyleoclim defaults for each projection type. The default is True.

  • background (bool; {True, False}, optional) – Whether to use a background. The default is True.

  • borders (bool; {True, False}, optional) – Draw borders. The default is False.

  • rivers (bool; {True, False}, optional) – Draw rivers. The default is False.

  • lakes (bool; {True, False}, optional) – Draw lakes. The default is False.

  • figsize (list or tuple, optional) – The size of the figure. The default is None.

  • marker (str, optional) – The marker type for each archive. The default is None. Uses plot_default

  • hue (str, optional) – Variable associated with color coding. The default is None. Uses plot_default.

  • markersize (float, optional) – Size of the marker. The default is None.

  • scatter_kwargs (dict, optional) – Parameters for the scatter plot. The default is None.

  • legend (bool; {True, False}, optional) – Whether to plot the legend. The default is True.

  • lgd_kwargs (dict, optional) – Arguments for the legend. The default is None.

  • savefig_settings (dict, optional) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • “path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}. The default is None.

Returns:

res

Return type:

fig,ax_d

See also

pyleoclim.utils.mapping.map

Underlying mapping function for Pyleoclim

Examples

import pyleoclim as pyleo
from pylipd.utils.dataset import load_dir
lipd = load_dir(name='Pages2k')
df = lipd.get_timeseries_essentials()
dfs = df.query("archiveType in ('tree','documents','coral','lake sediment') and paleoData_variableName not in ('year')")
# place in a MultipleGeoSeries object
ts_list = []
for _, row in dfs.iterrows():
    ts_list.append(pyleo.GeoSeries(time=row['time_values'],value=row['paleoData_values'],
                                   time_name=row['time_variableName'],value_name=row['paleoData_variableName'],
                                   time_unit=row['time_units'], value_unit=row['paleoData_units'],
                                   lat = row['geo_meanLat'], lon = row['geo_meanLon'],
                                   archiveType = row['archiveType'], verbose = False,
                                   label=row['dataSetName']+'_'+row['paleoData_variableName']))

mgs = pyleo.MultipleGeoSeries(ts_list,time_unit='years AD')
gs = mgs.series_list[6] # extract one record as the target one
gs.map_neighbors(mgs, radius=4000)
Loading 16 LiPD files
Loaded..
(<Figure size 1800x700 with 2 Axes>,
 {'map': <GeoAxes: xlabel='lon', ylabel='lat'>, 'leg': <Axes: >})
../_images/api_69_8.png
resample(rule, keep_log=False, **kwargs)[source]

Run analogue to pandas.Series.resample.

This is a convenience method: doing

ser.resample('AS').mean()

will do the same thing as

ser.pandas_method(lambda x: x.resample('AS').mean())

but will also accept some extra resampling rules, such as ‘Ga’ (see below).

Parameters:
  • rule (str) –

    The offset string or object representing target conversion. Can also accept pyleoclim units, such as ‘ka’ (1000 years), ‘Ma’ (1 million years), and ‘Ga’ (1 billion years).

    Check the [pandas resample docs](https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.resample.html) for more details.

  • kwargs (dict) – Any other arguments which will be passed to pandas.Series.resample.

Returns:

Resampler object, not meant to be used to directly. Instead, an aggregation should be called on it, see examples below.

Return type:

SeriesResampler

Examples

import pyleoclim as pyleo
ts = pyleo.utils.load_dataset('EDC-dD').convert_time_unit('ky BP')
ts5k = ts.resample('1ka').mean()
fig, ax = ts.plot()
ts5k.plot(ax=ax,color='C1')
<Axes: xlabel='Age [ka]', ylabel='$\\delta \\mathrm{D}$ [‰]'>
../_images/api_70_1.png
segment(factor=10, verbose=False)[source]

Gap detection

This function segments a timeseries into n parts following a gap-detection algorithm. The rule of gap detection is very simple:

we define the intervals between time points as dts; if dts[i] is larger than factor * dts[i-1], the change in spacing (or gradient) is considered too large, and the point is treated as a breaking point at which the timeseries is divided into two segments.

Parameters:
  • factor (float) – The factor that adjusts the threshold for gap detection

  • verbose (bool) – If True, will print warning messages if there is any

Returns:

res – If gaps were detected, returns the segments in a MultipleGeoSeries object, else, returns the original timeseries.

Return type:

MultipleGeoSeries or GeoSeries

Examples

import numpy as np
import pyleoclim as pyleo
gs = pyleo.utils.datasets.load_dataset('EDC-dD')
gs.value[4000:5000] = np.nan # cut a large gap in the middle
mgs = gs.segment()
mgs.plot()
(<Figure size 1000x400 with 1 Axes>,
 <Axes: xlabel='Age [y BP]', ylabel='$\\delta \\mathrm{D}$ [‰]'>)
../_images/api_71_1.png

MultipleSeries (pyleoclim.MultipleSeries)

class pyleoclim.core.multipleseries.MultipleSeries(series_list, time_unit=None, label=None, name=None)[source]

MultipleSeries object.

This object handles a collection of the type Series and can be created from a list of such objects. MultipleSeries should be used when the need to run analysis on multiple records arises, such as running principal component analysis. Some of the methods automatically transform the time axis prior to analysis to ensure consistency.

Parameters:
  • series_list (list) – a list of pyleoclim.Series objects

  • time_unit (str) – The target time unit for every series in the list. If None, then no conversion will be applied; Otherwise, the time unit of every series in the list will be converted to the target.

  • label (str) – label of the collection of timeseries (e.g. ‘PAGES 2k ice cores’)

Examples

soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms.label = 'ENSO'
ms
                     Southern Oscillation Index  NINO3 SST
datetime                                                  
1870-12-31 03:41:38                         NaN  -0.358250
1871-01-30 14:10:31                         NaN  -0.292458
1871-03-02 00:39:56                         NaN  -0.143583
1871-04-01 11:08:49                         NaN  -0.149625
1871-05-01 21:37:43                         NaN  -0.274250
...                                         ...        ...
2019-08-01 01:22:19                        -0.1        NaN
2019-08-31 11:51:43                        -1.2        NaN
2019-09-30 22:20:37                        -0.4        NaN
2019-10-31 08:49:30                        -0.8        NaN
2019-11-30 19:18:55                        -0.6        NaN

[1788 rows x 2 columns]

Methods

append(ts[, inplace])

Append timeseries ts to MultipleSeries object

bin(**kwargs)

Aligns the time axes of a MultipleSeries object, via binning.

common_time([method, step, start, stop, ...])

Aligns the time axes of a MultipleSeries object

convert_time_unit([time_unit])

Convert the time units of the object

copy()

Copy the object

correlation([target, timespan, alpha, ...])

Calculate the correlation between a MultipleSeries and a target Series

detrend([method])

Detrend timeseries

equal_lengths()

Test whether all series in object have equal length

filter([cutoff_freq, cutoff_scale, method])

Filtering the timeseries in the MultipleSeries object

flip([axis])

Flips the Series along one or both axes

from_json(path)

Creates a pyleoclim.MultipleSeries from a JSON file

gkernel(**kwargs)

Aligns the time axes of a MultipleSeries object, via Gaussian kernel.

increments([step_style, verbose])

Extract grid properties (start, stop, step) of all the Series objects in a collection.

interp(**kwargs)

Aligns the time axes of a MultipleSeries object, via interpolation.

pca([weights, name, missing, tol_em, ...])

Principal Component Analysis (Empirical Orthogonal Functions)

plot([figsize, marker, markersize, ...])

Plot multiple timeseries on the same axis

remove(label)

Remove Series based on given label.

resolution([time_unit, verbose, statistic])

Generate a MultipleResolution object

sel([value, time, tolerance])

Slice MultipleSeries based on 'value' or 'time'.

spectral([method, settings, mute_pbar, ...])

Perform spectral analysis on the timeseries

stackplot([figsize, savefig_settings, ...])

Stack plot of multiple series

standardize()

Standardize each series object in a collection

stripes([cmap, sat, ref_period, figsize, ...])

Represents a MultipleSeries object as a quilt of Ed Hawkins' "stripes" patterns

time_coverage_plot([figsize, marker, ...])

A plot of the temporal coverage of the records in a MultipleSeries object organized by ranked length.

to_csv([path, use_common_time])

Export MultipleSeries to CSV

to_json([path])

Export the pyleoclim.MultipleSeries object to a json file

to_pandas([paleo_style, use_common_time])

Align Series and place in DataFrame.

view()

Generates a DataFrame version of the MultipleSeries object, suitable for viewing in a Jupyter Notebook

wavelet([method, settings, freq_method, ...])

Wavelet analysis

append(ts, inplace=False)[source]

Append timeseries ts to MultipleSeries object

Parameters:

ts (pyleoclim.Series) – The pyleoclim Series object to be appended to the MultipleSeries object

Returns:

ms – The augmented object, comprising the old one plus ts

Return type:

MultipleSeries

See also

pyleoclim.core.series.Series

A Pyleoclim Series object

Examples

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
NINO3 = pyleo.utils.load_dataset('NINO3')
ms = pyleo.MultipleSeries([soi], label = 'ENSO')
ms.append(NINO3)
The two series have different lengths, left: 828 vs right: 1596
Metadata are different:
value_unit property -- left: mb, right: $^{\circ}$C
value_name property -- left: SOI, right: NINO3
label property -- left: Southern Oscillation Index, right: NINO3 SST
                     Southern Oscillation Index  NINO3 SST
datetime                                                  
1870-12-31 03:41:38                         NaN  -0.358250
1871-01-30 14:10:31                         NaN  -0.292458
1871-03-02 00:39:56                         NaN  -0.143583
1871-04-01 11:08:49                         NaN  -0.149625
1871-05-01 21:37:43                         NaN  -0.274250
...                                         ...        ...
2019-08-01 01:22:19                        -0.1        NaN
2019-08-31 11:51:43                        -1.2        NaN
2019-09-30 22:20:37                        -0.4        NaN
2019-10-31 08:49:30                        -0.8        NaN
2019-11-30 19:18:55                        -0.6        NaN

[1788 rows x 2 columns]
bin(**kwargs)[source]

Aligns the time axes of a MultipleSeries object, via binning.

This is critical for workflows that need to assume a common time axis for the group of series under consideration.

The common time axis is characterized by the following parameters:

start : the latest start date of the bunch (maximum of the minima)

stop : the earliest stop date of the bunch (minimum of the maxima)

step : The representative spacing between consecutive values (mean of the median spacings)

This is a special case of the common_time function.

Parameters:

kwargs (dict) – Arguments for the binning function. See pyleoclim.utils.tsutils.bin

Returns:

ms – The MultipleSeries objects with all series aligned to the same time axis.

Return type:

MultipleSeries

See also

pyleoclim.core.multipleseries.MultipleSeries.common_time

Base function on which this operates

pyleoclim.utils.tsutils.bin

Underlying binning function

pyleoclim.core.series.Series.bin

Bin function for Series object

Examples

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
data = pyleo.Lipd(usr_path = url)
tslist = data.to_LipdSeriesList()
tslist = tslist[2:] # drop the first two series which only concerns age and depth
ms = pyleo.MultipleSeries(tslist)
msbin = ms.bin()
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: MD982176.Stott.2004.lpd
Finished read: 1 record
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries
common_time(method='interp', step=None, start=None, stop=None, step_style=None, time_axis=None, **kwargs)[source]

Aligns the time axes of a MultipleSeries object

The alignment is achieved via binning, interpolation, or Gaussian kernel. Alignment is critical for workflows that need to assume a common time axis for the group of series under consideration.

The common time axis is characterized by the following parameters:

start : the latest start date of the bunch (maximum of the minima)

stop : the earliest stop date of the bunch (minimum of the maxima)

step : The representative spacing between consecutive values

Optional arguments for binning, Gaussian kernel (gkernel), or interpolation are those of the underlying functions.

If any of the time axes are retrograde, this step makes them prograde.

Parameters:
  • method (string; {'bin','interp','gkernel'}) – either ‘bin’, ‘interp’ [default] or ‘gkernel’

  • step (float) – common step for all time axes. Default is None and inferred from the timeseries spacing

  • start (float) – starting point of the common time axis. Default is None and inferred as the max of the min of the time axes for the timeseries.

  • stop (float) – end point of the common time axis. Default is None and inferred as the min of the max of the time axes for the timeseries.

  • step_style (string; {'median', 'mean', 'mode', 'max'}) – Method to obtain a representative step among all Series (using tsutils.increments). Default value is None, so that it will be chosen according to the method: ‘max’ for bin and gkernel, ‘mean’ for interp.

  • time_axis (array) – Time axis onto which all the series will be aligned. Will override step,start,stop, and step_style if they are passed.

  • kwargs (dict) – keyword arguments (dictionary) of the bin, gkernel or interp methods

Returns:

ms – The MultipleSeries objects with all series aligned to the same time axis.

Return type:

MultipleSeries

Notes

start, stop, step, and step_style are interpreted differently depending on the method used. Interp uses these to specify the time_axis onto which interpolation will be applied. Bin and gkernel use these to specify the bin_edges which define the “buckets” used for the respective methods.

See also

pyleoclim.utils.tsutils.bin

put timeseries values into bins of equal size (possibly leaving NaNs in).

pyleoclim.utils.tsutils.gkernel

coarse-graining using a Gaussian kernel

pyleoclim.utils.tsutils.interp

interpolation onto a regular grid (default = linear interpolation)

pyleoclim.utils.tsutils.increments

infer grid properties

Examples

import numpy as np
import pyleoclim as pyleo
import matplotlib.pyplot as plt
from pyleoclim.utils.tsmodel import colored_noise

# create 2 incompletely sampled series
ns = 2 ; nt = 200; n_del = 20
serieslist = []

for j in range(ns):
    t = np.arange(nt)
    v = colored_noise(alpha=1, t=t)
    deleted_idx = np.random.choice(range(np.size(t)), n_del, replace=False)
    tu =  np.delete(t, deleted_idx)
    vu =  np.delete(v, deleted_idx)
    ts = pyleo.Series(time = tu, value = vu, label = 'series {}'.format(j+1), verbose=False)
    serieslist.append(ts)

# create MS object from the list
ms = pyleo.MultipleSeries(serieslist)

fig, ax = plt.subplots(2,2,sharex=True,sharey=True, figsize=(10,8))
ax = ax.flatten()
# apply common_time with default parameters
msc = ms.common_time()
msc.plot(title='linear interpolation',ax=ax[0], legend=False)

# apply common_time with binning
msc = ms.common_time(method='bin')
msc.plot(title='Binning',ax=ax[1], legend=False)

# apply common_time with gkernel
msc = ms.common_time(method='gkernel')
msc.plot(title=r'Gaussian kernel ($h=3$)',ax=ax[2],legend=False)

# apply common_time with gkernel and a narrower bandwidth
msc = ms.common_time(method='gkernel', h=.5)
msc.plot(title=r'Gaussian kernel ($h=.5$)',ax=ax[3],legend=False)
fig.tight_layout()
# Optionally close the figure after plotting
../_images/api_75_0.png
convert_time_unit(time_unit=None)[source]

Convert the time units of the object

Parameters:

time_unit (str) –

the target time unit, possible input: {

’year’, ‘years’, ‘yr’, ‘yrs’, ‘y BP’, ‘yr BP’, ‘yrs BP’, ‘year BP’, ‘years BP’, ‘ky BP’, ‘kyr BP’, ‘kyrs BP’, ‘ka BP’, ‘ka’, ‘my BP’, ‘myr BP’, ‘myrs BP’, ‘ma BP’, ‘ma’,

}

Examples

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
new_ms = ms.convert_time_unit('yr BP')
print('Original timeseries:')
print('time unit:', ms.time_unit)
print()
print('Converted timeseries:')
print('time unit:', new_ms.time_unit)
Original timeseries:
time unit: None

Converted timeseries:
time unit: yr BP
copy()[source]

Copy the object

Returns:

ms – The copied version of the pyleoclim.MultipleSeries object

Return type:

MultipleSeries

Examples

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms_copy = ms.copy()
correlation(target=None, timespan=None, alpha=0.05, settings=None, fdr_kwargs=None, common_time_kwargs=None, mute_pbar=False, seed=None)[source]

Calculate the correlation between a MultipleSeries and a target Series

Parameters:
  • target (pyleoclim.Series, optional) – The Series against which to take the correlation. If the target Series is not specified, then the 1st member of MultipleSeries will be used as the target

  • timespan (tuple, optional) – The time interval over which to perform the calculation

  • alpha (float) – The significance level (0.05 by default)

  • settings (dict) –

    Parameters for the correlation function, including:

    nsim (int) – the number of simulations (default: 1000)

    method (str; {'ttest', 'isopersistent', 'isospectral' (default)}) – method for significance testing

  • fdr_kwargs (dict) – Parameters for the FDR function

  • common_time_kwargs (dict) – Parameters for the method MultipleSeries.common_time()

  • mute_pbar (bool; {True,False}) – If True, the progressbar will be muted. Default is False.

  • seed (float or int) – random seed for isopersistent and isospectral methods

Returns:

corr – the result object

Return type:

CorrEns

See also

pyleoclim.utils.correlation.corr_sig

Correlation function

pyleoclim.utils.correlation.fdr

FDR function

pyleoclim.core.correns.CorrEns

the correlation ensemble object

Examples

import pyleoclim as pyleo
from pyleoclim.utils.tsmodel import colored_noise
import numpy as np

nt = 100
t0 = np.arange(nt)
v0 = colored_noise(alpha=1, t=t0)
noise = np.random.normal(loc=0, scale=1, size=nt)

ts0 = pyleo.Series(time=t0, value=v0, verbose=False)
ts1 = pyleo.Series(time=t0, value=v0+noise, verbose=False)
ts2 = pyleo.Series(time=t0, value=v0+2*noise, verbose=False)
ts3 = pyleo.Series(time=t0, value=v0+1/2*noise, verbose=False)

ts_list = [ts1, ts2, ts3]

ms = pyleo.MultipleSeries(ts_list)
ts_target = ts0

Correlation between the MultipleSeries object and a target Series. We also set an arbitrary random seed to ensure reproducibility:

corr_res = ms.correlation(ts_target, settings={'nsim': 20}, seed=2333)
print(corr_res)
Looping over 3 Series in collection
  correlation  p-value      signif. w/o FDR (α: 0.05)  signif. w/ FDR (α: 0.05)
-------------  ---------  ---------------------------  --------------------------
     0.997156  < 1e-6                               1  True
     0.988605  < 1e-6                               1  True
     0.999292  < 1e-6                               1  True
Ensemble size: 3

Correlation among the series of the MultipleSeries object

corr_res = ms.correlation(settings={'nsim': 20}, seed=2333)
print(corr_res)
Looping over 3 Series in collection
  correlation  p-value      signif. w/o FDR (α: 0.05)  signif. w/ FDR (α: 0.05)
-------------  ---------  ---------------------------  --------------------------
     1         < 1e-6                               1  True
     0.997139  < 1e-6                               1  True
     0.999286  < 1e-6                               1  True
Ensemble size: 3
detrend(method='emd', **kwargs)[source]

Detrend timeseries

Parameters:
  • method (str, optional) –

    The method for detrending. The default is ‘emd’. Options include:

    • linear: the result of a linear least-squares fit to y is subtracted from y.

    • constant: only the mean of the data is subtracted.

    • 'savitzky-golay': y is filtered using the Savitzky-Golay filter and the resulting filtered series is subtracted from y.

    • 'emd' (default): Empirical Mode Decomposition. The last mode is assumed to be the trend and is removed from the series.

  • **kwargs (dict) – Relevant arguments for each of the methods.

Returns:

ms – The detrended timeseries

Return type:

MultipleSeries

See also

pyleoclim.core.series.Series.detrend

Detrending for a single series

pyleoclim.utils.tsutils.detrend

Detrending function
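
Examples

A minimal sketch (not part of the original docstring), assuming the built-in 'SOI' and 'NINO3' sample datasets:

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms_detrended = ms.detrend(method='linear')  # subtract a linear least-squares fit from each series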

equal_lengths()[source]

Test whether all series in object have equal length

Returns:

  • flag (bool) – Whether or not the Series in the pyleo.MultipleSeries object are of equal length

  • lengths (list) – List of the lengths of the series in object

See also

pyleoclim.core.multipleseries.MultipleSeries.common_time

Aligns the time axes of a MultipleSeries object

Examples

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
flag, lengths = ms.equal_lengths()
print(flag)
False
filter(cutoff_freq=None, cutoff_scale=None, method='butterworth', **kwargs)[source]

Filtering the timeseries in the MultipleSeries object

Parameters:
  • method (str; {'savitzky-golay', 'butterworth', 'firwin', 'lanczos'}) –

    The filtering method:

    • ‘butterworth’: the Butterworth method (default)

    • ‘savitzky-golay’: the Savitzky-Golay method

    • ‘firwin’: FIR filter design using the window method, with default window as Hamming

    • ‘lanczos’: lowpass filter via Lanczos resampling

  • cutoff_freq (float or list) – The cutoff frequency only works with the Butterworth method. If a float, it is interpreted as a low-frequency cutoff (lowpass). If a list, it is interpreted as a frequency band (f1, f2), with f1 < f2 (bandpass).

  • cutoff_scale (float or list) – cutoff_freq = 1 / cutoff_scale The cutoff scale only works with the Butterworth method and when cutoff_freq is None. If a float, it is interpreted as a low-frequency (high-scale) cutoff (lowpass). If a list, it is interpreted as a frequency band (f1, f2), with f1 < f2 (bandpass).

  • kwargs (dict) – A dictionary of the keyword arguments for the filtering method, See pyleoclim.utils.filter.savitzky_golay, pyleoclim.utils.filter.butterworth, pyleoclim.utils.filter.firwin, and pyleoclim.utils.filter.lanczos for the details

Returns:

ms

Return type:

MultipleSeries

See also

pyleoclim.core.series.Series.filter

filtering for Series objects

pyleoclim.utils.filter.butterworth

Butterworth method

pyleoclim.utils.filter.savitzky_golay

Savitzky-Golay method

pyleoclim.utils.filter.firwin

FIR filter design using the window method

pyleoclim.utils.filter.lanczos

lowpass filter via Lanczos resampling

Examples

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms_filter = ms.filter(method='lanczos',cutoff_scale=20)
flip(axis='value')[source]

Flips the Series along one or both axes

Parameters:

axis (str, optional) – The axis along which the Series will be flipped. The default is ‘value’. Other acceptable options are ‘time’ or ‘both’.

Returns:

ms – The flipped object

Return type:

MultipleSeries

Examples

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms.name = 'ENSO'
fig, ax = ms.flip().stackplot()

Note that labels have been updated to reflect the flip.

classmethod from_json(path)[source]

Creates a pyleoclim.MultipleSeries from a JSON file

The keys in the JSON file must correspond to the parameters associated with MultipleSeries and Series objects

Parameters:

path (str) – Path to the JSON file

Returns:

ts – A Pyleoclim MultipleSeries object.

Return type:

pyleoclim.core.multipleseries.MultipleSeries
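
Examples

A minimal round-trip sketch (not part of the original docstring); 'enso.json' is a hypothetical filename written to the current directory:

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms.to_json('enso.json')  # export the collection to a JSON file
ms2 = pyleo.MultipleSeries.from_json('enso.json')  # re-create the object from that file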

gkernel(**kwargs)[source]

Aligns the time axes of a MultipleSeries object, via Gaussian kernel.

This is critical for workflows that need to assume a common time axis for the group of series under consideration.

The common time axis is characterized by the following parameters:

start : the latest start date of the bunch (maximum of the minima)

stop : the earliest stop date of the bunch (minimum of the maxima)

step : The representative spacing between consecutive values (mean of the median spacings)

This is a special case of the common_time function.

Parameters:

kwargs (dict) – Arguments for gkernel. See pyleoclim.utils.tsutils.gkernel for details.

Returns:

ms – The MultipleSeries object with all series aligned to the same time axis.

Return type:

MultipleSeries

See also

pyleoclim.core.multipleseries.MultipleSeries.common_time

Base function on which this operates

pyleoclim.utils.tsutils.gkernel

Underlying kernel module

Examples

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
data = pyleo.Lipd(usr_path = url)
tslist = data.to_LipdSeriesList()
tslist = tslist[2:] # drop the first two series, which only concern age and depth
ms = pyleo.MultipleSeries(tslist)
msk = ms.gkernel()
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: MD982176.Stott.2004.lpd
Finished read: 1 record
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries
increments(step_style='median', verbose=False)[source]

Extract grid properties (start, stop, step) of all the Series objects in a collection.

Parameters:
  • step_style (str; {'median','mean','mode','max'}) –

    Method to obtain a representative step if x is not evenly spaced. Valid entries: ‘median’ [default], ‘mean’, ‘mode’ or ‘max’. The “mode” is the most frequent entry in a dataset, and may be a good choice if the timeseries is nearly equally spaced but for a few gaps.

    ”max” is a conservative choice, appropriate for binning methods and Gaussian kernel coarse-graining

  • verbose (bool) – If True, will print out warning messages when they appear

Returns:

increments – n x 3 array, where n is the number of series,

  • index 0 is the earliest time among all Series

  • index 1 is the latest time among all Series

  • index 2 is the step, chosen according to step_style

Return type:

numpy.array

See also

pyleoclim.utils.tsutils.increments

underlying array-level utility

Examples

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
increments = ms.increments()
interp(**kwargs)[source]

Aligns the time axes of a MultipleSeries object, via interpolation.

This is critical for workflows that need to assume a common time axis for the group of series under consideration.

The common time axis is characterized by the following parameters:

start : the latest start date of the bunch (maximum of the minima)

stop : the earliest stop date of the bunch (minimum of the maxima)

step : The representative spacing between consecutive values (mean of the median spacings)

This is a special case of the common_time function.

Parameters:

kwargs (dict) – keyword arguments for the interpolation method; see pyleoclim.utils.tsutils.interp for details.

Returns:

ms – The MultipleSeries object with all series aligned to the same time axis.

Return type:

MultipleSeries

See also

pyleoclim.core.multipleseries.MultipleSeries.common_time

Base function on which this operates

pyleoclim.utils.tsutils.interp

Underlying interpolation function

pyleoclim.core.series.Series.interp

Interpolation function for Series object

Examples

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
data = pyleo.Lipd(usr_path = url)
tslist = data.to_LipdSeriesList()
tslist = tslist[2:] # drop the first two series, which only concern age and depth
ms = pyleo.MultipleSeries(tslist)
msinterp = ms.interp()
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: MD982176.Stott.2004.lpd
Finished read: 1 record
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries
pca(weights=None, name=None, missing='fill-em', tol_em=0.005, max_em_iter=100, **pca_kwargs)[source]

Principal Component Analysis (Empirical Orthogonal Functions)

Decomposition of MultipleSeries in terms of orthogonal basis functions. Tolerant to missing values, infilled by an EM algorithm.

Do make sure the time axes are aligned, however! (e.g. use common_time())

Algorithm from statsmodels: https://www.statsmodels.org/stable/generated/statsmodels.multivariate.pca.PCA.html

Parameters:
  • weights (ndarray, optional) – Series weights to use after transforming data according to standardize or demean when computing the principal components.

  • missing ({str, None}) –

    Method for missing data. Choices are:

    • ’drop-row’ - drop rows with missing values.

    • ’drop-col’ - drop columns with missing values.

    • ’drop-min’ - drop either rows or columns, choosing by data retention.

    • ’fill-em’ - use EM algorithm to fill missing value. ncomp should be set to the number of factors required.

    • None raises if data contains NaN values.

  • tol_em (float) – Tolerance to use when checking for convergence of the EM algorithm.

  • max_em_iter (int) – Maximum iterations for the EM algorithm.

Returns:

res – Resulting pyleoclim.MultivariateDecomp object

Return type:

MultivariateDecomp

See also

pyleoclim.utils.tsutils.eff_sample_size

Effective Sample Size of timeseries y

pyleoclim.core.multivardecomp.MultivariateDecomp

The spatial decomposition object

pyleoclim.core.multipleseries.MultipleSeries.common_time

align time axes

Examples

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
data = pyleo.Lipd(usr_path = url)
tslist = data.to_LipdSeriesList()
tslist = tslist[2:] # drop the first two series, which only concern age and depth
ms = pyleo.MultipleSeries(tslist).common_time()
ms.label = ms.series_list[0].label
res = ms.pca() # carry out PCA

fig1, ax1 = res.screeplot() # plot the eigenvalue spectrum
fig2, ax2 = res.modeplot() # plot the first mode
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: MD982176.Stott.2004.lpd
Finished read: 1 record
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries
The provided eigenvalue array has only one dimension. UQ defaults to NB82
../_images/api_86_2.png ../_images/api_86_3.png
plot(figsize=[10, 4], marker=None, markersize=None, linestyle=None, linewidth=None, colors=None, cmap='tab10', norm=None, xlabel=None, ylabel=None, title=None, time_unit=None, legend=True, plot_kwargs=None, lgd_kwargs=None, savefig_settings=None, ax=None, invert_xaxis=False)[source]

Plot multiple timeseries on the same axis

Parameters:
  • figsize (list, optional) – Size of the figure. The default is [10, 4].

  • marker (str, optional) – Marker type. The default is None.

  • markersize (float, optional) – Marker size. The default is None.

  • linestyle (str, optional) – Line style. The default is None.

  • linewidth (float, optional) – The width of the line. The default is None.

  • colors (a list of, or one, Python supported color code (a string of hex code or a tuple of rgba values)) – Colors for plotting. If None, the plotting will cycle the ‘tab10’ colormap; if only one color is specified, then all curves will be plotted with that single color; if a list of colors is specified, then the plotting will cycle that color list.

  • cmap (str) – The colormap to use when “colors” is None.

  • norm (matplotlib.colors.Normalize) – The normalization for the colormap. If None, a linear normalization will be used.

  • xlabel (str, optional) – x-axis label. The default is None.

  • ylabel (str, optional) – y-axis label. The default is None.

  • title (str, optional) – Title. The default is None.

  • time_unit (str) –

    the target time unit, possible input: {

    ’year’, ‘years’, ‘yr’, ‘yrs’, ‘y BP’, ‘yr BP’, ‘yrs BP’, ‘year BP’, ‘years BP’, ‘ky BP’, ‘kyr BP’, ‘kyrs BP’, ‘ka BP’, ‘ka’, ‘my BP’, ‘myr BP’, ‘myrs BP’, ‘ma BP’, ‘ma’,

    } default is None, in which case the code picks the most common time unit in the collection. If no unambiguous winner can be found, the unit of the first series in the collection is used.

  • legend (bool, optional) – Whether to show the legend. The default is True.

  • plot_kwargs (dict, optional) – Plot parameters. The default is None.

  • lgd_kwargs (dict, optional) – Legend parameters. The default is None.

  • savefig_settings (dictionary, optional) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}. The default is None.

  • ax (matplotlib.axes.Axes, optional) – The matplotlib axes object onto which to plot. The default is None.

  • invert_xaxis (bool, optional) – if True, the x-axis of the plot will be inverted

Returns:

fig, ax – the figure and axes handles, as illustrated in the examples below

See also

pyleoclim.utils.plotting.savefig

Saving figure in Pyleoclim

Examples

import pyleoclim as pyleo

soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms.name = 'ENSO'
fig, ax = ms.plot()
../_images/api_87_0.png
remove(label)[source]

Remove Series based on given label.

Modifies the MultipleSeries, does not return anything.
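
Examples

A minimal sketch (not part of the original docstring), removing one member by its label:

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms.remove(nino.label)  # drops the NINO3 series in place; nothing is returned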

resolution(time_unit=None, verbose=True, statistic='median')[source]

Generate a MultipleResolution object

Increments are assigned to the preceding time value. E.g., for time_axis = [0, 1, 3], resolution.resolution = [1, 2] and resolution.time = [0, 1]. Note that the MultipleResolution class requires a shared time unit. If the time_unit parameter is not passed, a time unit will be automatically determined.

Parameters:
  • time_unit (str) – Time unit to convert objects to. See pyleo.Series.convert_time_unit for options.

  • verbose (bool) – Whether or not to print messages warning the user about automated decisions.

  • statistic (str; {'median', 'mean', None}) – If a recognized statistic is passed, this function will simply output that statistic applied to the resolution of each series in the MultipleSeries object. Options are 'mean' or 'median'. If statistic is None, then the function will return a new MultipleResolution object with plotting capabilities.

Returns:

multipleresolution – MultipleResolution object

Return type:

pyleoclim.MultipleResolution

See also

pyleoclim.core.resolutions.MultipleResolution, pyleoclim.core.series.Series.convert_time_unit

Examples

To create a resolution object, apply the .resolution() method to a MultipleSeries object with statistic=None.

import pyleoclim as pyleo

co2ts = pyleo.utils.load_dataset('AACO2')
edc = pyleo.utils.load_dataset('EDC-dD')
ms = edc & co2ts # create MS object
ms_resolution = ms.resolution(statistic=None)
Time unit not found, attempting conversion.
Converted to ky BP

Several methods are then available:

Summary statistics can be obtained via .describe()

ms_resolution.describe()
{'EPICA Dome C dD': {'nobs': 5784,
  'minmax': (0.008244210009855486, 1.3640000000207237),
  'mean': 0.13859329637102363,
  'variance': 0.029806736482500852,
  'skewness': 2.6618614618357794,
  'kurtosis': 8.705801510816693,
  'median': 0.05813225000042621},
 'EPICA Dome C CO2': {'nobs': 1900,
  'minmax': (1.9999983537945243e-05, 4.171250000015277),
  'mean': 0.4240631052631468,
  'variance': 0.24737134999156402,
  'skewness': 2.0625788668360423,
  'kurtosis': 6.7967879720624484,
  'median': 0.27533999998908243}}

A simple plot can be created using .summary_plot()

ms_resolution.summary_plot()
(<Figure size 1000x800 with 1 Axes>, <Axes: xlabel='Resolution [ky BP]'>)
../_images/api_90_1.png
sel(value=None, time=None, tolerance=0)[source]

Slice the MultipleSeries based on ‘value’ or ‘time’. See examples in pyleoclim.core.series.Series.sel for usage.

Parameters:
  • value (int, float, slice) – If int/float, then the Series will be sliced so that self.value is equal to value (+/- tolerance). If slice, then the Series will be sliced so self.value is between slice.start and slice.stop (+/- tolerance).

  • time (int, float, slice) – If int/float, then the Series will be sliced so that self.time is equal to time. (+/- tolerance) If slice of int/float, then the Series will be sliced so that self.time is between slice.start and slice.stop. If slice of datetime (or str containing datetime, such as ‘2020-01-01’), then the Series will be sliced so that self.datetime_index is between time.start and time.stop (+/- tolerance, which needs to be a timedelta).

  • tolerance (int, float, default 0.) – Used by value and time, see above.

Returns:

ms_new – Copy of self, sliced according to value and time.

Return type:

pyleoclim.core.multipleseries.MultipleSeries

See also

pyleoclim.core.series.Series.sel

Slicing a series by value and time.
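
Examples

A minimal sketch (not part of the original docstring), keeping only the 1950-2000 portion of each series:

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms_new = ms.sel(time=slice(1950, 2000))  # each series is sliced so its time falls between 1950 and 2000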

spectral(method='lomb_scargle', settings=None, mute_pbar=False, freq_method='log', freq_kwargs=None, label=None, verbose=False, scalogram_list=None)[source]

Perform spectral analysis on the timeseries

Parameters:
  • method (str; {'wwz', 'mtm', 'lomb_scargle', 'welch', 'periodogram', 'cwt'}) –

  • freq_method (str; {'log','scale', 'nfft', 'lomb_scargle', 'welch'}) –

  • freq_kwargs (dict) – Arguments for frequency vector

  • settings (dict) – Arguments for the specific spectral method

  • label (str) – Label for the PSD object

  • verbose (bool) – If True, will print warning messages if there is any

  • mute_pbar (bool) – Mute the progress bar. Default is False.

  • scalogram_list (pyleoclim.MultipleScalogram) – Multiple scalogram object containing pre-computed scalograms to use when calculating spectra, only works with wwz or cwt

Returns:

psd – A Multiple PSD object

Return type:

MultiplePSD

See also

pyleoclim.utils.spectral.mtm

Spectral analysis using the Multitaper approach

pyleoclim.utils.spectral.lomb_scargle

Spectral analysis using the Lomb-Scargle method

pyleoclim.utils.spectral.welch

Spectral analysis using the Welch segment approach

pyleoclim.utils.spectral.periodogram

Spectral analysis using the basic Fourier transform

pyleoclim.utils.spectral.wwz_psd

Spectral analysis using the Wavelet Weighted Z transform

pyleoclim.utils.spectral.cwt_psd

Spectral analysis using the continuous Wavelet Transform as implemented by Torrence and Compo

pyleoclim.utils.wavelet.make_freq_vector

Functions to create the frequency vector

pyleoclim.utils.tsutils.detrend

Detrending function

pyleoclim.core.series.Series.spectral

Spectral analysis for a single timeseries

pyleoclim.core.PSD.PSD

PSD object

pyleoclim.core.psds.MultiplePSD

Multiple PSD object

Examples

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
data = pyleo.Lipd(usr_path = url)
tslist = data.to_LipdSeriesList()
tslist = tslist[2:] # drop the first two series, which only concern age and depth
ms = pyleo.MultipleSeries(tslist)
ms_psd = ms.spectral()
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: MD982176.Stott.2004.lpd
Finished read: 1 record
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries
stackplot(figsize=None, savefig_settings=None, time_unit=None, xlim=None, fill_between_alpha=0.2, colors=None, cmap='tab10', norm=None, labels='auto', ylabel_fontsize=8, spine_lw=1.5, grid_lw=0.5, label_x_loc=-0.15, v_shift_factor=0.75, linewidth=1.5, plot_kwargs=None)[source]

Stack plot of multiple series

Time units are harmonized prior to plotting. Note that the plotting style is uniquely designed for this type of plot and cannot be properly reset with pyleoclim.set_style().

Parameters:
  • figsize (list) – Size of the figure.

  • savefig_settings (dictionary) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}. The default is None.

  • time_unit (str) –

    the target time unit, possible inputs: {

    ’year’, ‘years’, ‘yr’, ‘yrs’, ‘y BP’, ‘yr BP’, ‘yrs BP’, ‘year BP’, ‘years BP’, ‘ky BP’, ‘kyr BP’, ‘kyrs BP’, ‘ka BP’, ‘ka’, ‘my BP’, ‘myr BP’, ‘myrs BP’, ‘ma BP’, ‘ma’,

    } default is None, in which case the code picks the most common time unit in the collection. If no discernible winner can be found, the unit of the first series in the collection is used.

  • xlim (list) – The x-axis limit.

  • fill_between_alpha (float) – The transparency for the fill_between shades.

  • colors (a list of, or one, Python supported color code (a string of hex code or a tuple of rgba values)) – Colors for plotting. If None, the plotting will cycle the ‘tab10’ colormap; if only one color is specified, then all curves will be plotted with that single color; if a list of colors is specified, then the plotting will cycle that color list.

  • cmap (str) – The colormap to use when “colors” is None.

  • norm (matplotlib.colors.Normalize like) – The normalization for the colormap. If None, a linear normalization will be used.

  • labels (None, 'auto' or list) – If None, doesn’t add labels to the subplots. If ‘auto’, uses the labels passed during the creation of pyleoclim.Series. If list, pass a list of strings, one for each label. Default is ‘auto’.

  • spine_lw (float) – The linewidth for the spines of the axes.

  • grid_lw (float) – The linewidth for the gridlines.

  • label_x_loc (float) – The x location for the label of each curve.

  • v_shift_factor (float) – The factor for the vertical shift of each axis. The default value 3/4 means the top of the next axis will be located at 3/4 of the height of the previous one.

  • linewidth (float) – The linewidth for the curves.

  • ylabel_fontsize (int) – Size for ylabel font. Default is 8, to avoid crowding.

  • plot_kwargs (dict or list of dict) –

    Arguments to further customize the plot from matplotlib.pyplot.plot.

    • Dictionary: Arguments will be applied to all lines in the stackplots

    • List of dictionaries: Allows to customize one line at a time.

Returns:

fig, ax – the figure and axes handles, as illustrated in the examples below

See also

pyleoclim.utils.plotting.savefig

Saving figure in Pyleoclim

Examples

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
d = pyleo.Lipd(usr_path = url)
tslist = d.to_LipdSeriesList()
tslist = tslist[2:] # drop the first two series, which only concern age and depth
ms = pyleo.MultipleSeries(tslist)
fig, ax = ms.stackplot()
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: MD982176.Stott.2004.lpd
Finished read: 1 record
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries
../_images/api_92_2.png

Let’s change the labels on the left

sst = d.to_LipdSeries(number=5)
d18Osw = d.to_LipdSeries(number=3)
ms = pyleo.MultipleSeries([sst,d18Osw])

fig, ax = ms.stackplot(labels=['sst','d18Osw'])
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries
../_images/api_93_1.png

And let’s remove them completely

fig, ax = ms.stackplot(labels=None)
../_images/api_94_0.png

Now, let’s add markers to the timeseries.

fig, ax = ms.stackplot(labels=None, plot_kwargs={'marker':'o'})
../_images/api_95_0.png

Using different marker types on each series:

fig, ax = ms.stackplot(labels=None, plot_kwargs=[{'marker':'o'},{'marker':'^'}])
../_images/api_96_0.png
standardize()[source]

Standardize each series object in a collection

Returns:

ms – The standardized pyleoclim.MultipleSeries object

Return type:

MultipleSeries

Examples

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms_std = ms.standardize()
stripes(cmap='RdBu_r', sat=1.0, ref_period=None, figsize=None, savefig_settings=None, time_unit=None, labels='auto', label_color='gray', show_xaxis=False, common_time_kwargs=None, xlim=None, font_scale=0.8, x_offset=0.05)[source]

Represents a MultipleSeries object as a quilt of Ed Hawkins’ “stripes” patterns

To ensure comparability, constituent series are placed on a common time axis, using MultipleSeries.common_time(). To ensure consistent scaling, all series are Gaussianized prior to plotting.

Credit: https://showyourstripes.info/, Implementation: https://matplotlib.org/matplotblog/posts/warming-stripes/

Parameters:
  • cmap (str) – colormap name (https://matplotlib.org/stable/tutorials/colors/colormaps.html) Default is ‘RdBu_r’

  • ref_period (tuple, optional) – dates of the reference period, in the form (first, last). The default is None, which will pick the beginning and end of the common time axis.

  • figsize (list) – a list of two integers indicating the figure size (in inches)

  • sat (float > 0) – Controls the saturation of the colormap normalization by scaling the vmin, vmax in https://matplotlib.org/stable/tutorials/colors/colormapnorms.html. Default = 1.0

  • show_xaxis (bool) – flag indicating whether or not the x-axis should be shown (default = False)

  • savefig_settings (dictionary) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • ‘path’ must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in ‘path’, it will follow ‘format’

    • ‘format’ can be one of {‘pdf’, ‘eps’, ‘png’, ‘ps’}. The default is None.

  • time_unit (str) –

    the target time unit, possible inputs: {

    ‘year’, ‘years’, ‘yr’, ‘yrs’, ‘y BP’, ‘yr BP’, ‘yrs BP’, ‘year BP’, ‘years BP’, ‘ky BP’, ‘kyr BP’, ‘kyrs BP’, ‘ka BP’, ‘ka’, ‘my BP’, ‘myr BP’, ‘myrs BP’, ‘ma BP’, ‘ma’,

    } default is None, in which case the code picks the most common time unit in the collection. If no discernible winner can be found, the unit of the first series in the collection is used.

  • xlim (list) – The x-axis limit.

  • x_offset (float) – value controlling the horizontal offset between stripes and labels (default = 0.05)

  • labels (None, ‘auto’ or list) – If None, doesn’t add labels to the subplots. If ‘auto’, uses the labels passed during the creation of pyleoclim.Series. If list, pass a list of strings, one for each label. Default is ‘auto’.

  • common_time_kwargs (dict) – Optional arguments for common_time()

  • font_scale (float) – The scale for the font sizes. Default is 0.8.

Returns:

fig, ax – the figure and axes handles, as illustrated in the examples below

See also

pyleoclim.core.multipleseries.MultipleSeries.common_time

aligns the time axes of a MultipleSeries object

pyleoclim.utils.plotting.savefig

saving a figure in Pyleoclim

pyleoclim.core.series.Series.stripes

stripes representation in Pyleoclim

pyleoclim.utils.tsutils.gaussianize

mapping to a standard Normal distribution

Examples

import pyleoclim as pyleo

co2ts = pyleo.utils.load_dataset('AACO2')
lr04 = pyleo.utils.load_dataset('LR04')
edc = pyleo.utils.load_dataset('EDC-dD')
ms = lr04.flip() & edc & co2ts # create MS object
fig, ax = ms.stripes()
The two series have different lengths, left: 2115 vs right: 1901
Metadata are different:
value_unit property -- left: ‰, right: ppm
value_name property -- left: $\delta^{18} \mathrm{O}$ x (-1), right: $CO_2$
label property -- left: LR04 benthic stack, right: EPICA Dome C CO2
archiveType property -- left: MarineSediment, right: GlacierIce
importedFrom property -- left: None, right: https://www.ncei.noaa.gov/pub/data/paleo/icecore/antarctica/antarctica2015co2composite.txt
The two series have different lengths, left: 5785 vs right: 1901
Metadata are different:
lat property -- left: -75.1011, right: None
lon property -- left: 123.3478, right: None
elevation property -- left: 3233, right: None
time_unit property -- left: y BP, right: ky BP
value_unit property -- left: ‰, right: ppm
value_name property -- left: $\delta \mathrm{D}$, right: $CO_2$
label property -- left: EPICA Dome C dD, right: EPICA Dome C CO2
sensorType property -- left: ice sheet, right: None
observationType property -- left: hydrogen isotopes, right: None
importedFrom property -- left: https://www.ncei.noaa.gov/pub/data/paleo/icecore/antarctica/epica_domec/edc3deuttemp2007.txt, right: https://www.ncei.noaa.gov/pub/data/paleo/icecore/antarctica/antarctica2015co2composite.txt
control_archiveType property -- left: False, right: None
../_images/api_98_1.png

The default style has rather thick bands, intense colors, and too many stripes. The first issue can be solved by passing a figsize tuple; the second by adjusting the sat parameter; the third by passing a step of 0.5 (i.e. 500 y) to common_time(). Finally, the labels are too close to the edge of the plot, which can be adjusted with x_offset, like so:

import pyleoclim as pyleo

co2ts = pyleo.utils.load_dataset('AACO2')
lr04 = pyleo.utils.load_dataset('LR04')
edc = pyleo.utils.load_dataset('EDC-dD')
ms = lr04.flip() & edc & co2ts # create MS object
fig, ax = ms.stripes(figsize=(8,2.5),show_xaxis=True, sat = 0.8)
The two series have different lengths, left: 2115 vs right: 1901
Metadata are different:
value_unit property -- left: ‰, right: ppm
value_name property -- left: $\delta^{18} \mathrm{O}$ x (-1), right: $CO_2$
label property -- left: LR04 benthic stack, right: EPICA Dome C CO2
archiveType property -- left: MarineSediment, right: GlacierIce
importedFrom property -- left: None, right: https://www.ncei.noaa.gov/pub/data/paleo/icecore/antarctica/antarctica2015co2composite.txt
The two series have different lengths, left: 5785 vs right: 1901
Metadata are different:
lat property -- left: -75.1011, right: None
lon property -- left: 123.3478, right: None
elevation property -- left: 3233, right: None
time_unit property -- left: y BP, right: ky BP
value_unit property -- left: ‰, right: ppm
value_name property -- left: $\delta \mathrm{D}$, right: $CO_2$
label property -- left: EPICA Dome C dD, right: EPICA Dome C CO2
sensorType property -- left: ice sheet, right: None
observationType property -- left: hydrogen isotopes, right: None
importedFrom property -- left: https://www.ncei.noaa.gov/pub/data/paleo/icecore/antarctica/epica_domec/edc3deuttemp2007.txt, right: https://www.ncei.noaa.gov/pub/data/paleo/icecore/antarctica/antarctica2015co2composite.txt
control_archiveType property -- left: False, right: None
../_images/api_99_1.png
time_coverage_plot(figsize=[10, 3], marker=None, markersize=None, alpha=0.8, linestyle=None, linewidth=10, colors=None, cmap='turbo', norm=None, xlabel=None, ylabel=None, title=None, time_unit=None, legend=True, inline_legend=True, plot_kwargs=None, lgd_kwargs=None, label_x_offset=200, label_y_offset=0, savefig_settings=None, ax=None, ypad=None, invert_xaxis=False, invert_yaxis=False)[source]

A plot of the temporal coverage of the records in a MultipleSeries object organized by ranked length.

Inspired by Dr. Mara Y. McPartland.

Parameters:
  • figsize (list, optional) – Size of the figure. The default is [10, 3].

  • marker (str, optional) – Marker type. The default is None.

  • markersize (float, optional) – Marker size. The default is None.

  • alpha (float, optional) – Alpha of the lines

  • linestyle (str, optional) – Line style. The default is None.

  • linewidth (float, optional) – The width of the line. The default is 10.

  • colors (a list of, or one, Python supported color code (a string of hex code or a tuple of rgba values)) – Colors for plotting. If None, the plotting will cycle through the colormap given by cmap (‘turbo’ by default); if only one color is specified, then all curves will be plotted with that single color; if a list of colors is specified, then the plotting will cycle that color list.

  • cmap (str) – The colormap to use when “colors” is None. Default is ‘turbo’

  • norm (matplotlib.colors.Normalize) – The normalization for the colormap. If None, a linear normalization will be used.

  • xlabel (str, optional) – x-axis label. The default is None.

  • ylabel (str, optional) – y-axis label. The default is None.

  • title (str, optional) – Title. The default is None.

  • time_unit (str) –

    the target time unit, possible input: {

    ’year’, ‘years’, ‘yr’, ‘yrs’, ‘y BP’, ‘yr BP’, ‘yrs BP’, ‘year BP’, ‘years BP’, ‘ky BP’, ‘kyr BP’, ‘kyrs BP’, ‘ka BP’, ‘ka’, ‘my BP’, ‘myr BP’, ‘myrs BP’, ‘ma BP’, ‘ma’,

    } default is None, in which case the code picks the most common time unit in the collection. If no unambiguous winner can be found, the unit of the first series in the collection is used.

  • legend (bool, optional) – Whether to show the legend. The default is True.

  • inline_legend (bool, optional) – Whether to use inline labels or the default pyleoclim legend. This option overrides lgd_kwargs

  • plot_kwargs (dict, optional) – Plot parameters. The default is None.

  • lgd_kwargs (dict, optional) –

    Legend parameters. The default is None.

    If inline_legend is True, lgd_kwargs will be passed to ax.text() (see matplotlib.axes.Axes.text documentation) If inline_legend is False, lgd_kwargs will be passed to ax.legend() (see matplotlib.axes.Axes.legend documentation)

  • label_x_offset (float or list, optional) – Amount to offset label by in the x direction. Only used if inline_legend is True. Default is 200. If list, should have the same number of elements as the MultipleSeries object.

  • label_y_offset (float or list, optional) – Amount to offset label by in the y direction. Only used if inline_legend is True. Default is 0. If list, should have the same number of elements as the MultipleSeries object.

  • savefig_settings (dictionary, optional) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}. The default is None.

  • ax (matplotlib.axes.Axes, optional) – The matplotlib axes object onto which to plot. The default is None.

  • invert_xaxis (bool, optional) – if True, the x-axis of the plot will be inverted

  • invert_yaxis (bool, optional) – if True, the y-axis of the plot will be inverted

Returns:

fig, ax – the figure and axes handles, as illustrated in the examples below

See also

pyleoclim.utils.plotting.savefig

Saving figure in Pyleoclim

Examples

import pyleoclim as pyleo

co2ts = pyleo.utils.load_dataset('AACO2')
lr04 = pyleo.utils.load_dataset('LR04')
edc = pyleo.utils.load_dataset('EDC-dD')
ms = lr04.flip() & edc & co2ts # create MS object
fig, ax = ms.time_coverage_plot(label_y_offset=-.08) # fiddling with label offsets is sometimes necessary for aesthetics
The two series have different lengths, left: 2115 vs right: 1901
Metadata are different:
value_unit property -- left: ‰, right: ppm
value_name property -- left: $\delta^{18} \mathrm{O}$ x (-1), right: $CO_2$
label property -- left: LR04 benthic stack, right: EPICA Dome C CO2
archiveType property -- left: MarineSediment, right: GlacierIce
importedFrom property -- left: None, right: https://www.ncei.noaa.gov/pub/data/paleo/icecore/antarctica/antarctica2015co2composite.txt
The two series have different lengths, left: 5785 vs right: 1901
Metadata are different:
lat property -- left: -75.1011, right: None
lon property -- left: 123.3478, right: None
elevation property -- left: 3233, right: None
time_unit property -- left: y BP, right: ky BP
value_unit property -- left: ‰, right: ppm
value_name property -- left: $\delta \mathrm{D}$, right: $CO_2$
label property -- left: EPICA Dome C dD, right: EPICA Dome C CO2
sensorType property -- left: ice sheet, right: None
observationType property -- left: hydrogen isotopes, right: None
importedFrom property -- left: https://www.ncei.noaa.gov/pub/data/paleo/icecore/antarctica/epica_domec/edc3deuttemp2007.txt, right: https://www.ncei.noaa.gov/pub/data/paleo/icecore/antarctica/antarctica2015co2composite.txt
control_archiveType property -- left: False, right: None
../_images/api_100_1.png

Awkward vertical spacing can be adjusted by varying linewidth and figure size

import pyleoclim as pyleo

co2ts = pyleo.utils.load_dataset('AACO2')
lr04 = pyleo.utils.load_dataset('LR04')
edc = pyleo.utils.load_dataset('EDC-dD')
ms = lr04.flip() & edc & co2ts # create MS object
fig, ax = ms.time_coverage_plot(linewidth=20,figsize=[10,2],label_y_offset=-.1)
The two series have different lengths, left: 2115 vs right: 1901
Metadata are different:
value_unit property -- left: ‰, right: ppm
value_name property -- left: $\delta^{18} \mathrm{O}$ x (-1), right: $CO_2$
label property -- left: LR04 benthic stack, right: EPICA Dome C CO2
archiveType property -- left: MarineSediment, right: GlacierIce
importedFrom property -- left: None, right: https://www.ncei.noaa.gov/pub/data/paleo/icecore/antarctica/antarctica2015co2composite.txt
The two series have different lengths, left: 5785 vs right: 1901
Metadata are different:
lat property -- left: -75.1011, right: None
lon property -- left: 123.3478, right: None
elevation property -- left: 3233, right: None
time_unit property -- left: y BP, right: ky BP
value_unit property -- left: ‰, right: ppm
value_name property -- left: $\delta \mathrm{D}$, right: $CO_2$
label property -- left: EPICA Dome C dD, right: EPICA Dome C CO2
sensorType property -- left: ice sheet, right: None
observationType property -- left: hydrogen isotopes, right: None
importedFrom property -- left: https://www.ncei.noaa.gov/pub/data/paleo/icecore/antarctica/epica_domec/edc3deuttemp2007.txt, right: https://www.ncei.noaa.gov/pub/data/paleo/icecore/antarctica/antarctica2015co2composite.txt
control_archiveType property -- left: False, right: None
../_images/api_101_1.png
to_csv(path=None, *args, use_common_time=False, **kwargs)[source]

Export MultipleSeries to CSV

Parameters:
  • path (str, optional) – system path to save the file. The default is None, in which case the filename defaults to the poetic ‘MultipleSeries.csv’ in the current directory.

  • *args – Arguments and keyword arguments to pass to common_time.

  • **kwargs – Arguments and keyword arguments to pass to common_time.

  • use_common_time (bool) – Set to True if you want to use common_time to align the Series to a common timescale. Else, times for which some Series don’t have values will be filled with NaN (default).

Return type:

None.

Examples

This will place the NINO3 and SOI datasets into a MultipleSeries object and export it to enso.csv.

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms.label = 'enso'
ms.to_csv()
to_json(path=None)[source]

Export the pyleoclim.MultipleSeries object to a json file

Parameters:

path (string, optional) – The path to the file. The default is None, resulting in a file saved in the current working directory using the label for the dataset as filename if available or ‘mulitpleseries.json’ if label is not provided.

Return type:

None.
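
Examples

A minimal sketch (not part of the original docstring); with the label set to 'enso', the file is written in the current working directory, mirroring the to_csv example below:

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms.label = 'enso'
ms.to_json()  # exports the collection to a JSON file named after the label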

to_pandas(paleo_style=False, *args, use_common_time=False, **kwargs)[source]

Align Series and place in DataFrame.

Column names will be taken from each Series’ label.

Parameters:
  • paleo_style (boolean, optional) – If True, will format datetime as the common time vector and assign as index name the time_name of the first series in the object.

  • *args – Arguments and keyword arguments to pass to common_time.

  • **kwargs – Arguments and keyword arguments to pass to common_time.

  • use_common_time (bool) – Pass True if you want to use common_time to align the Series to have common times. Else, times for which some Series don’t have values will be filled with NaN (default).

Return type:

pandas.DataFrame
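
Examples

A minimal sketch (not part of the original docstring), showing the effect of use_common_time:

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
df = ms.to_pandas()                          # union of time points; NaN where a series has no value
df_ct = ms.to_pandas(use_common_time=True)   # align the series via common_time() first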

view()[source]

Generates a DataFrame version of the MultipleSeries object, suitable for viewing in a Jupyter Notebook

Return type:

pd.DataFrame

Examples

import pyleoclim as pyleo

soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
ms = soi & nino
ms.name = 'ENSO'
ms.view()
             Southern Oscillation Index  NINO3 SST
Time
1871.000000                         NaN  -0.358250
1871.083333                         NaN  -0.292458
1871.166667                         NaN  -0.143583
1871.250000                         NaN  -0.149625
1871.333333                         NaN  -0.274250
...                                 ...        ...
2019.583333                        -0.1        NaN
2019.666667                        -1.2        NaN
2019.750000                        -0.4        NaN
2019.833333                        -0.8        NaN
2019.916667                        -0.6        NaN

1788 rows × 2 columns

wavelet(method='cwt', settings={}, freq_method='log', freq_kwargs=None, verbose=False, mute_pbar=False)[source]

Wavelet analysis

Parameters:
  • method (str; {'wwz', 'cwt'}) –

    • cwt - the continuous wavelet transform (as per Torrence and Compo [1998]) is appropriate only for evenly-spaced series.

    • wwz - the weighted wavelet Z-transform (as per Foster [1996]) is appropriate for both evenly and unevenly-spaced series.

    Default is cwt, returning an error if the Series is unevenly-spaced.

  • settings (dict) – Settings for the particular method. The default is {}.

  • freq_method (str; {'log', 'scale', 'nfft', 'lomb_scargle', 'welch'}) –

  • freq_kwargs (dict) – Arguments for frequency vector

  • verbose (bool) – If True, will print warning messages if there is any

  • mute_pbar (bool, optional) – Whether to mute the progress bar. The default is False.

Returns:

scals – A Multiple Scalogram object

Return type:

MultipleScalogram

See also

pyleoclim.utils.wavelet.wwz

wwz function

pyleoclim.utils.wavelet.cwt

cwt function

pyleoclim.utils.wavelet.make_freq_vector

Functions to create the frequency vector

pyleoclim.utils.tsutils.detrend

Detrending function

pyleoclim.core.series.Series.wavelet

wavelet analysis on single object

pyleoclim.core.scalograms.MultipleScalogram

Multiple Scalogram object

References

Torrence, C. and G. P. Compo, 1998: A Practical Guide to Wavelet Analysis. Bull. Amer. Meteor. Soc., 79, 61-78. Python routines available at http://paos.colorado.edu/research/wavelets/

Examples

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
data = pyleo.Lipd(usr_path = url)
tslist = data.to_LipdSeriesList()
tslist = tslist[2:] # drop the first two series which only contain age and depth
ms = pyleo.MultipleSeries(tslist)
wav = ms.wavelet(method='wwz')
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: MD982176.Stott.2004.lpd
Finished read: 1 record
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries

MultipleGeoSeries (pyleoclim.MultipleGeoSeries)

class pyleoclim.core.multiplegeoseries.MultipleGeoSeries(series_list, time_unit=None, label=None)[source]

MultipleGeoSeries object.

This object handles a collection of the type GeoSeries and can be created from a list of such objects. MultipleGeoSeries should be used when the need to run analysis on multiple records arises, such as running principal component analysis. Some of the methods automatically transform the time axis prior to analysis to ensure consistency.

Parameters:
  • series_list (list) – a list of pyleoclim.GeoSeries objects

  • time_unit (str) – The target time unit for every series in the list. If None, then no conversion will be applied; Otherwise, the time unit of every series in the list will be converted to the target.

  • label (str) – label of the collection of timeseries (e.g. ‘Euro 2k’)

Examples

import pyleoclim as pyleo
from pylipd.utils.dataset import load_dir
lipd = load_dir(name='Pages2k')
df = lipd.get_timeseries_essentials()
dfs = df.query("archiveType in ('tree','documents','coral','lake sediment')")
# place in a MultipleGeoSeries object
ts_list = []
for _, row in dfs.iterrows():
    ts_list.append(pyleo.GeoSeries(time=row['time_values'],value=row['paleoData_values'],
                                   time_name=row['time_variableName'],value_name=row['paleoData_variableName'],
                                   time_unit=row['time_units'], value_unit=row['paleoData_units'],
                                   lat = row['geo_meanLat'], lon = row['geo_meanLon'],
                                   archiveType = row['archiveType'], verbose = False,
                                   label=row['dataSetName']+'_'+row['paleoData_variableName']))

Euro2k = pyleo.MultipleGeoSeries(ts_list, label='Euro2k',time_unit='years AD')
Euro2k.map()
Loading 16 LiPD files
Loaded..
(<Figure size 1800x600 with 2 Axes>,
 {'map': <GeoAxes: xlabel='lon', ylabel='lat'>, 'leg': <Axes: >})
../_images/api_105_8.png

Methods

append(ts[, inplace])

Append timeseries ts to MultipleSeries object

bin(**kwargs)

Aligns the time axes of a MultipleSeries object, via binning.

common_time([method, step, start, stop, ...])

Aligns the time axes of a MultipleSeries object

convert_time_unit([time_unit])

Convert the time units of the object

copy()

Copy the object

correlation([target, timespan, alpha, ...])

Calculate the correlation between a MultipleSeries and a target Series

detrend([method])

Detrend timeseries

equal_lengths()

Test whether all series in object have equal length

filter([cutoff_freq, cutoff_scale, method])

Filtering the timeseries in the MultipleSeries object

flip([axis])

Flips the Series along one or both axes

from_json(path)

Creates a pyleoclim.MultipleSeries from a JSON file

gkernel(**kwargs)

Aligns the time axes of a MultipleSeries object, via Gaussian kernel.

increments([step_style, verbose])

Extract grid properties (start, stop, step) of all the Series objects in a collection.

interp(**kwargs)

Aligns the time axes of a MultipleSeries object, via interpolation.

map([marker, hue, size, cmap, edgecolor, ...])

Map the records in the object as an information-rich scatterplot on a Cartopy map.

pca([weights, missing, tol_em, max_em_iter])

Principal Component Analysis (Empirical Orthogonal Functions)

plot([figsize, marker, markersize, ...])

Plot multiple timeseries on the same axis

remove(label)

Remove Series based on given label.

resolution([time_unit, verbose, statistic])

Generate a MultipleResolution object

sel([value, time, tolerance])

Slice the MultipleSeries based on 'value' or 'time'.

spectral([method, settings, mute_pbar, ...])

Perform spectral analysis on the timeseries

stackplot([figsize, savefig_settings, ...])

Stack plot of multiple series

standardize()

Standardize each series object in a collection

stripes([cmap, sat, ref_period, figsize, ...])

Represents a MultipleSeries object as a quilt of Ed Hawkins' "stripes" patterns

time_coverage_plot([figsize, marker, ...])

A plot of the temporal coverage of the records in a MultipleSeries object organized by ranked length.

time_geo_plot([figsize, marker, markersize, ...])

A plot of the temporal coverage of the records in a MultipleGeoSeries object organized by latitude or longitude.

to_csv([path, use_common_time])

Export MultipleSeries to CSV

to_json([path])

Export the pyleoclim.MultipleSeries object to a json file

to_pandas([paleo_style, use_common_time])

Align Series and place in DataFrame.

view()

Generates a DataFrame version of the MultipleSeries object, suitable for viewing in a Jupyter Notebook

wavelet([method, settings, freq_method, ...])

Wavelet analysis

map(marker='archiveType', hue='archiveType', size=None, cmap=None, edgecolor='k', projection='auto', proj_default=True, crit_dist=5000, colorbar=True, background=True, borders=False, coastline=True, rivers=False, lakes=False, land=True, ocean=True, figsize=None, fig=None, scatter_kwargs=None, gridspec_kwargs=None, legend=True, gridspec_slot=None, lgd_kwargs=None, savefig_settings=None, **kwargs)[source]
Parameters:
  • hue (string, optional) – Grouping variable that will produce points with different colors. Can be either categorical or numeric, although color mapping will behave differently in the latter case. The default is ‘archiveType’.

  • size (string, optional) – Grouping variable that will produce points with different sizes. Expects to be numeric. Any data without a value for the size variable will be filtered out. The default is None.

  • marker (string, optional) – Grouping variable that will produce points with different markers. Can have a numeric dtype but will always be treated as categorical. The default is ‘archiveType’.

  • edgecolor (color (string) or list of rgba tuples, optional) – Color of marker edge. The default is ‘k’.

  • projection (string) – the map projection. Available projections: ‘Robinson’, ‘PlateCarree’, ‘AlbersEqualArea’, ‘AzimuthalEquidistant’, ‘EquidistantConic’, ‘LambertConformal’, ‘LambertCylindrical’, ‘Mercator’, ‘Miller’, ‘Mollweide’, ‘Orthographic’, ‘Sinusoidal’, ‘Stereographic’, ‘TransverseMercator’, ‘UTM’, ‘InterruptedGoodeHomolosine’, ‘RotatedPole’, ‘OSGB’, ‘EuroPP’, ‘Geostationary’, ‘NearsidePerspective’, ‘EckertI’, ‘EckertII’, ‘EckertIII’, ‘EckertIV’, ‘EckertV’, ‘EckertVI’, ‘EqualEarth’, ‘Gnomonic’, ‘LambertAzimuthalEqualArea’, ‘NorthPolarStereo’, ‘OSNI’, ‘SouthPolarStereo’. By default, projection == ‘auto’, so the projection will be picked based on the degree of clustering of the sites.

  • proj_default (bool, optional) – If True, uses the standard projection attributes. Enter new attributes in a dictionary to change them. Lists of attributes can be found in the Cartopy documentation. The default is True.

  • crit_dist (float, optional) – critical radius for projection choice. Default: 5000 km Only active if projection == ‘auto’

  • background (bool, optional) – If True, uses a shaded relief background (only one available in Cartopy) Default is on (True).

  • borders (bool or dict, optional) – Draws country borders. If a dictionary of formatting arguments is supplied (e.g. linewidth, alpha), will draw according to specifications. Default is off (False).

  • coastline (bool or dict, optional) – Draws the coastline. If a dictionary of formatting arguments is supplied (e.g. linewidth, alpha), will draw according to specifications. Default is on (True).

  • land (bool or dict, optional) – Colors land masses. If a dictionary of formatting arguments is supplied (e.g. color, alpha), will draw according to specifications. Default is on (True). Overridden if background=True.

  • ocean (bool or dict, optional) – Colors oceans. If a dictionary of formatting arguments is supplied (e.g. color, alpha), will draw according to specifications. Default is on (True). Overridden if background=True.

  • rivers (bool or dict, optional) – Draws major rivers. If a dictionary of formatting arguments is supplied (e.g. linewidth, alpha), will draw according to specifications. Default is off (False).

  • lakes (bool or dict, optional) – Draws major lakes. If a dictionary of formatting arguments is supplied (e.g. color, alpha), will draw according to specifications. Default is off (False).

  • figsize (list or tuple, optional) – Size for the figure

  • scatter_kwargs (dict, optional) – Dict of arguments available in seaborn.scatterplot. Dictionary of arguments available in matplotlib.pyplot.scatter.

  • legend (bool, optional) – Whether to draw a legend on the figure. Default is True.

  • colorbar (bool, optional) – Whether to draw a colorbar on the figure if the data associated with hue are numeric. Default is True.

  • lgd_kwargs (dict, optional) – Dictionary of arguments for matplotlib.pyplot.legend.

  • savefig_settings (dict, optional) –

    Dictionary of arguments for matplotlib.pyplot.savefig.

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • extent (str or list, optional) – The spatial extent of the map. The default is ‘global’.

  • cmap (string or list, optional) – Matplotlib supported colormap id or list of colors for creating a colormap. See choosing a matplotlib colormap. The default is None.

  • fig (matplotlib.pyplot.figure, optional) – See matplotlib.pyplot.figure (https://matplotlib.org/3.5.0/api/_as_gen/matplotlib.pyplot.figure.html). The default is None.

  • gridspec_slot (Gridspec slot, optional) – If generating a map for a multi-plot, pass a gridspec slot. The default is None.

  • gridspec_kwargs (dict, optional) – Function assumes the possibility of a colorbar, map, and legend. A list of floats associated with the keyword width_ratios will assume the first (index=0) is the relative width of the colorbar, the second to last (index=-2) is the relative width of the map, and the last (index=-1) is the relative width of the area for the legend. For information about Gridspec configuration, refer to the Matplotlib documentation (https://matplotlib.org/3.5.0/api/_as_gen/matplotlib.gridspec.GridSpec.html). The default is None.

  • kwargs (dict, optional) –

    • ‘missing_val_hue’, ‘missing_val_marker’, ‘missing_val_label’ can all be used to change the way missing values are represented (‘k’ and ‘?’ are the default hue and marker values, and will be associated with the label ‘missing’).

    • ’hue_mapping’ and ‘marker_mapping’ can be used to submit dictionaries mapping hue values to colors and marker values to markers. Does not replace passing a string value for hue or marker.

Returns:

Matplotlib figure, and a dictionary of ax objects which includes as many as three items: ‘cb’ (colorbar ax), ‘map’ (scatter map), and ‘leg’ (legend ax)

Return type:

fig, ax_d

See also

pyleoclim.utils.mapping.scatter_map

information-rich scatterplot on Cartopy map

Examples

import pyleoclim as pyleo
from pylipd.utils.dataset import load_dir
lipd = load_dir(name='Pages2k')
df = lipd.get_timeseries_essentials()
dfs = df.query("archiveType in ('tree','documents','coral','lake sediment','borehole')")
# place in a MultipleGeoSeries object
ts_list = []
for _, row in dfs.iterrows():
    ts_list.append(pyleo.GeoSeries(time=row['time_values'],value=row['paleoData_values'],
                                   time_name=row['time_variableName'],value_name=row['paleoData_variableName'],
                                   time_unit=row['time_units'], value_unit=row['paleoData_units'],
                                   lat = row['geo_meanLat'], lon = row['geo_meanLon'],
                                   elevation = row['geo_meanElev'], observationType = row['paleoData_proxy'],
                                   archiveType = row['archiveType'], verbose = False,
                                   label=row['dataSetName']+'_'+row['paleoData_variableName']))

Euro2k = pyleo.MultipleGeoSeries(ts_list, label='Euro2k',time_unit='years AD')
Euro2k.map()
Loading 16 LiPD files
Loaded..
(<Figure size 1800x600 with 2 Axes>,
 {'map': <GeoAxes: xlabel='lon', ylabel='lat'>, 'leg': <Axes: >})
../_images/api_106_8.png

By default, a projection is picked based on the degree of geographic clustering of the sites. To focus on Europe and use a more local projection, do:

eur_coord = {'central_latitude':45, 'central_longitude':20}
Euro2k.map(projection='Orthographic',proj_default=eur_coord)
(<Figure size 1800x700 with 2 Axes>,
 {'map': <GeoAxes: xlabel='lon', ylabel='lat'>, 'leg': <Axes: >})
../_images/api_107_1.png

By default, the shape and colors of symbols denote proxy archives; however, one can use either graphical device to convey other information. For instance, if elevation is available, it may be displayed by size, like so:

Euro2k.map(projection='Orthographic', size='elevation', proj_default=eur_coord)
(<Figure size 1800x700 with 2 Axes>,
 {'map': <GeoAxes: xlabel='lon', ylabel='lat'>, 'leg': <Axes: >})
../_images/api_108_1.png

Same with observationType:

Euro2k.map(projection='Orthographic', hue = 'observationType', proj_default=eur_coord)
(<Figure size 1800x700 with 2 Axes>,
 {'map': <GeoAxes: xlabel='lon', ylabel='lat'>, 'leg': <Axes: >})
../_images/api_109_1.png

All three sources of information may be combined, but the figure height will need to be enlarged manually to fit the legend:

Euro2k.map(projection='Orthographic',hue='observationType',
           size='elevation', proj_default=eur_coord, figsize=[18, 8])
(<Figure size 1800x800 with 2 Axes>,
 {'map': <GeoAxes: xlabel='lon', ylabel='lat'>, 'leg': <Axes: >})
../_images/api_110_1.png
pca(weights=None, missing='fill-em', tol_em=0.005, max_em_iter=100, **pca_kwargs)[source]

Principal Component Analysis (Empirical Orthogonal Functions)

Decomposition of MultipleGeoSeries object in terms of orthogonal basis functions. Tolerant to missing values, infilled by an EM algorithm.

Do make sure the time axes are aligned, however! (e.g. use common_time())

Algorithm from statsmodels: https://www.statsmodels.org/stable/generated/statsmodels.multivariate.pca.PCA.html

Parameters:
  • weights (ndarray, optional) – Series weights to use after transforming data according to standardize or demean when computing the principal components.

  • missing ({str, None}) –

    Method for missing data. Choices are:

    • ’drop-row’ - drop rows with missing values.

    • ’drop-col’ - drop columns with missing values.

    • ’drop-min’ - drop either rows or columns, choosing by data retention.

    • ’fill-em’ - use EM algorithm to fill missing values [default]. ncomp should be set to the number of factors required.

    • None - raises an error if the data contain NaN values.

  • tol_em (float) – Tolerance to use when checking for convergence of the EM algorithm.

  • max_em_iter (int) – Maximum iterations for the EM algorithm.

Returns:

res – Resulting pyleoclim.MultivariateDecomp object

Return type:

MultivariateDecomp

See also

pyleoclim.utils.tsutils.eff_sample_size

Effective Sample Size of timeseries y

pyleoclim.core.multivardecomp.MultivariateDecomp

The multivariate decomposition object

pyleoclim.core.multipleseries.MultipleSeries.common_time

align time axes

Examples

from pylipd.utils.dataset import load_dir
lipd = load_dir(name='Pages2k') # this loads a small subset of the PAGES 2k database
lipd_euro = lipd.filter_by_geo_bbox(-20,20,40,80)
df = lipd_euro.get_timeseries_essentials()
dfs = df.query("archiveType in ('tree') & paleoData_variableName not in ('year')")
# place in a MultipleGeoSeries object
ts_list = []
for _, row in dfs.iterrows():
    ts_list.append(pyleo.GeoSeries(time=row['time_values'],value=row['paleoData_values'],
                                   time_name=row['time_variableName'],value_name=row['paleoData_variableName'],
                                   time_unit=row['time_units'], value_unit=row['paleoData_units'],
                                   lat = row['geo_meanLat'], lon = row['geo_meanLon'],
                                   elevation = row['geo_meanElev'], observationType = row['paleoData_proxy'],
                                   archiveType = row['archiveType'], verbose = False,
                                   label=row['dataSetName']+'_'+row['paleoData_variableName']))

Euro2k = pyleo.MultipleGeoSeries(ts_list, label='Euro2k',time_unit='years AD')

res = Euro2k.common_time().pca() # carry out PCA
type(res) # the result is a MultivariateDecomp object
Loading 16 LiPD files
Loaded..
pyleoclim.core.multivardecomp.MultivariateDecomp

To plot the eigenvalue spectrum:

res.screeplot()
The provided eigenvalue array has only one dimension. UQ defaults to NB82
(<Figure size 600x400 with 1 Axes>,
 <Axes: title={'center': 'Euro2k PCA eigenvalues'}, xlabel='Mode index $i$', ylabel='$\\lambda_i$'>)
../_images/api_112_2.png

To plot the first mode, equivalent to res.modeplot(index=0):

res.modeplot()
(<Figure size 800x800 with 5 Axes>,
 {'pc': <Axes: xlabel='Time [years AD]', ylabel='$PC_1$'>,
  'psd': <Axes: xlabel='Period [years]', ylabel='PSD'>,
  'map': {'cb': <Axes: ylabel='EOF'>,
   'map': <GeoAxes: xlabel='lon', ylabel='lat'>,
   'leg': <Axes: >}})
../_images/api_113_1.png

To plot the second (note the zero-based indexing):

res.modeplot(index=1)
(<Figure size 800x800 with 5 Axes>,
 {'pc': <Axes: xlabel='Time [years AD]', ylabel='$PC_2$'>,
  'psd': <Axes: xlabel='Period [years]', ylabel='PSD'>,
  'map': {'cb': <Axes: ylabel='EOF'>,
   'map': <GeoAxes: xlabel='lon', ylabel='lat'>,
   'leg': <Axes: >}})
../_images/api_114_1.png

One can use map semantics to display the observation type as well:

res.modeplot(index=1, marker='observationType', size='elevation')
(<Figure size 800x800 with 5 Axes>,
 {'pc': <Axes: xlabel='Time [years AD]', ylabel='$PC_2$'>,
  'psd': <Axes: xlabel='Period [years]', ylabel='PSD'>,
  'map': {'cb': <Axes: ylabel='EOF'>,
   'map': <GeoAxes: xlabel='lon', ylabel='lat'>,
   'leg': <Axes: >}})
../_images/api_115_1.png

There are many ways to configure the map component. As a simple example, specifying the projection:

res.modeplot(index=1, marker='observationType', size='elevation',
    map_kwargs={'projection':'Robinson'})
(<Figure size 800x800 with 5 Axes>,
 {'pc': <Axes: xlabel='Time [years AD]', ylabel='$PC_2$'>,
  'psd': <Axes: xlabel='Period [years]', ylabel='PSD'>,
  'map': {'cb': <Axes: ylabel='EOF'>,
   'map': <GeoAxes: xlabel='lon', ylabel='lat'>,
   'leg': <Axes: >}})
../_images/api_116_1.png

Or dive into the nuances of gridspec and legend configurations:

res.modeplot(index=1, marker='observationType', size='elevation',
            map_kwargs={'projection':'Robinson',
                        'gridspec_kwargs': {'width_ratios': [.5, 1,14, 4], 'wspace':-.065},
                        'lgd_kwargs':{'bbox_to_anchor':[-.015,1]}})
(<Figure size 800x800 with 5 Axes>,
 {'pc': <Axes: xlabel='Time [years AD]', ylabel='$PC_2$'>,
  'psd': <Axes: xlabel='Period [years]', ylabel='PSD'>,
  'map': {'cb': <Axes: ylabel='EOF'>,
   'map': <GeoAxes: xlabel='lon', ylabel='lat'>,
   'leg': <Axes: >}})
../_images/api_117_1.png
time_geo_plot(figsize=[10, 3], marker=None, markersize=None, alpha=0.8, y_criteria='lat', linestyle=None, linewidth=10, colors=None, cmap='turbo', norm=None, xlabel=None, ylabel=None, title=None, time_unit=None, legend=True, inline_legend=False, plot_kwargs=None, lgd_kwargs=None, label_x_offset=200, label_y_offset=0, savefig_settings=None, ax=None, invert_xaxis=False, invert_yaxis=False)[source]

A plot of the temporal coverage of the records in a MultipleGeoSeries object organized by latitude or longitude.

Similar in behaviour to MultipleSeries.time_coverage_plot

Inspired by Dr. Mara Y. McPartland.

Parameters:
  • figsize (list, optional) – Size of the figure. The default is [10, 3].

  • marker (str, optional) – Marker type. The default is None.

  • markersize (float, optional) – Marker size. The default is None.

  • alpha (float, optional) – Alpha of the lines

  • y_criteria (str, optional) – Criterion for the creation of the y-axis. Can be {‘lat’, ‘lon’}

  • linestyle (str, optional) – Line style. The default is None.

  • linewidth (float, optional) – The width of the line. The default is 10.

  • colors (a list of, or one, Python supported color code (a string of hex code or a tuple of rgba values)) – Colors for plotting. If None, the plotting will cycle through the colormap specified by cmap; if only one color is specified, then all curves will be plotted with that single color; if a list of colors is specified, then the plotting will cycle that color list.

  • cmap (str) – The colormap to use when “colors” is None. Default is ‘turbo’.

  • norm (matplotlib.colors.Normalize) – The normalization for the colormap. If None, a linear normalization will be used.

  • xlabel (str, optional) – x-axis label. The default is None.

  • ylabel (str, optional) – y-axis label. The default is None.

  • title (str, optional) – Title. The default is None.

  • time_unit (str) –

    the target time unit, possible input: {

    ’year’, ‘years’, ‘yr’, ‘yrs’, ‘y BP’, ‘yr BP’, ‘yrs BP’, ‘year BP’, ‘years BP’, ‘ky BP’, ‘kyr BP’, ‘kyrs BP’, ‘ka BP’, ‘ka’, ‘my BP’, ‘myr BP’, ‘myrs BP’, ‘ma BP’, ‘ma’,

    } default is None, in which case the code picks the most common time unit in the collection. If no unambiguous winner can be found, the unit of the first series in the collection is used.

  • legend (bool, optional) – Whether to show the legend. The default is True.

  • inline_legend (bool, optional) – Whether to use inline labels or the default pyleoclim legend. This option overrides lgd_kwargs (see the sketch after the Examples below).

  • plot_kwargs (dict, optional) – Plot parameters. The default is None.

  • lgd_kwargs (dict, optional) –

    Legend parameters. The default is None.

    If inline_legend is True, lgd_kwargs will be passed to ax.text() (see matplotlib.axes.Axes.text documentation) If inline_legend is False, lgd_kwargs will be passed to ax.legend() (see matplotlib.axes.Axes.legend documentation)

  • label_x_offset (float or list, optional) – Amount to offset label by in the x direction. Only used if inline_legend is True. Default is 200. If list, should have the same number of elements as the MultipleSeries object.

  • label_y_offset (float or list, optional) – Amount to offset label by in the y direction. Only used if inline_legend is True. Default is 0. If list, should have the same number of elements as the MultipleSeries object.

  • savefig_settings (dictionary, optional) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

    The default is None.

  • ax (matplotlib.ax, optional) – The matplotlib axis onto which to return the figure. The default is None.

  • invert_xaxis (bool, optional) – if True, the x-axis of the plot will be inverted

  • invert_yaxis (bool, optional) – if True, the y-axis of the plot will be inverted

Returns:

See also

pyleoclim.multipleseries.MultipleSeries.time_coverage_plot

pyleoclim.utils.plotting.savefig

Saving figure in Pyleoclim

Examples

from pylipd.utils.dataset import load_dir
lipd = load_dir(name='Pages2k')
df = lipd.get_timeseries_essentials()
dfs = df.query("archiveType in ('tree','documents','coral','lake sediment')")
# place in a MultipleGeoSeries object
ts_list = []
for _, row in dfs.iloc[:5].iterrows():
    ts_list.append(pyleo.GeoSeries(time=row['time_values'],value=row['paleoData_values'],
                                time_name=row['time_variableName'],value_name=row['paleoData_variableName'],
                                time_unit=row['time_units'], value_unit=row['paleoData_units'],
                                lat = row['geo_meanLat'], lon = row['geo_meanLon'],
                                archiveType = row['archiveType'], verbose = False,
                                label=row['dataSetName']+'_'+row['paleoData_variableName']))

ms = pyleo.MultipleGeoSeries(ts_list, time_unit='years AD')
ms.time_geo_plot()
Loading 16 LiPD files
Loaded..
(<Figure size 1000x300 with 1 Axes>,
 <Axes: xlabel='Time [years AD]', ylabel='Latitude'>)
../_images/api_118_8.png
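
To label each curve inline instead of using a legend (a sketch continuing the example above; the offset value is purely illustrative):

ms.time_geo_plot(inline_legend=True, label_x_offset=100)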

EnsembleSeries (pyleoclim.EnsembleSeries)

class pyleoclim.core.ensembleseries.EnsembleSeries(series_list)[source]

EnsembleSeries object

The EnsembleSeries object is a child of the MultipleSeries object, that is, a special case of MultipleSeries, aiming for ensembles of similar series. Ensembles usually arise from age modeling or Bayesian calibrations. All members of an EnsembleSeries object are assumed to share identical labels and units.

All methods available for MultipleSeries are available for EnsembleSeries. Some functions were modified for the special case of ensembles. The class enables ensemble-oriented methods for computation (e.g., quantiles) and visualization (e.g., envelope plot) that are unavailable to other classes.
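
For instance, here is a minimal sketch of that workflow on a synthetic ensemble (this assumes pyleoclim imported as pyleo and numpy as np, as in the examples further below):

import numpy as np
import pyleoclim as pyleo

nt, nn = 500, 30   # series length and number of ensemble members
t, v = pyleo.utils.gen_ts(model='colored_noise', nt=nt, alpha=1.0)   # common signal
series_list = []
for _ in range(nn):   # add independent white noise to each member
    series_list.append(pyleo.Series(time=t, value=v + np.random.randn(nt), verbose=False))

ens = pyleo.EnsembleSeries(series_list)
ens_qs = ens.quantiles(qs=[0.025, 0.5, 0.975])   # ensemble quantiles (another EnsembleSeries)
fig, ax = ens.plot_envelope()                    # median and shaded quantile bands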

Methods

append(ts[, inplace])

Append timeseries ts to MultipleSeries object

bin(**kwargs)

Aligns the time axes of a MultipleSeries object, via binning.

common_time([method, step, start, stop, ...])

Aligns the time axes of a MultipleSeries object

convert_time_unit([time_unit])

Convert the time units of the object

copy()

Copy the object

correlation([target, timespan, alpha, ...])

Calculate the correlation between an EnsembleSeries object to a target.

detrend([method])

Detrend timeseries

equal_lengths()

Test whether all series in object have equal length

filter([cutoff_freq, cutoff_scale, method])

Filtering the timeseries in the MultipleSeries object

flip([axis])

Flips the Series along one or both axes

from_json(path)

Creates a pyleoclim.MultipleSeries from a JSON file

gkernel(**kwargs)

Aligns the time axes of a MultipleSeries object, via Gaussian kernel.

histplot([figsize, title, savefig_settings, ...])

Plots the distribution of the timeseries across ensembles

increments([step_style, verbose])

Extract grid properties (start, stop, step) of all the Series objects in a collection.

interp(**kwargs)

Aligns the time axes of a MultipleSeries object, via interpolation.

make_labels()

Initialization of labels

pca([weights, name, missing, tol_em, ...])

Principal Component Analysis (Empirical Orthogonal Functions)

plot([figsize, marker, markersize, ...])

Plot multiple timeseries on the same axis

plot_envelope([figsize, qs, xlabel, ylabel, ...])

Plot EnsembleSeries as an envelope.

plot_traces([figsize, xlabel, ylabel, ...])

Plot EnsembleSeries as a subset of traces.

quantiles([qs, axis])

Calculate quantiles of an EnsembleSeries object.

remove(label)

Remove Series based on given label.

resolution([time_unit, verbose, statistic])

Generate a MultipleResolution object

sel([value, time, tolerance])

Slice MultipleSeries based on 'value' or 'time'.

slice(timespan)

Selects a limited time span from the object

spectral([method, settings, mute_pbar, ...])

Perform spectral analysis on the timeseries

stackplot([figsize, savefig_settings, xlim, ...])

Stack plot of multiple series

standardize()

Standardize each series object in a collection

stripes([cmap, sat, ref_period, figsize, ...])

Represents a MultipleSeries object as a quilt of Ed Hawkins' "stripes" patterns

time_coverage_plot([figsize, marker, ...])

A plot of the temporal coverage of the records in a MultipleSeries object organized by ranked length.

to_array([axis, labels])

Returns an ensemble as a numpy array with an optional list for labels.

to_csv([path, use_common_time])

Export MultipleSeries to CSV

to_dataframe([axis])

Export the ensemble as a Pandas DataFrame, with members of the ensemble as columns.

to_json([path])

Export the pyleoclim.MultipleSeries object to a json file

to_pandas([paleo_style, use_common_time])

Align Series and place in DataFrame.

view()

Generates a DataFrame version of the MultipleSeries object, suitable for viewing in a Jupyter Notebook

wavelet([method, settings, freq_method, ...])

Wavelet analysis

correlation(target=None, timespan=None, alpha=0.05, method='ttest', statistic='pearsonr', settings=None, fdr_kwargs=None, common_time_kwargs=None, mute_pbar=False, seed=None)[source]

Calculate the correlation between an EnsembleSeries object to a target.

If the target is not specified, then the 1st member of the ensemble will be the target. Note that the FDR approach is applied by default to determine the significance of the p-values (more information in See Also below).

Parameters:
  • target (Series or EnsembleSeries) – A pyleoclim Series object or EnsembleSeries object. When the target is also an EnsembleSeries object, then the calculation of correlation is performed in a one-to-one sense, and the output list of correlation values and p-values will be the size of the series_list of the self object. That is, if the self object contains n Series, and the target contains n+m Series, then only the first n Series from the object will be used for the calculation; otherwise, if the target contains only n-m Series, then the first m Series in the target will be used twice in sequence (see the sketch at the end of the Examples below).

  • timespan (tuple) – The time interval over which to perform the calculation

  • alpha (float) – The significance level (0.05 by default)

  • method (str, {'ttest','built-in','ar1sim','phaseran'}) – method for significance testing. Default is ‘ttest’

  • statistic (str) – The name of the statistic used to measure the association, to be chosen from a subset of https://docs.scipy.org/doc/scipy/reference/stats.html#association-correlation-tests Currently supported: [‘pearsonr’,’spearmanr’,’pointbiserialr’,’kendalltau’,’weightedtau’] The default is ‘pearsonr’.

  • settings (dict) –

    Parameters for the correlation function, including:

    • nsim (int) – the number of simulations (default: 1000)

    • method (str, {‘ttest’,’isopersistent’,’isospectral’ (default)}) – method for significance testing

  • fdr_kwargs (dict) – Parameters for the FDR function

  • common_time_kwargs (dict) – Parameters for the method MultipleSeries.common_time()

  • mute_pbar (bool; {True,False}) – If True, the progressbar will be muted. Default is False.

  • seed (float or int) – random seed for isopersistent and isospectral methods

Returns:

corr_ens – The resulting object, see pyleoclim.CorrEns

Return type:

CorrEns

See also

pyleoclim.utils.correlation.corr_sig

Correlation function

pyleoclim.utils.correlation.fdr

False Discovery Rate

pyleoclim.core.correns.CorrEns

The correlation ensemble object

Examples

nn = 50 # number of noise realizations
nt = 100
series_list = []

time, signal = pyleo.utils.gen_ts(model='colored_noise',nt=nt,alpha=2.0)

ts = pyleo.Series(time=time, value = signal, verbose=False).standardize()
noise = np.random.randn(nt,nn)

for idx in range(nn):  # noise
    ts = pyleo.Series(time=time, value=ts.value+5*noise[:,idx], verbose=False)
    series_list.append(ts)

ts_ens = pyleo.EnsembleSeries(series_list)

# to set an arbitrary random seed to fix the result
corr_res = ts_ens.correlation(ts, seed=2333)
print(corr_res)

# to change the statistic:
corr_res = ts_ens.correlation(ts, statistic='kendalltau', method='phaseran', settings = {'nsim':20})
print(corr_res)
Looping over 50 Series in the ensemble
Time axis values sorted in ascending order
  correlation  p-value    signif. w/o FDR (α: 0.05)    signif. w/ FDR (α: 0.05)
-------------  ---------  ---------------------------  --------------------------
     0.126084  0.06       False                        False
     0.169373  0.01       True                         True
     0.194766  < 1e-2     True                         True
     0.203179  < 1e-2     True                         True
     0.214031  < 1e-2     True                         True
     0.310193  < 1e-5     True                         True
     0.388677  < 1e-9     True                         True
     0.425979  < 1e-10    True                         True
     0.470774  < 1e-13    True                         True
     0.491165  < 1e-13    True                         True
     0.462144  < 1e-11    True                         True
     0.499641  < 1e-13    True                         True
     0.476758  < 1e-12    True                         True
     0.483386  < 1e-13    True                         True
     0.532784  < 1e-15    True                         True
     0.597113  < 1e-20    True                         True
     0.601742  < 1e-19    True                         True
     0.600181  < 1e-18    True                         True
     0.573651  < 1e-16    True                         True
     0.600374  < 1e-18    True                         True
     0.622373  < 1e-19    True                         True
     0.653787  < 1e-22    True                         True
     0.657193  < 1e-22    True                         True
     0.67481   < 1e-23    True                         True
     0.708114  < 1e-27    True                         True
     0.691656  < 1e-26    True                         True
     0.704396  < 1e-27    True                         True
     0.732593  < 1e-30    True                         True
     0.746831  < 1e-31    True                         True
     0.759434  < 1e-33    True                         True
     0.774885  < 1e-35    True                         True
     0.790623  < 1e-38    True                         True
     0.795572  < 1e-38    True                         True
     0.819494  < 1e-43    True                         True
     0.825905  < 1e-45    True                         True
     0.858899  < 1e-54    True                         True
     0.879319  < 1e-61    True                         True
     0.878882  < 1e-63    True                         True
     0.889649  < 1e-68    True                         True
     0.881664  < 1e-65    True                         True
     0.899269  < 1e-71    True                         True
     0.915452  < 1e-79    True                         True
     0.933931  < 1e-86    True                         True
     0.951335  < 1e-98    True                         True
     0.957139  < 1e-102   True                         True
     0.957755  < 1e-102   True                         True
     0.965487  < 1e-104   True                         True
     0.980491  < 1e-123   True                         True
     0.989743  < 1e-144   True                         True
     1         < 1e-6     True                         True
Ensemble size: 50
Looping over 50 Series in the ensemble
Time axis values sorted in ascending order
  correlation  p-value      signif. w/o FDR (α: 0.05)  signif. w/ FDR (α: 0.05)
-------------  ---------  ---------------------------  --------------------------
    0.0828283  0.35                                 0  False
    0.109091   0.15                                 0  False
    0.113939   0.15                                 0  False
    0.114343   0.05                                 1  False
    0.139394   0.05                                 1  False
    0.206465   < 1e-6                               1  True
    0.252929   < 1e-6                               1  True
    0.268687   < 1e-6                               1  True
    0.284444   < 1e-6                               1  True
    0.322828   < 1e-6                               1  True
    0.299798   < 1e-6                               1  True
    0.311919   < 1e-6                               1  True
    0.282828   < 1e-6                               1  True
    0.286061   < 1e-6                               1  True
    0.337778   < 1e-6                               1  True
    0.389899   < 1e-6                               1  True
    0.397172   < 1e-6                               1  True
    0.383434   < 1e-6                               1  True
    0.356364   < 1e-6                               1  True
    0.390303   < 1e-6                               1  True
    0.414949   < 1e-6                               1  True
    0.433535   < 1e-6                               1  True
    0.446869   < 1e-6                               1  True
    0.461818   < 1e-6                               1  True
    0.491717   < 1e-6                               1  True
    0.479596   < 1e-6                               1  True
    0.483232   < 1e-6                               1  True
    0.519596   < 1e-6                               1  True
    0.527273   < 1e-6                               1  True
    0.545859   < 1e-6                               1  True
    0.556364   < 1e-6                               1  True
    0.579394   < 1e-6                               1  True
    0.588687   < 1e-6                               1  True
    0.607677   < 1e-6                               1  True
    0.621414   < 1e-6                               1  True
    0.656162   < 1e-6                               1  True
    0.682424   < 1e-6                               1  True
    0.678384   < 1e-6                               1  True
    0.696566   < 1e-6                               1  True
    0.669899   < 1e-6                               1  True
    0.701414   < 1e-6                               1  True
    0.720404   < 1e-6                               1  True
    0.749091   < 1e-6                               1  True
    0.796768   < 1e-6                               1  True
    0.808889   < 1e-6                               1  True
    0.810505   < 1e-6                               1  True
    0.830707   < 1e-6                               1  True
    0.876364   < 1e-6                               1  True
    0.921212   < 1e-6                               1  True
    1          < 1e-6                               1  True
Ensemble size: 50

The print function tabulates the output, and conveys the p-value according to the correlation test applied (‘ttest’, by default). To plot the result:

corr_res.plot()
(<Figure size 400x400 with 1 Axes>, <Axes: xlabel='$r$', ylabel='Count'>)
../_images/api_120_1.png
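
As noted in the Parameters above, the target may itself be an EnsembleSeries, in which case member i of the calling object is paired with member i of the target. A sketch building on the variables defined in the example above (the second ensemble is synthetic, for illustration only):

series_list2 = []
for idx in range(nn):   # independent noise realizations around the same signal
    series_list2.append(pyleo.Series(time=time, value=signal + np.random.randn(nt), verbose=False))

ts_ens2 = pyleo.EnsembleSeries(series_list2)
corr_res = ts_ens.correlation(ts_ens2, seed=2333)   # member i of ts_ens vs member i of ts_ens2
print(corr_res)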
histplot(figsize=[10, 4], title=None, savefig_settings=None, ax=None, ylabel='KDE', vertical=False, edgecolor='w', **plot_kwargs)[source]

Plots the distribution of the timeseries across ensembles

Reuses seaborn [histplot](https://seaborn.pydata.org/generated/seaborn.histplot.html) function.

Parameters:
  • figsize (list, optional) – The size of the figure. The default is [10, 4].

  • title (str, optional) – Title for the figure. The default is None.

  • savefig_settings (dict, optional) –

    the dictionary of arguments for plt.savefig(); some notes below:
    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}.

    The default is None.

  • ax (matplotlib.axis, optional) – A matplotlib axis. The default is None.

  • ylabel (str, optional) – Label for the count axis. The default is ‘KDE’.

  • vertical (bool; {True,False}, optional) – Whether to flip the plot vertically. The default is False.

  • edgecolor (matplotlib.color, optional) – The color of the edges of the bar. The default is ‘w’.

  • plot_kwargs (dict) – Plotting arguments for seaborn histplot: https://seaborn.pydata.org/generated/seaborn.histplot.html.

See also

pyleoclim.utils.plotting.savefig

Saving figure in Pyleoclim

Examples

nn = 30 # number of noise realizations
nt = 500
series_list = []

time, signal = pyleo.utils.gen_ts(model='colored_noise',nt=nt,alpha=1.0)

ts = pyleo.Series(time=time, value = signal, verbose=False).standardize()
noise = np.random.randn(nt,nn)

for idx in range(nn):  # noise
    ts = pyleo.Series(time=time, value=signal+noise[:,idx], verbose=False)
    series_list.append(ts)

ts_ens = pyleo.EnsembleSeries(series_list)

fig, ax = ts_ens.histplot()
../_images/api_121_0.png
make_labels()[source]

Initialization of labels

Returns:

  • time_header (str) – Label for the time axis

  • value_header (str) – Label for the value axis
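
A minimal usage sketch (assuming an EnsembleSeries such as ts_ens from the examples in this section):

time_header, value_header = ts_ens.make_labels()   # derive axis labels from metadata
print(time_header, value_header)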

plot_envelope(figsize=[10, 4], qs=[0.025, 0.25, 0.5, 0.75, 0.975], xlabel=None, ylabel=None, title=None, xlim=None, ylim=None, savefig_settings=None, ax=None, plot_legend=True, curve_clr='#d9544d', curve_lw=2, shade_clr='#d9544d', shade_alpha=0.2, inner_shade_label='IQR', outer_shade_label='95% CI', lgd_kwargs=None)[source]

Plot EnsembleSeries as an envelope.

Parameters:
  • figsize (list, optional) – The figure size. The default is [10, 4].

  • qs (list, optional) – The quantile levels to consider. The default is [0.025, 0.25, 0.5, 0.75, 0.975] (median, interquartile range, and central 95% region)

  • xlabel (str, optional) – x-axis label. The default is None.

  • ylabel (str, optional) – y-axis label. The default is None.

  • title (str, optional) – Plot title. The default is None.

  • xlim (list, optional) – x-axis limits. The default is None.

  • ylim (list, optional) – y-axis limits. The default is None.

  • savefig_settings (dict, optional) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

    The default is None.

  • ax (matplotlib.ax, optional) – Matplotlib axis on which to return the plot. The default is None.

  • plot_legend (bool; {True,False}, optional) – Whether to plot the legend. The default is True.

  • curve_clr (str, optional) – Color of the main line (median). The default is sns.xkcd_rgb[‘pale red’].

  • curve_lw (str, optional) – Width of the main line (median). The default is 2.

  • shade_clr (str, optional) – Color of the shaded envelope. The default is sns.xkcd_rgb[‘pale red’].

  • shade_alpha (float, optional) – Transparency on the envelope. The default is 0.2.

  • inner_shade_label (str, optional) – Label for the inner envelope (interquartile range). The default is ‘IQR’.

  • outer_shade_label (str, optional) – Label for the outer envelope. The default is ‘95% CI’.

  • lgd_kwargs (dict, optional) – Parameters for the legend. The default is None.

Returns:

See also

pyleoclim.utils.plotting.savefig

Saving figure in Pyleoclim

Examples

nn = 30 # number of noise realizations
nt = 500
series_list = []

t,v = pyleo.utils.gen_ts(model='colored_noise',nt=nt,alpha=1.0)
signal = pyleo.Series(time=t,value=v, verbose=False)

for idx in range(nn):  # noise
    noise = np.random.randn(nt,nn)*100
    ts = pyleo.Series(time=signal.time, value=signal.value+noise[:,idx], verbose=False)
    series_list.append(ts)

ts_ens = pyleo.EnsembleSeries(series_list)

fig, ax = ts_ens.plot_envelope(curve_lw=1.5)
../_images/api_122_0.png
plot_traces(figsize=[10, 4], xlabel=None, ylabel=None, title=None, num_traces=10, seed=None, xlim=None, ylim=None, linestyle='-', savefig_settings=None, ax=None, plot_legend=True, color='#d9544d', lw=0.5, alpha=0.3, lgd_kwargs=None)[source]

Plot EnsembleSeries as a subset of traces.

Parameters:
  • figsize (list, optional) – The figure size. The default is [10, 4].

  • xlabel (str, optional) – x-axis label. The default is None.

  • ylabel (str, optional) – y-axis label. The default is None.

  • title (str, optional) – Plot title. The default is None.

  • xlim (list, optional) – x-axis limits. The default is None.

  • ylim (list, optional) – y-axis limits. The default is None.

  • color (str, optional) – Color of the traces. The default is sns.xkcd_rgb[‘pale red’].

  • alpha (float, optional) – Transparency of the lines representing the multiple members. The default is 0.3.

  • linestyle ({'-', '--', '-.', ':', '', (offset, on-off-seq), ...}) – Set the linestyle of the line

  • lw (float, optional) – Width of the lines representing the multiple members. The default is 0.5.

  • num_traces (int, optional) – Number of traces to plot. The default is 10.

  • savefig_settings (dict, optional) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

    The default is None.

  • ax (matplotlib.ax, optional) – Matplotlib axis on which to return the plot. The default is None.

  • plot_legend (bool; {True,False}, optional) – Whether to plot the legend. The default is True.

  • lgd_kwargs (dict, optional) – Parameters for the legend. The default is None.

  • seed (int, optional) – Set the seed for the random number generator. Useful for reproducibility. The default is None.

Returns:

See also

pyleoclim.utils.plotting.savefig

Saving figure in Pyleoclim

Examples

nn = 30 # number of noise realizations
nt = 500
series_list = []

t,v = pyleo.utils.gen_ts(model='colored_noise',nt=nt,alpha=1.0)
signal = pyleo.Series(time=t,value=v, verbose=False)

for idx in range(nn):  # noise
    noise = np.random.randn(nt,nn)*100
    ts = pyleo.Series(time=signal.time, value=signal.value+noise[:,idx], verbose=False)
    series_list.append(ts)

ts_ens = pyleo.EnsembleSeries(series_list)

fig, ax = ts_ens.plot_traces(alpha=0.2,num_traces=8)
../_images/api_123_0.png
quantiles(qs=[0.05, 0.5, 0.95], axis='value')[source]

Calculate quantiles of an EnsembleSeries object. If axis is ‘value’, the calculation requires the time axes to be the same; you can use the common_time method to align them. In essence, it transforms the time uncertainty into a y-axis uncertainty. If axis is ‘time’, the values should be the same for all members of the ensemble.

Reuses [scipy.stats.mstats.mquantiles](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mstats.mquantiles.html) function.

Parameters:
  • qs (list, optional) – List of quantiles to consider for the calculation. The default is [0.05, 0.5, 0.95].

  • axis (['time', 'value']) – Whether to calculate the quantiles over the values or time. Default is ‘value’.

Returns:

ens_qs – EnsembleSeries object containing empirical quantiles of original

Return type:

EnsembleSeries

Examples

nn = 30 # number of noise realizations
nt = 500
series_list = []

t,v = pyleo.utils.gen_ts(model='colored_noise',nt=nt,alpha=1.0)
signal = pyleo.Series(t,v)

for idx in range(nn):  # noise
    noise = np.random.randn(nt,nn)*100
    ts = pyleo.Series(time=signal.time, value=signal.value+noise[:,idx], verbose=False)
    series_list.append(ts)

ts_ens = pyleo.EnsembleSeries(series_list)

ens_qs = ts_ens.quantiles()
Time axis values sorted in ascending order

To calculate in the time dimension:

nn = 30 #number of age models
time = np.arange(1,20000,100) #create a time vector
std_dev = 20 # Noise to be considered

t,v = pyleo.utils.gen_ts(model='colored_noise',nt=len(time),alpha=1.0)

series_list = []

for i in range(nn):
    noise = np.random.normal(0,std_dev,len(time))
    ts=pyleo.Series(time=np.sort(time+noise),value=v,verbose=False)
    series_list.append(ts)

time_ens = pyleo.EnsembleSeries(series_list)

ens_qs = time_ens.quantiles(axis='time')
slice(timespan)[source]

Selects a limited time span from the object

Parameters:

timespan (tuple or list) – The list of time points for slicing, whose length must be even. When there are n time points, the output Series includes n/2 segments. For example, if timespan = [a, b], then the sliced output includes one segment [a, b]; if timespan = [a, b, c, d], then the sliced output includes segment [a, b] and segment [c, d] (see the sketch after the Examples below).

Returns:

new – The sliced EnsembleSeries object.

Return type:

EnsembleSeries

Examples

Select part of an object

nn = 20 # number of noise realizations
nt = 200
series_list = []

time, signal = pyleo.utils.gen_ts(model='colored_noise',nt=nt,alpha=2.0)

ts = pyleo.Series(time=time, value = signal, verbose=False).standardize()
noise = np.random.randn(nt,nn)

for idx in range(nn):  # noise
    ts = pyleo.Series(time=time, value=ts.value+5*noise[:,idx], verbose=False)
    series_list.append(ts)

ts_ens = pyleo.EnsembleSeries(series_list)

fig, ax = ts_ens.plot_envelope(curve_lw=1.5)
fig, ax = ts_ens.slice([100, 199]).plot_envelope(curve_lw=1.5)
../_images/api_126_0.png ../_images/api_126_1.png
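
As mentioned in the Parameters above, timespan may also define several segments. A sketch continuing the example (the segment bounds are illustrative):

fig, ax = ts_ens.slice([0, 50, 150, 199]).plot_envelope(curve_lw=1.5)   # keeps [0, 50] and [150, 199]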
stackplot(figsize=[5, 15], savefig_settings=None, xlim=None, fill_between_alpha=0.2, colors=None, cmap='tab10', norm=None, spine_lw=1.5, grid_lw=0.5, font_scale=0.8, label_x_loc=-0.15, v_shift_factor=0.75, linewidth=1.5)[source]

Stack plot of multiple series

Note that the plotting style is uniquely designed for this one and cannot be properly reset with pyleoclim.set_style().

Parameters:
  • figsize (list) – Size of the figure.

  • colors (list) – Colors for plotting. If None, the plotting will cycle the ‘tab10’ colormap; if only one color is specified, then all curves will be plotted with that single color; if a list of colors are specified, then the plotting will cycle that color list.

  • cmap (str) – The colormap to use when “colors” is None.

  • norm (matplotlib.colors.Normalize like) – The normalization for the colormap. If None, a linear normalization will be used.

  • savefig_settings (dictionary) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

    The default is None.

  • xlim (list) – The x-axis limit.

  • fill_between_alpha (float) – The transparency for the fill_between shades.

  • spine_lw (float) – The linewidth for the spines of the axes.

  • grid_lw (float) – The linewidth for the gridlines.

  • linewidth (float) – The linewidth for the curves.

  • font_scale (float) – The scale for the font sizes. Default is 0.8.

  • label_x_loc (float) – The x location for the label of each curve.

  • v_shift_factor (float) – The factor for the vertical shift of each axis. The default value 3/4 means the top of the next axis will be located at 3/4 of the height of the previous one.

Returns:

See also

pyleoclim.utils.plotting.savefig

Saving figure in Pyleoclim

Examples

nn = 10 # number of noise realizations
nt = 200
series_list = []

t, v = pyleo.utils.gen_ts(model='colored_noise',nt=nt,alpha=1.0)
signal, _, _ = pyleo.utils.standardize(v)
noise = np.random.randn(nt,nn)

for idx in range(nn):  # noise
    ts = pyleo.Series(time=t, value=signal+noise[:,idx], label='trace #'+str(idx+1), verbose=False)
    series_list.append(ts)

ts_ens = pyleo.EnsembleSeries(series_list)

fig, ax = ts_ens.stackplot()
../_images/api_127_0.png
to_array(axis='value', labels=True)[source]

Returns an ensemble as a numpy array with an optional list for labels. Each column in the array corresponds to an ensemble member.

Parameters:
  • axis (str, ['time', 'value'], optional) – Whether to return the ensemble from value or time. The default is ‘value’.

  • labels (bool, [True,False], optional) – Whether to return a separate list with the timeseries labels. The default is True.

Raises:

ValueError – Axis should be either ‘time’ or ‘value’

Returns:

  • vals (numpy.array) – An array where each column corresponds to an ensemble member

  • headers (list) – A list of corresponding labels for each column

Example

nn = 30 #number of age models
time = np.arange(1,20000,100) #create a time vector
std_dev = 20 # Noise to be considered

t,v = pyleo.utils.gen_ts(model='colored_noise',nt=len(time),alpha=1.0)

series_list = []

for i in range(nn):
    noise = np.random.normal(0,std_dev,len(time))
    ts=pyleo.Series(time=np.sort(time+noise),value=v,verbose=False)
    series_list.append(ts)

time_ens = pyleo.EnsembleSeries(series_list)
ens_qs = time_ens.quantiles(axis='time')

vals,headers=ens_qs.to_array(axis='time')
to_dataframe(axis='value')[source]

Export the ensemble as a Pandas DataFrame, with members of the ensemble as columns. The columns are labeled according to the label in the individual series or numbered if ‘label’ is None.

Parameters:

axis (str, ['time', 'value']) – Whether to return the ensemble from value or time. The default is ‘value’.

Raises:

ValueError – Axis should be either ‘time’ or ‘value’

Returns:

  • df (pandas.DataFrame) – A Pandas DataFrame containing members of the ensemble as columns.

Examples

nn = 30 #number of age models
time = np.arange(1,20000,100) #create a time vector
std_dev = 20 # Noise to be considered

t,v = pyleo.utils.gen_ts(model='colored_noise',nt=len(time),alpha=1.0)

series_list = []

for i in range(nn):
    noise = np.random.normal(0,std_dev,len(time))
    ts=pyleo.Series(time=np.sort(time+noise),value=v,verbose=False)
    series_list.append(ts)

time_ens = pyleo.EnsembleSeries(series_list)
ens_qs = time_ens.quantiles(axis='time')

df=ens_qs.to_dataframe(axis='time')

SurrogateSeries (pyleoclim.SurrogateSeries)

class pyleoclim.core.surrogateseries.SurrogateSeries(series_list, label, surrogate_method=None, surrogate_args=None)[source]

Object containing surrogate timeseries, usually obtained through recursive modeling (e.g., AR(1))

Surrogate Series is a child of MultipleSeries. All methods available for MultipleSeries are available for surrogate series. EnsembleSeries would be a more logical choice, but it creates circular imports that break the package.
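
SurrogateSeries objects are usually not constructed by hand; they are returned by methods such as Series.surrogates. A minimal sketch, assuming a Series object ts (for instance from the examples above) and that the 'ar1sim' surrogate method is available in your version of Pyleoclim:

surr = ts.surrogates(method='ar1sim', number=10)   # 10 AR(1) surrogates of ts
fig, ax = surr.plot()                              # plot() is inherited from MultipleSeries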

Methods

append(ts[, inplace])

Append timeseries ts to MultipleSeries object

bin(**kwargs)

Aligns the time axes of a MultipleSeries object, via binning.

common_time([method, step, start, stop, ...])

Aligns the time axes of a MultipleSeries object

convert_time_unit([time_unit])

Convert the time units of the object

copy()

Copy the object

correlation([target, timespan, alpha, ...])

Calculate the correlation between a MultipleSeries and a target Series

detrend([method])

Detrend timeseries

equal_lengths()

Test whether all series in object have equal length

filter([cutoff_freq, cutoff_scale, method])

Filtering the timeseries in the MultipleSeries object

flip([axis])

Flips the Series along one or both axes

from_json(path)

Creates a pyleoclim.MultipleSeries from a JSON file

gkernel(**kwargs)

Aligns the time axes of a MultipleSeries object, via Gaussian kernel.

increments([step_style, verbose])

Extract grid properties (start, stop, step) of all the Series objects in a collection.

interp(**kwargs)

Aligns the time axes of a MultipleSeries object, via interpolation.

pca([weights, name, missing, tol_em, ...])

Principal Component Analysis (Empirical Orthogonal Functions)

plot([figsize, marker, markersize, ...])

Plot multiple timeseries on the same axis

remove(label)

Remove Series based on given label.

resolution([time_unit, verbose, statistic])

Generate a MultipleResolution object

sel([value, time, tolerance])

Slice MultipleSeries based on 'value' or 'time'.

spectral([method, settings, mute_pbar, ...])

Perform spectral analysis on the timeseries

stackplot([figsize, savefig_settings, ...])

Stack plot of multiple series

standardize()

Standardize each series object in a collection

stripes([cmap, sat, ref_period, figsize, ...])

Represents a MultipleSeries object as a quilt of Ed Hawkins' "stripes" patterns

time_coverage_plot([figsize, marker, ...])

A plot of the temporal coverage of the records in a MultipleSeries object organized by ranked length.

to_csv([path, use_common_time])

Export MultipleSeries to CSV

to_json([path])

Export the pyleoclim.MultipleSeries object to a json file

to_pandas([paleo_style, use_common_time])

Align Series and place in DataFrame.

view()

Generates a DataFrame version of the MultipleSeries object, suitable for viewing in a Jupyter Notebook

wavelet([method, settings, freq_method, ...])

Wavelet analysis

Lipd (pyleoclim.Lipd)

This class allows the manipulation of LiPD objects.

class pyleoclim.core.lipd.Lipd(usr_path=None, lipd_dict=None, validate=False, remove=False)[source]

The Lipd class allows the creation of a Lipd object from LiPD files. This makes it possible to manipulate LiPD objects and take advantage of their metadata for specific functionalities. Lipd objects are needed to create LipdSeries objects, which carry most of the timeseries functionalities.

Parameters:
  • usr_path (str) – Path to the Lipd file(s). Can be URL (LiPD utilities only support loading one file at a time from a URL). If it’s a URL, it must start with “http”, “https”, or “ftp”.

  • lipd_dict (dict) – LiPD files already loaded into Python through the LiPD utilities

  • validate (bool) – Validate the LiPD files upon loading. Note that for a large library (>300 files) this can take up to half an hour.

  • remove (bool) – If validate is True and remove is True, ignores non-valid LiPD files. Note that loading unvalidated Lipd files may result in errors for some functionalities but not all.

References

McKay, N. P., & Emile-Geay, J. (2016). Technical Note: The Linked Paleo Data framework – a common tongue for paleoclimatology. Climate of the Past, 12, 1093-1100.

Examples

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
d = pyleo.Lipd(usr_path=url)

Methods

copy()

Copy the object

extract(dataSetName)

Extract a particular dataset

mapAllArchive([projection, proj_default, ...])

Map all the records contained in the LiPD object by the type of archive

to_LipdSeries([number, mode])

Extracts one timeseries from the Lipd object

to_LipdSeriesList([mode])

Extracts all LiPD timeseries objects to a list of LipdSeries objects

to_tso([mode])

Extracts all the variables to a list of LiPD timeseries objects

copy()[source]

Copy the object

extract(dataSetName)[source]
Parameters:

dataSetName (str) – Extract a particular dataset

Returns:

new – A new object corresponding to a particular dataset

Return type:

Lipd
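
A minimal sketch, reusing the dataset from the Examples below (the dataset name comes from that file):

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
d = pyleo.Lipd(usr_path=url)
d_new = d.extract('MD982176.Stott.2004')   # a new Lipd object restricted to that dataset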

mapAllArchive(projection='Robinson', proj_default=True, background=True, borders=False, rivers=False, lakes=False, figsize=None, ax=None, marker=None, color=None, markersize=None, scatter_kwargs=None, legend=True, lgd_kwargs=None, savefig_settings=None)[source]

Map all the records contained in the LiPD object by the type of archive

Note that the map is fully customizable by using the optional parameters.

Parameters:
  • projection (str, optional) – The projection to use. The default is ‘Robinson’.

  • proj_default (bool, optional) – Whether to use the Pyleoclim defaults for each projection type. The default is True.

  • background (bool, optional) – Whether to use a background. The default is True.

  • borders (bool, optional) – Draw borders. The default is False.

  • rivers (bool, optional) – Draw rivers. The default is False.

  • lakes (bool, optional) – Draw lakes. The default is False.

  • figsize (list, optional) – The size of the figure. The default is None.

  • ax (matplotlib.ax, optional) – The matplotlib axis onto which to return the map. The default is None.

  • marker (str, optional) – The marker type for each archive. The default is None, which uses a pre-defined palette in Pyleoclim. To see the default option, run Lipd.plot_default where Lipd is the name of the object.

  • color (str, optional) – Color for each archive. The default is None, which uses a pre-defined palette in Pyleoclim. To see the default option, run Lipd.plot_default where Lipd is the name of the object.

  • markersize (float, optional) – Size of the marker. The default is None.

  • scatter_kwargs (dict, optional) – Parameters for the scatter plot. The default is None.

  • legend (bool; {True,False}, optional) – Whether to plot the legend. The default is True.

  • lgd_kwargs (dict, optional) – Arguments for the legend. The default is None.

  • savefig_settings (dictionary, optional) –

    The dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}.

    The default is None.

Returns:

res – The figure and axis, if requested.

Return type:

tuple or fig

See also

pyleoclim.utils.mapping.map

Underlying mapping function for Pyleoclim

Examples

For speed, we are only using one LiPD file. But these functions can load and map multiple.

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
data = pyleo.Lipd(usr_path = url)
fig, ax = data.mapAllArchive()
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: MD982176.Stott.2004.lpd
Finished read: 1 record
../_images/api_129_2.png

Change the markersize

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
data = pyleo.Lipd(usr_path = url)
fig, ax = data.mapAllArchive(markersize=100)
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: MD982176.Stott.2004.lpd
Finished read: 1 record
../_images/api_130_2.png
to_LipdSeries(number=None, mode='paleo')[source]

Extracts one timeseries from the Lipd object

In LiPD, timeseries objects are flattened dictionaries that contain the values for the time and variable axes as well as relevant metadata. Note that this function may require user interaction if the number of the column in the file is unknown. The numbers are fixed, so automating the code is as simple as noting the relevant numbers and reusing them when reopening the files.

Parameters:
  • number (int) – the number of the timeseries object

  • mode (str; {'paleo','chron'}) – whether to extract the paleo or chron series.

Returns:

ts – A LipdSeries object

Return type:

pyleoclim.LipdSeries

See also

pyleoclim.core.lipdseries.LipdSeries

LipdSeries object
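
A minimal sketch, assuming the Lipd object data from the Examples later in this document (there, column number 5 corresponds to the sst variable):

ts = data.to_LipdSeries(number=5, mode='paleo')   # returns a LipdSeries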

to_LipdSeriesList(mode='paleo')[source]

Extracts all LiPD timeseries objects to a list of LipdSeries objects

In LiPD, timeseries objects are flattened dictionaries that contain the values for the time and variable axes as well as relevant metadata.

Parameters:

mode ({'paleo','chron'}) – Whether to extract the timeseries information from the paleo tables or chron tables

Returns:

res – A list of LiPDSeries objects

Return type:

list

References

McKay, N. P., & Emile-Geay, J. (2016). Technical Note: The Linked Paleo Data framework – a common tongue for paleoclimatology. Climate of the Past, 12, 1093-1100.

See also

pyleoclim.core.lipdseries.LipdSeries

a LipdSeries object
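
A minimal sketch, again assuming the Lipd object data from the Examples below:

ts_list = data.to_LipdSeriesList(mode='paleo')   # list of LipdSeries, one per variable
ms = pyleo.MultipleSeries(ts_list)               # combine into a MultipleSeries for joint analysis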

to_tso(mode='paleo')[source]

Extracts all the variables to a list of LiPD timeseries objects

In LiPD, timeseries objects are flattened dictionaries that contain the values for the time and variable axes as well as relevant metadata.

Parameters:

mode ({'paleo','chron'}) – Whether to extract the timeseries information from the paleo tables or chron tables

Returns:

ts_list – List of LiPD timeseries objects

Return type:

list

References

McKay, N. P., & Emile-Geay, J. (2016). Technical Note: The Linked Paleo Data framework – a common tongue for paleoclimatology. Climate of the Past, 12, 1093-1100.

LipdSeries (pyleoclim.LipdSeries)

class pyleoclim.core.lipdseries.LipdSeries(tso, clean_ts=True, verbose=False)[source]

LipdSeries are (you guessed it) Series objects that are created from LiPD objects. As a subclass of Series, they inherit all its methods. When created, LipdSeries automatically instantiates the time, value and other parameters from what’s in the LiPD file. These objects can be obtained from a LiPD file/object either through Pyleoclim or the LiPD utilities. If multiple objects (i.e., a list) are given, then the user will be prompted to choose one timeseries.

Returns:

object

Return type:

pyleoclim.LipdSeries

See also

pyleoclim.core.lipd.Lipd

Creates a Lipd object from LiPD Files

pyleoclim.core.series.Series

Creates pyleoclim Series object

pyleoclim.core.multipleseries.MultipleSeries

a collection of multiple Series objects

Examples

In this example, we will import a LiPD file and explore the various options to create a series object.

First, let’s look at the Lipd.to_tso option. This method is attractive because the object is a list of dictionaries that are easily explored in Python.

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
data = pyleo.Lipd(usr_path = url)
ts_list = data.to_tso()
# Print out the dataset name and the variable name
for item in ts_list:
    print(item['dataSetName']+': '+item['paleoData_variableName'])
# Load the sst data into a LipdSeries. Since Python indexing starts at zero, sst has index 5.
ts = pyleo.LipdSeries(ts_list[5])
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: MD982176.Stott.2004.lpd
Finished read: 1 record
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries
MD982176.Stott.2004: depth
MD982176.Stott.2004: yrbp
MD982176.Stott.2004: d18og.rub
MD982176.Stott.2004: d18ow-s
MD982176.Stott.2004: mg/ca-g.rub
MD982176.Stott.2004: sst

If you attempt to pass the full list of series, Pyleoclim will prompt you to choose a series by printing out something similar to the above. If you already know the number of the timeseries object you’re interested in, then you should use the following:

ts1 = data.to_LipdSeries(number=5)
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries

If number is not specified, Pyleoclim will prompt you for the number automatically.

Sometimes, one may want to create a MultipleSeries object from a collection of LiPD files. In this case, we recommend using the following:

ts_list = data.to_LipdSeriesList()
# only keep the Mg/Ca and SST
ts_list=ts_list[4:]
#create a MultipleSeries object
ms=pyleo.MultipleSeries(ts_list)
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries
Attributes:
datetime_index

Convert time to pandas DatetimeIndex.

metadata

Methods

bin([keep_log])

Bin values in a time series

causality(target_series[, method, timespan, ...])

Perform causality analysis with the target timeseries. Specifically, whether there is information in the target series that influenced the original series.

center([timespan, keep_log])

Centers the series (i.e.

chronEnsembleToPaleo(D[, number, ...])

Fetch chron ensembles from a Lipd object and return the ensemble as MultipleSeries

clean([verbose, keep_log])

Clean up the timeseries by removing NaNs and sort with increasing time points

convert_time_unit([time_unit, keep_log])

Convert the time units of the Series object

copy()

Copy the object

correlation(target_series[, alpha, ...])

Estimates the correlation and its associated significance between two time series (not necessarily IID).

dashboard([figsize, plt_kwargs, ...])

param figsize:

Figure size. The default is [11,8].

detrend([method, keep_log, preserve_mean])

Detrend Series object

equals(ts[, index_tol, value_tol])

Test whether two objects contain the same elements (values and datetime_index) A printout is returned if metadata are different, but the statement is considered True as long as data match.

fill_na([timespan, dt, keep_log])

Fill NaNs into the timespan

filter([cutoff_freq, cutoff_scale, method, ...])

Filtering methods for Series objects using four possible methods:

flip([axis, keep_log])

Flips the Series along one or both axes

from_csv(path)

Read in Series object from CSV file.

from_json(path)

Creates a pyleoclim.Series from a JSON file

gaussianize([keep_log])

Gaussianizes the timeseries (i.e.

getMetadata()

Get the necessary metadata for the ensemble plots

gkernel([step_style, keep_log, step_type])

Coarse-grain a Series object via a Gaussian kernel.

histplot([figsize, title, savefig_settings, ...])

Plot the distribution of the timeseries values

interp([method, keep_log])

Interpolate a Series object onto a new time axis

is_evenly_spaced([tol])

Check if the Series time axis is evenly-spaced, within tolerance

make_labels()

Initialization of plot labels based on Series metadata

map([projection, proj_default, background, ...])

Map the location of the record

mapNearRecord(D[, n, radius, sameArchive, ...])

Map records that are near the timeseries of interest

outliers([method, remove, settings, ...])

Remove outliers from timeseries data.

plot([figsize, marker, markersize, color, ...])

Plot the timeseries

plot_age_depth([figsize, plt_kwargs, ...])

Plot the age-depth relationship of the record.

resample(rule[, keep_log])

Run analogue to pandas.Series.resample.

resolution()

Generate a resolution object

segment([factor, verbose])

Gap detection

sel([value, time, tolerance])

Slice Series based on 'value' or 'time'.

slice(timespan)

Slicing the timeseries with a timespan (tuple or list)

sort([verbose, ascending, keep_log])

Ensure timeseries is set to a monotonically increasing axis.

spectral([method, freq_method, freq_kwargs, ...])

Perform spectral analysis on the timeseries

ssa([M, nMC, f, trunc, var_thresh, online])

Singular Spectrum Analysis

standardize([keep_log, scale])

Standardizes the series (i.e., removes its estimated mean and divides by its estimated standard deviation)

stats()

Compute basic statistics from a Series

stripes([figsize, cmap, ref_period, sat, ...])

Represents the Series as an Ed Hawkins "stripes" pattern

summary_plot(psd, scalogram[, figsize, ...])

Produce summary plot of timeseries.

surrogates([method, number, length, seed, ...])

Generate surrogates of the Series object according to "method"

to_csv([metadata_header, path])

Export Series to csv

to_json([path])

Export the pyleoclim.Series object to a json file

to_pandas([paleo_style])

Export to pandas Series

view()

Generates a DataFrame version of the Series object, suitable for viewing in a Jupyter Notebook

wavelet([method, settings, freq_method, ...])

Perform wavelet analysis on a timeseries

wavelet_coherence(target_series[, method, ...])

Performs wavelet coherence analysis with the target timeseries

from_pandas

pandas_method

chronEnsembleToPaleo(D, number=None, chronNumber=None, modelNumber=None, tableNumber=None)[source]

Fetch chron ensembles from a Lipd object and return the ensemble as MultipleSeries

Parameters:
  • D (a LiPD object) –

  • number (int, optional) – The number of ensemble members to store. Default is None, which corresponds to all present

  • chronNumber (int, optional) – The chron object number. The default is None.

  • modelNumber (int, optional) – Age model number. The default is None.

  • tableNumber (int, optional) – Table number. The default is None.

Raises:

ValueError

Returns:

ens – An EnsembleSeries object with each series representing a possible realization of the age model

Return type:

EnsembleSeries

See also

pyleoclim.core.ensembleseries.EnsembleSeries

An EnsembleSeries object with each series representing a possible realization of the age model

pyleoclim.utils.lipdutils.mapAgeEnsembleToPaleoData

Map the depth for the ensemble age values to the paleo depth
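
A minimal usage sketch is given below. It assumes the LiPD file actually contains chron ensemble tables; the Crystal.McCabe-Glynn.2013 record is used purely for illustration, and the call may raise a ValueError on files without ensembles.

import pyleoclim as pyleo
D = pyleo.Lipd('http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=Crystal.McCabe-Glynn.2013')
ts = D.to_LipdSeries(number=2)
ens = ts.chronEnsembleToPaleo(D)              # EnsembleSeries of age-model realizations
fig, ax = ens.common_time().plot_envelope()   # visualize the age uncertainty envelope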

copy()[source]

Copy the object

Returns:

object – New object with data copied from original

Return type:

pyleoclim.LipdSeries

dashboard(figsize=[11, 8], plt_kwargs=None, histplt_kwargs=None, spectral_kwargs=None, spectralsignif_kwargs=None, spectralfig_kwargs=None, map_kwargs=None, metadata=True, savefig_settings=None, ensemble=False, D=None)[source]
Parameters:
  • figsize (list or tuple, optional) – Figure size. The default is [11,8].

  • plt_kwargs (dict, optional) – Optional arguments for the timeseries plot. See Series.plot() or EnsembleSeries.plot_envelope(). The default is None.

  • histplt_kwargs (dict, optional) – Optional arguments for the distribution plot. See Series.histplot() or EnsembleSeries.plot_distplot(). The default is None.

  • spectral_kwargs (dict, optional) – Optional arguments for the spectral method. Default is to use Lomb-Scargle method. See Series.spectral() or EnsembleSeries.spectral(). The default is None.

  • spectralsignif_kwargs (dict, optional) – Optional arguments to estimate the significance of the power spectrum. See PSD.signif_test. Note that we currently do not support significance testing for ensembles. The default is None.

  • spectralfig_kwargs (dict, optional) – Optional arguments for the power spectrum figure. See PSD.plot() or MultiplePSD.plot_envelope(). The default is None.

  • map_kwargs (dict, optional) – Optional arguments for the map. See LipdSeries.map(). The default is None.

  • metadata (bool; {True,False}, optional) – Whether or not to produce a dashboard with printed metadata. The default is True.

  • savefig_settings (dict, optional) –

the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}.

    The default is None.

  • ensemble (bool; {True, False}, optional) – If True, will return the dashboard in ensemble mode if ensembles are available

  • D (pyleoclim.Lipd object) – If asking for an ensemble plot, a pyleoclim.Lipd object must be provided

Returns:

  • fig (matplotlib.figure) – The figure

  • ax (matplotlib.axis) – The axis

See also

pyleoclim.core.series.Series.plot

plot a timeseries

pyleoclim.core.ensembleseries.EnsembleSeries.plot_envelope

Envelope plots for an ensemble

pyleoclim.core.series.Series.histplot

plot a distribution of the timeseries

pyleoclim.core.ensembleseries.EnsembleSeries.histplot

plot a distribution of the timeseries across ensembles

pyleoclim.core.series.Series.spectral

spectral analysis method.

pyleoclim.core.multipleseries.MultipleSeries.spectral

spectral analysis method for multiple series.

pyleoclim.core.psds.PSD.signif_test

significance test for timeseries analysis

pyleoclim.core.psds.PSD.plot

plot power spectrum

pyleoclim.core.psds.MultiplePSD.plot

plot envelope of power spectrum

pyleoclim.core.lipdseries.LipdSeries.map

map location of dataset

pyleoclim.core.lipdseries.LipdSeries.getMetadata

get relevant metadata from the timeseries object

pyleoclim.utils.mapping.map

Underlying mapping function for Pyleoclim

Examples

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
data = pyleo.Lipd(usr_path = url)
ts = data.to_LipdSeries(number=5)
fig, ax = ts.dashboard()
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: MD982176.Stott.2004.lpd
Finished read: 1 record
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries
getMetadata()[source]

Get the necessary metadata for the ensemble plots

Parameters:

timeseries (object) – a specific timeseries object.

Returns:

res

A dictionary containing the following metadata:

archiveType, Authors (if more than 2, replace by et al), PublicationYear, Publication DOI, Variable Name, Units, Climate Interpretation, Calibration Equation, Calibration References, Calibration Notes

Return type:

dict
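
A minimal sketch, assuming ts is the LipdSeries created in the dashboard example above; the exact keys follow the list given in the Returns section:

meta = ts.getMetadata()    # dictionary of metadata used by the ensemble plots
for key, value in meta.items():
    print(f'{key}: {value}')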

map(projection='Orthographic', proj_default=True, background=True, borders=False, rivers=False, lakes=False, figsize=None, ax=None, marker=None, color=None, markersize=None, scatter_kwargs=None, legend=True, lgd_kwargs=None, savefig_settings=None)[source]

Map the location of the record

Parameters:
  • projection (str, optional) – The projection to use. The default is ‘Orthographic’.

  • proj_default (bool; {True, False}, optional) – Whether to use the Pyleoclim defaults for each projection type. The default is True.

  • background (bool; {True, False}, optional) – Whether to use a background. The default is True.

  • borders (bool; {True, False}, optional) – Draw borders. The default is False.

  • rivers (bool; {True, False}, optional) – Draw rivers. The default is False.

  • lakes (bool; {True, False}, optional) – Draw lakes. The default is False.

  • figsize (list or tuple, optional) – The size of the figure. The default is None.

  • ax (matplotlib.ax, optional) – The matplotlib axis onto which to return the map. The default is None.

  • marker (str, optional) – The marker type for each archive. The default is None. Uses plot_default

  • color (str, optional) – Color for each archive. The default is None. Uses plot_default

  • markersize (float, optional) – Size of the marker. The default is None.

  • scatter_kwargs (dict, optional) – Parameters for the scatter plot. The default is None.

  • legend (bool; {True, False}, optional) – Whether to plot the legend. The default is True.

  • lgd_kwargs (dict, optional) – Arguments for the legend. The default is None.

  • savefig_settings (dict, optional) –

the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}. The default is None.

Returns:

res

Return type:

fig,ax

See also

pyleoclim.utils.mapping.map

Underlying mapping function for Pyleoclim

Examples

import pyleoclim as pyleo
url = 'http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=MD982176.Stott.2004'
data = pyleo.Lipd(usr_path = url)
ts = data.to_LipdSeries(number=5)
fig, ax = ts.map()
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: MD982176.Stott.2004.lpd
Finished read: 1 record
extracting paleoData...
extracting: MD982176.Stott.2004
Created time series: 6 entries
mapNearRecord(D, n=5, radius=None, sameArchive=False, projection='Orthographic', proj_default=True, background=True, borders=False, rivers=False, lakes=False, figsize=None, ax=None, marker_ref=None, color_ref=None, marker=None, color=None, markersize_adjust=False, scale_factor=100, scatter_kwargs=None, legend=True, lgd_kwargs=None, savefig_settings=None)[source]

Map records that are near the timeseries of interest

Parameters:
  • D (pyleoclim.Lipd) – A pyleoclim LiPD object

  • n (int, optional) – The n number of closest records. The default is 5.

  • radius (float, optional) – The radius to take into consideration when looking for records (in km). The default is None.

  • sameArchive (bool; {True, False}, optional) – Whether to consider records from the same archiveType as the original record. The default is False.

  • projection (str, optional) – A valid cartopy projection. The default is ‘Orthographic’. See pyleoclim.utils.mapping for a list of supported projections.

  • proj_default (True or dict, optional) – The projection arguments. If not True, then use a dictionary to pass the appropriate arguments depending on the projection. The default is True.

  • background (bool; {True, False}, optional) – Whether to use a background. The default is True.

  • borders (bool; {True, False}, optional) – Whether to plot country borders. The default is False.

  • rivers (bool; {True, False}, optional) – Whether to plot rivers. The default is False.

  • lakes (bool; {True, False}, optional) – Whether to plot lakes. The default is False.

  • figsize (list or tuple, optional) – the size of the figure. The default is None.

  • ax (matplotlib.ax, optional) – The matplotlib axis onto which to return the map. The default is None.

  • marker_ref (str, optional) – Marker shape to use for the main record. The default is None, which corresponds to the default marker for the archiveType

  • color_ref (str, optional) – The color for the main record. The default is None, which corresponds to the default color for the archiveType.

  • marker (str or list, optional) – Marker shape to use for the other records. The default is None, which corresponds to the marker shape for each archiveType.

  • color (str or list, optional) – Color for each marker. The default is None, which corresponds to the color for each archiveType

  • markersize_adjust (bool; {True, False}, optional) – Whether to adjust the marker size according to distance from record of interest. The default is False.

  • scale_factor (int, optional) – The maximum marker size. The default is 100.

  • scatter_kwargs (dict, optional) – Parameters for the scatter plot. The default is None.

  • legend (bool; {True, False}, optional) – Whether to show the legend. The default is True.

  • lgd_kwargs (dict, optional) – Parameters for the legend. The default is None.

  • savefig_settings (dict, optional) –

the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}. The default is None.

Returns:

res – contains fig and ax

Return type:

dict

See also

pyleoclim.utils.mapping.map

Underlying mapping function for Pyleoclim

pyleoclim.utils.mapping.dist_sphere

Calculate distance on a sphere

pyleoclim.utils.mapping.compute_dist

Compute the distance between a point and an array

pyleoclim.utils.mapping.within_distance

Returns point in an array within a certain distance
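
A hedged usage sketch, assuming D is a pyleoclim.Lipd object holding several records (e.g., loaded from a folder of LiPD files) and ts is a LipdSeries extracted from it; a single-record Lipd object has no neighbors to map. The parameter values are illustrative.

res = ts.mapNearRecord(D, n=5, radius=2000, sameArchive=False)  # up to 5 closest records within 2000 km
# per the Returns section above, res is a dict containing the figure and axis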

plot_age_depth(figsize=[10, 4], plt_kwargs=None, savefig_settings=None, ensemble=False, D=None, num_traces=10, ensemble_kwargs=None, envelope_kwargs=None, traces_kwargs=None)[source]
Parameters:
  • figsize (list or tuple, optional) – Size of the figure. The default is [10,4].

  • plt_kwargs (dict, optional) – Arguments for basic plot. See Series.plot() for details. The default is None.

  • savefig_settings (dict, optional) –

the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}. The default is None.

  • ensemble (bool; {True, False}, optional) – Whether to use age model ensembles stored in the file for the plot. The default is False. If no ensemble can be found, will error out.

  • D (pyleoclim.Lipd, optional) – The pyleoclim.Lipd object from which the pyleoclim.LipdSeries is derived. The default is None.

  • num_traces (int, optional) – Number of individual age models to plot. To plot only the envelope and median value, set this parameter to 0 or None. The default is 10.

  • ensemble_kwargs (dict, optional) – Parameters associated with identifying the chronEnsemble tables. See pyleoclim.core.lipdseries.LipdSeries.chronEnsembleToPaleo() for details. The default is None.

  • envelope_kwargs (dict, optional) – Parameters to control the envelope plot. See pyleoclim.EnsembleSeries.plot_envelope() for details. The default is None.

  • traces_kwargs (dict, optional) – Parameters to control the traces plot. See pyleoclim.EnsembleSeries.plot_traces() for details. The default is None.

Raises:
  • ValueError – In ensemble mode, make sure that the LiPD object is given

  • KeyError – Depth information needed.

Returns:

The figure

Return type:

fig,ax

See also

pyleoclim.core.lipd.Lipd

Pyleoclim internal representation of a LiPD file

pyleoclim.core.series.Series.plot

Basic plotting in pyleoclim

pyleoclim.core.lipdseries.LipdSeries.chronEnsembleToPaleo

Function to map the ensemble table to a paleo depth.

pyleoclim.core.ensembleseries.EnsembleSeries.plot_envelope

Create an envelope plot from an ensemble

pyleoclim.core.ensembleseries.EnsembleSeries.plot_traces

Create a trace plot from an ensemble

Examples

import pyleoclim as pyleo
D = pyleo.Lipd('http://wiki.linked.earth/wiki/index.php/Special:WTLiPD?op=export&lipdid=Crystal.McCabe-Glynn.2013')
ts = D.to_LipdSeries(number=2)
fig, ax = ts.plot_age_depth()
Disclaimer: LiPD files may be updated and modified to adhere to standards

reading: Crystal.McCabe-Glynn.2013.lpd
Finished read: 1 record
extracting paleoData...
extracting: Crystal.McCabe-Glynn.2013
Created time series: 3 entries
Time axis values sorted in ascending order

PSD (pyleoclim.PSD)

class pyleoclim.core.psds.PSD(frequency, amplitude, label=None, timeseries=None, plot_kwargs=None, spec_method=None, spec_args=None, signif_qs=None, signif_method=None, period_unit=None, beta_est_res=None)[source]

The PSD (Power spectral density) class is intended for conveniently manipulating the result of spectral methods, including performing significance tests, estimating scaling coefficients, and plotting.

See examples in pyleoclim.core.series.Series.spectral to see how to create and manipulate these objects
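
In practice, PSD objects are rarely instantiated directly; they are returned by spectral analysis methods. A minimal sketch, using the SOI dataset featured elsewhere on this page:

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
psd = soi.spectral(method='mtm')   # spectral analysis returns a pyleoclim PSD object
fig, ax = psd.plot()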

Parameters:
  • frequency (numpy.array, list, or float) – One or more frequencies in power spectrum

  • amplitude (numpy.array, list, or float) – The amplitude at each (frequency, time) point; note the dimension is assumed to be (frequency, time)

  • label (str, optional) – Descriptor of the PSD. Default is None

  • timeseries (pyleoclim.Series, optional) – Default is None

  • plot_kwargs (dict, optional) – Plotting arguments for seaborn histplot: https://seaborn.pydata.org/generated/seaborn.histplot.html. Default is None

  • spec_method (str, optional) – The name of the spectral method to be applied on the timeseries Default is None

  • spec_args (dict, optional) – Arguments for wavelet analysis (‘freq’, ‘scale’, ‘mother’, ‘param’) Default is None

  • signif_qs (pyleoclim.MultipleScalogram, optional) – Pyleoclim MultipleScalogram object containing the quantiles qs of the surrogate scalogram distribution. Default is None

  • signif_method (str, optional) – The method used to obtain the significance level. Default is None

  • period_unit (str, optional) – Unit of time. Default is None

  • beta_est_res (list or numpy.array, optional) – Results of the beta estimation calculation. Default is None.

See also

pyleoclim.core.series.Series.spectral

Spectral analysis

pyleoclim.core.scalograms.Scalogram

Scalogram object

pyleoclim.core.scalograms.MultipleScalogram

Object storing multiple scalogram objects

pyleoclim.core.psds.MultiplePSD

Object storing several PSDs from different Series or ensemble members in an age model

Methods

anti_alias([avgs])

Apply the anti-aliasing filter

beta_est([fmin, fmax, logf_binning_step, ...])

Estimate the scaling exponent (beta) of the PSD

copy()

Copy object

plot([in_loglog, in_period, label, xlabel, ...])

Plots the PSD estimates and signif level if included

signif_test([method, number, seed, qs, ...])

Significance testing for PSD objects.

anti_alias(avgs=2)[source]

Apply the anti-aliasing filter

Parameters:

avgs (int) – flag for whether the spectrum is derived from instantaneous point measurements (avgs != 1) OR from measurements averaged over each sampling interval (avgs == 1)

Returns:

new – New PSD object with the spectral aliasing effect alleviated.

Return type:

pyleoclim.core.psds.PSD

Examples

Generate colored noise with a scaling exponent equal to unity, and test the impact of the anti-aliasing filter:

import pyleoclim as pyleo

t, v = pyleo.utils.tsmodel.gen_ts('colored_noise', alpha=1, m=1e5) # m=1e5 leads to aliasing
ts = pyleo.Series(time=t, value=v, label='colored noise', verbose=False)

# without the anti-aliasing filter
fig, ax = ts.spectral(method='mtm').beta_est().plot()

# with the anti-aliasing filter
fig, ax = ts.spectral(method='mtm').anti_alias().beta_est().plot()

References

Kirchner, J. W. Aliasing in 1/f(alpha) noise spectra: origins, consequences, and remedies. Phys Rev E Stat Nonlin Soft Matter Phys 71, 66110 (2005).

beta_est(fmin=None, fmax=None, logf_binning_step='max', verbose=False)[source]

Estimate the scaling exponent (beta) of the PSD

For a power law S(f) ~ f^beta in log-log space, beta is simply the slope.

Parameters:
  • fmin (float, optional) – the minimum frequency edge for beta estimation; the default is the minimum of the frequency vector of the PSD obj

  • fmax (float, optional) – the maximum frequency edge for beta estimation; the default is the maximum of the frequency vector of the PSD obj

  • logf_binning_step (str, {'max', 'first'}) – if ‘max’, then the maximum spacing of log(f) will be used as the binning step if ‘first’, then the 1st spacing of log(f) will be used as the binning step

  • verbose (bool; {True, False}) – If True, will print warning messages if there is any

Returns:

new – New PSD object with the estimated scaling slope information, which is stored as a dictionary that includes:

    • beta: the scaling factor

    • std_err: the one standard deviation error of the scaling factor

    • f_binned: the binned frequency series, used as X for linear regression

    • psd_binned: the binned PSD series, used as Y for linear regression

    • Y_reg: the predicted Y from linear regression, used with f_binned for the slope curve plotting

Return type:

pyleoclim.core.psds.PSD

Examples

Generate fractal noise and verify that its scaling exponent is close to unity

import pyleoclim as pyleo
t, v = pyleo.utils.tsmodel.gen_ts(model='colored_noise')
ts = pyleo.Series(time=t, value= v, label = 'fractal noise, unit slope', verbose=False)
psd = ts.detrend().spectral(method='cwt')

# estimate the scaling slope
psd_beta = psd.beta_est(fmin=1/50, fmax=1/2)

fig, ax = psd_beta.plot(color='tab:blue',beta_kwargs={'color':'tab:red','linewidth':2})

See also

pyleoclim.core.series.Series.spectral

spectral analysis

pyleoclim.utils.spectral.beta_estimation

Estimate the scaling exponent of a power spectral density

pyleoclim.core.psds.PSD.plot

plotting method for PSD objects

copy()[source]

Copy object

plot(in_loglog=True, in_period=True, label=None, xlabel=None, ylabel='PSD', title=None, marker=None, markersize=None, color=None, linestyle=None, linewidth=None, transpose=False, xlim=None, ylim=None, figsize=[10, 4], savefig_settings=None, ax=None, legend=True, lgd_kwargs=None, xticks=None, yticks=None, alpha=None, zorder=None, plot_kwargs=None, signif_clr='red', signif_linestyles=['--', '-.', ':'], signif_linewidth=1, plot_beta=True, beta_kwargs=None)[source]

Plots the PSD estimates and signif level if included

Parameters:
  • in_loglog (bool; {True, False}, optional) – Plot on loglog axis. The default is True.

  • in_period (bool; {True, False}, optional) – Plot the x-axis as periodicity rather than frequency. The default is True.

  • label (str, optional) – label for the series. The default is None.

  • xlabel (str, optional) – Label for the x-axis. The default is None. Will guess based on Series

  • ylabel (str, optional) – Label for the y-axis. The default is ‘PSD’.

  • title (str, optional) – Plot title. The default is None.

  • marker (str, optional) – marker to use. The default is None.

  • markersize (int, optional) – size of the marker. The default is None.

  • color (str, optional) – Line color. The default is None.

  • linestyle (str, optional) – linestyle. The default is None.

  • linewidth (float, optional) – Width of the line. The default is None.

  • transpose (bool; {True, False}, optional) – Plot periodicity on the y-axis. The default is False.

  • xlim (list, optional) – x-axis limits. The default is None.

  • ylim (list, optional) – y-axis limits. The default is None.

  • figsize (list, optional) – Figure size. The default is [10, 4].

  • savefig_settings (dict, optional) –

save settings options; the dictionary of arguments for plt.savefig(). The default is None. Some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • ax (ax, optional) – The matplotlib.Axes object onto which to return the plot. The default is None.

  • legend (bool; {True, False}, optional) – whether to plot the legend. The default is True.

  • lgd_kwargs (dict, optional) – Arguments for the legend. The default is None.

  • xticks (list, optional) – xticks to use. The default is None.

  • yticks (list, optional) – yticks to use. The default is None.

  • alpha (float, optional) – Transparency setting. The default is None.

  • zorder (int, optional) – Order for the plot. The default is None.

  • plot_kwargs (dict, optional) – Other plotting argument. The default is None.

  • signif_clr (str, optional) – Color for the significance line. The default is ‘red’.

  • signif_linestyles (list of str, optional) – Linestyles for significance. The default is ['--', '-.', ':'].

  • signif_linewidth (float, optional) – width of the significance line. The default is 1.

  • plot_beta (bool; {True, False}, optional) – If True and self.beta_est_res is not None, then the scaling slope line will be plotted

  • beta_kwargs (dict, optional) – The visualization keyword arguments for the scaling slope

Return type:

fig, ax

Examples

Generate fractal noise, assess significance against an AR(1) benchmark, and plot:

import pyleoclim as pyleo
import matplotlib.pyplot as plt

t, v = pyleo.utils.tsmodel.gen_ts(model='colored_noise')
ts = pyleo.Series(time = t, value = v, label = 'fractal noise', verbose=False)
tsn = ts.standardize()

psd_sim = tsn.spectral(method='mtm').signif_test(number=20)
psd_sim.plot()
(<Figure size 1000x400 with 1 Axes>,
 <Axes: xlabel='Period [years]', ylabel='PSD'>)

If you add the estimate of the scaling exponent, the line of best fit will be added to the plot, and the estimated exponent to its legend. For instance:

psd_beta = psd_sim.beta_est(fmin=1/100, fmax=1/2)
fig, ax = psd_beta.plot()

See also

pyleoclim.core.series.Series.spectral

spectral analysis

pyleoclim.core.psds.PSD.signif_test

significance testing for PSD objects

pyleoclim.core.psds.PSD.beta_est

scaling exponent estimation for PSD objects

signif_test(method='ar1sim', number=None, seed=None, qs=[0.95], settings=None, scalogram=None)[source]
Parameters:
  • number (int, optional) – Number of surrogate series to generate for significance testing. The default is None.

  • method (str; {'ar1asym','ar1sim'}) – Method to generate surrogates. ‘ar1sim’ uses simulated timeseries with similar persistence; ‘ar1asym’ uses the closed-form solution. The default is ‘ar1sim’.

  • seed (int, optional) – Option to set the seed for reproducibility. The default is None.

  • qs (list, optional) – Significance levels to return. The default is [0.95].

  • settings (dict, optional) – Parameters for the specific significance test. The default is None. Note that the default value for the asymptotic solution is time-average

  • scalogram (pyleoclim.Scalogram object, optional) – Scalogram containing signif_scals exported during significance testing of scalogram. If number is None and signif_scals are present, will use length of scalogram list as number of significance tests

Returns:

new – New PSD object with appropriate significance test

Return type:

pyleoclim.core.psds.PSD

Examples

Compute the spectrum of the Southern Oscillation Index and assess significance against an AR(1) benchmark:

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
psd = soi.standardize().spectral('mtm',settings={'NW':2})
psd_sim = psd.signif_test(number=20)
fig, ax = psd_sim.plot()

By default, this method uses 200 Monte Carlo simulations of an AR(1) process; increasing the number of simulations yields a smoother benchmark at a higher computational cost. You may also obtain and visualize several quantiles at once, e.g. 90% and 95%:

psd_1000 = psd.signif_test(number=100, qs=[0.90, 0.95])
fig, ax = psd_1000.plot()

Another option is to use a closed-form, asymptotic solution for the AR(1) spectrum:

psd_asym = psd.signif_test(method='ar1asym',qs=[0.90, 0.95])
fig, ax = psd_asym.plot()

If significance tests from a comparable scalogram have been saved, they can be passed here to speed up the generation of noise realizations for significance testing. Setting export_scal to True saves the noise realizations generated during significance testing for future use:

scalogram = soi.standardize().wavelet().signif_test(number=20, export_scal=True)

The psd can be calculated by using the previously generated scalogram

psd_scal = soi.standardize().spectral(scalogram=scalogram)

The same scalogram can then be passed to do significance testing. Pyleoclim will dig through the scalogram object to find the saved noise realizations and reuse them flexibly.

fig, ax = psd.signif_test(scalogram=scalogram).plot()

See also

pyleoclim.utils.wavelet.tc_wave_signif

asymptotic significance calculation

pyleoclim.core.psds.MultiplePSD

Object storing several PSDs from different Series or ensemble members in an age model

pyleoclim.core.scalograms.Scalogram

Scalogram object

pyleoclim.core.series.Series.surrogates

Generate surrogates with increasing time axis

pyleoclim.core.series.Series.spectral

Performs spectral analysis on Pyleoclim Series

pyleoclim.core.series.Series.wavelet

Performs wavelet analysis on Pyleoclim Series

MultiplePSD (pyleoclim.MultiplePSD)

class pyleoclim.core.psds.MultiplePSD(psd_list, beta_est_res=None)[source]

MultiplePSD objects store several PSDs from different Series or ensemble members from a posterior distribution (e.g. age model, Bayesian climate reconstruction, etc). This is used extensively for Monte Carlo significance tests.
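
MultiplePSD objects are usually created internally (e.g., by significance tests), but they can also be assembled by hand from a list of PSD objects. A minimal sketch, using two datasets featured elsewhere on this page:

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
mpsd = pyleo.MultiplePSD(psd_list=[soi.spectral(method='mtm'), nino.spectral(method='mtm')])
fig, ax = mpsd.plot()   # overlay the two spectra on the same axes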

Methods

anti_alias([avgs, mute_pbar])

Apply the anti-aliasing filter

beta_est([fmin, fmax, logf_binning_step, ...])

Estimate the scaling exponent of each constituent PSD

copy()

Copy object

plot([figsize, in_loglog, in_period, ...])

Plot multiple PSDs on the same plot

plot_envelope([figsize, qs, in_loglog, ...])

Plot envelope statistics for multiple PSDs

quantiles([qs, lw])

Calculate the quantiles of the significance testing

anti_alias(avgs=2, mute_pbar=False)[source]

Apply the anti-aliasing filter

Parameters:
  • avgs (int) – flag for whether the spectrum is derived from instantaneous point measurements (avgs != 1) OR from measurements averaged over each sampling interval (avgs == 1)

  • mute_pbar (bool; {True,False}) – If True, the progressbar will be muted. Default is False.

Returns:

new – New MultiplePSD object with the spectral aliasing effect alleviated.

Return type:

pyleoclim.core.psds.MultiplePSD

References

Kirchner, J. W. Aliasing in 1/f(alpha) noise spectra: origins, consequences, and remedies. Phys Rev E Stat Nonlin Soft Matter Phys 71, 66110 (2005).

beta_est(fmin=None, fmax=None, logf_binning_step='max', verbose=False)[source]

Estimate the scaling exponent of each constituent PSD

This function calculates the scaling exponent (beta) for each of the PSDs stored in the object. The scaling exponent represents the slope of the spectrum in log-log space.

Parameters:
  • fmin (float) – the minimum frequency edge for beta estimation; the default is the minimum of the frequency vector of the PSD object

  • fmax (float) – the maximum frequency edge for beta estimation; the default is the maximum of the frequency vector of the PSD object

  • logf_binning_step (str; {'max', 'first'}) – if ‘max’, then the maximum spacing of log(f) will be used as the binning step. if ‘first’, then the 1st spacing of log(f) will be used as the binning step.

  • verbose (bool) – If True, will print warning messages if there is any

Returns:

new – New MultiplePSD object with the estimated scaling slope information, which is stored as a dictionary that includes:

    • beta: the scaling factor

    • std_err: the one standard deviation error of the scaling factor

    • f_binned: the binned frequency series, used as X for linear regression

    • psd_binned: the binned PSD series, used as Y for linear regression

    • Y_reg: the predicted Y from linear regression, used with f_binned for the slope curve plotting

Return type:

pyleoclim.core.psds.MultiplePSD

See also

pyleoclim.core.psds.PSD.beta_est

scaling exponent estimation for a single PSD object
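
A minimal sketch (the frequency band is illustrative, following the single-PSD example earlier on this page):

import pyleoclim as pyleo
soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
mpsd = pyleo.MultiplePSD(psd_list=[soi.spectral(method='mtm'), nino.spectral(method='mtm')])
mpsd_beta = mpsd.beta_est(fmin=1/50, fmax=1/2)   # new MultiplePSD carrying per-member scaling-slope estimates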

copy()[source]

Copy object

plot(figsize=[10, 4], in_loglog=True, in_period=True, xlabel=None, ylabel='PSD', title=None, xlim=None, ylim=None, savefig_settings=None, ax=None, xticks=None, yticks=None, legend=True, colors=None, cmap=None, norm=None, plot_kwargs=None, lgd_kwargs=None)[source]

Plot multiple PSDs on the same plot

Parameters:
  • figsize (list, optional) – Figure size. The default is [10, 4].

  • in_loglog (bool, optional) – Whether to plot in loglog. The default is True.

  • in_period (bool, {True, False} optional) – Plots against periods instead of frequencies. The default is True.

  • xlabel (str, optional) – x-axis label. The default is None.

  • ylabel (str, optional) – y-axis label. The default is ‘PSD’.

  • title (str, optional) – Title for the figure. The default is None.

  • xlim (list, optional) – Limits for the x-axis. The default is None.

  • ylim (list, optional) – limits for the y-axis. The default is None.

  • colors (one Python-supported color code (a hex string or an RGBA tuple), or a list of them) – Colors for plotting. If None, the plotting will cycle through the ‘tab10’ colormap; if only one color is specified, all curves will be plotted with that single color; if a list of colors is specified, the plotting will cycle through that list.

  • cmap (str) – The colormap to use when “colors” is None.

  • norm (matplotlib.colors.Normalize like) – The normalization for the colormap. If None, a linear normalization will be used.

  • savefig_settings (dict, optional) –

the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • ax (matplotlib axis, optional) – The matplotlib axis object on which to return the figure. The default is None.

  • xticks (list, optional) – x-ticks label. The default is None.

  • yticks (list, optional) – y-ticks label. The default is None.

  • legend (bool, optional) – Whether to plot the legend. The default is True.

  • plot_kwargs (dictionary, optional) – Parameters for plot function. The default is None.

  • lgd_kwargs (dictionary, optional) – Parameters for legend. The default is None.

Returns:

  • fig (matplotlib.pyplot.figure)

  • ax (matplotlib.pyplot.axis)

See also

pyleoclim.core.psds.PSD.plot

plotting method for PSD objects

plot_envelope(figsize=[10, 4], qs=[0.025, 0.5, 0.975], in_loglog=True, in_period=True, xlabel=None, ylabel='PSD', title=None, xlim=None, ylim=None, savefig_settings=None, ax=None, xticks=None, yticks=None, plot_legend=True, curve_clr='#d9544d', curve_lw=3, shade_clr='#d9544d', shade_alpha=0.3, shade_label=None, lgd_kwargs=None, members_plot_num=10, members_alpha=0.3, members_lw=1, seed=None)[source]

Plot envelope statistics for multiple PSDs

This function plots envelope statistics from multiple PSDs. This is especially useful when the PSDs come from an ensemble of possible solutions (e.g., age ensembles)

Parameters:
  • figsize (list, optional) – The figure size. The default is [10, 4].

  • qs (list, optional) – The significance levels to consider. The default is [0.025, 0.5, 0.975].

  • in_loglog (bool, optional) – Plot in log space. The default is True.

  • in_period (bool, optional) – Whether to plot periodicity instead of frequency. The default is True.

  • xlabel (str, optional) – x-axis label. The default is None.

  • ylabel (str, optional) – y-axis label. The default is ‘PSD’.

  • title (str, optional) – Plot title. The default is None.

  • xlim (list, optional) – x-axis limits. The default is None.

  • ylim (list, optional) – y-axis limits. The default is None.

  • savefig_settings (dict, optional) –

the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}. The default is None.

  • ax (matplotlib.ax, optional) – Matplotlib axis on which to return the plot. The default is None.

  • xticks (list, optional) – xticks label. The default is None.

  • yticks (list, optional) – yticks label. The default is None.

  • plot_legend (bool, optional) – Whether to plot the legend. The default is True.

  • curve_clr (str, optional) – Color of the main PSD. The default is sns.xkcd_rgb[‘pale red’].

  • curve_lw (float, optional) – Width of the main PSD line. The default is 3.

  • shade_clr (str, optional) – Color of the shaded envelope. The default is sns.xkcd_rgb[‘pale red’].

  • shade_alpha (float, optional) – Transparency on the envelope. The default is 0.3.

  • shade_label (str, optional) – Label for the envelope. The default is None.

  • lgd_kwargs (dict, optional) – Parameters for the legend. The default is None.

  • members_plot_num (int, optional) – Number of individual members to plot. The default is 10.

  • members_alpha (float, optional) – Transparency of the lines representing the multiple members. The default is 0.3.

  • members_lw (float, optional) – Width of the lines representing the multiple members. The default is 1.

  • seed (int, optional) – Set the seed for random number generator. Useful for reproducibility. The default is None.

Returns:

  • fig (matplotlib.pyplot.figure)

  • ax (matplotlib.pyplot.axis)

See also

pyleoclim.core.psds.PSD.plot

plotting method for PSD objects

pyleoclim.core.ensembleseries.EnsembleSeries.plot_envelope

envelope plot for ensembles

Examples

import pyleoclim as pyleo
import numpy as np
nn = 30 # number of noise realizations
nt = 500 # timeseries length
psds = []

time, signal = pyleo.utils.gen_ts(model='colored_noise',nt=nt,alpha=1.0)

ts = pyleo.Series(time=time, value = signal, verbose=False).standardize()
noise = np.random.randn(nt,nn)

for idx in range(nn):  # noise
    ts = pyleo.Series(time=time, value=signal+10*noise[:,idx], verbose=False)
    psd = ts.spectral()
    psds.append(psd)

mPSD = pyleo.MultiplePSD(psds)

fig, ax = mPSD.plot_envelope()
quantiles(qs=[0.05, 0.5, 0.95], lw=[0.5, 1.5, 0.5])[source]

Calculate the quantiles of the significance testing

Parameters:
  • qs (list, optional) – List of quantiles to consider for the calculation. The default is [0.05, 0.5, 0.95].

  • lw (list, optional) – Linewidth to use for plotting each level. Should be the same length as qs. The default is [0.5, 1.5, 0.5].

Raises:

ValueError – Frequency axis not consistent across the PSD list!

Returns:

psds

Return type:

pyleoclim.core.psds.MultiplePSD
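
A minimal sketch, reusing the mPSD object from the plot_envelope example above (all members share the same frequency axis, as this method requires):

psd_q = mPSD.quantiles(qs=[0.05, 0.5, 0.95])   # MultiplePSD holding the three quantile curves
fig, ax = psd_q.plot()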

Scalogram (pyleoclim.Scalogram)

class pyleoclim.core.scalograms.Scalogram(frequency, scale, time, amplitude, coi=None, label=None, Neff_threshold=3, wwz_Neffs=None, timeseries=None, wave_method=None, wave_args=None, signif_qs=None, signif_method=None, freq_method=None, freq_kwargs=None, scale_unit=None, time_label=None, signif_scals=None, qs=None)[source]

The Scalogram class is analogous to PSD, but for wavelet spectra (scalograms).

Methods

copy()

Copy object

plot([variable, in_scale, xlabel, ylabel, ...])

Plot the scalogram

signif_test([method, number, seed, qs, ...])

Significance test for scalograms

copy()[source]

Copy object

Returns:

scal – The copied version of the pyleoclim.Scalogram object

Return type:

pyleoclim.core.scalograms.Scalogram

Examples

import pyleoclim as pyleo
series = pyleo.utils.load_dataset('SOI')
scalogram = series.wavelet()
scalogram_copy = scalogram.copy()
plot(variable='amplitude', in_scale=True, xlabel=None, ylabel=None, title=None, ylim=None, xlim=None, yticks=None, figsize=[10, 8], signif_clr='white', signif_linestyles='-', signif_linewidths=1, contourf_style={}, cbar_style={}, plot_cb=True, savefig_settings={}, ax=None, signif_thresh=0.95)[source]

Plot the scalogram

Parameters:
  • in_scale (bool, optional) – Plot in scale space instead of frequency space. The default is True.

  • variable ({'amplitude','power'}) – Whether to plot the amplitude or power. Default is amplitude

  • xlabel (str, optional) – Label for the x-axis. The default is None.

  • ylabel (str, optional) – Label for the y-axis. The default is None.

  • title (str, optional) – Title for the figure. The default is None.

  • ylim (list, optional) – Limits for the y-axis. The default is None.

  • xlim (list, optional) – Limits for the x-axis. The default is None.

  • yticks (list, optional) – yticks label. The default is None.

  • figsize (list, optional) – Figure size The default is [10, 8].

  • signif_clr (str, optional) – Color of the significance line. The default is ‘white’.

  • signif_thresh (float in [0, 1]) – Significance threshold. Default is 0.95. If this quantile is not found in the qs field of the Scalogram object, the closest quantile will be picked.

  • signif_linestyles (str, optional) – Linestyle of the significance line. The default is ‘-‘.

  • signif_linewidths (float, optional) – Width for the significance line. The default is 1.

  • contourf_style (dict, optional) – Arguments for the contour plot. The default is {}.

  • cbar_style (dict, optional) – Arguments for the colorbar. The default is {}.

  • savefig_settings (dict, optional) – saving options for the figure. The default is {}.

  • ax (ax, optional) – Matplotlib Axis on which to return the figure. The default is None.

Returns:

fig, ax

See also

pyleoclim.core.series.Series.wavelet

Wavelet analysis

pyleoclim.utils.plotting.savefig

Saving figure in Pyleoclim

Examples

import pyleoclim as pyleo
ts = pyleo.utils.load_dataset('SOI')
scalogram = ts.wavelet()

fig,ax = scalogram.plot()
signif_test(method='ar1sim', number=None, seed=None, qs=[0.95], settings=None, export_scal=False)[source]

Significance test for scalograms

Parameters:
  • method ({'ar1asym', 'ar1sim'}) – Method to use to generate the surrogates. ar1sim uses simulated timeseries with similar persistence. ar1asym represents the theoretical, closed-form solution. The default is ar1sim

  • number (int) – Number of surrogates to generate for significance analysis based on simulations. The default is 200.

  • seed (int, optional) – Set the seed for the random number generator. Useful for reproducibility The default is None.

  • qs (list, optional) – Significance levels to consider. The default is [0.95].

  • settings (dict, optional) – Parameters for the model. The default is None.

  • export_scal (bool; {True,False}) – Whether or not to export the scalograms used in the noise realizations. Note: For the wwz method, the scalograms used for wavelet analysis are slightly different than those used for spectral analysis (different decay constant). As such, this functionality should be used only to expedite exploratory analysis.

Raises:

ValueError – qs should be a list with at least one value.

Returns:

new – A new Scalogram object with the significance level

Return type:

pyleoclim.core.scalograms.Scalogram

See also

pyleoclim.core.series.Series.wavelet

Wavelet analysis

pyleoclim.core.scalograms.MultipleScalogram

MultipleScalogram object

pyleoclim.utils.wavelet.tc_wave_signif

Asymptotic significance calculation

Examples

Generating a scalogram, running significance tests, and saving the output for future use in generating PSD objects or in summary_plot()

import pyleoclim as pyleo
ts = pyleo.utils.load_dataset('SOI')

By setting export_scal to True, the noise realizations used to generate the significance test will be saved. These come in handy for generating summary plots and for running significance tests on spectral objects.

scalogram = ts.wavelet().signif_test(number=2, export_scal=True)

MultipleScalogram (pyleoclim.MultipleScalogram)

class pyleoclim.core.scalograms.MultipleScalogram(scalogram_list)[source]

MultipleScalogram objects are used to store the results of significance testing for wavelet analysis

Methods

copy()

Copy the object

quantiles([qs])

Calculate quantiles

copy()[source]

Copy the object

See also

pyleoclim.core.scalograms.Scalogram.copy

Scalogram object copy

quantiles(qs=[0.05, 0.5, 0.95])[source]

Calculate quantiles

Parameters:

qs (list, optional) – List of quantiles to consider for the calculation. The default is [0.05, 0.5, 0.95].

Raises:
  • ValueError – Frequency axis not consistent across the PSD list!

  • ValueError – Time axis not consistent across the scalogram list!

Returns:

scals

Return type:

pyleoclim.core.scalograms.MultipleScalogram
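
MultipleScalogram objects are normally built internally during significance testing, so the sketch below assembles a contrived ensemble purely for illustration (three noisy copies of the SOI series, which guarantees consistent time and frequency axes):

import numpy as np
import pyleoclim as pyleo

soi = pyleo.utils.load_dataset('SOI')
scal_list = []
for _ in range(3):   # hypothetical ensemble members: the same series plus small noise
    noisy = pyleo.Series(time=soi.time, value=soi.value + 0.1 * np.random.randn(len(soi.value)),
                         verbose=False)
    scal_list.append(noisy.wavelet())
mscal = pyleo.MultipleScalogram(scalogram_list=scal_list)
scal_q = mscal.quantiles(qs=[0.05, 0.5, 0.95])   # MultipleScalogram holding the three quantile scalograms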

Coherence (pyleoclim.Coherence)

class pyleoclim.core.coherence.Coherence(frequency, scale, time, wtc, xwt, phase, coi=None, wave_method=None, wave_args=None, timeseries1=None, timeseries2=None, signif_qs=None, signif_method=None, qs=None, freq_method=None, freq_kwargs=None, Neff_threshold=3, scale_unit=None, time_label=None)[source]

Coherence object, meant to receive the WTC and XWT part of Series.wavelet_coherence()

See also

pyleoclim.core.series.Series.wavelet_coherence

Wavelet coherence method

Methods

copy()

Copy object

dashboard([title, figsize, overlap, ...])

Cross-wavelet dashboard, including the two series, their WTC and XWT.

phase_stats(scales[, number, level])

Estimate phase angle statistics of a Coherence object

plot([var, xlabel, ylabel, title, figsize, ...])

Plot the cross-wavelet results

signif_test([number, method, seed, qs, ...])

Significance testing for Coherence objects

copy()[source]

Copy object

dashboard(title=None, figsize=[9, 12], overlap=True, phase_style={}, line_colors=['tab:blue', 'tab:orange'], savefig_settings={}, ts_plot_kwargs=None, wavelet_plot_kwargs=None)[source]

Cross-wavelet dashboard, including the two series, their WTC and XWT.

Note: this design balances many considerations, and is not easily customizable.

Parameters:
  • title (str, optional) – Title of the plot. The default is None.

  • figsize (list, optional) – Figure size. The default is [9, 12], as this is an information-rich figure.

  • overlap (boolean, optional) – whether to restrict the plot to the period of overlap between the series. Defaults to True

  • phase_style (dict, optional) – Arguments for the phase arrows. The default is {}. It includes:

    • ‘pt’: the default threshold above which phase arrows will be plotted

    • ‘skip_x’: the number of points to skip between phase arrows along the x-axis

    • ‘skip_y’: the number of points to skip between phase arrows along the y-axis

    • ‘scale’: number of data units per arrow length unit (see matplotlib.pyplot.quiver)

    • ‘width’: shaft width in arrow units (see matplotlib.pyplot.quiver)

    • ‘color’: arrow color (see matplotlib.pyplot.quiver)

  • line_colors (list, optional) – Colors for the 2 traces For nomenclature, see https://matplotlib.org/stable/gallery/color/named_colors.html

  • savefig_settings (dict, optional) –

The default is {}. The dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • ts_plot_kwargs (dict) – arguments to be passed to the timeseries subplot, see pyleoclim.core.series.Series.plot for details

  • wavelet_plot_kwargs (dict) – arguments to be passed to the contour subplots (XWT and WTC), [see pyleoclim.core.coherence.Coherence.plot for details]

Return type:

fig, ax

See also

pyleoclim.core.coherence.Coherence.plot

creates a coherence plot

pyleoclim.core.series.Series.wavelet_coherence

computes the coherence between two timeseries.

pyleoclim.core.series.Series.plot

plots a timeseries

matplotlib.pyplot.quiver

makes a quiver plot

Examples

Calculate the coherence of NINO3 and All India Rainfall and plot it as a dashboard:

import pyleoclim as pyleo
ts_air = pyleo.utils.load_dataset('AIR')
ts_nino = pyleo.utils.load_dataset('NINO3')

coh = ts_air.wavelet_coherence(ts_nino)
coh_sig = coh.signif_test(number=10)

coh_sig.dashboard()
(<Figure size 900x1200 with 6 Axes>,
 {'ts1': <Axes: ylabel='AIR [mm/month]'>,
  'ts2': <Axes: xlabel='Time [year C.E.]', ylabel='NINO3 [$^{\\circ}$C]'>,
  'wtc': <Axes: ylabel='Scale [yrs]'>,
  'xwt': <Axes: xlabel='Time [year C.E.]', ylabel='Scale [yrs]'>})

You may customize colors like so:

coh_sig.dashboard(line_colors=['teal','gold'])
(<Figure size 900x1200 with 6 Axes>,
 {'ts1': <Axes: ylabel='AIR [mm/month]'>,
  'ts2': <Axes: xlabel='Time [year C.E.]', ylabel='NINO3 [$^{\\circ}$C]'>,
  'wtc': <Axes: ylabel='Scale [yrs]'>,
  'xwt': <Axes: xlabel='Time [year C.E.]', ylabel='Scale [yrs]'>})

To export the figure, use savefig_settings:

coh_sig.dashboard(savefig_settings={'path':'./coh_dash.png','dpi':300})
Figure saved at: "coh_dash.png"
(<Figure size 900x1200 with 6 Axes>,
 {'ts1': <Axes: ylabel='AIR [mm/month]'>,
  'ts2': <Axes: xlabel='Time [year C.E.]', ylabel='NINO3 [$^{\\circ}$C]'>,
  'wtc': <Axes: ylabel='Scale [yrs]'>,
  'xwt': <Axes: xlabel='Time [year C.E.]', ylabel='Scale [yrs]'>})
phase_stats(scales, number=1000, level=0.05)[source]

Estimate phase angle statistics of a Coherence object

As per [1], the strength (consistency) of a phase relationship is assessed using:

  • sigma, the circular standard deviation

  • kappa, an estimate of the Von Mises distribution’s concentration parameter.

    It is a reciprocal measure of dispersion, so 1/kappa is analogous to the variance [3].

Because of inherent persistence of geophysical signals and of the reproducing kernel of the continuous wavelet transform [3], phase statistics are assessed relative to an AR(1) model fit to the angle deviations observed at the requested scale(s).

Specifically, if number is specified, the method simulates number Monte Carlo realizations of an AR(1) process fit to fluctuations around the mean angle. This ensemble is used to obtain the confidence limits: sigma_lo (level quantile) and kappa_hi (1-level quantile). These correspond to 1-tailed tests of the strength of the relationship.

Parameters:
  • scales (float or list of float) – scale(s) at which to evaluate the phase angle

  • number (int, optional) – number of AR(1) series to create for significance testing. The default is 1000.

  • level (float, optional) – significance level against which to gauge sigma and kappa. default: 0.05

Returns:

result – contains angle_mean (the mean angle for those scales), sigma (the circular standard deviation), kappa, sigma_lo (alpha-level quantile for sigma) and kappa_hi, the (1-alpha)-level quantile for kappa.

Return type:

dict

See also

pyleoclim.core.series.Series.wavelet_coherence

Wavelet coherence

pyleoclim.core.scalograms.Scalogram

Scalogram object

pyleoclim.core.scalograms.MultipleScalogram

Multiple Scalogram object

pyleoclim.core.coherence.Coherence.plot

plotting method for Coherence objects

pyleoclim.utils.wavelet.angle_sig

significance of phase angle statistics

pyleoclim.utils.wavelet.angle_stats

phase angle statistics

References

[1] Grinsted, A., J. C. Moore, and S. Jevrejeva (2004), Application of the cross wavelet transform and wavelet coherence to geophysical time series, Nonlinear Processes in Geophysics, 11, 561–566.

[2] Huber, R., Dutra, L. V., & da Costa Freitas, C. (2001). SAR interferogram phase filtering based on the Von Mises distribution. In IGARSS 2001. Scanning the Present and Resolving the Future. Proceedings. IEEE 2001 International Geoscience and Remote Sensing Symposium (Cat. No. 01CH37217) (Vol. 6, pp. 2816-2818). IEEE.

[3] Farge, M. and Schneider, K. (2006): Wavelets: application to turbulence Encyclopedia of Mathematical Physics (Eds. J.-P. Françoise, G. Naber and T.S. Tsun) pp 408-420.

Examples

Calculate the phase angle between NINO3 and All India Rainfall at 5y scales:

import pyleoclim as pyleo
ts_air = pyleo.utils.load_dataset('AIR')
ts_nino = pyleo.utils.load_dataset('NINO3')
coh = ts_air.wavelet_coherence(ts_nino)
coh.phase_stats(scales=5)
Results(mean_angle=-2.681280000025798, kappa=3.5019848885758718, sigma=0.5853785320215593, kappa_hi=0.6826463639182965, sigma_lo=1.5033821091007913)

One may also obtain phase angle statistics over an interval, like the 2-8y ENSO band:

import numpy as np
phase = coh.phase_stats(scales=[2,8])
print("The mean angle is {:4.2f}°".format(phase.mean_angle/np.pi*180))
print(phase)
The mean angle is -154.37°
Results(mean_angle=-2.6942957845112434, kappa=3.35491229728558, sigma=0.6019124449170709, kappa_hi=0.48079692824666576, sigma_lo=1.7050596251642216)

From this example, one diagnoses a strong anti-phased relationship in the ENSO band, with high von Mises concentration (kappa ~ 3.35 >> kappa_hi) and low circular dispersion (sigma ~ 0.6 << sigma_lo). This would be strong evidence of a consistent anti-phasing between NINO3 and AIR at those scales.

plot(var='wtc', xlabel=None, ylabel=None, title='auto', figsize=[10, 8], ylim=None, xlim=None, in_scale=True, yticks=None, contourf_style={}, phase_style={}, cbar_style={}, savefig_settings={}, ax=None, signif_clr='white', signif_linestyles='-', signif_linewidths=1, signif_thresh=0.95, under_clr='ivory', over_clr='black', bad_clr='dimgray')[source]

Plot the cross-wavelet results

Parameters:
  • var (str {'wtc', 'xwt'}) – variable to be plotted as color field. Default: ‘wtc’, the wavelet transform coherency. ‘xwt’ plots the cross-wavelet transform instead.

  • xlabel (str, optional) – x-axis label. The default is None.

  • ylabel (str, optional) – y-axis label. The default is None.

  • title (str, optional) – Title of the plot. The default is ‘auto’, where it is made from object metadata. To mute, pass title = None.

  • figsize (list, optional) – Figure size. The default is [10, 8].

  • ylim (list, optional) – y-axis limits. The default is None.

  • xlim (list, optional) – x-axis limits. The default is None.

  • in_scale (bool, optional) – Plots scales instead of frequencies. The default is True.

  • yticks (list, optional) – y-ticks label. The default is None.

  • contourf_style (dict, optional) – Arguments for the contour plot. The default is {}.

  • phase_style (dict, optional) – Arguments for the phase arrows. The default is {}. It includes:

    • ‘pt’: the default threshold above which phase arrows will be plotted

    • ‘skip_x’: the number of points to skip between phase arrows along the x-axis

    • ‘skip_y’: the number of points to skip between phase arrows along the y-axis

    • ‘scale’: number of data units per arrow length unit (see matplotlib.pyplot.quiver)

    • ‘width’: shaft width in arrow units (see matplotlib.pyplot.quiver)

    • ‘color’: arrow color (see matplotlib.pyplot.quiver)

  • cbar_style (dict, optional) – Arguments for the color bar. The default is {}.

  • savefig_settings (dict, optional) –

The default is {}. The dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • ax (ax, optional) – Matplotlib axis on which to return the figure. The default is None.

  • signif_thresh (float in [0, 1]) – Significance threshold. Default is 0.95. If this quantile is not found in the qs field of the Coherence object, the closest quantile will be picked.

  • signif_clr (str, optional) – Color of the significance line. The default is ‘white’.

  • signif_linestyles (str, optional) – Style of the significance line. The default is ‘-‘.

  • signif_linewidths (float, optional) – Width of the significance line. The default is 1.

  • under_clr (str, optional) – Color for under 0. The default is ‘ivory’.

  • over_clr (str, optional) – Color for over 1. The default is ‘black’.

  • bad_clr (str, optional) – Color for missing values. The default is ‘dimgray’.

Return type:

fig, ax

See also

pyleoclim.core.coherence.Coherence.dashboard

plots a dashboard showing the coherence and the cross-wavelet transform.

pyleoclim.core.series.Series.wavelet_coherence

computes the coherence from two timeseries.

matplotlib.pyplot.quiver

quiver plot

Examples

Calculate the wavelet coherence of NINO3 and All India Rainfall and plot it:

import pyleoclim as pyleo
ts_air = pyleo.utils.load_dataset('AIR')
ts_nino = pyleo.utils.load_dataset('NINO3')

coh = ts_air.wavelet_coherence(ts_nino)
coh.plot()

Establish significance against an AR(1) benchmark:

coh_sig = coh.signif_test(number=20, qs=[.9,.95,.99])
coh_sig.plot()

Note that specifying 3 significance thresholds takes no more time, as the quantiles are simply estimated from the same ensemble. By default, the plot function looks for the closest quantile to 0.95, but this is easy to adjust, e.g. for the 99th percentile:

coh_sig.plot(signif_thresh = 0.99)

By default, the function plots the wavelet transform coherency (WTC), which quantifies where two timeseries exhibit similar behavior in time-frequency space, regardless of whether this corresponds to regions of high common power. To visualize the latter, you want to plot the cross-wavelet transform (XWT) instead, like so:

coh_sig.plot(var='xwt')
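The phase arrows can also be customized through the phase_style dictionary documented above. The snippet below is a minimal sketch, not part of the original example set: the keys follow the phase_style documentation, while the particular values are purely illustrative.

import pyleoclim as pyleo
ts_air = pyleo.utils.load_dataset('AIR')
ts_nino = pyleo.utils.load_dataset('NINO3')
coh = ts_air.wavelet_coherence(ts_nino)

# thin out the phase arrows and change their color
# ('skip_x', 'skip_y' and 'color' are documented under phase_style above;
# the chosen values are illustrative)
coh.plot(phase_style={'skip_x': 8, 'skip_y': 2, 'color': 'crimson'})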
signif_test(number=200, method='ar1sim', seed=None, qs=[0.95], settings=None, mute_pbar=False)[source]

Significance testing for Coherence objects

The method obtains the quantiles qs of the distribution of coherence between number pairs of Monte Carlo surrogates of a process that resembles the original series. AR(1) simulations (‘ar1sim’) and phase-randomized surrogates (‘phaseran’) are supported (see the method parameter).

Parameters:
  • number (int, optional) – Number of surrogate series to create for significance testing. The default is 200.

  • method ({'ar1sim','phaseran'}, optional) – Method through which to generate the surrogate series. The default is ‘ar1sim’.

  • seed (int, optional) – Fixes the seed for NumPy’s random number generator. Useful for reproducibility. The default is None, so fresh, unpredictable entropy will be pulled from the operating system.

  • qs (list, optional) – Significance levels to return. The default is [0.95].

  • settings (dict, optional) – Parameters for surrogate model. The default is None.

  • mute_pbar (bool, optional) – Mute the progress bar. The default is False.

Returns:

new – original Coherence object augmented with the significance levels signif_qs, a list containing the following MultipleScalogram objects:

  • 0: MultipleScalogram for the wavelet transform coherency (WTC)

  • 1: MultipleScalogram for the cross-wavelet transform (XWT)

Each object contains as many Scalogram objects as qs contains values. A short sketch at the end of the Examples below shows how to access them.

Return type:

pyleoclim.core.coherence.Coherence

See also

pyleoclim.core.series.Series.wavelet_coherence

Wavelet coherence

pyleoclim.core.scalograms.Scalogram

Scalogram object

pyleoclim.core.scalograms.MultipleScalogram

Multiple Scalogram object

pyleoclim.core.coherence.Coherence.plot

plotting method for Coherence objects

Examples

Calculate the coherence of NINO3 and All India Rainfall and assess significance:

import pyleoclim as pyleo
ts_air = pyleo.utils.load_dataset('AIR')
ts_nino = pyleo.utils.load_dataset('NINO3')

coh = ts_air.wavelet_coherence(ts_nino)
coh_sig = coh.signif_test(number=20)
coh_sig.plot()

By default, significance is assessed against a 95% benchmark derived from an AR(1) process fit to the data, using 200 Monte Carlo simulations. To customize, one can increase the number of simulations (more reliable, but slower) and specify different quantile levels.

coh_sig2 = coh.signif_test(number=100, qs=[.9,.95,.99])
coh_sig2.plot()

The plot() function will represent the 95% level as contours by default. If you need to show 99%, say, use the signif_thresh argument:

coh_sig2.plot(signif_thresh=0.99)

Note that if the 99% quantile is not present, the plot method will look for the closest match, but lines are always labeled appropriately. For reproducibility purposes, it may be good to specify the (pseudo)random number generator’s seed, like so:

coh_sig27 = coh.signif_test(number=20, seed=27)

This will generate exactly the same set of draws from the (pseudo)random number generator at every execution, which may be important for marginal features in small ensembles. In general, however, we recommend increasing the number of draws to check that features are robust.

One can also specify a different method to obtain surrogates, e.g. phase randomization:

coh.signif_test(method='phaseran').plot()
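As noted in the Returns section, the significance levels are stored in signif_qs as a list of MultipleScalogram objects (index 0 for the WTC, index 1 for the XWT). The following is a hedged sketch of how one might inspect them; the scalogram_list attribute name is an assumption, not taken from this docstring.

import pyleoclim as pyleo
ts_air = pyleo.utils.load_dataset('AIR')
ts_nino = pyleo.utils.load_dataset('NINO3')
coh = ts_air.wavelet_coherence(ts_nino)

coh_sig = coh.signif_test(number=20, qs=[.9, .95, .99])
wtc_levels = coh_sig.signif_qs[0]  # MultipleScalogram of WTC quantiles
xwt_levels = coh_sig.signif_qs[1]  # MultipleScalogram of XWT quantiles
# one Scalogram per entry in qs (attribute name assumed):
print(len(wtc_levels.scalogram_list))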

Corr (pyleoclim.Corr)

class pyleoclim.core.corr.Corr(r, p, signif, alpha, p_fmt_td=0.01, p_fmt_style='exp')[source]

The object for correlation results, used to format the printed output

Parameters:
  • r (float) – the correlation coefficient

  • p (float) – the p-value

  • p_fmt_td (float) – the threshold for p-value formatting (0.01 by default, i.e., if p<0.01, will print “< 0.01” instead of “0”)

  • p_fmt_style (str) – the style for p-value formatting (exponential notation by default)

  • signif (bool) – whether the correlation is significant at level alpha

  • alpha (float) – The significance level (0.05 by default)

Methods

copy()

Copy object

copy()[source]

Copy object
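Examples

A Corr object is typically returned by Series.correlation rather than built by hand. The sketch below is illustrative and not part of the original docstring; it assumes the two series can be aligned and that Series.correlation returns a Corr object, as in standard Pyleoclim usage.

import pyleoclim as pyleo
ts_air = pyleo.utils.load_dataset('AIR')
ts_nino = pyleo.utils.load_dataset('NINO3')

corr_res = ts_air.correlation(ts_nino)  # assumed to return a Corr object
print(corr_res)                         # formatted r, p-value and significance
print(corr_res.r, corr_res.p, corr_res.signif)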

CorrEns (pyleoclim.CorrEns)

class pyleoclim.core.correns.CorrEns(r, p, signif, signif_fdr, alpha, p_fmt_td=0.01, p_fmt_style='exp')[source]

CorrEns objects store the result of an ensemble correlation calculation between timeseries and/or ensemble of timeseries. The class enables a print and plot function to easily visualize the result.

Parameters:
  • r (list) – the list of correlation coefficients

  • p (list) – the list of p-values

  • p_fmt_td (float) – the threshold for p-value formatting (0.01 by default, i.e., if p<0.01, will print “< 0.01” instead of “0”)

  • p_fmt_style (str) – the style for p-value formatting (exponential notation by default)

  • signif (list) – the list of significance without FDR

  • signif_fdr (list) – the list of significance with FDR

  • alpha (float) – The significance level

See also

pyleoclim.utils.correlation.corr_sig

Correlation function

pyleoclim.utils.correlation.fdr

FDR (False Discovery Rate) function

Methods

copy()

Copy object

plot([figsize, title, ax, savefig_settings, ...])

Plot the distribution of correlation values as a histogram

copy()[source]

Copy object

plot(figsize=[4, 4], title=None, ax=None, savefig_settings=None, hist_kwargs=None, title_kwargs=None, xlim=None, alpha=0.8, multiple='layer', clr_insignif='silver', clr_signif='#029386', clr_signif_fdr='darkorange', clr_percentile='#ff796c')[source]

Plot the distribution of correlation values as a histogram

Uses seaborn’s histplot

Color-coding is used to indicate significance, with or without applying the False Discovery Rate (FDR) method.

Parameters:
  • figsize (list, optional) – The figure size. The default is [4, 4].

  • title (str, optional) – Plot title. The default is None.

  • multiple (str, optional) – Approach to organizing the 3 different histograms on the plot. Possible values: “layer” [default], “dodge”, “stack”, “fill”

  • alpha (float in [0, 1]) – transparency parameter for histogram bars. Default: 0.8

  • savefig_settings (dict) – the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or new path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • hist_kwargs (dict) – additional keyword arguments for sns.histplot() [experimental]

  • title_kwargs (dict) – the keyword arguments for ax.set_title()

  • ax (matplotlib.axis, optional) – the axis object from matplotlib. See [matplotlib.axes](https://matplotlib.org/api/axes_api.html) for details.

  • xlim (list, optional) – x-axis limits. The default is None.
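Examples

A CorrEns object typically arises from correlating an EnsembleSeries against a single Series. The sketch below is illustrative rather than taken from the original docstring: it builds a toy ensemble of noisy NINO3 copies using the documented Series constructor, and assumes EnsembleSeries.correlation returns a CorrEns object.

import numpy as np
import pyleoclim as pyleo

ts_air = pyleo.utils.load_dataset('AIR')
ts_nino = pyleo.utils.load_dataset('NINO3')

# toy ensemble: 20 noisy copies of NINO3 (noise level is arbitrary)
rng = np.random.default_rng(0)
members = []
for i in range(20):
    noisy = ts_nino.value + rng.normal(0, 0.2, size=len(ts_nino.value))
    members.append(pyleo.Series(time=ts_nino.time, value=noisy,
                                time_unit=ts_nino.time_unit,
                                value_name=ts_nino.value_name,
                                label=f'NINO3 + noise {i}', verbose=False))
ens = pyleo.EnsembleSeries(members)

corr_ens = ens.correlation(ts_air)  # assumed to return a CorrEns object
print(corr_ens)                     # formatted summary
corr_ens.plot()                     # histogram color-coded by significance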

MultivarDecomp (pyleoclim.MultivariateDecomp)

class pyleoclim.core.multivardecomp.MultivariateDecomp(name, eigvals, eigvecs, pctvar, pcs, neff, orig)[source]

Class to hold the results of multivariate decompositions; applies to pca(), mcpca(), and mssa()

Parameters:
  • time (1d array) – the common time axis

  • name (str) – name of the dataset/analysis to use in plots

  • eigvals (1d array) – vector of eigenvalues from the decomposition

  • eigvecs (2d array) – array of eigenvectors from the decomposition (e.g. EOFs)

  • pcs (1d array) – array containing the temporal expansion coefficients (e.g. “principal components” in the climate lore)

  • pctvar (float) – array of pct variance accounted for by each mode

  • orig (MultipleSeries, or MultipleGeoSeries object) – original data, on a common time axis

  • neff (float) – scalar representing the effective sample size of the leading mode

Methods

modeplot([index, figsize, fig, ...])

Dashboard visualizing the properties of a given mode, including:

screeplot([figsize, uq, title, ax, ...])

Plot the eigenvalue spectrum with uncertainties

modeplot(index=0, figsize=[8, 8], fig=None, savefig_settings=None, gs=None, title=None, title_kwargs=None, spec_method='mtm', cmap=None, hue='EOF', marker='archiveType', size=None, scatter_kwargs=None, flip=False, map_kwargs=None, gridspec_kwargs=None)[source]
Dashboard visualizing the properties of a given mode, including:
  1. The temporal coefficient (PC or similar)

  2. its spectrum

  3. The loadings (EOF or similar), possibly geolocated. If the object does not have geolocation information, a spaghetti plot of the standardized series is displayed.

Parameters:
  • index (int) – the (0-based) index of the mode to visualize. Default is 0, corresponding to the first mode.

  • figsize (list, optional) – The figure size. The default is [8, 8].

  • savefig_settings (dict) – the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • title (str, optional) – text for figure title

  • title_kwargs (dict) – the keyword arguments for ax.set_title()

  • gs (matplotlib.gridspec object, optional) – Requires at least two rows and two columns:

    • top row, left: timeseries of the principal component

    • top row, right: PSD

    • bottom row: spaghetti plot or map

    See [matplotlib.gridspec.GridSpec](https://matplotlib.org/stable/tutorials/intermediate/gridspec.html) for details.

  • gridspec_kwargs (dict, optional) – Dictionary with custom gridspec values:

    • wspace changes space between columns (default: wspace=0.05)

    • hspace changes space between rows (default: hspace=0.03)

    • width_ratios: relative width of each column (default: width_ratios=[5,1,3], where the middle column serves as a spacer)

    • height_ratios: relative height of each row (default: height_ratios=[2,1,5], where the middle row serves as a spacer)

  • spec_method (str, optional) – The name of the spectral method to be applied to the PC. Default: ‘mtm’. Note that the data are evenly spaced, so any spectral method that assumes even spacing is applicable here: ‘mtm’, ‘welch’, ‘periodogram’. ‘wwz’ is relevant if scaling exponents need to be estimated, but is ill-advised otherwise, as it is very slow.

  • cmap (str, optional) – colormap name for the loadings (https://matplotlib.org/stable/tutorials/colors/colormaps.html). If ‘hue’ is specified, it will be used for the values of the map scatter plot.

  • map_kwargs (dict, optional) – Optional arguments for map configuration. The default is None. Recognized keys include:

    • projection: str; optional value for map projection. Default ‘auto’.

    • proj_default: bool

    • lakes, land, ocean, rivers, borders, coastline, background: bool or dict

    • lgd_kwargs: dict; optional values for how the map legend is configured

    • gridspec_kwargs: dict; optional values for adjusting the arrangement of the colorbar, map and legend in the map subplot

    • legend: bool; whether to draw a legend on the figure. Default is True

    • colorbar: bool; whether to draw a colorbar on the figure if the data associated with hue are numeric. Default is True

  • scatter_kwargs (dict, optional) – Optional arguments configuring how data are plotted on a map. See description of scatter_kwargs in pyleoclim.utils.mapping.scatter_map

  • hue (str, optional) – (only applicable if using scatter map) Variable associated with color coding for points plotted on map. May correspond to a continuous or categorical variable. The default is ‘EOF’.

  • size (str, optional) – (only applicable if using scatter map) Variable associated with size. Must correspond to a continuous numeric variable. The default is None.

  • marker (string, optional) – (only applicable if using scatter map) Grouping variable that will produce points with different markers. Can have a numeric dtype but will always be treated as categorical. The default is ‘archiveType’.

Returns:

  • fig (matplotlib.figure) – The figure

  • ax (dict) – dictionary of matplotlib ax

See also

pyleoclim.core.MultipleSeries.pca

Principal Component Analysis

pyleoclim.core.MultipleGeoSeries.pca

Principal Component Analysis

pyleoclim.utils.tsutils.eff_sample_size

Effective sample size

pyleoclim.utils.mapping.scatter_map

mapping
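Examples

A MultivariateDecomp object is most commonly produced by MultipleSeries.pca(). The sketch below is illustrative rather than taken from the original docstring; it assumes the SOI, NINO3 and AIR datasets load as in the earlier examples and overlap in time, so that a common time axis can be found.

import pyleoclim as pyleo

soi = pyleo.utils.load_dataset('SOI')
nino = pyleo.utils.load_dataset('NINO3')
air = pyleo.utils.load_dataset('AIR')

# PCA requires the series to share a common time axis
ms = pyleo.MultipleSeries([soi, nino, air]).common_time()
res = ms.pca()           # returns a MultivariateDecomp object

res.screeplot()          # eigenvalue spectrum with North et al. (1982) error bars
res.modeplot(index=0)    # dashboard for the leading mode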

screeplot(figsize=[6, 4], uq='N82', title=None, ax=None, savefig_settings=None, title_kwargs=None, xlim=[0, 10], clr_eig='C0')[source]

Plot the eigenvalue spectrum with uncertainties

Parameters:
  • figsize (list, optional) – The figure size. The default is [6, 4].

  • title (str, optional) – Plot title. The default is ‘scree plot’.

  • savefig_settings (dict) – the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • title_kwargs (dict, optional) – the keyword arguments for ax.set_title()

  • ax (matplotlib.axis, optional) – the axis object from matplotlib. See [matplotlib.axes](https://matplotlib.org/api/axes_api.html) for details.

  • xlim (list, optional) – x-axis limits. The default is [0, 10] (first 10 eigenvalues)

  • uq (str, optional) – Method used for uncertainty quantification of the eigenvalues. ‘N82’ uses the North et al “rule of thumb” [1] with effective sample size computed as in [2]. ‘MC’ uses Monte-Carlo simulations (e.g. MC-EOF). Returns an error if no ensemble is found.

  • clr_eig (str, optional) – color to be used for plotting eigenvalues

See also

pyleoclim.core.MultipleSeries.pca

Principal Component Analysis

References

[1] North, G. R., T. L. Bell, R. F. Cahalan, and F. J. Moeng (1982), Sampling errors in the estimation of empirical orthogonal functions, Mon. Weather Rev., 110, 699–706.

[2] Hannachi, A., I. T. Jolliffe, and D. B. Stephenson (2007), Empirical orthogonal functions and related techniques in atmospheric science: A review, International Journal of Climatology, 27(9), 1119–1152, doi:10.1002/joc.1499.

SsaRes (pyleoclim.SsaRes)

class pyleoclim.core.ssares.SsaRes(orig, label, eigvals, eigvecs, pctvar, PC, RCmat, RCseries, mode_idx, eigvals_q=None)[source]

This class is meant to hold the output of the Singular Spectrum Analysis (SSA) method, which applies to Series objects. Two functions are enabled by this class:

  • screeplot, which plots the eigenvalue spectrum to help determine what modes to keep

  • modeplot, which plots the individual mode temporal EOF and temporal PC

Parameters:
  • orig (Series) – timeseries on which SSA was performed

  • eigvals (float (M, 1)) – a vector of real eigenvalues derived from the signal

  • pctvar (float (M, 1)) – same vector, expressed in % variance accounted for by each mode.

  • eigvals_q (float (M, 2)) – array containing the 5% and 95% quantiles of the Monte-Carlo eigenvalue spectrum [ assigned NaNs if unused ]

  • eigvecs (float (M, M)) – a matrix of the temporal eigenvectors (T-EOFs), i.e. the temporal patterns that explain most of the variations in the original series.

  • PC (float (N - M + 1, M)) – array of principal components, i.e. the loadings that, convolved with the T-EOFs, produce the reconstructed components, or RCs

  • RCmat (float (N, M)) – array of reconstructed components. One can think of each RC as the contribution of each mode to the timeseries, weighted by its eigenvalue (loosely speaking, its “amplitude”). Summing over all columns of RCmat recovers the original series (synthesis, the reciprocal operation of analysis); see the sketch after this list.

  • mode_idx (list) – index of retained modes

  • RCseries (float (N, 1)) – reconstructed series based on the RCs of mode_idx (scaled to original series; mean must be added after the fact)
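To make the analysis/synthesis vocabulary above concrete, here is a brief sketch (not part of the original docstring) of how these attributes are typically accessed after running Series.ssa(), as in the examples further below:

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
ssa = ts.ssa()

print(ssa.pctvar[:5])  # % variance accounted for by the first five modes
print(ssa.mode_idx)    # indices of the retained modes
rc = ssa.RCseries      # reconstruction from the retained modes
# per the RCseries description above, add the series mean back if needed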

See also

pyleoclim.utils.decomposition.ssa

Singular Spectrum Analysis

Methods

copy()

Make a copy of the SsaRes object

modeplot([index, figsize, savefig_settings, ...])

Dashboard visualizing the properties of a given SSA mode, including:

screeplot([figsize, title, ax, ...])

Scree plot for SSA, visualizing the eigenvalue spectrum and indicating which modes were retained.

copy()[source]

Make a copy of the SsaRes object

Returns:

SsaRes – A copy of the SsaRes object

Return type:

pyleoclim.SsaRes

modeplot(index=0, figsize=[10, 5], savefig_settings=None, title_kwargs=None, spec_method='mtm', plot_original=True)[source]
Dashboard visualizing the properties of a given SSA mode, including:
  1. the analyzing function (T-EOF)

  2. the reconstructed component (RC)

  3. its spectrum

Parameters:
  • index (int) – the (0-based) index of the mode to visualize. Default is 0, corresponding to the first mode.

  • figsize (list, optional) – The figure size. The default is [10, 5].

  • savefig_settings (dict) – the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • title_kwargs (dict) – the keyword arguments for ax.set_title()

  • spec_method (str, optional) – The name of the spectral method to be applied to the PC. Default: ‘mtm’. Note that the data are evenly spaced, so any spectral method that assumes even spacing is applicable here: ‘mtm’, ‘welch’, ‘periodogram’. ‘wwz’ is also relevant if scaling exponents need to be estimated.

See also

pyleoclim.core.series.Series.ssa

Singular Spectrum Analysis for timeseries objects

pyleoclim.utils.decomposition.ssa

Singular Spectrum Analysis utility

pyleoclim.core.ssares.SsaRes.screeplot

plot SSA eigenvalue spectrum

Examples

Plot the first SSA mode of the Southern Oscillation Index:

import pyleoclim as pyleo
ts  = pyleo.utils.load_dataset('SOI')
ssa = ts.ssa()

fig, ax = ssa.modeplot()

Plot the second mode (note 0-based indexing):

fig, ax = ssa.modeplot(index=1)
screeplot(figsize=[6, 4], title='SSA scree plot', ax=None, savefig_settings=None, title_kwargs=None, xlim=None, clr_mcssa='#e50000', clr_signif='#029386', clr_eig='black')[source]

Scree plot for SSA, visualizing the eigenvalue spectrum and indicating which modes were retained.

Parameters:
  • figsize (list, optional) – The figure size. The default is [6, 4].

  • title (str, optional) – Plot title. The default is ‘SSA scree plot’.

  • savefig_settings (dict) –

    the dictionary of arguments for plt.savefig(); some notes below:

    • ”path” must be specified; it can be any existing or non-existing path, with or without a suffix; if the suffix is not given in “path”, it will follow “format”

    • ”format” can be one of {“pdf”, “eps”, “png”, “ps”}

  • title_kwargs (dict) – the keyword arguments for ax.set_title()

  • ax (matplotlib.axis, optional) – the axis object from matplotlib. See matplotlib.axes for details.

  • xlim (list, optional) – x-axis limits. The default is None.

  • clr_mcssa (str, optional) – color of the Monte Carlo SSA AR(1) shading (if data are provided); default: red

  • clr_eig (str, optional) – color of the eigenvalues; default: black

  • clr_signif (str, optional) – color of the highlights for significant eigenvalues (default: teal)

See also

pyleoclim.core.series.Series.ssa

Singular Spectrum Analysis for timeseries objects

pyleoclim.utils.decomposition.ssa

Singular Spectrum Analysis utility

pyleoclim.core.ssares.SsaRes.modeplot

plot SSA modes

Examples

Plot the SSA eigenvalue spectrum of the Southern Oscillation Index:

import pyleoclim as pyleo

ts = pyleo.utils.load_dataset('SOI')
ssa = ts.ssa()

fig, ax = ssa.screeplot()

Resolution (pyleoclim.Resolution)