Time Series

This example demonstrates operator learning for a time series with non-uniform time steps. A discretization-invariant neural operator can learn from an arbitrary discretization and generalize to a different one!


Setup

import torch
import matplotlib.pyplot as plt
from continuiti.operators import BelNet
from continuiti.data import OperatorDataset
from continuiti.trainer import Trainer

Problem

As a simple demonstration of the concepts, we will consider sequences of \(n\) observations \(f_i\) with \(i \in \{1, \dots, n\}\). The index \(i\) corresponds to samples of a function \(f\) at times \(t_i\), i.e., \(f_i = f(t_i)\). From these observations, we would like to predict \(m\) future values \(f_j\) where \(j \in \{n+1, \dots, n+m\}\).

In this example, we choose a simple sine function

\[ f(t) = \sin(2\pi t), \]

and set \(n = 32\) and \(m = 16\). Both \(t_i\) and \(t_j\) are sampled uniformly at random from the intervals \([0, 1)\) and \([1, 1.5)\), respectively.

The trained neural operator should predict the future of the time series while having access only to the \(n\) evaluations of the historical function.

f = lambda t: torch.sin(2 * torch.pi * t)

def random_locations(num_sensors):
    return torch.sort(torch.rand(num_sensors))[0]

# History will be given as input
num_sensors = 32
t_hist = random_locations(num_sensors)

# Future will be given as labels
num_labels = 16
t_fut = 1 + 0.5 * random_locations(num_labels)

f_hist = f(t_hist)
f_fut = f(t_fut)
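
We can visualize the sampled history and future values with a few lines of matplotlib (a minimal sketch; the plotting calls are our own and not part of the original example):

# Plot the sampled history and the future values to be predicted
plt.scatter(t_hist, f_hist, label="history $f_i$")
plt.scatter(t_fut, f_fut, label="future $f_j$")
plt.axvline(1.0, color="gray", linestyle=":")
plt.xlabel("$t$")
plt.legend()
plt.show()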

Dataset

To train an operator, we first construct the corresponding OperatorDataset. Every sample is generated with its own randomly drawn time steps.

n_samples = 32
x_dim = u_dim = y_dim = v_dim = 1

x = torch.zeros(n_samples, x_dim, num_sensors)
u = torch.zeros(n_samples, u_dim, num_sensors)
y = torch.zeros(n_samples, y_dim, num_labels)
v = torch.zeros(n_samples, v_dim, num_labels)

for i in range(n_samples):
    t_hist = random_locations(num_sensors)
    t_fut = 1 + 0.5 * random_locations(num_labels)
    f_hist = f(t_hist)
    f_fut = f(t_fut)
    x[i, 0, :] = t_hist
    u[i, 0, :] = f_hist
    y[i, 0, :] = t_fut
    v[i, 0, :] = f_fut

dataset = OperatorDataset(x, u, y, v)
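
As a quick sanity check, we can print the tensor shapes, which follow directly from the construction above:

# Each tensor has the layout (n_samples, dim, num_points)
print(x.shape, u.shape)  # torch.Size([32, 1, 32]) torch.Size([32, 1, 32])
print(y.shape, v.shape)  # torch.Size([32, 1, 16]) torch.Size([32, 1, 16])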

Operator

In this example, we use BelNet, a discretization-invariant neural operator that can interpolate between different input discretizations. It can also learn mappings between functions that are defined on different domains, which is referred to as domain independence.

operator = BelNet(dataset.shapes, D_1=32, D_2=32)
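
Assuming the operator behaves like a regular torch.nn.Module, we can, for instance, count its trainable parameters:

# Count trainable parameters (assumes BelNet is a torch.nn.Module)
num_params = sum(p.numel() for p in operator.parameters())
print(f"BelNet has {num_params} trainable parameters")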

Training

We train the operator on the given input discretizations.

Trainer(operator).fit(dataset, tol=1e-4)

Evaluation

Let's plot the predictions of the trained BelNet on the interval \([1, 1.5)\). Note that the operator makes a good prediction even when we sample the sine wave at new random time steps!

t_plot = torch.linspace(1, 1.5, 100).reshape(1, 1, -1)

# Some time steps used for training
x, u, t_fut, f_fut = dataset[0:1]
f_pred = operator(x, u, t_plot)    # x, u = t_hist, f_hist

# Different time steps
t_hist2 = random_locations(num_sensors)
f_hist2 = f(t_hist2)
x2 = t_hist2.reshape(1, 1, -1)
u2 = f_hist2.reshape(1, 1, -1)
f_pred2 = operator(x2, u2, t_plot) # x2, u2 = t_hist2, f_hist2
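
The resulting comparison can be plotted roughly as follows (a minimal sketch; detaching the predictions before plotting is our own addition):

# Compare both predictions with the ground truth on [1, 1.5)
plt.plot(t_plot.flatten(), f(t_plot).flatten(), "k--", label="ground truth")
plt.plot(t_plot.flatten(), f_pred.detach().flatten(), label="prediction (training sensors)")
plt.plot(t_plot.flatten(), f_pred2.detach().flatten(), label="prediction (new sensors)")
plt.xlabel("$t$")
plt.legend()
plt.show()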

This example demonstrates how useful it can be to treat functional data as functions and to apply machine learning to these functions directly. This is what operator learning is about!


Last update: 2024-08-20
Created: 2024-08-20