Training
Now that you are familiar with functions and operators, let's learn an operator! In the following, we cover the basics of training a neural operator in continuiti.
Operator
Given two sets of functions $U$ and $V$, an operator

$$G \colon U \to V$$

maps functions $u \in U$ to functions $v = G(u) \in V$.

In this example, we choose to learn the operator $G$ that maps the set of functions

$$U = \{ u_a(x) = \sin(a \pi x) \mid a \in \mathbb{R} \}$$

to the set of functions

$$V = \{ v_a(y) = a \pi \cos(a \pi y) \mid a \in \mathbb{R} \}$$

such that $G(u_a) = v_a$, i.e., $G$ is the differentiation operator $\frac{d}{dx}$.
import torch
from continuiti.discrete import RegularGridSampler
from continuiti.data.function import FunctionSet

# Parametrized sets of input functions u_a and output functions v_a
U = FunctionSet(lambda a: lambda x: torch.sin(a * torch.pi * x))
V = FunctionSet(lambda a: lambda y: a * torch.pi * torch.cos(a * torch.pi * y))

# Instantiate both sets for three parameter values a
a = torch.Tensor([[1., 1.5, 2.]])
u_a = U(a)
v_a = V(a)
print(f"len(u) = {len(u_a)} ", f"len(v) = {len(v_a)}")
Note
In these examples, we hide the code for visualization, but you can find it in the source code of this notebook.
Discretization
Operator learning is about learning mappings between infinite-dimensional function spaces. To work with infinite-dimensional objects numerically, we have to discretize the input and output functions, and in continuiti this is done by point-wise evaluation.
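For instance, the input function $u_1(x) = \sin(\pi x)$ can be represented by its values at a finite number of sensor positions. A plain PyTorch sketch (the choice of 32 regularly spaced sensors on $[-1, 1]$ is ours, for illustration):

import torch

# Point-wise discretization: evaluate u(x) = sin(pi * x) at 32
# regularly spaced sensor positions in [-1, 1].
x = torch.linspace(-1.0, 1.0, 32)
u = torch.sin(torch.pi * x)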
Discretized functions can be collected in an OperatorDataset for operator learning. The OperatorDataset is a container of discretizations of input-output functions.
It contains tuples (x, u, y, v) of tensors, where every sample consists of

- the sensor positions x,
- the values u of the input function at the sensor positions,
- the evaluation points y, and
- the values v of the output function at the evaluation points.
If we already have a FunctionSet, we can use the FunctionOperatorDataset to sample discretized input-output pairs from the function sets, as sketched below.
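A minimal sketch of constructing such a dataset; the argument order and names (samplers for sensor positions, evaluation points, and parameters, with the corresponding counts) are assumptions and may differ between continuiti versions:

from continuiti.data.function import FunctionOperatorDataset

# Samplers for sensor positions x, evaluation points y, and parameters a
x_sampler = y_sampler = RegularGridSampler([-1.], [1.])
a_sampler = RegularGridSampler([1.], [2.])

# The argument layout below is an assumption; consult the API reference.
dataset = FunctionOperatorDataset(
    U, x_sampler, 32,   # input function set, sensor sampler, n_sensors
    V, y_sampler, 32,   # output function set, evaluation sampler, n_evaluations
    a_sampler, 16,      # parameter sampler, n_observations
)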
We then split the data set into training, validation, and test sets.
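One simple way to do this, assuming the OperatorDataset behaves like a standard torch Dataset (the 80/10/10 ratio is an arbitrary choice):

from torch.utils.data import random_split

# 80% training, 10% validation, 10% test (arbitrary illustration ratios)
n_train = int(0.8 * len(dataset))
n_val = int(0.1 * len(dataset))
n_test = len(dataset) - n_train - n_val
train_dataset, val_dataset, test_dataset = random_split(
    dataset, [n_train, n_val, n_test]
)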
Neural Operator
In order to learn the operator $G$, we use a neural operator: a neural network that takes the sensor positions $x$, the input function values $u$, and the evaluation points $y$, and predicts the values of the output function $v = G(u)$ at these points. In this example, we train a DeepONet, a common neural operator architecture motivated by the universal approximation theorem for operators.
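A minimal sketch of instantiating the architecture; we assume the operator can be constructed from the dataset's shapes attribute, with all architecture hyperparameters left at their defaults:

from continuiti.operators import DeepONet

# Construct a DeepONet whose input/output sizes match the dataset
# (dataset.shapes is assumed to carry the sensor/evaluation dimensions).
operator = DeepONet(dataset.shapes)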
Training
continuiti provides the Trainer class, which implements a default training loop for neural operators. It is instantiated with an Operator, an optimizer (Adam(lr=1e-3) by default), and a loss function (MSELoss by default).

The fit method takes an OperatorDataset and trains the neural operator up to a given tolerance on the training data (but at most for a given number of epochs, 1000 by default).
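Putting this together, a minimal training run might look as follows (a sketch; we assume Trainer is importable from continuiti.trainer):

from continuiti.trainer import Trainer

# Default training loop: Adam(lr=1e-3) and MSELoss; fit() stops at a
# tolerance on the training loss or after at most 1000 epochs by default.
trainer = Trainer(operator)
trainer.fit(train_dataset)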
Evaluation
The mapping of the trained operator can be evaluated at arbitrary positions, so let's plot the prediction of the trained operator on a sample from the test set.
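A short sketch of such an evaluation, assuming the operator is called with sensor positions, input values, and evaluation points following the (x, u, y, v) convention of the dataset:

# Take one sample from the test set and predict v at its evaluation points;
# unsqueeze adds a batch dimension (an assumption about the tensor layout).
x, u, y, v = test_dataset[0]
v_pred = operator(x.unsqueeze(0), u.unsqueeze(0), y.unsqueeze(0))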
Let us evaluate some training metrics, e.g., a validation error.
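For example, a mean squared validation error can be computed with a plain PyTorch loop (a sketch using the same calling convention as above):

import torch

mse = torch.nn.MSELoss()
with torch.no_grad():
    errors = [
        mse(operator(x.unsqueeze(0), u.unsqueeze(0), y.unsqueeze(0)), v.unsqueeze(0))
        for x, u, y, v in val_dataset
    ]
print(f"validation MSE = {torch.stack(errors).mean():.2e}")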
As you can observe, the neural operator is able to learn the operator $G$.