Transforms

continuiti.transforms

Data transformations in continuiti.

Transform(*args, **kwargs)

Bases: Module, ABC

Abstract base class for transformations of tensors.

Transformations are applied to tensors for many reasons, for example to improve model performance, enhance generalization, handle varied input sizes, expose specific features, reduce overfitting, or improve computational efficiency. This class maps an input tensor to a transformed tensor.

PARAMETER DESCRIPTION
*args

Arguments passed to nn.Module parent class.

DEFAULT: ()

**kwargs

Arbitrary keyword arguments passed to nn.Module parent class.

DEFAULT: {}

Source code in src/continuiti/transforms/transform.py
def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)

forward(tensor) abstractmethod

Applies the transformation.

PARAMETER DESCRIPTION
tensor

Tensor that should be transformed.

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

Transformed tensor.

Source code in src/continuiti/transforms/transform.py
@abstractmethod
def forward(self, tensor: torch.Tensor) -> torch.Tensor:
    """Applies the transformation.

    Args:
        tensor: Tensor that should be transformed.

    Returns:
        Transformed tensor.
    """

undo(tensor)

Applies the inverse of the transformation (if it exists).

PARAMETER DESCRIPTION
tensor

Transformed tensor.

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

Tensor with the transformation undone.

RAISES DESCRIPTION
NotImplementedError

If the inverse of the transformation is not implemented.

Source code in src/continuiti/transforms/transform.py
def undo(self, tensor: torch.Tensor) -> torch.Tensor:
    """Applies the inverse of the transformation (if it exists).

    Args:
        tensor: Transformed tensor.

    Returns:
        Tensor with the transformation undone.

    Raises:
        NotImplementedError: If the inverse of the transformation is not implemented.
    """
    raise NotImplementedError(
        "The undo method is not implemented for this transform."
    )
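
Example (a minimal sketch; assumes Transform is exported from continuiti.transforms, and the Scale transform is a hypothetical illustration):

import torch
from continuiti.transforms import Transform

class Scale(Transform):
    """Multiplies tensors by a constant factor (illustrative only)."""

    def __init__(self, factor: float):
        super().__init__()
        self.factor = factor

    def forward(self, tensor: torch.Tensor) -> torch.Tensor:
        return tensor * self.factor

    def undo(self, tensor: torch.Tensor) -> torch.Tensor:
        # Overriding undo is optional; the base class raises NotImplementedError.
        return tensor / self.factor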

Compose(transforms, *args, **kwargs)

Bases: Transform

Chains multiple transformations and applies them sequentially.

PARAMETER DESCRIPTION
transforms

Transformations to be applied, in the order given in the list.

TYPE: List[Transform]

*args

Arguments of parent class.

DEFAULT: ()

**kwargs

Arbitrary keyword arguments of parent class.

DEFAULT: {}

ATTRIBUTE DESCRIPTION
transforms

Encapsulates multiple transformations into one.

TYPE: List

Source code in src/continuiti/transforms/compose.py
def __init__(self, transforms: List[Transform], *args, **kwargs):
    super().__init__(*args, **kwargs)
    self.transforms = transforms

forward(tensor)

Applies multiple transformations to a tensor in sequential order.

PARAMETER DESCRIPTION
tensor

Tensor to be transformed.

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

Tensor with all transformations applied.

Source code in src/continuiti/transforms/compose.py
def forward(self, tensor: torch.Tensor) -> torch.Tensor:
    """Applies multiple transformations to a tensor in sequential order.

    Args:
        tensor: Tensor to be transformed.

    Returns:
        Tensor with all transformations applied.
    """
    for transform in self.transforms:
        tensor = transform(tensor)
    return tensor

undo(tensor)

Undoes multiple transformations.

PARAMETER DESCRIPTION
tensor

Transformed tensor.

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

Tensor with undone transformations (if possible).

Source code in src/continuiti/transforms/compose.py
def undo(self, tensor: torch.Tensor) -> torch.Tensor:
    """Undoes multiple transformations.

    Args:
        tensor: Transformed tensor.

    Returns:
        Tensor with undone transformations (if possible).
    """
    for transform in reversed(self.transforms):
        tensor = transform.undo(tensor)
    return tensor
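
Example (a minimal sketch; assumes Compose, Normalize, and QuantileScaler are exported from continuiti.transforms):

import torch
from continuiti.transforms import Compose, Normalize, QuantileScaler

x = torch.randn(128, 2)

norm = Normalize(mean=x.mean(dim=0), std=x.std(dim=0))
pipeline = Compose([
    norm,
    # Fit the scaler on the already-normalized data; detach so its fitted
    # statistics carry no grad history.
    QuantileScaler(norm(x).detach(), n_quantile_intervals=100),
])

z = pipeline(x)            # applies the transforms in order
x_back = pipeline.undo(z)  # undoes them in reverse order (approximately recovers x)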

Normalize(mean, std)

Bases: Transform

Normalization transformation (Z-normalization).

This transformation takes a mean \(\mu\) and standard deviation \(\sigma\) to scale tensors \(x\) according to

\[\operatorname{Normalize}(x) = \frac{x - \mu}{\sigma + \varepsilon} =: z,\]

where \(\varepsilon\) is a small value to prevent division by zero.

ATTRIBUTE DESCRIPTION
epsilon

small value to prevent division by zero (torch.finfo.tiny)

PARAMETER DESCRIPTION
mean

mean \(\mu\) used to center tensors

TYPE: Tensor

std

standard deviation used to scale tensors

TYPE: Tensor

Source code in src/continuiti/transforms/scaling.py
def __init__(self, mean: torch.Tensor, std: torch.Tensor):
    super().__init__()
    self.mean = nn.Parameter(mean)
    self.std = nn.Parameter(std)

forward(x)

Apply normalization to the input tensor.

\[z = \frac{x - \mu}{\sigma + \varepsilon}\]
PARAMETER DESCRIPTION
x

input tensor \(x\)

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

normalized tensor \(z\)

Source code in src/continuiti/transforms/scaling.py
def forward(self, x: torch.Tensor) -> torch.Tensor:
    r"""Apply normalization to the input tensor.

    $$z = \frac{x - \mu}{\sigma + \varepsilon}$$

    Args:
        x: input tensor $x$

    Returns:
        normalized tensor $z$
    """
    return (x - self.mean) / (self.std + self.epsilon)

undo(z)

Undo the normalization.

\[x = z~(\sigma + \varepsilon) + \mu\]
PARAMETER DESCRIPTION
z

(normalized) tensor \(z\)

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

un-normalized tensor \(x\)

Source code in src/continuiti/transforms/scaling.py
def undo(self, z: torch.Tensor) -> torch.Tensor:
    r"""Undo the normalization.

    $$x = z~(\sigma + \varepsilon) + \mu$$

    Args:
        z: (normalized) tensor $z$

    Returns:
        un-normalized tensor $x$
    """
    return z * (self.std + self.epsilon) + self.mean
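
Example (a minimal sketch; assumes Normalize is exported from continuiti.transforms):

import torch
from continuiti.transforms import Normalize

x = 5.0 * torch.randn(64, 3) + 2.0

norm = Normalize(mean=x.mean(dim=0), std=x.std(dim=0))

z = norm(x)            # per-dimension z-scores: roughly zero mean, unit std
x_back = norm.undo(z)  # recovers x up to the epsilon in the denominator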

QuantileScaler(src, n_quantile_intervals=1000, target_mean=0.0, target_std=1.0, eps=0.001)

Bases: Transform

Quantile Scaler Class.

A transform for scaling input data to a specified target distribution using quantiles. This is particularly useful for normalizing data in a way that is more robust to outliers than standard z-score normalization.

The transformation maps the quantiles of the input data to the quantiles of the target distribution, effectively performing a non-linear scaling that preserves the relative distribution of the data.

PARAMETER DESCRIPTION
src

tensor from which the source distribution is drawn.

TYPE: Tensor

n_quantile_intervals

Number of quantile bins into which the data is partitioned.

TYPE: int DEFAULT: 1000

target_mean

Mean of the target Gaussian distribution. Can be a float (all dimensions use the same mean) or a tensor (different means along different dimensions).

TYPE: Union[float, Tensor] DEFAULT: 0.0

target_std

Standard deviation of the target Gaussian distribution. Can be a float (all dimensions use the same std) or a tensor (different stds along different dimensions).

TYPE: Union[float, Tensor] DEFAULT: 1.0

eps

Small value to bound the target distribution to a finite interval.

TYPE: float DEFAULT: 0.001

Source code in src/continuiti/transforms/quantile_scaler.py
def __init__(
    self,
    src: torch.Tensor,
    n_quantile_intervals: int = 1000,
    target_mean: Union[float, torch.Tensor] = 0.0,
    target_std: Union[float, torch.Tensor] = 1.0,
    eps: float = 1e-3,
):
    assert eps <= 0.5
    assert eps >= 0

    super().__init__()

    if isinstance(target_mean, float):
        target_mean = target_mean * torch.ones(1)
    if isinstance(target_std, float):
        target_std = target_std * torch.ones(1)

    self.target_mean = target_mean
    self.target_std = target_std

    assert n_quantile_intervals > 0
    self.n_quantile_intervals = n_quantile_intervals
    self.n_q_points = n_quantile_intervals + 2  # n intervals have n + 2 edges

    self.n_dim = src.size(-1)

    # source "distribution"
    self.quantile_fractions = torch.linspace(0, 1, self.n_q_points)
    quantile_points = torch.quantile(
        src.view(-1, self.n_dim),
        self.quantile_fractions,
        dim=0,
        interpolation="linear",
    )
    self.quantile_points = nn.Parameter(quantile_points)
    self.deltas = nn.Parameter(quantile_points[1:] - quantile_points[:-1])

    # target distribution
    self.target_distribution = torch.distributions.normal.Normal(
        target_mean, target_std
    )
    self.target_quantile_fractions = torch.linspace(
        0 + eps, 1 - eps, self.n_q_points
    )  # bounded domain
    target_quantile_points = self.target_distribution.icdf(
        self.target_quantile_fractions
    )
    target_quantile_points = target_quantile_points.unsqueeze(1).repeat(
        1, self.n_dim
    )
    self.target_quantile_points = nn.Parameter(target_quantile_points)
    self.target_deltas = nn.Parameter(
        target_quantile_points[1:] - target_quantile_points[:-1]
    )

forward(tensor)

Transforms the input tensor to match the target distribution using quantile scaling.

PARAMETER DESCRIPTION
tensor

The input tensor to transform.

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

The transformed tensor, scaled to the target distribution.

Source code in src/continuiti/transforms/quantile_scaler.py
def forward(self, tensor: torch.Tensor) -> torch.Tensor:
    """Transforms the input tensor to match the target distribution using quantile scaling.

    Args:
        tensor: The input tensor to transform.

    Returns:
        The transformed tensor, scaled to the target distribution.
    """
    indices = self._get_scaling_indices(tensor, self.quantile_points)
    # Scale input tensor to the unit interval based on source quantiles
    p_min = self.quantile_points[indices].view(tensor.shape)
    delta = self.deltas[indices].view(tensor.shape)
    out = tensor - p_min
    out = out / delta

    # Scale and shift to match the target distribution
    p_t_min = self.target_quantile_points[indices].view(tensor.shape)
    delta_t = self.target_deltas[indices].view(tensor.shape)
    out = out * delta_t
    out = out + p_t_min

    return out

undo(tensor)

Reverses the transformation applied by the forward method, mapping the tensor back to its original distribution.

PARAMETER DESCRIPTION
tensor

The tensor to reverse the transformation on.

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

The tensor with the quantile scaling transformation reversed according to the src distribution.

Source code in src/continuiti/transforms/quantile_scaler.py
def undo(self, tensor: torch.Tensor) -> torch.Tensor:
    """Reverses the transformation applied by the forward method, mapping the tensor back to its original
    distribution.

    Args:
        tensor: The tensor to reverse the transformation on.

    Returns:
        The tensor with the quantile scaling transformation reversed according to the src distribution.
    """
    indices = self._get_scaling_indices(tensor, self.target_quantile_points)

    # Scale input tensor to the unit interval based on the target distribution
    p_t_min = self.target_quantile_points[indices].view(tensor.shape)
    delta_t = self.target_deltas[indices].view(tensor.shape)
    out = tensor - p_t_min
    out = out / delta_t

    # Scale and shift to match the src distribution
    p_min = self.quantile_points[indices].view(tensor.shape)
    delta = self.deltas[indices].view(tensor.shape)
    out = out * delta
    out = out + p_min

    return out
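
Example (a minimal sketch; assumes QuantileScaler is exported from continuiti.transforms):

import torch
from continuiti.transforms import QuantileScaler

# Heavy-tailed source data, where plain z-normalization would be dominated by outliers
src = torch.distributions.Exponential(rate=1.0).sample((1000, 1))

scaler = QuantileScaler(src, n_quantile_intervals=100)

z = scaler(src)          # approximately standard normal, bounded by the eps quantiles
x_back = scaler.undo(z)  # maps back to the original (exponential) scale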

Last update: 2024-08-22
Created: 2024-08-22