seq_modules

class EncoderProtocol(*args, **kwds)[source]

Bases: typing_extensions.Protocol

forward(x: torch.Tensor, lengths: Optional[torch.Tensor] = None) → torch.Tensor
Parameters
  • x – a tensor of shape (batch_size, seq_length=max(lengths), history_features) containing the sequence of history features to encode

  • lengths – an optional tensor of shape (batch_size) containing the lengths of the sequences in x

Returns

a tensor of shape (batch_size, latent_dim) containing the encodings

__init__(*args, **kwargs)
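
The protocol can be satisfied by any torch module with a matching forward signature. Below is a minimal sketch (hypothetical, not part of the library) that mean-pools the sequence while masking padded positions via lengths and projects the result to the latent dimension:

    import torch


    class MeanPoolingEncoder(torch.nn.Module):
        """Hypothetical encoder satisfying EncoderProtocol: masked mean-pooling + projection."""

        def __init__(self, input_dim: int, latent_dim: int):
            super().__init__()
            self.projection = torch.nn.Linear(input_dim, latent_dim)

        def forward(self, x: torch.Tensor, lengths: torch.Tensor = None) -> torch.Tensor:
            # x: (batch_size, seq_length, history_features)
            if lengths is None:
                pooled = x.mean(dim=1)
            else:
                # zero out padded positions, then average over the true lengths
                mask = torch.arange(x.shape[1], device=x.device)[None, :] < lengths[:, None]
                pooled = (x * mask.unsqueeze(-1)).sum(dim=1) / lengths[:, None].float()
            return self.projection(pooled)  # (batch_size, latent_dim)
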
class DecoderProtocol(*args, **kwds)[source]

Bases: typing_extensions.Protocol

forward(latent: torch.Tensor, target_features: Optional[torch.Tensor] = None, target_lengths: Optional[torch.Tensor] = None) → torch.Tensor
Parameters
  • latent – a tensor of shape (batch_size, latent_dim) containing the latent representations

  • target_features – a tensor of shape (batch_size, target_seq_length=max(target_lengths), target_feature_dim)

  • target_lengths – a tensor of shape (batch_size) containing the lengths of sequences in target_features

Returns

a tensor of shape (batch_size, output_dim) or (batch_size, target_seq_length, output_dim) containing the predictions; which shape applies depends on the use case

__init__(*args, **kwargs)
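
Analogously, here is a minimal sketch (hypothetical, not part of the library) of a module satisfying DecoderProtocol, which concatenates the latent vector with each target item's feature vector and applies a linear layer, yielding one prediction per target sequence item:

    import torch


    class ConcatLinearDecoder(torch.nn.Module):
        """Hypothetical decoder satisfying DecoderProtocol: per-item linear prediction."""

        def __init__(self, latent_dim: int, target_feature_dim: int, output_dim: int = 1):
            super().__init__()
            self.linear = torch.nn.Linear(latent_dim + target_feature_dim, output_dim)

        def forward(self, latent: torch.Tensor, target_features: torch.Tensor = None,
                target_lengths: torch.Tensor = None) -> torch.Tensor:
            # latent: (batch_size, latent_dim), target_features: (batch_size, T, target_feature_dim)
            expanded = latent.unsqueeze(1).expand(-1, target_features.shape[1], -1)
            combined = torch.cat([expanded, target_features], dim=-1)
            return self.linear(combined)  # (batch_size, target_seq_length, output_dim)
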
class PredictorProtocol(*args, **kwds)[source]

Bases: typing_extensions.Protocol

forward(x: torch.Tensor) → torch.Tensor
Parameters

x – a tensor of shape (batch_size, input_dim) containing an intermediate representation

Returns

a tensor of shape (batch_size, output_dim)

__init__(*args, **kwargs)
class EncoderFactory[source]

Bases: sensai.util.string.ToStringMixin, abc.ABC

Represents a factory for encoder modules that map a sequence of items to a latent vector

abstract create_encoder(input_dim: int, latent_dim: int) → Union[sensai.torch.torch_models.seq.seq_modules.EncoderProtocol, torch.nn.Module]
Parameters
  • input_dim – the input dimension per sequence item

  • latent_dim – the latent vector dimension that is to be generated by the encoder

Returns

a torch module satisfying EncoderProtocol
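
A concrete factory can thus be as simple as the following sketch (hypothetical; it reuses the MeanPoolingEncoder from the EncoderProtocol example above):

    import torch
    from sensai.torch.torch_models.seq.seq_modules import EncoderFactory


    class MeanPoolingEncoderFactory(EncoderFactory):
        """Hypothetical factory producing the MeanPoolingEncoder sketched earlier."""

        def create_encoder(self, input_dim: int, latent_dim: int) -> torch.nn.Module:
            return MeanPoolingEncoder(input_dim, latent_dim)
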

class DecoderFactory[source]

Bases: sensai.util.string.ToStringMixin, abc.ABC

abstract create_decoder(latent_dim: int, target_feature_dim: int) → Union[sensai.torch.torch_models.seq.seq_modules.DecoderProtocol, torch.nn.Module]
Parameters
  • latent_dim – the latent vector size which is used for the representation of the history

  • target_feature_dim – the number of dimensions/features that are given for each prediction to be made (each future sequence item)

Returns

a torch module satisfying DecoderProtocol

class PredictorFactory[source]

Bases: sensai.util.string.ToStringMixin, abc.ABC

Represents a factory for predictor components, which map from an intermediate representation to the desired output dimension.

create_predictor(input_dim: int, output_dim: int) → Union[sensai.torch.torch_models.seq.seq_modules.PredictorProtocol, torch.nn.Module]
Parameters
  • input_dim – the input dimension

  • output_dim – the output dimension

Returns

a module which maps an input with dimension input_dim to the desired prediction dimension (output_dim)

class LinearPredictorFactory[source]

Bases: sensai.torch.torch_models.seq.seq_modules.PredictorFactory

A factory for predictors consisting only of a linear layer (without subsequent activation)

create_predictor(input_dim: int, output_dim: int) → torch.nn.Module
Parameters
  • input_dim – the input dimension

  • output_dim – the output dimension

Returns

a module which maps an input with dimension input_dim to the desired prediction dimension (output_dim)

class MLPPredictorFactory(hidden_dims: Sequence[int] = (), hid_activation_fn: sensai.torch.torch_enums.ActivationFunction = ActivationFunction.RELU, output_activation_fn: sensai.torch.torch_enums.ActivationFunction = ActivationFunction.NONE, p_dropout: Optional[float] = None)[source]

Bases: sensai.torch.torch_models.seq.seq_modules.PredictorFactory

A factory for predictors that are multi-layer perceptrons

__init__(hidden_dims: Sequence[int] = (), hid_activation_fn: sensai.torch.torch_enums.ActivationFunction = ActivationFunction.RELU, output_activation_fn: sensai.torch.torch_enums.ActivationFunction = ActivationFunction.NONE, p_dropout: Optional[float] = None)
create_predictor(input_dim: int, output_dim: int) → Union[sensai.torch.torch_models.seq.seq_modules.PredictorProtocol, torch.nn.Module]
Parameters
  • input_dim – the input dimension

  • output_dim – the output dimension

Returns

a module which maps an input with dimension input_dim to the desired prediction dimension (output_dim)
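
For instance, a predictor with two hidden layers could be configured as follows (the dimensions and dropout probability are purely illustrative):

    from sensai.torch.torch_enums import ActivationFunction
    from sensai.torch.torch_models.seq.seq_modules import MLPPredictorFactory

    predictor_factory = MLPPredictorFactory(hidden_dims=(64, 32),
        hid_activation_fn=ActivationFunction.RELU, p_dropout=0.1)
    predictor = predictor_factory.create_predictor(input_dim=48, output_dim=1)
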

class RnnEncoderModule(*args: Any, **kwargs: Any)[source]

Bases: torch.nn.Module

Encodes a sequence of feature vectors, outputting a latent vector. The input sequence may either be fixed-length or variable-length.

class RnnType

Bases: object

GRU = 'gru'

gated recurrent unit

LSTM = 'lstm'

long short-term memory

__init__(input_dim, latent_dim: int, rnn_type: sensai.torch.torch_models.seq.seq_modules.RnnEncoderModule.RnnType = 'lstm')
Parameters
  • input_dim – the input dimension per time slice

  • latent_dim – the dimension of the latent output vector

  • rnn_type – the type of recurrent network to use

forward(x: torch.Tensor, lengths: Optional[torch.Tensor] = None)
Parameters
  • x – a tensor of size (batch_size, seq_length, dim_per_item)

  • lengths – an optional tensor containing the lengths of the sequences; if None, all sequences are assumed to have the same full length

Returns

a tensor of size (batch_size, latent_dim)
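
Example usage with illustrative shapes, encoding a batch of variable-length sequences:

    import torch
    from sensai.torch.torch_models.seq.seq_modules import RnnEncoderModule

    encoder = RnnEncoderModule(input_dim=16, latent_dim=32,
        rnn_type=RnnEncoderModule.RnnType.LSTM)
    x = torch.randn(8, 10, 16)            # (batch_size, seq_length, input_dim)
    lengths = torch.randint(1, 11, (8,))  # true length of each sequence
    latent = encoder(x, lengths)          # expected shape: (8, 32)
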

class RnnEncoderFactory(input_dim: int, latent_dim: int, rnn_type: sensai.torch.torch_models.seq.seq_modules.RnnEncoderModule.RnnType = 'gru')[source]

Bases: sensai.torch.torch_models.seq.seq_modules.EncoderFactory

__init__(input_dim: int, latent_dim: int, rnn_type: sensai.torch.torch_models.seq.seq_modules.RnnEncoderModule.RnnType = 'gru')
create_encoder(input_dim: int, latent_dim: int)
Parameters
  • input_dim – the input dimension per sequence item

  • latent_dim – the latent vector dimension that is to be generated by the encoder

Returns

a torch module satisfying EncoderProtocol

class LSTNetworkEncoder(*args: Any, **kwargs: Any)[source]

Bases: torch.nn.Module

Adapts an LSTNetwork instance to the encoder interface

__init__(lstnet: sensai.torch.torch_models.lstnet.lstnet_modules.LSTNetwork)
forward(x: torch.Tensor, lengths: Optional[torch.Tensor] = None)
Parameters
  • x – a tensor of size (batch_size, seq_length, dim_per_item)

  • lengths – an optional tensor containing the lengths of the sequences; if None, all sequences are assumed to have the same full length

Returns

a tensor of size (batch_size, latent_dim)

class LSTNetworkEncoderFactory(num_input_time_slices: int, num_convolutions: int, num_cnn_time_slices: int, hid_rnn: int, skip: int, hid_skip: int, dropout: float = 0.2)[source]

Bases: sensai.torch.torch_models.seq.seq_modules.EncoderFactory

__init__(num_input_time_slices: int, num_convolutions: int, num_cnn_time_slices: int, hid_rnn: int, skip: int, hid_skip: int, dropout: float = 0.2)
create_encoder(input_dim: int, latent_dim: int) → torch.nn.Module
Parameters
  • input_dim – the input dimension per sequence item

  • latent_dim – the latent vector dimension that is to be generated by the encoder

Returns

a torch module satisfying EncoderProtocol

get_latent_dim() → int
class SingleTargetDecoderModule(*args: Any, **kwargs: Any)[source]

Bases: torch.nn.Module, sensai.torch.torch_models.seq.seq_modules.DecoderProtocol

Represents a decoder that outputs a single value for a single target item, taking as input the concatenation of the latent tensor (generated by the encoder) and the target item’s feature vector.

__init__(target_feature_dim, latent_dim, predictor_factory: sensai.torch.torch_models.seq.seq_modules.PredictorFactory, output_dim=1)
Parameters
  • target_feature_dim – the number of target item features

  • latent_dim – the dimension of the latent vector generated by the encoder, which we receive as input

  • predictor_factory – a factory for the creation of the predictor that will map the combined latent vector and target feature vector to the prediction of size output_dim

  • output_dim – the output (prediction) dimension

forward(latent, target_features=None, target_lengths=None)
Parameters
  • latent – a tensor of shape (batch_size, latent_dim) containing the latent representations

  • target_features – a tensor of shape (batch_size, target_seq_length=max(target_lengths), target_feature_dim)

  • target_lengths – a tensor of shape (batch_size) containing the lengths of sequences in target_features

Returns

a tensor of shape (batch_size, output_dim) or (batch_size, target_seq_length, output_dim) containing the predictions; which shape applies depends on the use case
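
Example usage with illustrative dimensions, producing a single prediction per batch item from the latent vector and a single target item's features:

    import torch
    from sensai.torch.torch_models.seq.seq_modules import (LinearPredictorFactory,
        SingleTargetDecoderModule)

    decoder = SingleTargetDecoderModule(target_feature_dim=4, latent_dim=32,
        predictor_factory=LinearPredictorFactory(), output_dim=1)
    latent = torch.randn(8, 32)             # (batch_size, latent_dim)
    target_features = torch.randn(8, 1, 4)  # a single-element target sequence
    prediction = decoder(latent, target_features)  # expected shape: (8, 1)
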

class TargetSequenceDecoderModule(*args: Any, **kwargs: Any)[source]

Bases: torch.nn.Module, sensai.torch.torch_models.seq.seq_modules.DecoderProtocol, sensai.util.string.ToStringMixin

Wrapper for decoders that take as input a latent representation (generated by an encoder) and a sequence of target features. It can generate either a single prediction for the entire sequence of target features or a sequence of predictions (one for each target sequence item), depending on the prediction/output mode.

class PredictionMode(value)

Bases: enum.Enum

Defines how the prediction works

SINGLE_LATENT = 'single_latent'

Use an LSTM to process the target feature sequence and use only the final hidden state for prediction, outputting a single prediction (for use with OutputMode.SINGLE_OUTPUT only)

MULTI_LATENT = 'multi_latent'

Use an LSTM to process the target feature sequence and use all hidden states (full output) for prediction

DIRECT = 'direct'

Directly use the latent vector and target features to make predictions for each target sequence item (use with LatentPassOnMode.CONCAT_INPUT or NO_LATENT only)

class LatentPassOnMode(value)

Bases: enum.Enum

Defines how the latent state from the encoder stage is passed on to the decoder

INIT_HIDDEN = 'init_hidden'

Pass on the encoder output as the initial hidden state of the LSTM (only possible for PredictionMode in {SINGLE_LATENT, MULTI_LATENT})

CONCAT_INPUT = 'concat_input'

Pass on the encoder output by concatenating it with each target feature input vector

NO_LATENT = 'no_latent'

Do not pass on the latent vector at all (ignored by subsequent decoder component). This is mostly useful for ablation testing.

class OutputMode(value)

Bases: enum.Enum

Defines how to treat multiple predictions (for PredictionMode != SINGLE_LATENT)

SINGLE_OUTPUT = 'single'

Output a single result from a single input (for PredictionMode.SINGLE_LATENT only)

SINGLE_OUTPUT_MEAN = 'mean'

Output the mean of multiple (intermediate) predictions

MULTI_OUTPUT = 'multi'

Output multiple predictions directly

__init__(target_feature_dim: int, latent_dim: int, predictor_factory: sensai.torch.torch_models.seq.seq_modules.PredictorFactory, output_dim: int = 1, prediction_mode: sensai.torch.torch_models.seq.seq_modules.TargetSequenceDecoderModule.PredictionMode = PredictionMode.MULTI_LATENT, latent_pass_on_mode: sensai.torch.torch_models.seq.seq_modules.TargetSequenceDecoderModule.LatentPassOnMode = LatentPassOnMode.CONCAT_INPUT, output_mode: sensai.torch.torch_models.seq.seq_modules.TargetSequenceDecoderModule.OutputMode = OutputMode.MULTI_OUTPUT, p_recurrent_dropout: float = 0.0)
forward(latent, target_features=None, target_lengths=None)
Parameters
  • latent – a tensor of shape (batch_size, latent_dim)

  • target_features – a tensor of shape (batch_size, max_seq_length, target_feature_dim)

  • target_lengths – a tensor indicating the lengths of the sequences in target_features

Returns

a tensor of shape (batch_size, output_dim) for output modes SINGLE_OUTPUT and SINGLE_OUTPUT_MEAN, or (batch_size, target_seq_length, output_dim) for output mode MULTI_OUTPUT

class TargetSequenceDecoderFactory(prediction_mode: sensai.torch.torch_models.seq.seq_modules.TargetSequenceDecoderModule.PredictionMode = PredictionMode.MULTI_LATENT, output_mode: sensai.torch.torch_models.seq.seq_modules.TargetSequenceDecoderModule.OutputMode = OutputMode.MULTI_OUTPUT, latent_pass_on_mode: sensai.torch.torch_models.seq.seq_modules.TargetSequenceDecoderModule.LatentPassOnMode = LatentPassOnMode.CONCAT_INPUT, predictor_factory: Optional[sensai.torch.torch_models.seq.seq_modules.PredictorFactory] = None, p_recurrent_dropout: float = 0.0, output_dim: int = 1)[source]

Bases: sensai.torch.torch_models.seq.seq_modules.DecoderFactory

A factory for TargetSequenceDecoderModule which takes the latent encoding and a sequence of target items as input

__init__(prediction_mode: sensai.torch.torch_models.seq.seq_modules.TargetSequenceDecoderModule.PredictionMode = PredictionMode.MULTI_LATENT, output_mode: sensai.torch.torch_models.seq.seq_modules.TargetSequenceDecoderModule.OutputMode = OutputMode.MULTI_OUTPUT, latent_pass_on_mode: sensai.torch.torch_models.seq.seq_modules.TargetSequenceDecoderModule.LatentPassOnMode = LatentPassOnMode.CONCAT_INPUT, predictor_factory: Optional[sensai.torch.torch_models.seq.seq_modules.PredictorFactory] = None, p_recurrent_dropout: float = 0.0, output_dim: int = 1)
create_decoder(latent_dim: int, target_feature_dim: int) → torch.nn.Module
Parameters
  • latent_dim – the latent vector size which is used for the representation of the history

  • target_feature_dim – the number of dimensions/features that are given for each prediction to be made (each future sequence item)

Returns

a torch module satisfying DecoderProtocol
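
For example, a decoder that processes the target sequence with an LSTM (MULTI_LATENT), receives the latent vector concatenated with each target feature vector (CONCAT_INPUT) and outputs one prediction per target item (MULTI_OUTPUT) could be created as follows (parameter values are illustrative):

    from sensai.torch.torch_models.seq.seq_modules import (LinearPredictorFactory,
        TargetSequenceDecoderFactory, TargetSequenceDecoderModule)

    decoder_factory = TargetSequenceDecoderFactory(
        prediction_mode=TargetSequenceDecoderModule.PredictionMode.MULTI_LATENT,
        output_mode=TargetSequenceDecoderModule.OutputMode.MULTI_OUTPUT,
        latent_pass_on_mode=TargetSequenceDecoderModule.LatentPassOnMode.CONCAT_INPUT,
        predictor_factory=LinearPredictorFactory())
    decoder = decoder_factory.create_decoder(latent_dim=32, target_feature_dim=4)
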

class SingleTargetDecoderFactory(predictor_factory: sensai.torch.torch_models.seq.seq_modules.PredictorFactory)[source]

Bases: sensai.torch.torch_models.seq.seq_modules.DecoderFactory

A factory for SingleTargetDecoderModule which takes the latent encoding and a single-element sequence of target items as input, producing a single prediction

__init__(predictor_factory: sensai.torch.torch_models.seq.seq_modules.PredictorFactory)
create_decoder(latent_dim: int, target_feature_dim: int) → torch.nn.Module
Parameters
  • latent_dim – the latent vector size which is used for the representation of the history

  • target_feature_dim – the number of dimensions/features that are given for each prediction to be made (each future sequence item)

Returns

a torch module satisfying DecoderProtocol

class EncoderDecoderModule(*args: Any, **kwargs: Any)[source]

Bases: torch.nn.Module

Represents an encoder-decoder (where both components can be injected). It takes a history sequence and a sequence of target feature vectors as input. Both sequences are potentially of variable length, and for the target sequence, the common special case where there is but one target (and thus one prediction to be made) is specifically catered for using dedicated decoders (see SingleTargetDecoderModule).

The module first encodes the history sequence to a latent vector and then uses the decoder to map this latent vector along with the target features to a prediction.

__init__(encoder: Union[sensai.torch.torch_models.seq.seq_modules.EncoderProtocol, torch.nn.Module], decoder: Union[sensai.torch.torch_models.seq.seq_modules.DecoderProtocol, torch.nn.Module], variable_history_length: bool)
Parameters
  • encoder – a torch module satisfying EncoderProtocol

  • decoder – a torch module satisfying DecoderProtocol

  • variable_history_length – whether the history sequence is variable-length. If it is not, then the model will not pass on the lengths tensor to the encoder, allowing it to simplify its handling of this case (even if the original input provides the lengths).

forward(window_features: torch.Tensor, window_lengths: Optional[torch.Tensor] = None, target_features: Optional[torch.Tensor] = None, target_lengths: Optional[torch.Tensor] = None)
Parameters
  • window_features – a tensor of size (batch_size, max(window_lengths), dim_per_window_item) containing the window features

window_lengths – a tensor containing the lengths of the windows in window_features

  • target_features – an optional tensor containing target features with shape (batch_size, max_target_seq_length, target_feature_dim). For the case where there is only one target item (no actual sequence), max_target_seq_length should be 1.

target_lengths – an optional tensor containing the lengths of the target sequences, allowing the actual sequence lengths to differ
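
A minimal end-to-end sketch wiring the components documented above (dimensions are illustrative):

    import torch
    from sensai.torch.torch_models.seq.seq_modules import (EncoderDecoderModule,
        LinearPredictorFactory, RnnEncoderFactory, RnnEncoderModule,
        SingleTargetDecoderFactory)

    input_dim, latent_dim, target_feature_dim = 16, 32, 4
    encoder = RnnEncoderFactory(input_dim, latent_dim,
        rnn_type=RnnEncoderModule.RnnType.GRU).create_encoder(input_dim, latent_dim)
    decoder = SingleTargetDecoderFactory(LinearPredictorFactory()).create_decoder(
        latent_dim, target_feature_dim)
    model = EncoderDecoderModule(encoder, decoder, variable_history_length=True)

    window_features = torch.randn(8, 10, input_dim)          # history sequences
    window_lengths = torch.randint(1, 11, (8,))              # their true lengths
    target_features = torch.randn(8, 1, target_feature_dim)  # one target item each
    prediction = model(window_features, window_lengths, target_features)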