Informer

class flood_forecast.transformer_xl.informer.Informer(n_time_series: int, dec_in: int, c_out: int, seq_len, label_len, out_len, factor=5, d_model=512, n_heads=8, e_layers=3, d_layers=2, d_ff=512, dropout=0.0, attn='prob', embed='fixed', temp_depth=4, activation='gelu', device=device(type='cuda', index=0))[source]
__init__(n_time_series: int, dec_in: int, c_out: int, seq_len, label_len, out_len, factor=5, d_model=512, n_heads=8, e_layers=3, d_layers=2, d_ff=512, dropout=0.0, attn='prob', embed='fixed', temp_depth=4, activation='gelu', device=device(type='cuda', index=0))[source]
This is based on the implementation of the Informer from the original authors, https://github.com/zhouhaoyi/Informer2020. We have done some minimal refactoring, but the core code remains the same. Additionally, we have added a few more options to the code.

Parameters
  • n_time_series (int) – The number of time series present in the multivariate forecasting problem.

  • dec_in (int) – The input size to the decoder (e.g. the number of time series passed to the decoder)

  • c_out (int) – The output dimension of the model (usually will be the number of variables you are forecasting).

  • seq_len (int) – The number of historical time steps to pass into the model.

  • label_len (int) – The length of the label sequence passed into the decoder (the number of known time steps prepended to the forecast window; these steps are not themselves forecasted)

  • out_len (int) – The number of time steps to forecast. forecast_length should equal out_len + label_len

  • factor (int, optional) – The multiplicative factor in the probabilistic attention mechanism, defaults to 5

  • d_model (int, optional) – The embedding dimension of the model, defaults to 512

  • n_heads (int, optional) – The number of heads in the multi-head attention mechanism, defaults to 8

  • e_layers (int, optional) – The number of layers in the encoder, defaults to 3

  • d_layers (int, optional) – The number of layers in the decoder, defaults to 2

  • d_ff (int, optional) – The dimension of the feed-forward network in each layer, defaults to 512

  • dropout (float, optional) – The dropout rate; 0.0 disables dropout, defaults to 0.0

  • attn (str, optional) – The type of the attention mechanism either ‘prob’ or ‘full’, defaults to ‘prob’

  • embed (str, optional) – Whether to use FixedEmbedding or torch.nn.Embedding, defaults to ‘fixed’

  • temp_depth (int, optional) – The number of temporal features (e.g. year, month, day, weekday, etc.), defaults to 4

  • activation (str, optional) – The activation function, defaults to ‘gelu’

  • device (torch.device, optional) – The device the model uses, defaults to torch.device(‘cuda:0’)
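To make the seq_len / label_len / out_len bookkeeping concrete, here is a minimal pure-Python sketch of the tensor shapes a forward pass expects. informer_shapes is a hypothetical helper (not part of flood_forecast), and it assumes the standard Informer convention that the decoder input spans label_len + out_len time steps:

```python
def informer_shapes(batch_size, seq_len, label_len, out_len,
                    n_time_series, dec_in, c_out, n_datetime_feats):
    """Expected tensor shapes for one Informer forward pass.

    Assumes the standard Informer convention: the decoder sees
    label_len known steps followed by out_len placeholder steps.
    """
    return {
        "x_enc": (batch_size, seq_len, n_time_series),
        "x_mark_enc": (batch_size, seq_len, n_datetime_feats),
        "x_dec": (batch_size, label_len + out_len, dec_in),
        "x_mark_dec": (batch_size, label_len + out_len, n_datetime_feats),
        "output": (batch_size, out_len, c_out),
    }

shapes = informer_shapes(batch_size=32, seq_len=96, label_len=48, out_len=24,
                         n_time_series=7, dec_in=7, c_out=1, n_datetime_feats=4)
# shapes["x_dec"] is (32, 72, 7) and shapes["output"] is (32, 24, 1)
```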

forward(x_enc: torch.Tensor, x_mark_enc, x_dec, x_mark_dec, enc_self_mask=None, dec_self_mask=None, dec_enc_mask=None)[source]
Parameters
  • x_enc (torch.Tensor) – The core tensor going into the model. Of dimension (batch_size, seq_len, n_time_series)

  • x_mark_enc (torch.Tensor) – A tensor with the relevant datetime information. (batch_size, seq_len, n_datetime_feats)

  • x_dec (torch.Tensor) – The input tensor to the decoder. Has dimension (batch_size, label_len + out_len, dec_in)

  • x_mark_dec (torch.Tensor) – A tensor with the relevant datetime information. (batch_size, seq_len, n_datetime_feats)

  • enc_self_mask (torch.Tensor, optional) – The self-attention mask for the encoder, defaults to None

  • dec_self_mask (torch.Tensor, optional) – The self-attention mask for the decoder, defaults to None

  • dec_enc_mask (torch.Tensor, optional) – The cross-attention mask between the decoder and the encoder output, defaults to None

Returns

Returns a PyTorch tensor of shape (batch_size, out_len, n_targets)

Return type

torch.Tensor
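At inference time the out_len future steps in x_dec are unknown; the usual Informer recipe is to keep the last label_len observed steps as context and zero-fill the forecast region. A sketch of that construction with plain Python lists (build_decoder_input is a hypothetical helper, not a flood_forecast function):

```python
def build_decoder_input(history, label_len, out_len):
    """Build an Informer-style decoder input from observed history.

    Keeps the last label_len observed steps as known context and
    appends out_len zero-filled rows for the steps to be forecast.
    history: list of time steps, each a list of feature values.
    """
    context = history[-label_len:]
    n_feats = len(history[0])
    placeholder = [[0.0] * n_feats for _ in range(out_len)]
    return context + placeholder

history = [[float(t)] for t in range(10)]  # 10 observed univariate steps
dec_inp = build_decoder_input(history, label_len=4, out_len=3)
# dec_inp has 7 rows: 4 observed steps followed by 3 zero rows
```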

training: bool
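The factor parameter controls how aggressively the ‘prob’ attention sparsifies: in the Informer paper, only about factor · ln(L) queries are scored against a similarly sized sample of keys, reducing the quadratic cost of full attention to O(L log L). A sketch of that budget (prob_sparse_budget is an illustrative helper, not library code):

```python
import math

def prob_sparse_budget(l_q, l_k, factor=5):
    """Queries kept and keys sampled by ProbSparse attention.

    Follows the u = factor * ceil(ln(L_Q)) and
    U = factor * ceil(ln(L_K)) budgets from the Informer paper,
    capped at the actual sequence lengths.
    """
    u = min(factor * math.ceil(math.log(l_q)), l_q)
    big_u = min(factor * math.ceil(math.log(l_k)), l_k)
    return u, big_u

u, big_u = prob_sparse_budget(96, 96, factor=5)
# With L = 96 and factor = 5, only 25 of the 96 queries are scored in full
```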
class flood_forecast.transformer_xl.informer.ConvLayer(c_in)[source]
__init__(c_in)[source]

Initializes the distilling convolution block with c_in input channels.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
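In the original Informer, ConvLayer implements the self-attention distilling step between encoder layers: a 1-D convolution followed by an activation and max-pooling with kernel size 3, stride 2, and padding 1, which roughly halves the time dimension at each stage. Assuming those pooling parameters, the resulting sequence lengths can be sketched as:

```python
def distilled_lengths(seq_len, n_stages):
    """Sequence length after each distilling ConvLayer stage.

    Assumes MaxPool1d(kernel_size=3, stride=2, padding=1), so
    L_out = (L_in + 2*1 - 3) // 2 + 1, roughly halving L each stage.
    """
    lengths = [seq_len]
    for _ in range(n_stages):
        lengths.append((lengths[-1] + 2 - 3) // 2 + 1)
    return lengths

lengths = distilled_lengths(96, 2)
# A 96-step input shrinks to 48 and then 24 steps
```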

training: bool
class flood_forecast.transformer_xl.informer.EncoderLayer(attention, d_model, d_ff=None, dropout=0.1, activation='relu')[source]
__init__(attention, d_model, d_ff=None, dropout=0.1, activation='relu')[source]

Initializes a single Informer encoder layer.

Parameters
  • attention (torch.nn.Module) – The attention block used for self-attention within the layer

  • d_model (int) – The embedding dimension of the layer

  • d_ff (int, optional) – The dimension of the feed-forward network, defaults to None (which sets it to 4 * d_model)

  • dropout (float, optional) – The dropout rate, defaults to 0.1

  • activation (str, optional) – The activation function, either ‘relu’ or ‘gelu’, defaults to ‘relu’

forward(x, attn_mask=None)[source]

Runs the encoder layer's forward pass: self-attention followed by the feed-forward block.

training: bool
class flood_forecast.transformer_xl.informer.Encoder(attn_layers, conv_layers=None, norm_layer=None)[source]
__init__(attn_layers, conv_layers=None, norm_layer=None)[source]

Initializes the encoder from a list of attention layers, optional distilling conv layers, and an optional normalization layer.

forward(x, attn_mask=None)[source]

Runs the stacked encoder layers (and any distilling conv layers) over the input.

training: bool
class flood_forecast.transformer_xl.informer.DecoderLayer(self_attention, cross_attention, d_model, d_ff=None, dropout=0.1, activation='relu')[source]
__init__(self_attention, cross_attention, d_model, d_ff=None, dropout=0.1, activation='relu')[source]

Initializes a decoder layer with separate self-attention and cross-attention mechanisms.

forward(x, cross, x_mask=None, cross_mask=None) torch.Tensor[source]

Runs the decoder layer's forward pass: self-attention, cross-attention over the encoder output, then the feed-forward block.

training: bool
class flood_forecast.transformer_xl.informer.Decoder(layers, norm_layer=None)[source]
__init__(layers, norm_layer=None)[source]

Initializes the decoder from a list of decoder layers and an optional normalization layer.

forward(x, cross, x_mask=None, cross_mask=None) torch.Tensor[source]

Runs the stacked decoder layers over the input and the encoder output.

training: bool