Model
- class flood_forecast.da_rnn.model.MetaMerger(meta_params, meta_method, embed_shape, in_shape)[source]
- __init__(meta_params, meta_method, embed_shape, in_shape)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(temporal_data, meta_data)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class flood_forecast.da_rnn.model.DARNN(n_time_series: int, hidden_size_encoder: int, forecast_history: int, decoder_hidden_size: int, out_feats=1, dropout=0.01, meta_data=False, gru_lstm=True, probabilistic=False, final_act=None)[source]
- __init__(n_time_series: int, hidden_size_encoder: int, forecast_history: int, decoder_hidden_size: int, out_feats=1, dropout=0.01, meta_data=False, gru_lstm=True, probabilistic=False, final_act=None)[source]
For model benchmark information see https://rb.gy/koozff. A construction sketch follows the parameter list below.
- Parameters:
n_time_series (int) – Number of time series present in the input
hidden_size_encoder (int) – Dimension of the encoder hidden state
forecast_history (int) – How many historic time steps to use for forecasting (add one to this number)
decoder_hidden_size (int) – Dimension of the decoder hidden state
out_feats (int, optional) – The number of targets (or, for classification, the number of classes), defaults to 1
dropout (float, optional) – Dropout rate, defaults to 0.01
meta_data (bool, optional) – Whether to use meta-data, defaults to False
gru_lstm (bool, optional) – Specify True to use an LSTM, defaults to True
probabilistic (bool, optional) – Specify True to use the probabilistic variation, defaults to False
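Example: a minimal construction sketch. The hyperparameter values below are placeholders chosen for illustration, not recommendations from this library.

    import torch
    from flood_forecast.da_rnn.model import DARNN

    # Placeholder hyperparameters for illustration only.
    model = DARNN(
        n_time_series=5,           # number of input time series
        hidden_size_encoder=64,    # encoder hidden state size
        forecast_history=20,       # historic time steps used for forecasting
        decoder_hidden_size=64,    # decoder hidden state size
        out_feats=1,               # single regression target
        dropout=0.01,
        gru_lstm=True,             # True selects the LSTM variant (per the docstring)
        probabilistic=False,
    )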
- forward(x: Tensor, meta_data: Tensor = None) Tensor [source]
Performs the standard forward pass of the DARNN, with special handling for the probabilistic variant (see the call sketch below).
- Parameters:
x (torch.Tensor) – The core temporal data represented as a tensor of shape (batch_size, forecast_history, n_time_series)
meta_data (torch.Tensor, optional) – The meta-data tensor, defaults to None
- Returns:
The predicted values
- Return type:
torch.Tensor
On training the initial value of the hidden state, see:
https://r2rt.com/non-zero-initial-states-for-recurrent-neural-networks.html
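Example: a hedged forward-pass sketch. The input shape follows the forward() docstring above, (batch_size, forecast_history, n_time_series); depending on how the "(add one to this number)" note in __init__ is intended, the time dimension may need to be forecast_history + 1, so verify against your data loader.

    import torch
    from flood_forecast.da_rnn.model import DARNN

    model = DARNN(n_time_series=5, hidden_size_encoder=64,
                  forecast_history=20, decoder_hidden_size=64)

    # Dummy batch shaped (batch_size, forecast_history, n_time_series),
    # as stated in the forward() docstring; sizes are placeholders.
    x = torch.randn(4, 20, 5)

    model.eval()
    with torch.no_grad():
        y_pred = model(x)   # torch.Tensor of predictions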
- class flood_forecast.da_rnn.model.Encoder(input_size: int, hidden_size: int, T: int, gru_lstm: bool = True, meta_data: bool = False)[source]
- __init__(input_size: int, hidden_size: int, T: int, gru_lstm: bool = True, meta_data: bool = False)[source]
- Parameters:
input_size (int) – Number of underlying factors (e.g., 81)
hidden_size (int) – Dimension of the hidden state
T (int) – Number of time steps (e.g., 10)
- forward(input_data: Tensor, meta_data=None) Tuple[Tensor, Tensor] [source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
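Example: a construction and call sketch for the Encoder. The sizes reuse the docstring's examples (81 factors, T = 10); the hidden size is a placeholder. The input shape (batch_size, T - 1, input_size) is an assumption carried over from the standard DA-RNN formulation, not something stated on this page, so verify it against the source.

    import torch
    from flood_forecast.da_rnn.model import Encoder

    enc = Encoder(input_size=81, hidden_size=64, T=10)

    # Assumed driving-series shape: (batch_size, T - 1, input_size).
    driving_series = torch.randn(4, 9, 81)

    # Returns a Tuple[Tensor, Tensor]; in the standard DA-RNN these are the
    # attention-weighted inputs and the encoded hidden states.
    weighted, encoded = enc(driving_series)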
- class flood_forecast.da_rnn.model.Decoder(encoder_hidden_size: int, decoder_hidden_size: int, T: int, out_feats=1, gru_lstm: bool = True, probabilistic: bool = True)[source]
- __init__(encoder_hidden_size: int, decoder_hidden_size: int, T: int, out_feats=1, gru_lstm: bool = True, probabilistic: bool = True)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input_encoded: Tensor, y_history: Tensor) Tensor [source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
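Example: a construction and call sketch for the Decoder. All shapes here are assumptions based on the standard DA-RNN formulation: input_encoded as produced by the Encoder, (batch_size, T - 1, encoder_hidden_size), and y_history holding past target values, (batch_size, T - 1, out_feats); verify them against the source before relying on this.

    import torch
    from flood_forecast.da_rnn.model import Decoder

    dec = Decoder(encoder_hidden_size=64, decoder_hidden_size=64, T=10,
                  out_feats=1, probabilistic=False)

    # Assumed shapes (not stated on this page).
    input_encoded = torch.randn(4, 9, 64)   # encoder output
    y_history = torch.randn(4, 9, 1)        # past values of the target series

    y_pred = dec(input_encoded, y_history)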