PyTorch Loaders

class flood_forecast.preprocessing.pytorch_loaders.CSVDataLoader(file_path: str, forecast_history: int, forecast_length: int, target_col: List, relevant_cols: List, scaling=None, start_stamp: int = 0, end_stamp: Optional[int] = None, gcp_service_key: Optional[str] = None, interpolate_param: bool = False, sort_column=None, scaled_cols=None, feature_params=None, no_scale=False)[source]

Bases: Generic[torch.utils.data.dataset.T_co]

__init__(file_path: str, forecast_history: int, forecast_length: int, target_col: List, relevant_cols: List, scaling=None, start_stamp: int = 0, end_stamp: Optional[int] = None, gcp_service_key: Optional[str] = None, interpolate_param: bool = False, sort_column=None, scaled_cols=None, feature_params=None, no_scale=False)[source]

A data loader that takes a CSV file and properly batches it for training or evaluating a PyTorch model.

Parameters
  • file_path – The path to the CSV file you wish to use.

  • forecast_history – The length of the historical time series data you wish to use for forecasting.

  • forecast_length – The number of time steps to forecast ahead (for transformer models this must equal the history length).

  • relevant_cols – The column names you wish to use in the forecast (other columns will not be used).

  • target_col – The target column or columns you wish to predict. If you only have one, still use a list, e.g. [‘cfs’].

  • scaling – (highly recommended) If provided, should be a subclass of sklearn.base.BaseEstimator and sklearn.base.TransformerMixin (e.g. StandardScaler, MaxAbsScaler, MinMaxScaler). Note that without a scaler the loss is likely to explode to infinity, which will corrupt the weights.

  • start_stamp (int) – Optional; supply this if you want to use only part of the CSV for training, validation, or testing.

  • end_stamp (int) – Optional; supply this if you want to use only part of the CSV for training, validation, or testing.

  • sort_column (str) – The column to sort the time series on prior to forecasting.

  • scaled_cols – The columns you want scaling applied to (if left blank, defaults to all columns).

  • feature_params – The datetime features you want to create.

  • no_scale – If True, the end labels will not be scaled.
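The forecast_history / forecast_length windowing can be illustrated with a small sketch. The function name make_window below is illustrative only, not part of the library; it shows how an item pairs a history window with the forecast window that follows it:

```python
import numpy as np

def make_window(data, idx, forecast_history, forecast_length):
    # Pair a history window with the following forecast window,
    # as CSVDataLoader conceptually does per item.
    src = data[idx : idx + forecast_history]
    trg = data[idx + forecast_history : idx + forecast_history + forecast_length]
    return src, trg

series = np.arange(10, dtype=float).reshape(-1, 1)  # ten rows, one feature
src, trg = make_window(series, 0, forecast_history=5, forecast_length=2)
# src has shape (5, 1); trg has shape (2, 1)
```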

inverse_scale(result_data: Union[torch.Tensor, pandas.core.series.Series, numpy.ndarray]) → torch.Tensor[source]

Undoes the scaling of the data.

Parameters

result_data (Union[torch.Tensor, pd.Series, np.ndarray]) – The data you want to unscale; multiple data types are supported.

Returns

The unscaled data as a PyTorch tensor.

Return type

torch.Tensor
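Conceptually, inverse_scale is the inverse_transform half of the sklearn scaler supplied via the scaling argument. A minimal round-trip sketch (not using the library itself):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
raw = np.array([[10.0], [20.0], [30.0]])
scaled = scaler.fit_transform(raw)           # what the loader applies at load time
restored = scaler.inverse_transform(scaled)  # what inverse_scale undoes
```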

class flood_forecast.preprocessing.pytorch_loaders.CSVSeriesIDLoader(series_id_col: str, main_params: dict, return_method: str, return_all=True)[source]

Bases: Generic[torch.utils.data.dataset.T_co]

__init__(series_id_col: str, main_params: dict, return_method: str, return_all=True)[source]

A data-loader for a CSV file that contains a series ID column.

Parameters
  • series_id_col (str) – The name of the column containing the series ID

  • main_params (dict) – The central set of parameters

  • return_method (str) – The method of return

  • return_all (bool, optional) – Whether to return all items, defaults to True
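The core idea of a series-ID loader can be sketched with a plain pandas group-by. The column name river_id and the values below are hypothetical; the point is that the frame is split into one time series per unique ID:

```python
import pandas as pd

df = pd.DataFrame({
    "river_id": [1, 1, 2, 2, 2],         # hypothetical series ID column
    "cfs": [10.0, 12.0, 3.0, 4.0, 5.0],
})
# One time series per unique ID, as the loader conceptually splits the data:
per_series = {sid: g.drop(columns="river_id") for sid, g in df.groupby("river_id")}
```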

inverse_scale(result_data: Union[torch.Tensor, pandas.core.series.Series, numpy.ndarray]) → torch.Tensor

Undoes the scaling of the data.

Parameters

result_data (Union[torch.Tensor, pd.Series, np.ndarray]) – The data you want to unscale; multiple data types are supported.

Returns

The unscaled data as a PyTorch tensor.

Return type

torch.Tensor

class flood_forecast.preprocessing.pytorch_loaders.CSVTestLoader(df_path: str, forecast_total: int, use_real_precip=True, use_real_temp=True, target_supplied=True, interpolate=False, sort_column_clone=None, **kwargs)[source]

Bases: Generic[torch.utils.data.dataset.T_co]

__init__(df_path: str, forecast_total: int, use_real_precip=True, use_real_temp=True, target_supplied=True, interpolate=False, sort_column_clone=None, **kwargs)[source]
A data loader for the test data.

Parameters

df_path (str) – The path to the CSV file containing the test data.

get_from_start_date(forecast_start: datetime.datetime)[source]
convert_real_batches(the_col: str, rows_to_convert)[source]

A helper function to return properly divided precip and temp values to be stacked with forecasted cfs.

convert_history_batches(the_col: Union[str, List[str]], rows_to_convert: pandas.core.frame.DataFrame)[source]

A helper function to return the dataframe in batches of shape (history_len, num_features).

Parameters
  • the_col (str) – The column names

  • rows_to_convert (pd.DataFrame) – The rows of the dataframe to be converted into batches
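The (history_len, num_features) batching can be sketched without the library. The helper name to_history_batches is illustrative; it simply chops consecutive, non-overlapping windows out of a dataframe:

```python
import numpy as np
import pandas as pd

def to_history_batches(frame, history_len):
    # Split a dataframe into consecutive arrays of shape (history_len, num_features).
    values = frame.to_numpy()
    n_full = len(values) // history_len
    return [values[i * history_len:(i + 1) * history_len] for i in range(n_full)]

frame = pd.DataFrame({"cfs": range(6), "precip": range(6)})
batches = to_history_batches(frame, history_len=3)
# two batches, each of shape (3, 2)
```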

inverse_scale(result_data: Union[torch.Tensor, pandas.core.series.Series, numpy.ndarray]) → torch.Tensor

Undoes the scaling of the data.

Parameters

result_data (Union[torch.Tensor, pd.Series, np.ndarray]) – The data you want to unscale; multiple data types are supported.

Returns

The unscaled data as a PyTorch tensor.

Return type

torch.Tensor

class flood_forecast.preprocessing.pytorch_loaders.AEDataloader(file_path: str, relevant_cols: List, scaling=None, start_stamp: int = 0, target_col: Optional[List] = None, end_stamp: Optional[int] = None, unsqueeze_dim: int = 1, interpolate_param=False, forecast_history=1, no_scale=True, sort_column=None)[source]

Bases: Generic[torch.utils.data.dataset.T_co]

__init__(file_path: str, relevant_cols: List, scaling=None, start_stamp: int = 0, target_col: Optional[List] = None, end_stamp: Optional[int] = None, unsqueeze_dim: int = 1, interpolate_param=False, forecast_history=1, no_scale=True, sort_column=None)[source]
A data loader class for autoencoders. Overrides __len__ and __getitem__ from the generic data loader.

Also defaults forecast_history and forecast_length to 1, since an autoencoder will typically use only one row at a time. The parameters are otherwise the same as in CSVDataLoader.

Parameters
  • file_path (str) – The path to the file

  • relevant_cols (List) – The columns to use from the CSV file

  • scaling (optional) – An optional sklearn-style scaler (see CSVDataLoader), defaults to None

  • start_stamp (int, optional) – Optional start index into the CSV, defaults to 0

  • target_col (List, optional) – The target column or columns, defaults to None

  • end_stamp (int, optional) – Optional end index into the CSV, defaults to None

  • unsqueeze_dim (int, optional) – The dimension on which to unsqueeze the returned tensor, defaults to 1

  • interpolate_param (bool, optional) – Whether to interpolate missing values, defaults to False

  • forecast_history (int, optional) – The length of the input window, defaults to 1

  • no_scale (bool, optional) – If True, the end labels will not be scaled, defaults to True

  • sort_column (str, optional) – The column to sort the time series on, defaults to None
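The autoencoder case is easy to sketch: the input and the target are the same row, and unsqueeze_dim adds an extra axis. The helper ae_item below is illustrative, not the library's actual __getitem__:

```python
import numpy as np

data = np.arange(8.0).reshape(4, 2)  # four rows, two features

def ae_item(idx, unsqueeze_dim=1):
    # For an autoencoder the input and the target are the same row;
    # the extra axis mirrors the unsqueeze_dim argument.
    row = data[idx]
    src = np.expand_dims(row, axis=unsqueeze_dim)
    return src, src

src, trg = ae_item(0)
# src and trg are identical, with shape (2, 1)
```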

inverse_scale(result_data: Union[torch.Tensor, pandas.core.series.Series, numpy.ndarray]) → torch.Tensor

Undoes the scaling of the data.

Parameters

result_data (Union[torch.Tensor, pd.Series, np.ndarray]) – The data you want to unscale; multiple data types are supported.

Returns

The unscaled data as a PyTorch tensor.

Return type

torch.Tensor

class flood_forecast.preprocessing.pytorch_loaders.GeneralClassificationLoader(params: Dict, n_classes: int = 2)[source]

Bases: Generic[torch.utils.data.dataset.T_co]

__init__(params: Dict, n_classes: int = 2)[source]

A generic data loader class for TS classification problems.

Parameters
  • params (Dict) – The standard dictionary for a dataloader (see CSVDataLoader)

  • n_classes – The number of classes in the problem
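For classification, each item pairs a window of features with a class label. The sketch below is illustrative (one-hot labels are an assumption, not necessarily the library's exact return format):

```python
import numpy as np

def classification_item(features, labels, idx, seq_len, n_classes=2):
    # Pair a window of features with a one-hot class label (illustrative).
    x = features[idx : idx + seq_len]
    y = np.eye(n_classes)[labels[idx + seq_len - 1]]
    return x, y

feats = np.random.rand(10, 3)
labels = np.array([0, 0, 0, 1, 1, 0, 1, 0, 1, 1])
x, y = classification_item(feats, labels, 0, seq_len=4)
# x has shape (4, 3); y is the one-hot label for the window's last step
```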

inverse_scale(result_data: Union[torch.Tensor, pandas.core.series.Series, numpy.ndarray]) → torch.Tensor

Undoes the scaling of the data.

Parameters

result_data (Union[torch.Tensor, pd.Series, np.ndarray]) – The data you want to unscale; multiple data types are supported.

Returns

The unscaled data as a PyTorch tensor.

Return type

torch.Tensor

class flood_forecast.preprocessing.pytorch_loaders.TemporalLoader(time_feats: List[str], kwargs: Dict, label_len=0)[source]

Bases: Generic[torch.utils.data.dataset.T_co]

__init__(time_feats: List[str], kwargs: Dict, label_len=0)[source]

A data loader class for creating specific temporal features/embeddings.

Parameters
  • time_feats (List[str]) – A list of strings of the time features (e.g. [‘month’, ‘day’, ‘hour’])

  • kwargs (Dict[str, Any]) – The set of parameters

  • label_len (int, optional) – The label sequence length used by Informer-based models, defaults to 0
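Temporal features like those in time_feats can be derived from a datetime column with pandas' .dt accessor. A minimal sketch (the column name datetime is hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"datetime": pd.date_range("2021-01-01", periods=3, freq="h")})
# Create one numeric column per requested temporal feature:
for feat in ["month", "day", "hour"]:
    df[feat] = getattr(df["datetime"].dt, feat)
```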

static df_to_numpy(pandas_stuff: pandas.core.frame.DataFrame)[source]
inverse_scale(result_data: Union[torch.Tensor, pandas.core.series.Series, numpy.ndarray]) → torch.Tensor

Undoes the scaling of the data.

Parameters

result_data (Union[torch.Tensor, pd.Series, np.ndarray]) – The data you want to unscale; multiple data types are supported.

Returns

The unscaled data as a PyTorch tensor.

Return type

torch.Tensor

class flood_forecast.preprocessing.pytorch_loaders.TemporalTestLoader(time_feats, kwargs={}, decoder_step_len=None)[source]

Bases: Generic[torch.utils.data.dataset.T_co]

__init__(time_feats, kwargs={}, decoder_step_len=None)[source]

A test data-loader class for data in the format of the TemporalLoader.

Parameters
  • time_feats (List[str]) – The temporal features to use in encoding.

  • kwargs (dict, optional) – The dict used to instantiate CSVTestLoader parent, defaults to {}

  • decoder_step_len (int, optional) – The length of the decoder input sequence, defaults to None
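Informer-style decoders typically receive the last few known steps followed by zero placeholders for the horizon to predict, which is what label_len / decoder_step_len control. A sketch of that construction (the helper name is illustrative):

```python
import numpy as np

def build_decoder_input(history, label_len, pred_len):
    # Informer-style decoder input: the last label_len known steps followed
    # by zero placeholders for the steps to be predicted (illustrative).
    known = history[-label_len:]
    placeholder = np.zeros((pred_len, history.shape[1]))
    return np.concatenate([known, placeholder], axis=0)

hist = np.ones((8, 2))
dec_in = build_decoder_input(hist, label_len=3, pred_len=4)
# dec_in has shape (7, 2): three known rows of ones, then four zero rows
```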

convert_history_batches(the_col: Union[str, List[str]], rows_to_convert: pandas.core.frame.DataFrame)

A helper function to return the dataframe in batches of shape (history_len, num_features).

Parameters
  • the_col (str) – The column names

  • rows_to_convert (pd.DataFrame) – The rows of the dataframe to be converted into batches

convert_real_batches(the_col: str, rows_to_convert)

A helper function to return properly divided precip and temp values to be stacked with forecasted cfs.

get_from_start_date(forecast_start: datetime.datetime)
inverse_scale(result_data: Union[torch.Tensor, pandas.core.series.Series, numpy.ndarray]) → torch.Tensor

Undoes the scaling of the data.

Parameters

result_data (Union[torch.Tensor, pd.Series, np.ndarray]) – The data you want to unscale; multiple data types are supported.

Returns

The unscaled data as a PyTorch tensor.

Return type

torch.Tensor

static df_to_numpy(pandas_stuff: pandas.core.frame.DataFrame)[source]