Model Evaluator

flood_forecast.evaluator.stream_baseline(river_flow_df: pandas.core.frame.DataFrame, forecast_column: str, hours_forecast=336) → Tuple[pandas.core.frame.DataFrame, float][source]

Function to compute the baseline MSE by using the mean value of the training data as the prediction.
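
A minimal usage sketch, assuming a river-flow CSV whose target column is named "cfs" (both the path and the column name are hypothetical):

```python
import pandas as pd

from flood_forecast.evaluator import stream_baseline

# Hypothetical river-flow CSV with a discharge column named "cfs".
river_flow_df = pd.read_csv("river_flow.csv")

# Per the signature this returns a DataFrame and a float; the DataFrame is
# assumed to carry the mean-value baseline predictions, and the float is the
# baseline MSE over the final `hours_forecast` rows.
baseline_df, baseline_mse = stream_baseline(
    river_flow_df, forecast_column="cfs", hours_forecast=336
)
print(baseline_mse)
```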

flood_forecast.evaluator.plot_r2(river_flow_preds: pandas.core.frame.DataFrame) → float[source]

We assume at this point that river_flow_preds already has a predicted_baseline column and a predicted_model column.
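
A minimal sketch of the expected input; only the predicted_baseline and predicted_model columns are documented, so the observed-value column name ("cfs") is a guess:

```python
import pandas as pd

from flood_forecast.evaluator import plot_r2

river_flow_preds = pd.DataFrame({
    "cfs": [10.0, 12.0, 11.0, 13.0],                 # hypothetical observed flows
    "predicted_baseline": [11.5, 11.5, 11.5, 11.5],  # mean-value baseline forecast
    "predicted_model": [10.3, 11.8, 11.2, 12.7],     # model forecast
})

# Returns a float per the signature; the plotting behaviour is implied by the name.
r2 = plot_r2(river_flow_preds)
```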

flood_forecast.evaluator.get_model_r2_score(river_flow_df: pandas.core.frame.DataFrame, model_evaluate_function: Callable, forecast_column: str, hours_forecast=336)[source]

model_evaluate_function should perform any necessary preprocessing.
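
A hedged sketch of wiring up a model_evaluate_function; the exact contract of the callable (what it receives and returns) is not spelled out here, so the stub below assumes it is handed the river-flow DataFrame and returns it with a predicted_model column attached:

```python
import pandas as pd

from flood_forecast.evaluator import get_model_r2_score

def naive_evaluate(df: pd.DataFrame) -> pd.DataFrame:
    # Placeholder "model": a real implementation would run its own
    # preprocessing and a trained model here before attaching predictions.
    df = df.copy()
    df["predicted_model"] = df["cfs"].rolling(24, min_periods=1).mean()
    return df

river_flow_df = pd.read_csv("river_flow.csv")  # hypothetical path and "cfs" column
r2 = get_model_r2_score(
    river_flow_df, naive_evaluate, forecast_column="cfs", hours_forecast=336
)
```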

flood_forecast.evaluator.get_r2_value(model_mse, baseline_mse)[source]
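
The arguments suggest the usual skill-score relation between the model MSE and the baseline MSE; as a sketch of that relation (the textbook formula, not necessarily the library's exact implementation):

```python
def r2_from_mse(model_mse: float, baseline_mse: float) -> float:
    # 1.0 is a perfect model, 0.0 matches the baseline, and negative values
    # mean the model does worse than the mean-value baseline.
    return 1.0 - model_mse / baseline_mse
```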
flood_forecast.evaluator.get_value(the_path: str) → None[source]
flood_forecast.evaluator.metric_dict(metric: str) → Callable[source]
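
A hedged usage sketch; the set of metric names that metric_dict accepts is not listed here, so the "MSE" key and the (prediction, target) calling convention below are assumptions:

```python
import torch

from flood_forecast.evaluator import metric_dict

loss_fn = metric_dict("MSE")  # "MSE" is a guessed key; check the source for supported names
preds = torch.tensor([1.0, 2.0, 3.0])
targets = torch.tensor([1.5, 2.0, 2.5])
print(loss_fn(preds, targets))  # assumes the returned Callable behaves like a torch loss
```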
flood_forecast.evaluator.evaluate_model(model: Type[flood_forecast.time_model.TimeSeriesModel], model_type: str, target_col: List[str], evaluation_metrics: List[T], inference_params: Dict[KT, VT], eval_log: Dict[KT, VT]) → Tuple[Dict[KT, VT], pandas.core.frame.DataFrame, int, pandas.core.frame.DataFrame][source]

A function to evaluate a model. Requires a model of type TimeSeriesModel.
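
A hedged call sketch; `trained_model` stands in for an already-constructed TimeSeriesModel, and the model_type string, metric names, and inference_params keys are assumptions rather than documented values:

```python
from flood_forecast.evaluator import evaluate_model

eval_dict, eval_df, forecast_start_idx, df_prediction_samples = evaluate_model(
    trained_model,                                # placeholder TimeSeriesModel instance
    model_type="PyTorch",                         # assumed value
    target_col=["cfs"],                           # hypothetical target column
    evaluation_metrics=["MSE"],                   # assumed metric name
    inference_params={"hours_to_forecast": 336},  # assumed keys
    eval_log={},
)
```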

flood_forecast.evaluator.infer_on_torch_model(model, test_csv_path: str = None, datetime_start: datetime.datetime = datetime.datetime(2018, 9, 22, 0, 0), hours_to_forecast: int = 336, decoder_params=None, dataset_params: Dict[KT, VT] = {}, num_prediction_samples: int = None) → Tuple[pandas.core.frame.DataFrame, torch.Tensor, int, int, flood_forecast.preprocessing.pytorch_loaders.CSVTestLoader, pandas.core.frame.DataFrame][source]

Function to handle both test evaluation and inference on a test dataframe.

Returns:

- df: DataFrame including both the training and test data
- end_tensor: the final tensor after the model has finished its predictions
- history_length: number of rows to use in training
- forecast_start_idx: row index at which forecasting starts
- test_data: the CSVTestLoader instance
- df_prediction_samples: has the same index as df and a number of columns equal to num_prediction_samples, or no columns if num_prediction_samples is None
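
A hedged sketch of calling this on a trained model and unpacking the six return values listed above; `model` and the CSV path are placeholders:

```python
from datetime import datetime

from flood_forecast.evaluator import infer_on_torch_model

df, end_tensor, history_length, forecast_start_idx, test_data, df_prediction_samples = infer_on_torch_model(
    model,                                 # trained TimeSeriesModel (placeholder)
    test_csv_path="test_data.csv",         # hypothetical path
    datetime_start=datetime(2018, 9, 22),
    hours_to_forecast=336,
    num_prediction_samples=None,           # df_prediction_samples will then have no columns
)
```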
flood_forecast.evaluator.generate_predictions(model: Type[flood_forecast.time_model.TimeSeriesModel], df: pandas.core.frame.DataFrame, test_data: flood_forecast.preprocessing.pytorch_loaders.CSVTestLoader, history: torch.Tensor, device: torch.device, forecast_start_idx: int, forecast_length: int, hours_to_forecast: int, decoder_params: Dict[KT, VT]) → torch.Tensor[source]
flood_forecast.evaluator.generate_predictions_non_decoded(model: Type[flood_forecast.time_model.TimeSeriesModel], df: pandas.core.frame.DataFrame, test_data: flood_forecast.preprocessing.pytorch_loaders.CSVTestLoader, history_dim: torch.Tensor, forecast_length: int, hours_to_forecast: int) → torch.Tensor[source]
flood_forecast.evaluator.generate_decoded_predictions(model: Type[flood_forecast.time_model.TimeSeriesModel], test_data: flood_forecast.preprocessing.pytorch_loaders.CSVTestLoader, forecast_start_idx: int, device: torch.device, history_dim: torch.Tensor, hours_to_forecast: int, decoder_params: Dict[KT, VT]) → torch.Tensor[source]
flood_forecast.evaluator.generate_prediction_samples(model: Type[flood_forecast.time_model.TimeSeriesModel], df: pandas.core.frame.DataFrame, test_data: flood_forecast.preprocessing.pytorch_loaders.CSVTestLoader, history: torch.Tensor, device: torch.device, forecast_start_idx: int, forecast_length: int, hours_to_forecast: int, decoder_params: Dict[KT, VT], num_prediction_samples: int) → numpy.ndarray[source]
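
These generate_* helpers look like the internals behind infer_on_torch_model rather than user-facing entry points; the sketch below only illustrates the argument shapes from the generate_predictions signature, with every value a placeholder:

```python
import torch

from flood_forecast.evaluator import generate_predictions

end_tensor = generate_predictions(
    model,                        # trained TimeSeriesModel (placeholder)
    df,                           # DataFrame containing training and test rows
    test_data,                    # CSVTestLoader built from the test CSV
    history,                      # torch.Tensor of historical input rows
    torch.device("cpu"),
    forecast_start_idx=720,       # illustrative index
    forecast_length=1,            # illustrative value
    hours_to_forecast=336,
    decoder_params=None,          # assumed to select the non-decoded path when None
)
```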