This is documentation for Rasa Open Source Documentation v2.7.x, which is no longer actively maintained.
For up-to-date documentation, see the latest version (2.8.x).

rasa.model_testing

test_core_models_in_directory

test_core_models_in_directory(model_directory: Text, stories: Text, output: Text, use_conversation_test_files: bool = False) -> None

Evaluates a directory with multiple Core models using test data.

Arguments:

  • model_directory - Directory containing multiple model files.
  • stories - Path to a conversation test file.
  • output - Output directory to store results to.
  • use_conversation_test_files - True if conversation test files should be used for testing instead of regular Core story files.

plot_core_results

plot_core_results(output_directory: Text, number_of_examples: List[int]) -> None

Plot core model comparison graph.

Arguments:

  • output_directory - path to the output directory
  • number_of_examples - number of examples per run

test_core_models

test_core_models(models: List[Text], stories: Text, output: Text, use_conversation_test_files: bool = False) -> None

Compares multiple Core models based on test data.

Arguments:

  • models - A list of model files.
  • stories - Path to test data.
  • output - Path to output directory for test results.
  • use_conversation_test_files - True if conversation test files should be used for testing instead of regular Core story files.

test_core

test_core(model: Optional[Text] = None, stories: Optional[Text] = None, output: Text = DEFAULT_RESULTS_PATH, additional_arguments: Optional[Dict] = None, use_conversation_test_files: bool = False) -> None

Tests a trained Core model against a set of test stories.

test_nlu

async test_nlu(model: Optional[Text], nlu_data: Optional[Text], output_directory: Text = DEFAULT_RESULTS_PATH, additional_arguments: Optional[Dict] = None) -> None

Tests the NLU Model.

compare_nlu_models

async compare_nlu_models(configs: List[Text], test_data: TrainingData, output: Text, runs: int, exclusion_percentages: List[int]) -> None

Trains multiple models, compares them and saves the results.
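The exclusion_percentages argument controls how much training data is held back on each comparison run, so the models can be compared on progressively smaller training sets. As a rough sketch of how such percentages might translate into per-run training-set sizes (training_sizes is a hypothetical helper for illustration, not part of rasa.model_testing):

```python
from typing import List


def training_sizes(total_examples: int, exclusion_percentages: List[int]) -> List[int]:
    """Number of training examples kept after excluding each percentage."""
    return [
        total_examples - int(total_examples * pct / 100)
        for pct in exclusion_percentages
    ]


print(training_sizes(200, [0, 25, 50, 75]))  # [200, 150, 100, 50]
```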

plot_nlu_results

plot_nlu_results(output_directory: Text, number_of_examples: List[int]) -> None

Plot NLU model comparison graph.

Arguments:

  • output_directory - path to the output directory
  • number_of_examples - number of examples per run

perform_nlu_cross_validation

perform_nlu_cross_validation(config: Text, data: TrainingData, output: Text, additional_arguments: Optional[Dict[Text, Any]]) -> None

Runs cross-validation on test data.

Arguments:

  • config - The model configuration.
  • data - The data which is used for the cross-validation.
  • output - Output directory for the cross-validation results.
  • additional_arguments - Additional arguments which are passed to the cross-validation, such as disable_plotting.

get_evaluation_metrics

get_evaluation_metrics(targets: Iterable[Any], predictions: Iterable[Any], output_dict: bool = False, exclude_label: Optional[Text] = None) -> Tuple[Union[Text, Dict[Text, Dict[Text, float]]], float, float, float]

Compute the f1, precision, accuracy and summary report from sklearn.

Arguments:

  • targets - target labels
  • predictions - predicted labels
  • output_dict - if True sklearn returns a summary report as dict, if False the report is in string format
  • exclude_label - labels to exclude from evaluation

Returns:

Report from sklearn, precision, f1, and accuracy values.
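get_evaluation_metrics delegates the actual computation to sklearn; the simplified sketch below only illustrates how weighted-average precision, f1, and accuracy relate to per-label counts (it is not the sklearn implementation, and weighted_metrics is a hypothetical name):

```python
from collections import defaultdict
from typing import Iterable, Tuple


def weighted_metrics(
    targets: Iterable[str], predictions: Iterable[str]
) -> Tuple[float, float, float]:
    """Support-weighted precision and f1, plus plain accuracy."""
    targets, predictions = list(targets), list(predictions)
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(targets, predictions):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    n = len(targets)
    precision = f1 = 0.0
    for label in set(targets):
        support = tp[label] + fn[label]
        p_l = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        r_l = tp[label] / support if support else 0.0
        f_l = 2 * p_l * r_l / (p_l + r_l) if p_l + r_l else 0.0
        # each label's score is weighted by its share of the true labels
        precision += support / n * p_l
        f1 += support / n * f_l
    accuracy = sum(t == p for t, p in zip(targets, predictions)) / n
    return precision, f1, accuracy
```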

clean_labels

clean_labels(labels: Iterable[Text]) -> List[Text]

Remove None labels. sklearn metrics do not support them.

Arguments:

  • labels - list of labels

Returns:

Cleaned labels.
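Taking the docstring at face value, a minimal sketch of such a helper might simply filter out the None entries (clean_labels_sketch is an illustrative stand-in, not the Rasa implementation, which may instead substitute a placeholder label):

```python
from typing import Iterable, List, Optional


def clean_labels_sketch(labels: Iterable[Optional[str]]) -> List[str]:
    """Drop None labels, which sklearn metrics cannot handle."""
    return [label for label in labels if label is not None]


print(clean_labels_sketch(["greet", None, "goodbye"]))  # ['greet', 'goodbye']
```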

get_unique_labels

get_unique_labels(targets: Iterable[Text], exclude_label: Optional[Text]) -> List[Text]

Get unique labels. Exclude 'exclude_label' if specified.

Arguments:

  • targets - labels
  • exclude_label - label to exclude

Returns:

Unique labels.
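A minimal sketch of this behavior, assuming a set-based deduplication (get_unique_labels_sketch is an illustrative stand-in; the real function's ordering may differ):

```python
from typing import Iterable, List, Optional


def get_unique_labels_sketch(
    targets: Iterable[str], exclude_label: Optional[str]
) -> List[str]:
    """Collect the distinct labels, dropping exclude_label if given."""
    unique = set(targets)
    unique.discard(exclude_label)  # no-op if exclude_label is None or absent
    return sorted(unique)  # sorted for determinism in this sketch
```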