This is documentation for Rasa 2.x, which is no longer actively maintained. For up-to-date documentation, see the latest version (3.x).

rasa.core.test

WrongPredictionException Objects

class WrongPredictionException(RasaException, ValueError)

Raised if a wrong prediction is encountered.
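
This exception surfaces when story evaluation is run with fail_on_prediction_errors=True (see the test function below). A minimal, hedged sketch of handling it — the evaluation call itself is elided:

```python
from rasa.core.test import WrongPredictionException

try:
    ...  # run story evaluation here with fail_on_prediction_errors=True
except WrongPredictionException as exc:
    # a wrong prediction was encountered and the evaluation was aborted
    print(f"Story evaluation failed: {exc}")
```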

WarningPredictedAction Objects

class WarningPredictedAction(ActionExecuted)

The model predicted the correct action, but with a warning.

__init__

| __init__(action_name_prediction: Text, action_name: Optional[Text] = None, policy: Optional[Text] = None, confidence: Optional[float] = None, timestamp: Optional[float] = None, metadata: Optional[Dict] = None)

Creates an event for an action_unlikely_intent prediction that is flagged as a warning.

See the docstring of the parent class for more information.

inline_comment

| inline_comment() -> Text

A comment attached to this event. Used during dumping.

WronglyPredictedAction Objects

class WronglyPredictedAction(ActionExecuted)

The model predicted the wrong action.

Mostly used to mark wrong predictions so that they can be dumped as stories.

__init__

| __init__(action_name_target: Text, action_text_target: Text, action_name_prediction: Text, policy: Optional[Text] = None, confidence: Optional[float] = None, timestamp: Optional[float] = None, metadata: Optional[Dict] = None, predicted_action_unlikely_intent: bool = False) -> None

Creates an event for a wrongly predicted action.

See the docstring of the parent class ActionExecuted for more information.

inline_comment

| inline_comment() -> Text

A comment attached to this event. Used during dumping.

as_story_string

| as_story_string() -> Text

Returns the story equivalent representation.

__repr__

| __repr__() -> Text

Returns event as string for debugging.
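
A minimal sketch of constructing this event by hand (normally the evaluation loop creates it). The action names are hypothetical, and the empty end-to-end text is an assumption for this sketch:

```python
from rasa.core.test import WronglyPredictedAction

# hypothetical target/prediction pair; empty action_text_target for this sketch
event = WronglyPredictedAction(
    action_name_target="utter_goodbye",
    action_text_target="",
    action_name_prediction="utter_greet",
)
print(event.as_story_string())  # the target action plus an inline comment
print(event.inline_comment())   # names the wrongly predicted action
```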

EvaluationStore Objects

class EvaluationStore()

Class storing action, intent and entity predictions and targets.

__init__

| __init__(action_predictions: Optional[PredictionList] = None, action_targets: Optional[PredictionList] = None, intent_predictions: Optional[PredictionList] = None, intent_targets: Optional[PredictionList] = None, entity_predictions: Optional[List["EntityPrediction"]] = None, entity_targets: Optional[List["EntityPrediction"]] = None) -> None

Initialize store attributes.

add_to_store

| add_to_store(action_predictions: Optional[PredictionList] = None, action_targets: Optional[PredictionList] = None, intent_predictions: Optional[PredictionList] = None, intent_targets: Optional[PredictionList] = None, entity_predictions: Optional[List["EntityPrediction"]] = None, entity_targets: Optional[List["EntityPrediction"]] = None) -> None

Add items or lists of items to the store.

merge_store

| merge_store(other: "EvaluationStore") -> None

Add the contents of other to self.

check_prediction_target_mismatch

| check_prediction_target_mismatch() -> bool

Checks if intent, entity or action predictions don't match expected ones.

serialise

| serialise() -> Tuple[PredictionList, PredictionList]

Turns targets and predictions into lists of equal length for use with sklearn.
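
A hedged usage sketch of the store; the intent and action labels are made up for illustration, and the (targets, predictions) return order of serialise is assumed from its docstring:

```python
from rasa.core.test import EvaluationStore

store = EvaluationStore(
    intent_targets=["greet"],
    intent_predictions=["goodbye"],
)
store.add_to_store(
    action_targets=["utter_greet"],
    action_predictions=["utter_greet"],
)

if store.check_prediction_target_mismatch():
    # align targets and predictions into equally sized lists for sklearn
    targets, predictions = store.serialise()
```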

EndToEndUserUtterance Objects

class EndToEndUserUtterance(UserUttered)

End-to-end user utterance.

Mostly used to print the full end-to-end user message in the failed_test_stories.yml output file.

as_story_string

| as_story_string(e2e: bool = True) -> Text

Returns the story equivalent representation.

WronglyClassifiedUserUtterance Objects

class WronglyClassifiedUserUtterance(UserUttered)

The NLU model classified the user utterance incorrectly.

Mostly used to mark wrong predictions so that they can be dumped as stories.

__init__

| __init__(event: UserUttered, eval_store: EvaluationStore) -> None

Set predicted_intent and predicted_entities attributes.

inline_comment

| inline_comment() -> Optional[Text]

A comment attached to this event. Used during dumping.

inline_comment_for_entity

| @staticmethod
| inline_comment_for_entity(predicted: Dict[Text, Any], entity: Dict[Text, Any]) -> Optional[Text]

Returns the predicted entity, which is then printed as a comment.

as_story_string

| as_story_string(e2e: bool = True) -> Text

Returns text representation of event.
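
A sketch of how such an event can be built from a user turn and an EvaluationStore, as the evaluation loop does internally; the intent names are hypothetical:

```python
from rasa.shared.core.events import UserUttered
from rasa.core.test import EvaluationStore, WronglyClassifiedUserUtterance

user_event = UserUttered("hello there", intent={"name": "greet"})
eval_store = EvaluationStore(
    intent_predictions=["goodbye"],
    intent_targets=["greet"],
)
wrong = WronglyClassifiedUserUtterance(user_event, eval_store)
print(wrong.as_story_string())  # target intent, with the prediction as a comment
```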

emulate_loop_rejection

emulate_loop_rejection(partial_tracker: DialogueStateTracker) -> None

Adds an ActionExecutionRejected event to the tracker.

During evaluation, we don't run the action server; therefore, in order to correctly test the unhappy paths of loops, we need to emulate loop rejection.

Arguments:

  • partial_tracker - a rasa.core.trackers.DialogueStateTracker instance
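
A self-contained sketch, assuming the tracker's active loop is a hypothetical form called my_form:

```python
from rasa.core.test import emulate_loop_rejection
from rasa.shared.core.events import ActiveLoop
from rasa.shared.core.trackers import DialogueStateTracker

# build a tracker whose active loop is a hypothetical form
tracker = DialogueStateTracker.from_events("test_sender", evts=[ActiveLoop("my_form")])

emulate_loop_rejection(tracker)
# the tracker now ends with an ActionExecutionRejected event for "my_form",
# letting the evaluation follow the loop's unhappy path
```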

test

async test(stories: Text, agent: "Agent", max_stories: Optional[int] = None, out_directory: Optional[Text] = None, fail_on_prediction_errors: bool = False, e2e: bool = False, disable_plotting: bool = False, successes: bool = False, errors: bool = True, warnings: bool = True) -> Dict[Text, Any]

Runs the evaluation of the stories and optionally plots the results.

Arguments:

  • stories - the stories to evaluate on
  • agent - the trained agent to evaluate
  • max_stories - maximum number of stories to consider
  • out_directory - path to the directory to store results in
  • fail_on_prediction_errors - whether to raise an exception when a prediction is wrong
  • e2e - whether to run an end-to-end evaluation
  • disable_plotting - whether to disable plotting
  • successes - whether to write down successful predictions
  • errors - whether to write down incorrect predictions
  • warnings - whether to write down prediction warnings

Returns:

Evaluation summary.
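
A hedged sketch of calling the coroutine programmatically; the model and story paths are placeholders:

```python
import asyncio

from rasa.core.agent import Agent
import rasa.core.test

agent = Agent.load("models/20220101-000000.tar.gz")  # placeholder model path
summary = asyncio.run(
    rasa.core.test.test(
        stories="tests/test_stories.yml",  # placeholder stories file
        agent=agent,
        out_directory="results",
    )
)
print(summary)
```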

compare_models_in_dir

async compare_models_in_dir(model_dir: Text, stories_file: Text, output: Text, use_conversation_test_files: bool = False) -> None

Evaluates multiple trained models in a directory on a test set.

Arguments:

  • model_dir - path to directory that contains the models to evaluate
  • stories_file - path to the story file
  • output - output directory to store results to
  • use_conversation_test_files - True if conversation test files should be used for testing instead of regular Core story files.

compare_models

async compare_models(models: List[Text], stories_file: Text, output: Text, use_conversation_test_files: bool = False) -> None

Evaluates multiple trained models on a test set.

Arguments:

  • models - Paths to model files.
  • stories_file - path to the story file
  • output - output directory to store results to
  • use_conversation_test_files - True if conversation test files should be used for testing instead of regular Core story files.
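
A sketch of comparing two trained models on the same test stories; the paths are placeholders. compare_models_in_dir behaves analogously, but takes a directory of models instead of an explicit list:

```python
import asyncio

from rasa.core.test import compare_models

asyncio.run(
    compare_models(
        models=["models/policy_a.tar.gz", "models/policy_b.tar.gz"],  # placeholder paths
        stories_file="tests/test_stories.yml",
        output="results/comparison",
    )
)
```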