
rasa.core.agent

load_from_server

async load_from_server(agent: "Agent", model_server: EndpointConfig) -> "Agent"

Load a persisted model from a server.
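
As an illustration (not from the original docs), here is a minimal sketch of pulling a model from a model server into a fresh agent. The server URL is a placeholder, and wait_time_between_pulls is set to None so the model is pulled once instead of being polled periodically:

# Minimal sketch: load a model from a (hypothetical) model server.
import asyncio

from rasa.core.agent import Agent, load_from_server
from rasa.utils.endpoints import EndpointConfig

async def main() -> None:
    # wait_time_between_pulls=None -> pull the model once, no background polling
    model_server = EndpointConfig(
        url="http://localhost:8000/models/default", wait_time_between_pulls=None
    )
    agent = await load_from_server(Agent(), model_server=model_server)
    print(agent.is_ready())

asyncio.run(main())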

create_agent

create_agent(model: Text, endpoints: Text = None) -> "Agent"

Create an agent instance based on a stored model.

Arguments:

  • model - file path to the stored model
  • endpoints - file path to the used endpoint configuration
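
A minimal usage sketch (the model archive and endpoints file names are placeholders):

from rasa.core.agent import create_agent

# Build an agent from a stored model and an endpoint configuration file.
agent = create_agent("models/20210601-120000.tar.gz", endpoints="endpoints.yml")
print(agent.is_ready())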

load_agent

async load_agent(model_path: Optional[Text] = None, model_server: Optional[EndpointConfig] = None, remote_storage: Optional[Text] = None, interpreter: Optional[NaturalLanguageInterpreter] = None, generator: Union[EndpointConfig, NaturalLanguageGenerator] = None, tracker_store: Optional[TrackerStore] = None, lock_store: Optional[LockStore] = None, action_endpoint: Optional[EndpointConfig] = None) -> Optional["Agent"]

Loads an agent from a server, remote storage, or disk.

Arguments:

  • model_path - Path to the model if it's on disk.
  • model_server - Configuration for a potential server which serves the model.
  • remote_storage - URL of remote storage for model.
  • interpreter - NLU interpreter to parse incoming messages.
  • generator - Optional response generator.
  • tracker_store - TrackerStore for persisting the conversation history.
  • lock_store - LockStore to prevent a conversation from being modified by concurrent actors.
  • action_endpoint - Action server configuration for executing custom actions.

Returns:

The instantiated Agent or None.
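
A minimal usage sketch, assuming a model archive on disk (the path is a placeholder):

import asyncio

from rasa.core.agent import load_agent

async def main() -> None:
    # Returns None if no model could be loaded from the given sources.
    agent = await load_agent(model_path="models/20210601-120000.tar.gz")
    if agent is None:
        raise RuntimeError("no model could be loaded")
    print(agent.is_ready())

asyncio.run(main())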

Agent Objects

class Agent()

The Agent class provides a convenient interface for the most important Rasa functionality.

This includes training, handling messages, loading a dialogue model, getting the next action, and handling a channel.

load

| @classmethod
| load(cls, model_path: Union[Text, Path], interpreter: Optional[NaturalLanguageInterpreter] = None, generator: Union[EndpointConfig, NaturalLanguageGenerator] = None, tracker_store: Optional[TrackerStore] = None, lock_store: Optional[LockStore] = None, action_endpoint: Optional[EndpointConfig] = None, model_server: Optional[EndpointConfig] = None, remote_storage: Optional[Text] = None, path_to_model_archive: Optional[Text] = None, new_config: Optional[Dict] = None, finetuning_epoch_fraction: float = 1.0) -> "Agent"

Load a persisted model from the passed path.
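
A minimal usage sketch, assuming a model archive on disk and a local action server (both placeholders):

from rasa.core.agent import Agent
from rasa.utils.endpoints import EndpointConfig

# Load the persisted model and point custom actions at an action server.
agent = Agent.load(
    "models/20210601-120000.tar.gz",
    action_endpoint=EndpointConfig(url="http://localhost:5055/webhook"),
)
print(agent.is_ready())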

is_core_ready

| is_core_ready() -> bool

Check if all necessary components and policies are ready to use the agent.

is_ready

| is_ready() -> bool

Check if all necessary components are instantiated to use the agent.

Policies might not be available if this is an NLU-only agent.
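
For example (the model path is a placeholder):

from rasa.core.agent import Agent

agent = Agent.load("models/20210601-120000.tar.gz")
# An NLU-only agent can parse messages but has no dialogue policies loaded.
if agent.is_ready() and not agent.is_core_ready():
    print("NLU-only agent: messages can be parsed, but no policies are loaded")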

parse_message_using_nlu_interpreter

| async parse_message_using_nlu_interpreter(message_data: Text, tracker: DialogueStateTracker = None) -> Dict[Text, Any]

Handles both plain message text and intent payload input messages.

The return value of this function is the parsed data.

Arguments:

  • message_data Text - Contains the received message in text or intent payload format.
  • tracker DialogueStateTracker - Contains the tracker to be used by the interpreter.

Returns:

The parsed message.

Example:

{
  "text": '/greet{"name":"Rasa"}',
  "intent": {"name": "greet", "confidence": 1.0},
  "intent_ranking": [{"name": "greet", "confidence": 1.0}],
  "entities": [{"entity": "name", "start": 6, "end": 21, "value": "Rasa"}]
}
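
A minimal calling sketch that produces output in this shape (the model path is a placeholder):

import asyncio

from rasa.core.agent import Agent

async def main() -> None:
    agent = Agent.load("models/20210601-120000.tar.gz")
    # Parse an intent payload; no tracker is passed, so no conversation state is used.
    parsed = await agent.parse_message_using_nlu_interpreter('/greet{"name":"Rasa"}')
    print(parsed["intent"], parsed["entities"])

asyncio.run(main())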

handle_message

| async handle_message(message: UserMessage, message_preprocessor: Optional[Callable[[Text], Text]] = None, **kwargs: Any) -> Optional[List[Dict[Text, Any]]]

Handle a single message.
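
A minimal usage sketch using a CollectingOutputChannel so the bot responses are returned to the caller (the model path and sender id are placeholders):

import asyncio

from rasa.core.agent import Agent
from rasa.core.channels.channel import CollectingOutputChannel, UserMessage

async def main() -> None:
    agent = Agent.load("models/20210601-120000.tar.gz")
    # Wrap raw text in a UserMessage; the collecting channel buffers bot replies.
    message = UserMessage("hello", CollectingOutputChannel(), sender_id="user-1")
    responses = await agent.handle_message(message)
    print(responses)

asyncio.run(main())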

predict_next

| async predict_next(sender_id: Text, **kwargs: Any) -> Optional[Dict[Text, Any]]

Predict the next action for the conversation identified by sender_id.
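
A minimal usage sketch (the model path and sender id are placeholders):

import asyncio

from rasa.core.agent import Agent

async def main() -> None:
    agent = Agent.load("models/20210601-120000.tar.gz")
    # Create some conversation state, then ask what Core would do next.
    await agent.handle_text("hello", sender_id="user-1")
    prediction = await agent.predict_next("user-1")
    print(prediction)

asyncio.run(main())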

log_message

| async log_message(message: UserMessage, message_preprocessor: Optional[Callable[[Text], Text]] = None, **kwargs: Any) -> DialogueStateTracker

Append a message to a dialogue - does not predict actions.

execute_action

| async execute_action(sender_id: Text, action: Text, output_channel: OutputChannel, policy: Optional[Text], confidence: Optional[float]) -> Optional[DialogueStateTracker]

Execute a single action in a conversation and return the resulting tracker.
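
A minimal usage sketch that runs a single utterance action and inspects what it sent (the model path, sender id, and action name are placeholders):

import asyncio

from rasa.core.agent import Agent
from rasa.core.channels.channel import CollectingOutputChannel

async def main() -> None:
    agent = Agent.load("models/20210601-120000.tar.gz")
    channel = CollectingOutputChannel()
    # Run the action for this conversation; its messages land in the channel buffer.
    await agent.execute_action("user-1", "utter_greet", channel, policy=None, confidence=None)
    print(channel.messages)

asyncio.run(main())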

trigger_intent

| async trigger_intent(intent_name: Text, entities: List[Dict[Text, Any]], output_channel: OutputChannel, tracker: DialogueStateTracker) -> None

Trigger a user intent, e.g. in response to an external event.
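
A minimal usage sketch (the model path, intent, and sender id are placeholders). It assumes the default in-memory tracker store, whose get_or_create_tracker is synchronous in 2.x:

import asyncio

from rasa.core.agent import Agent
from rasa.core.channels.channel import CollectingOutputChannel

async def main() -> None:
    agent = Agent.load("models/20210601-120000.tar.gz")
    tracker = agent.tracker_store.get_or_create_tracker("user-1")
    channel = CollectingOutputChannel()
    # Inject the intent as if the user had sent it, then collect the bot replies.
    await agent.trigger_intent("greet", entities=[], output_channel=channel, tracker=tracker)
    print(channel.messages)

asyncio.run(main())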

handle_text

| async handle_text(text_message: Union[Text, Dict[Text, Any]], message_preprocessor: Optional[Callable[[Text], Text]] = None, output_channel: Optional[OutputChannel] = None, sender_id: Optional[Text] = DEFAULT_SENDER_ID) -> Optional[List[Dict[Text, Any]]]

Handle a single text message.

If a message preprocessor is passed, the message will be passed to that function first and the return value is then used as the input for the dialogue engine.

The return value of this function depends on the output_channel. If the output channel is not set (i.e. None) or is a CollectingOutputChannel, this function returns the messages the bot wants to respond with.

Example:

>>> from rasa.core.agent import Agent
>>> from rasa.core.interpreter import RasaNLUInterpreter
>>> agent = Agent.load("examples/moodbot/models")
>>> await agent.handle_text("hello")
[u'how can I help you?']

load_data

| async load_data(training_resource: Union[Text, TrainingDataImporter], remove_duplicates: bool = True, unique_last_num_states: Optional[int] = None, augmentation_factor: int = 50, tracker_limit: Optional[int] = None, use_story_concatenation: bool = True, debug_plots: bool = False, exclusion_percentage: Optional[int] = None) -> List[DialogueStateTracker]

Load training data from a resource.

train

| train(training_trackers: List[DialogueStateTracker], **kwargs: Any) -> None

Train the policies / policy ensemble using dialogue data from file.

Arguments:

  • training_trackers - trackers to train on
  • **kwargs - additional arguments passed to the underlying ML trainer (e.g. keras parameters)
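
A minimal end-to-end sketch covering load_data, train, and persist (file names are placeholders; a single MemoizationPolicy stands in for a real policy configuration):

import asyncio

from rasa.core.agent import Agent
from rasa.core.policies.memoization import MemoizationPolicy

async def main() -> None:
    # Build an untrained agent from a domain file and a placeholder policy.
    agent = Agent("domain.yml", policies=[MemoizationPolicy()])
    trackers = await agent.load_data("data/stories.yml")
    agent.train(trackers)
    agent.persist("models/dialogue")

asyncio.run(main())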

persist

| persist(model_path: Text) -> None

Persists this agent into a directory for later loading and usage.

visualize

| async visualize(resource_name: Text, output_file: Text, max_history: Optional[int] = None, nlu_training_data: Optional[TrainingData] = None, should_merge_nodes: bool = True, fontsize: int = 12) -> None

Visualize the loaded training data from the resource.
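
A minimal usage sketch (file names are placeholders):

import asyncio

from rasa.core.agent import Agent

async def main() -> None:
    agent = Agent.load("models/20210601-120000.tar.gz")
    # Render the story graph for the given stories file to an HTML file.
    await agent.visualize("data/stories.yml", "graph.html", max_history=2)

asyncio.run(main())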

create_processor

| create_processor(preprocessor: Optional[Callable[[Text], Text]] = None) -> MessageProcessor

Instantiates a processor based on the agent's current state.