Version: 3.x
rasa.core.policies.ted_policy

TEDPolicy Objects

@DefaultV1Recipe.register(
    DefaultV1Recipe.ComponentType.POLICY_WITH_END_TO_END_SUPPORT, is_trainable=True
)
class TEDPolicy(Policy)
Transformer Embedding Dialogue (TED) Policy.
The model architecture is described in detail in https://arxiv.org/abs/1910.00486.
In summary, the architecture comprises the following steps:

- concatenate user input (user intent and entities), previous system actions,
  slots and active forms for each time step into an input vector for the
  pre-transformer embedding layer;
- feed it to a transformer;
- apply a dense layer to the output of the transformer to get embeddings of the
  dialogue for each time step;
- apply a dense layer to create embeddings for system actions for each time
  step;
- calculate the similarity between the dialogue embedding and the embedded system
  actions. This step is based on the StarSpace
  (https://arxiv.org/abs/1709.03856) idea; a minimal sketch of it follows the list.
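
The toy snippet below illustrates only that last step, scoring one dialogue embedding against candidate action embeddings by dot-product similarity in the StarSpace style. It is not Rasa code; every name and shape in it is invented for illustration.

# Illustrative only: NOT Rasa code. Dot-product similarity between a dialogue
# embedding and embeddings of all candidate system actions, StarSpace-style.
import numpy as np

embedding_dim = 20
num_actions = 7
rng = np.random.default_rng(0)

# Hypothetical outputs of the dense layers described above.
dialogue_embedding = rng.standard_normal(embedding_dim)              # one time step
action_embeddings = rng.standard_normal((num_actions, embedding_dim))

# Similarity between the dialogue state and every candidate action.
similarities = action_embeddings @ dialogue_embedding                # shape (num_actions,)

# Softmax turns similarities into one confidence per action; the argmax is the
# action such a model would propose for this time step.
confidences = np.exp(similarities) / np.exp(similarities).sum()
print(int(np.argmax(confidences)), float(confidences.max()))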
get_default_config

@staticmethod
def get_default_config() -> Dict[Text, Any]

Returns the default config (see parent class for full docstring).
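
A minimal sketch of inspecting these defaults from Python, assuming a Rasa 3.x installation; any of the printed keys (for example epochs or max_history) can then be overridden in the policies section of config.yml.

# A minimal sketch, assuming Rasa 3.x is installed in the current environment.
from rasa.core.policies.ted_policy import TEDPolicy

defaults = TEDPolicy.get_default_config()

# Print every default hyperparameter; keys such as "epochs" or "max_history"
# can be overridden in the `policies` section of config.yml.
for key, value in sorted(defaults.items()):
    print(f"{key}: {value}")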
__init__

def __init__(config: Dict[Text, Any],
             model_storage: ModelStorage,
             resource: Resource,
             execution_context: ExecutionContext,
             model: Optional[RasaModel] = None,
             featurizer: Optional[TrackerFeaturizer] = None,
             fake_features: Optional[Dict[Text, List[Features]]] = None,
             entity_tag_specs: Optional[List[EntityTagSpec]] = None) -> None

Declares instance variables with default values.
model_class

@staticmethod
def model_class() -> Type[TED]

Gets the class of the model architecture to be used by the policy.

Returns:
Required class.
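
Since the architecture is resolved through this hook, a subclass can in principle return a customized TED variant. The sketch below only shows that pattern; MyTED and MyTEDPolicy are hypothetical names, and a real custom policy would also need the usual component registration.

# A sketch only: MyTED and MyTEDPolicy are hypothetical names, not part of Rasa.
from typing import Type

from rasa.core.policies.ted_policy import TED, TEDPolicy


class MyTED(TED):
    """Hypothetical TED variant with a customized architecture."""


class MyTEDPolicy(TEDPolicy):
    """Hypothetical policy that trains MyTED instead of the stock TED model."""

    @staticmethod
    def model_class() -> Type[TED]:
        # The policy instantiates and trains whatever class is returned here.
        return MyTED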
run_training

def run_training(model_data: RasaModelData,
                 label_ids: Optional[np.ndarray] = None) -> None

Feeds the featurized training data to the model.

Arguments:
- model_data - Featurized training data.
- label_ids - Label ids corresponding to the data points in model_data.
  These may or may not be used by the function depending on how the policy is trained.

train

def train(training_trackers: List[TrackerWithCachedStates],
          domain: Domain,
          precomputations: Optional[MessageContainerForCoreFeaturization] = None,
          **kwargs: Any) -> Resource

Trains the policy (see parent class for full docstring).
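
In practice training is driven by rasa train rather than by calling this method directly; the sketch below merely exercises the documented signature and assumes policy, training_trackers, and domain were created elsewhere.

# A minimal sketch, assuming `policy` (a TEDPolicy), `training_trackers`
# (List[TrackerWithCachedStates]) and `domain` (Domain) already exist.
resource = policy.train(training_trackers, domain)

# The returned Resource points at the trained model inside the model storage
# and can later be handed to TEDPolicy.load().
print(resource)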
predict_action_probabilities

def predict_action_probabilities(
        tracker: DialogueStateTracker,
        domain: Domain,
        rule_only_data: Optional[Dict[Text, Any]] = None,
        precomputations: Optional[MessageContainerForCoreFeaturization] = None,
        **kwargs: Any) -> PolicyPrediction

Predicts the next action (see parent class for full docstring).
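
A sketch of querying a trained policy, assuming policy, tracker, and domain already exist; pairing the returned probabilities with domain.action_names_or_texts is an assumption about Rasa 3.x, not something stated in this docstring.

# A minimal sketch, assuming `policy` (a trained TEDPolicy), `tracker`
# (DialogueStateTracker) and `domain` (Domain) already exist.
prediction = policy.predict_action_probabilities(tracker, domain)

# Assumption about Rasa 3.x: PolicyPrediction exposes one confidence per action,
# aligned with domain.action_names_or_texts.
scores = dict(zip(domain.action_names_or_texts, prediction.probabilities))
best_action = max(scores, key=scores.get)
print(best_action, scores[best_action])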
persist

Persists the policy to a storage.
persist_model_utilities

def persist_model_utilities(model_path: Path) -> None

Persists model's utility attributes like model weights, etc.

Arguments:
- model_path - Path where model is to be persisted

load

@classmethod
def load(cls, config: Dict[Text, Any], model_storage: ModelStorage,
         resource: Resource, execution_context: ExecutionContext,
         **kwargs: Any) -> TEDPolicy

Loads a policy from the storage (see parent class for full docstring).
TED Objects

class TED(TransformerRasaModel)

TED model architecture from https://arxiv.org/abs/1910.00486.
__init__

def __init__(data_signature: Dict[Text, Dict[Text, List[FeatureSignature]]],
             config: Dict[Text, Any], max_history_featurizer_is_used: bool,
             label_data: RasaModelData,
             entity_tag_specs: Optional[List[EntityTagSpec]]) -> None

Initializes the TED model.

Arguments:
- data_signature - the data signature of the input data
- config - the model configuration
- max_history_featurizer_is_used - if 'True' only the last dialogue turn will be used
- label_data - the label data
- entity_tag_specs - the entity tag specifications

batch_loss

def batch_loss(
        batch_in: Union[Tuple[tf.Tensor, ...], Tuple[np.ndarray, ...]]) -> tf.Tensor

Calculates the loss for the given batch.

Arguments:
- batch_in - the batch of input data

Returns:
The loss of the given batch.
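
batch_loss is called by Rasa's internal training loop rather than by users. The generic TensorFlow loop below only illustrates the kind of driver that consumes such a per-batch loss; it trains a toy Keras model and is not Rasa code.

# Illustrative only: a generic tf.GradientTape loop of the kind that consumes a
# per-batch loss; it trains a toy Keras model, not the Rasa TED model.
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

# Toy tensors standing in for one featurized batch.
features = tf.random.normal((32, 4))
targets = tf.random.normal((32, 1))


def batch_loss(batch_in):
    """Toy counterpart of TED.batch_loss: mean squared error on one batch."""
    x, y = batch_in
    return tf.reduce_mean(tf.square(model(x) - y))


for step in range(5):
    with tf.GradientTape() as tape:
        loss = batch_loss((features, targets))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print(f"step {step}: loss {float(loss):.4f}")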
prepare_for_predict

def prepare_for_predict() -> None

Prepares the model for prediction.
batch_predict

def batch_predict(
        batch_in: Union[Tuple[tf.Tensor, ...], Tuple[np.ndarray, ...]]
) -> Dict[Text, Union[tf.Tensor, Dict[Text, tf.Tensor]]]

Predicts the output of the given batch.

Arguments:
- batch_in - the batch of input data

Returns:
The output to predict.