Warning: This document is for an old version of Rasa NLU. The latest version is 0.13.8.

Custom Components

You can create a custom component to perform a specific task that NLU doesn’t currently offer (for example, sentiment analysis). Below is the specification of the rasa_nlu.components.Component class, with the methods you’ll need to implement.

You can add a custom component to your pipeline by adding the module path. So if you have a module called sentiment containing a SentimentAnalyzer class:

pipeline:
- name: "sentiment.SentimentAnalyzer"

Also be sure to read the section on the Component Lifecycle.
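As a sketch, such a sentiment module might look like the following. The keyword-based scoring is purely illustrative, and a stub stands in for the rasa_nlu.components.Component base class so the example is self-contained:

```python
class Component(object):
    """Stub standing in for rasa_nlu.components.Component (illustration only)."""

    def __init__(self, component_config=None):
        self.component_config = component_config or {}


class SentimentAnalyzer(Component):
    """Toy sentiment analyzer based on keyword matching (illustrative only)."""

    name = "sentiment"
    provides = ["sentiment"]

    POSITIVE = {"great", "good", "love"}
    NEGATIVE = {"bad", "terrible", "hate"}

    def train(self, training_data, cfg, **kwargs):
        pass  # nothing to learn for a keyword-based approach

    def process(self, message, **kwargs):
        # message stands in for rasa_nlu's Message; here it is a plain dict
        words = set(message.get("text", "").lower().split())
        score = len(words & self.POSITIVE) - len(words & self.NEGATIVE)
        label = "pos" if score > 0 else "neg" if score < 0 else "neutral"
        message["sentiment"] = label

    def persist(self, model_dir):
        return None  # nothing to save for this toy component
```

In a real project the class would subclass the actual rasa_nlu base class, and process() would set an attribute on the Message object instead of a dict key.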

Component

class rasa_nlu.components.Component(component_config=None)

A component is a message processing unit in a pipeline.

Components are collected sequentially in a pipeline. Each component is called one after another. This holds for initialization, training, persisting and loading the components. If a component comes first in a pipeline, its methods will be called first.

E.g. to process an incoming message, the process method of each component will be called. During processing (as well as training, persisting and initialization) components can pass information to other components. This is done by providing attributes to the so-called pipeline context. The pipeline context contains all the information from previous components that a component can use for its own processing. For example, a featurizer component can provide features that are used by another component further down the pipeline to do intent classification.

classmethod required_packages()

Specify which Python packages need to be installed to use this component, e.g. ["spacy"].

This list of requirements allows us to fail early during training if a required package is not installed.
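The returned list is plain package names that a pipeline validator can check before training begins. A minimal sketch of both sides, using a standalone validator function rather than Rasa NLU's own internal check:

```python
import importlib.util


class SpacyBackedComponent(object):
    """Illustrative component declaring a dependency on an external package."""

    @classmethod
    def required_packages(cls):
        return ["spacy"]


def validate_requirements(component_cls):
    """Fail early if any package the component needs is not importable.

    This mimics what a pipeline validator could do with the returned list;
    it is not Rasa NLU's actual implementation.
    """
    missing = [pkg for pkg in component_cls.required_packages()
               if importlib.util.find_spec(pkg) is None]
    if missing:
        raise Exception("Missing required packages: {}".format(", ".join(missing)))
```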

classmethod create(cfg)

Creates this component (e.g. before a training is started).

The method can access all configuration parameters.
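A typical create() reads the component's own section of the configuration and merges it over defaults. In this sketch a plain dict stands in for the model configuration object, and the component name used for the lookup is hypothetical:

```python
class KeywordIntentClassifier(object):
    """Illustrative component whose create() reads configuration values."""

    defaults = {"case_sensitive": False}

    def __init__(self, component_config=None):
        config = dict(self.defaults)
        config.update(component_config or {})
        self.case_sensitive = config["case_sensitive"]

    @classmethod
    def create(cls, cfg):
        # cfg stands in for the full model config; a plain dict is used here.
        # The component picks out its own section, falling back to defaults.
        return cls(cfg.get("keyword_intent_classifier", {}))
```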

provide_context()

Initialize this component for a new pipeline.

This function will be called before the training is started and before the first message is processed using the interpreter. The component gets the opportunity to add information to the context that is passed through the pipeline during training and message parsing. Most components do not need to implement this method. It’s mostly used to initialize framework environments like MITIE and spaCy (e.g. loading word vectors for the pipeline).
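For instance, a component might load a heavyweight resource once and share it through the context. In this sketch a small dict stands in for something like a loaded spaCy language model:

```python
class LanguageModelLoader(object):
    """Sketch of a component that shares a resource via the pipeline context."""

    def __init__(self, component_config=None):
        # Stand-in for e.g. loading word vectors or a parser; a real
        # component would do the expensive load here, once.
        self.nlp = {"lang": "en", "vectors": "loaded"}

    def provide_context(self):
        # Whatever is returned here is merged into the pipeline context and
        # handed to later components (e.g. as keyword arguments).
        return {"nlp": self.nlp}
```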

train(training_data, cfg, **kwargs)

Train this component.

This is the component’s chance to train itself on the provided training data. The component can rely on any context attributes created by a call to components.Component.pipeline_init() of ANY component, as well as on any context attributes created by a call to components.Component.train() of components previous to this one in the pipeline.
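A minimal train() sketch, where a list of (text, intent) pairs stands in for Rasa NLU's TrainingData object and the learning is a trivial keyword-to-intent mapping:

```python
class KeywordIntentTrainer(object):
    """Illustrative train(): learn a keyword -> intent mapping (toy approach)."""

    def __init__(self):
        self.keywords = {}

    def train(self, training_data, cfg, **kwargs):
        # training_data stands in for rasa_nlu's TrainingData; here it is
        # simply a list of (text, intent) pairs.
        for text, intent in training_data:
            for word in text.lower().split():
                # first intent seen for a word wins in this toy scheme
                self.keywords.setdefault(word, intent)
```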

process(message, **kwargs)

Process an incoming message.

This is the component’s chance to process an incoming message. The component can rely on any context attributes created by a call to components.Component.pipeline_init() of ANY component, as well as on any context attributes created by a call to components.Component.process() of components previous to this one in the pipeline.
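A process() implementation typically reads the message, computes something, and attaches the result back onto the message for components further down the pipeline. A sketch, continuing the toy keyword-to-intent idea with a plain dict standing in for the Message object:

```python
class KeywordIntentProcessor(object):
    """Illustrative process(): attach an intent to the incoming message."""

    def __init__(self, keywords):
        self.keywords = keywords  # word -> intent, e.g. learned during train()

    def process(self, message, **kwargs):
        # message stands in for rasa_nlu's Message; here it is a plain dict
        for word in message.get("text", "").lower().split():
            if word in self.keywords:
                message["intent"] = {"name": self.keywords[word],
                                     "confidence": 1.0}
                return
        message["intent"] = {"name": None, "confidence": 0.0}
```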

persist(model_dir)

Persist this component to disk for future loading.
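A persist() sketch that writes learned state into the model directory and returns metadata describing what was written, so a later load can find the file again. The file name and returned key are illustrative, not Rasa NLU's actual conventions:

```python
import json
import os


class PersistableComponent(object):
    """Sketch of persist(): write learned state into model_dir."""

    def __init__(self, keywords=None):
        self.keywords = keywords or {}

    def persist(self, model_dir):
        path = os.path.join(model_dir, "keywords.json")
        with open(path, "w") as f:
            json.dump(self.keywords, f)
        # The returned dict can be stored alongside the model so that loading
        # code knows which file to read back.
        return {"keywords_file": "keywords.json"}
```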

prepare_partial_processing(pipeline, context)

Sets the pipeline and context used for partial processing.

The pipeline should be a list of components that are previous to this one in the pipeline and have already finished their training (and can therefore be safely used to process messages).

partially_process(message)

Allows the component to process messages during training (e.g. external training data).

The passed message will be processed by all components previous to this one in the pipeline.
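The two partial-processing methods can be sketched together as follows. The attribute names on the component and the stand-in upstream component are assumptions for illustration; only the described behavior (run all earlier components over a message during training) comes from the text above:

```python
class UpstreamUppercaser(object):
    """Stand-in for an already-trained component earlier in the pipeline."""

    def process(self, message, **kwargs):
        message["text"] = message["text"].upper()


class PartialProcessingComponent(object):
    """Sketch of partial processing during training."""

    def __init__(self):
        self.partial_pipeline = []
        self.partial_context = {}

    def prepare_partial_processing(self, pipeline, context):
        # pipeline: components previous to this one, already trained
        self.partial_pipeline = pipeline
        self.partial_context = context

    def partially_process(self, message):
        # run every earlier component over the message, in pipeline order
        for component in self.partial_pipeline:
            component.process(message, **self.partial_context)
        return message
```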

classmethod can_handle_language(language)

Check if component supports a specific language.

This method can be overridden when needed (e.g. to dynamically determine which languages are supported).
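A sketch of a language check against a fixed list. The language_list attribute and the treatment of None as "no restriction" are assumptions for this example:

```python
class EnglishOnlyComponent(object):
    """Sketch of restricting a component to certain languages."""

    language_list = ["en"]

    @classmethod
    def can_handle_language(cls, language):
        # None is treated here as "no language restriction configured"
        return language is None or language in cls.language_list
```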