
# Thinking CALMly

CALM was designed specifically for creating enterprise-grade conversational assistants that scale. If you have previously built LLM-based assistants, this page can be especially helpful in challenging you to rethink how you approach a problem, so that you build assistants idiomatically and end up with more reliable conversational AI as a result.

## Avoid Using an LLM for Deterministic Tasks

Before handing a task to an LLM, always consider whether it can be done deterministically instead; deterministic logic gives you guarantees about the behaviour of your assistant.

As an example, let's consider an assistant for an ecommerce shop. We have the user's most recent transactions:

| Product Name | Order ID | Order Date |
| --- | --- | --- |
| Vintage Leather Backpack | 98765 | 2024-02-23 |
| Smart Home Speaker | 98766 | 2024-02-20 |
| Limited Edition Vinyl Record | 98767 | 2024-02-18 |

Users will refer to these products in many different ways (e.g. 'the leather bag', 'my speaker'), and the business logic needs all the transaction details to be identified before it can proceed.

One approach to identifying what the user is referring to is to rely on the command generator to set a slot for each attribute of the transaction, i.e. Order ID, Product Name, and Order Date. So if the user said 'the leather bag', the command generator would be expected to issue three SetSlot commands: SetSlot("product name", "Vintage Leather Backpack"), SetSlot("OrderID", 98765), and SetSlot("Order Date", "2024-02-23"). This requires the LLM to be powerful enough to keep the correspondence between the attributes of a transaction exactly right.

However, data like this almost always has a unique identifier for every entry, and having the LLM predict just that identifier is enough to look up the other details.

So the preferred CALM solution in the example above is to expect the LLM to output only SetSlot("OrderID", 98765), and then write a custom action that fetches the other attributes of the transaction.
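
Such a lookup action could look like the following sketch. The action name, the `order_id` handling, and the in-memory `TRANSACTIONS` table are illustrative assumptions; in practice you would query your order database or API:

```python
from rasa_sdk import Action
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher

# Hypothetical stand-in for your order database or API.
TRANSACTIONS = {
    "98765": {"product_name": "Vintage Leather Backpack", "order_date": "2024-02-23"},
    "98766": {"product_name": "Smart Home Speaker", "order_date": "2024-02-20"},
    "98767": {"product_name": "Limited Edition Vinyl Record", "order_date": "2024-02-18"},
}


class FetchTransactionDetails(Action):
    def name(self) -> str:
        return "fetch_transaction_details"

    def run(self, dispatcher: CollectingDispatcher, tracker, domain):
        # The LLM only has to predict the unique identifier...
        order_id = str(tracker.get_slot("OrderID"))
        transaction = TRANSACTIONS.get(order_id, {})
        # ...and deterministic code fills in the remaining attributes.
        return [
            SlotSet("product_name", transaction.get("product_name")),
            SlotSet("order_date", transaction.get("order_date")),
        ]
```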

## Use LLMs to Generate Structured Queries

If you want to answer questions based on structured data, generate the arguments you need to run a structured query, rather than using an LLM to reason over the data.

For example, if you have a database of airports, and you want your assistant to answer questions like: "How many Terminals does JFK have?"

| Name | City | Country | Num Terminals | IATA Code | Latitude | Longitude | Links Count | ObjectID |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| La Guardia | New York | United States | 3 | LGA | 40.777245 | -73.872608 | 316 | 3697 |
| John F Kennedy Intl | New York | United States | 5 | JFK | 40.639751 | -73.778925 | 911 | 3797 |

Instead of feeding the data to an LLM and asking it to generate an answer, define a flow with slot values for the query you want to run:

```yaml
flows:
  airport_info:
    description: answer users' questions about airports
    steps:
      - collect: iata_code
      - collect: airport_name
      - collect: attribute_to_query
      - action: utter_query
```

Your command generator then outputs StartFlow("airport_info"), SetSlot("iata_code", "JFK"), SetSlot("attribute_to_query", "num_terminals"). This approach is much more robust when you want to support more complex questions in the future, like: "which New York Airports have more than 4 Terminals?".
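
One way to implement the final step of the flow is to replace utter_query with a custom action that assembles the query from the collected slots. The following is a minimal sketch assuming a SQLite file airports.db with a table matching the columns above; the action name, file name, and attribute whitelist are assumptions:

```python
import sqlite3

from rasa_sdk import Action
from rasa_sdk.executor import CollectingDispatcher

# Whitelist mapping slot values to column names, so the LLM's output
# can never inject arbitrary SQL into the query.
ALLOWED_ATTRIBUTES = {
    "num_terminals": "num_terminals",
    "city": "city",
    "links_count": "links_count",
}


class QueryAirportData(Action):
    def name(self) -> str:
        return "query_airport_data"

    def run(self, dispatcher: CollectingDispatcher, tracker, domain):
        iata_code = tracker.get_slot("iata_code")
        column = ALLOWED_ATTRIBUTES.get(tracker.get_slot("attribute_to_query"))
        if column is None:
            dispatcher.utter_message(text="Sorry, I can't look that up.")
            return []
        with sqlite3.connect("airports.db") as conn:
            row = conn.execute(
                f"SELECT {column} FROM airports WHERE iata_code = ?",
                (iata_code,),
            ).fetchone()
        dispatcher.utter_message(text=f"{iata_code} {column}: {row[0]}")
        return []
```

Because the LLM only supplies arguments, supporting richer questions later is a matter of adding slots and extending the query logic; the LLM's job stays the same.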

## Keep Logic out of Custom Actions and Inside Flows

Anyone on your team should be able to understand how your assistant works by looking at your flows, either by inspecting the YAML files directly or through the Studio UI.

Hiding logic inside custom actions makes your assistant more difficult to understand. So avoid writing a custom action like this:

```python
from rasa_sdk import Action
from rasa_sdk.executor import CollectingDispatcher


class CheckRestaurantAvailability(Action):
    def name(self) -> str:
        return "check_restaurant_availability"

    def run(self, dispatcher: CollectingDispatcher, tracker, domain):
        has_availability = True  # (fetched from an API)
        if has_availability:
            dispatcher.utter_message(text="Yes, we have availability today.")
        else:
            dispatcher.utter_message(text="Unfortunately, we are fully booked.")
        return []
```

Instead, write a custom action that returns the relevant data to the flow, and specify the conditions in a next block:

```python
from rasa_sdk import Action
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher


class CheckRestaurantAvailability(Action):
    def name(self) -> str:
        return "check_restaurant_availability"

    def run(self, dispatcher: CollectingDispatcher, tracker, domain):
        has_availability = True  # (fetched from an API)
        return [SlotSet("has_availability", has_availability)]
```

```yaml
flows:
  restaurant_booking:
    description: reserve a table at a restaurant
    steps:
      - action: check_restaurant_availability
        next:
          - if: slots.has_availability
            then:
              - action: utter_has_availability
                next: END
          - else:
              - action: utter_no_availability
                next: END
```

## Use Logical Operators Inside Flows to Define Business Logic

In CALM, we use LLMs to understand users, not to guess business logic. If a flow should execute different business logic depending on whether the user is a minor, avoid relying on an LLM to pick the appropriate branch. Instead, use conditions with structured operators inside the flow to branch the logic.

```yaml
flows:
  change_address:
    description: Allow a user to change their address.
    steps:
      # noop does nothing itself; it exists only so we can branch in next
      - noop: true
        next:
          - if: slots.age < 18
            then:
              - action: utter_contact_helpline
                next: END
          - else: "ask_new_address"
      - id: "ask_new_address"
        collect: "address"
      ...
```

## Use Deterministic Logic to Restrict Access

If you have built a capability that should only be accessible to a particular category of users, use flow guards to hide that capability from other users rather than relying on an LLM to reason about access.

For example, if your assistant should allow only premium customers to talk to a live agent, avoid encoding this rule in the description of the flow:

```yaml
flows:
  talk_to_agent:
    description: Allows only premium users to talk to a human agent.
    steps:
      - action: trigger_human_handoff
```

Instead, the CALM way is to use deterministic logic defined in the flow guard property (if) of the flow:

```yaml
flows:
  talk_to_agent:
    description: Allows a user to talk to a human agent
    if: slots.is_premium_user
    steps:
      - action: trigger_human_handoff
```

With this flow guard in place, the capability to talk to a live agent is never visible to the command generator when a non-premium user is talking to the assistant, and hence such users can never access it.
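
The is_premium_user slot itself should also be populated deterministically, for example from your CRM when the conversation starts. A minimal sketch of one way to do this, where the action name and the lookup_customer_tier helper are hypothetical:

```python
from rasa_sdk import Action
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher


def lookup_customer_tier(customer_id: str) -> str:
    """Hypothetical CRM lookup; replace with a real API call."""
    return "premium"


class SetCustomerTier(Action):
    def name(self) -> str:
        return "action_set_customer_tier"

    def run(self, dispatcher: CollectingDispatcher, tracker, domain):
        # Deterministically derive the user's tier from your CRM,
        # rather than letting the LLM guess who is premium.
        is_premium = lookup_customer_tier(tracker.sender_id) == "premium"
        return [SlotSet("is_premium_user", is_premium)]
```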