A couple of years ago we introduced conversation-driven development (CDD): the process of listening to your users and using those insights to build a more user-friendly AI assistant.
That may sound obvious, but we felt compelled to evangelize this approach after seeing dozens of enterprise teams make the same mistake: developing chatbots in a vacuum, without a mature process for learning from real user behavior. Fortunately, things have gotten better. These days, most teams know that adapting to real users, both before and after the initial “go live”, is critical to building a successful AI assistant. But knowing that you should isn’t enough; you need a mature process for doing so.
Building for months before first contact with real users is never advisable, but in conversational AI the problem is even starker than when you’re building a web or mobile app. An app may be confusing, but there are only so many places to click, and users might eventually find their way. AI assistants, on the other hand, accept free-form text or voice input. If your assistant hasn’t been designed and trained around how people actually talk, your users will hit a dead end. And you cannot sit inside your building and simply guess what your users will say; you have to actively seek out and incorporate this feedback.
We’ve worked with the most advanced teams in the largest enterprises in the world to put CDD into practice at every stage of chat- and voice-bot development, and we’ve seen the dramatic effect it has on user satisfaction and business KPIs. Based on our experience, we’ve refined and expanded how we think and talk about CDD. We’ve boiled it down to three guiding principles, which I’ll introduce here. In the next posts in this series, I’ll illustrate how these principles apply at every stage of development.
Get out of your bubble
Every progression through the 5 levels of AI assistants reduces the burden on the end user to translate what they want into something your organisation can understand. Great conversational AI lets users describe their situation in their own words and figures out how the organisation can help them. By default, enterprise teams build from inside their bubble, mapping internal logic and divisions directly onto intents and bot behaviour. This consistently leads to a poor experience in which users are forced to navigate your org chart. Your customers shouldn’t have to worry about which department they’re speaking to, and shouldn’t have to use the ‘correct’ names for your products and services. They speak from their own reality and perceive your organisation as a single, unified brand.
When you get out of your bubble, you focus on how users think about the world and you break silos to accommodate them.
Don’t just guess
There’s no one-size-fits-all metric or KPI for evaluating AI assistants, but whichever you pick, it is essential that your team is empowered to see directly how their efforts impact those KPIs. Anyone on your team should be able to say: “in our latest release we worked on improving X, and the result was Y”.
You don’t have to guess, because conversations with real users are a rich source of both qualitative and quantitative data. You have to leverage both to discover and execute on the changes that will make your AI assistant great.
Too many teams waste cycles debating the best way to improve, say, their NLU model, rather than investing a modest amount of effort to just test these ideas against real data. And too many teams focus exclusively on metrics and forget to actually look at real conversations. There is no substitute for a human reading a conversation and building empathy for the user.
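To make “just test it against real data” concrete, here is a minimal sketch of how a team might settle such a debate with numbers instead of opinions: evaluate candidate models on a held-out set of real, labeled user messages. The data file and the stand-in predictors are hypothetical placeholders for whatever models and conversation exports your team actually has.

```python
import json

def intent_accuracy(predict, examples):
    """Fraction of labeled real user messages whose predicted intent matches the label."""
    correct = sum(1 for ex in examples if predict(ex["text"]) == ex["intent"])
    return correct / len(examples)

# Hypothetical file: a held-out sample of real user messages exported from
# production conversations and labeled by a human reviewer.
with open("holdout_messages.json") as f:
    examples = json.load(f)  # e.g. [{"text": "i lost my card", "intent": "report_lost_card"}, ...]

# Stand-ins for the two NLU configurations the team was debating about;
# replace these lambdas with calls into your actual models.
candidates = {
    "candidate_a": lambda text: "report_lost_card",  # placeholder predictor
    "candidate_b": lambda text: "check_balance",     # placeholder predictor
}

for name, predict in candidates.items():
    print(f"{name}: intent accuracy = {intent_accuracy(predict, examples):.2%}")
```

A comparison like this takes an afternoon, and it also forces someone to read the real messages, which is exactly the empathy-building exercise described above.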
It’s a product, not a project
Companies that think of conversational AI as a project envision an AI assistant being “configured” by a line-of-business team to reflect internal processes, after which it graduates into a “maintenance” period of small refinements.
Adopting a product mindset means assembling a multi-disciplinary team to build your assistant. It means investing in prototyping, validation, and conversation design from the outset. It also means bringing in best practices from other kinds of software development, like version control, automated testing, and CI/CD.
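As a sketch of what automated testing in CI can look like (the endpoint, payload shape, and expected intents are assumptions, not a prescribed setup): a regression test that replays real user phrasings against a locally running assistant and fails the build if the assistant stops understanding them.

```python
# test_understanding.py - a hedged example of an automated regression test for an
# assistant, assuming it exposes an HTTP endpoint that returns the parsed intent
# for a message (URL and response shape are assumptions for illustration).
import requests
import pytest

ASSISTANT_URL = "http://localhost:5005/model/parse"  # assumed local dev server

# Real user phrasings collected from past conversations, paired with the
# intent the assistant is expected to recognise.
REGRESSION_CASES = [
    ("i lost my card yesterday", "report_lost_card"),
    ("how much money do i have left", "check_balance"),
    ("can i talk to a real person", "request_human_handoff"),
]

@pytest.mark.parametrize("message,expected_intent", REGRESSION_CASES)
def test_assistant_still_understands(message, expected_intent):
    response = requests.post(ASSISTANT_URL, json={"text": message}, timeout=10)
    response.raise_for_status()
    predicted = response.json()["intent"]["name"]
    assert predicted == expected_intent, (
        f"Regression: {message!r} was parsed as {predicted!r}, expected {expected_intent!r}"
    )
```

Run in a CI pipeline on every change, a suite like this catches regressions before they reach your customers, just as it would for any other mission-critical software.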
An assistant that represents your brand while talking to your customers is a mission-critical product, and your approach should reflect that at every level.
Technology is only one component on the road to level 5 conversational AI. It’s just as important that, as an organisation, you commit to accommodating your customers’ view of the world. Customers shouldn’t have to research your product catalogue to know what to ask for. Today’s expectation is that you understand what your customers want to achieve, and that you figure out what you can do to help them. CDD is both a mindset and a process. The next posts in this series will apply these three principles to the different phases of building a great AI assistant.