June 14th, 2021

L3-AI Speaker Interview: Greg Bennett


In the lead-up to the L3-AI conference on June 17, we caught up with Greg Bennett, speaker and Conversational Design Principal at Salesforce. In this interview, we covered a number of topics around best practices for customers in the conversational AI space and how Salesforce thinks about new features, conversational UX, and where the industry is going next. Enjoy!

Watch Greg's talk - plus many more from NLP researchers, product experts, and machine learning engineers - at L3-AI. Tickets are free and registration is open now.

What impact do chat and voice assistants have on the business at Salesforce?

Our primary focus since COVID-19 started has been on chat. We did voice for a little while, but there has been an increased priority on customer service automation over chat. A lot of our work goes into customer service and since the pandemic, we have seen a 700% increase in the usage of chatbots. For a good number of customers, having a chatbot has long been on their roadmap for helping to scale customer service offerings, but the pandemic has accelerated the timeline.

As a team, how do you ensure your customers are successful when building out virtual assistants?

I think there are a couple of parts to that. The first is which features we choose to work on to better support our customers. Because I lead Conversation Design at Salesforce, I focus on the experience piece more than the engineering efforts that go into it, but both make it possible to enable customers. On the other side, it is more about ensuring that customers have the education they need to succeed when building out conversational experiences. For example, we have guidelines and best practices that are shared, and we've published a course on Trailhead, Salesforce's free education platform. From time to time, we also review customers' projects to give feedback. Workshops and worksheets are also really helpful when done directly with customer teams to help move them through the process.

How does your team determine which new features to build and prioritize?

So from a conversation design team perspective, we don't necessarily develop new technical features, but we focus more on the designs themselves and conversational experiences. A lot of the direction comes from the product roadmap, but I am always asking myself how we address potentially untapped parts of our total addressable market and what inspiration customers are bringing when I interact with them. Additionally, a great practice I have put into place has been spending time with a customer that is further along in building out their bot implementation. They are usually asking questions and providing feedback that I can expect to hear from other customers when they get to this stage. Having the product roadmap and spending time with customers who are in early stages and advanced stages of their project is really what drives our decision making.

What is your advice to customers as they approach designing their assistant's voice, branding, and look?

It actually starts, before we even get there, with this question: Is conversation the right format for what it is that you're trying to tackle? Customers should always be assessing what the end user's goal is going to be. Having this in mind, customers can create conversations that are neither overly complex nor overly simple and that fit the brand of the organization. Creating the right tone around copy and messaging can also be really big when setting up an ideal end user experience. For customers, it is critical to ensure that whatever assistant is created expresses a system persona that reflects the business's brand tone of voice. So if you have customer support over the phone, on Facebook, Twitter, etc., it is important to create a cohesive experience across channels.

What emerging technologies are you most excited about in the conversational AI space?

I struggled with phrasing around this because I don't know if it's necessarily a net new technology or more of an advancement on an existing technology, which, from my perspective, comes back to seeing language and culture as inseparable. The idea around varieties and dialects of a given language is really important. So how do we expand our training data sets? How do we assess the reliability of these data sets? How do we assess the variety and the scope of the data sets? I think that's the thing that I'm the most excited about, because I think that's what's going to open up the space to an even broader population. Customers that are directly building out bots have a tendency to create something that reflects their individual conversational style, which is a totally normal thing to do, but it poses a risk of leaving out others who don't share the same conversational ideologies. Technologically, how do we expand past these limitations within our training data to provide a more cohesive experience for everyone?


Thank you again, Greg, for taking the time to chat with us about your experiences in conversational design. Hear more from Greg at his talk during the L3-AI conference on June 17th, at 12:30 - 13:00 GMT-6. Free tickets are still available! Get yours at