Having worked with Dialogflow for a while, I can see that there are three primary mechanisms (modes) by which a user can converse with a chatbot.
Typing is the most basic mode, and the one you can see in all the chatbots I have built with the 1-click web demo integration: the user simply types into a box and gets a response back.
I don’t have a demo handy, but the web demo integration also lets you input your message by talking (it works in Google Chrome). And voice is the only way you can interact with your Google Home.
When your chatbot allows you to click (tap) a button, you are using the third mode of conversation. The Google Assistant on your Android phone, for example, offers many interactions where you tap a button, a list view, a carousel view, and so on.
Why not just use an app?
If you suggest building a chatbot to solve some problem, some people will ask, “Why not just build an app?”
Now, if your chatbot ends up supporting nothing other than the Tap mode, this is a reasonable question to ask. In fact, if your users never actually need anything other than the Tap mode to accomplish their task, odds are your chatbot has no real smarts and you could build an app instead.
The three modes I listed above are quite evident if you are familiar with the Dialogflow ecosystem.
But in this article, I am interested in chatbots that combine these modes. Before going into the details, let us take it as given that such combinations are possible (for example, this is certainly true for Google Assistant apps).
So let us consider an example.
You have a chatbot which lets people provide free-form input (type or talk) for intents which contain only system entities (e.g. the user needs to enter a date). For such an intent, the typing mode alone will usually capture the input quite well.
Now suppose you have defined a developer entity with only a handful of values, and you declare an intent which uses this entity (e.g. the user needs to input the name of a chemical element to get its atomic number). There are a couple of ways this intent could fail (i.e. not get triggered when you expect it to):
a. The user makes a typo in the entity value
b. The user speaks their input, and Dialogflow doesn’t transcribe it to text correctly
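To see failure mode (a) concretely, here is a minimal sketch in plain Python (not Dialogflow itself) of why matching against a small set of entity values breaks on a typo or a mis-transcription. The element names and the `match_element` helper are assumptions for illustration only:

```python
# Hypothetical developer entity: a handful of chemical element names,
# mapped to the atomic numbers the bot should answer with.
ELEMENT_ENTITY = {"hydrogen": 1, "helium": 2, "lithium": 3, "carbon": 6}

def match_element(user_text: str):
    """Exact matching, roughly what happens when no synonym covers the variant.

    Returns the atomic number, or None when the intent would not trigger.
    """
    return ELEMENT_ENTITY.get(user_text.strip().lower())

print(match_element("Helium"))   # clean input matches
print(match_element("heliun"))   # typo: no match, intent not triggered
```

The same lookup fails in case (b) too: if speech-to-text turns “helium” into “heal him”, the exact match finds nothing.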
These are situations where the fallback intent may not help either – if the bot comes back with “I am sorry, I didn’t get that”, the user may not always be sure what to do next.
In that case, it might make sense to present all the possible entity values (given that there are only a handful) as a list box. The user can simply tap to select, and there is no ambiguity left.
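One way to offer that list is to have your webhook return the entity values as a tappable rich response. Below is a sketch, built in Python, of a Dialogflow ES webhook response using the `richContent` custom payload that the Dialogflow Messenger integration renders as chips; the element names are example values, and the exact payload shape you need depends on which integration you are targeting:

```python
import json

# Example entity values (assumed for illustration).
ELEMENTS = ["Hydrogen", "Helium", "Lithium", "Carbon"]

def fallback_response(values):
    """Build a Dialogflow ES webhook response that shows the possible
    entity values as tappable chips (Dialogflow Messenger custom payload)."""
    return {
        "fulfillmentText": "Sorry, I didn't catch that. Pick an element:",
        "fulfillmentMessages": [
            {
                "payload": {
                    "richContent": [[{
                        "type": "chips",
                        "options": [{"text": v} for v in values],
                    }]]
                }
            }
        ],
    }

# The webhook would serialize this dict as the HTTP response body.
print(json.dumps(fallback_response(ELEMENTS), indent=2))
```

Wiring this into the fallback intent means the user who typed “heliun” gets the full list of valid choices instead of another “I didn’t get that”.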
Mixed mode can help create better chatbots
As the example above shows, mixing modes in your chatbot can sometimes improve its accuracy quite a bit.
And a mixed-mode chatbot can give you the best of both worlds: it has enough smarts to handle free-form user input flexibly, but also enforces some constraints (e.g. the list box selection when the user’s input may not be accurately recognized) to keep the conversation on course.
Have you used the mixed mode? How has your experience been? Let me know in the comments.