When you are building chatbots using Dialogflow, it helps to think of the bot as having 4 layers.
These are the layers:
- UI Layer
- Middleware/Integration Layer
- Conversation Layer
- Webhook/Fulfillment layer
Let us look at these in more detail.
Rich Website Chatbot
A rich website chatbot is a good example to start with, because all 4 layers can be shown in a single graphic:
The 4 Layers
The UI layer is the one closest to the end user, as you can see in the graphic. It is usually visual, and it is the layer the user actually interacts with.
The middleware or integration layer connects the UI layer to your Dialogflow agent. Usually, this layer contains code which calls the detectIntent API method, which is required for the integration. (In some cases it may also contain other code.)
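A minimal sketch of what this layer does: take the message the UI collected, pass it to the agent, and hand the reply back. In a real integration, `detect_intent` would wrap a call to the detectIntent API via the google-cloud-dialogflow client (which needs credentials and a live agent); here it is a stand-in stub so the shape of the middleware is visible.

```python
# Middleware sketch: relay a user's message to the agent and return the reply.
# detect_intent is a stand-in for the real Dialogflow detectIntent API call,
# which would be made with the google-cloud-dialogflow SessionsClient.

def detect_intent(session_id: str, text: str) -> dict:
    """Stand-in for the Dialogflow detectIntent call (illustrative only)."""
    # A real implementation would send `text` to the agent for the given
    # session and return the matched intent plus its fulfillment text.
    return {"fulfillmentText": f"You said: {text}"}

def handle_user_message(session_id: str, text: str) -> str:
    """The middleware's job: forward the UI's message, hand back the bot's reply."""
    response = detect_intent(session_id, text)
    return response["fulfillmentText"]
```

The point is that this layer is pure plumbing: it knows nothing about intents or entities beyond what comes back in the response.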
The conversation layer is all the stuff you define inside the Dialogflow agent – the intents, entities, contexts etc.
Webhooks are required in Dialogflow to implement even basic business logic, such as adding two numbers. This fulfillment layer is the fourth layer, and the farthest from the user.
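As a sketch of that "adding two numbers" case: a Dialogflow ES webhook receives a JSON request whose matched parameters sit under `queryResult.parameters`, and replies with a JSON body containing `fulfillmentText`. The parameter names `number1` and `number2` are assumptions – they must match whatever the intent's parameters are actually called in your agent.

```python
# Fulfillment sketch for Dialogflow ES: a webhook handler that adds two numbers.
# The parameter names "number1" and "number2" are illustrative assumptions.

def handle_webhook(request_json: dict) -> dict:
    params = request_json["queryResult"]["parameters"]
    total = params["number1"] + params["number2"]
    # Dialogflow ES reads the bot's reply from the "fulfillmentText" field.
    return {"fulfillmentText": f"The sum is {total}"}
```

In production this function would sit behind an HTTPS endpoint (e.g. a small Flask route or a Cloud Function) that Dialogflow calls when the intent fires.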
Do the layers overlap?
Yes, it is sometimes hard to define where one layer ends and the next begins. While the layers are a useful concept for understanding what is going on in a Dialogflow chatbot, you don't have to assess every single line of code and wonder which layer it belongs to.
Do all layers exist in all chatbots?
Not necessarily. For example, in theory you could create a pure FAQ chatbot which has no business logic at all. Such a bot will not have a fulfillment layer.
You could build an email-based interface to a Dialogflow bot – that is, a bot which answers your queries via email. While you might say the UI is your email client, in theory no visual client has to exist for the bot to function. (For example, you could call the API of an email provider like SendGrid with the contents of the email, and your "bot" would work just as well.)
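A rough sketch of that email relay, with both the Dialogflow call and the email provider's send API stubbed out (the function names and fields here are illustrative, not SendGrid's actual API):

```python
# Sketch of a "UI-less" email front end: the inbound email body goes to the bot,
# and the bot's reply goes back out through an email provider's API.
# Both detect_intent and send_email are stand-in stubs for illustration.

def detect_intent(session_id: str, text: str) -> str:
    """Stand-in for the Dialogflow detectIntent call."""
    return f"Bot reply to: {text}"

def send_email(to_address: str, body: str) -> dict:
    """Stand-in for an email provider's send API (e.g. SendGrid)."""
    return {"to": to_address, "body": body}

def relay_email(sender: str, email_body: str) -> dict:
    # The sender's address doubles as the Dialogflow session id, so each
    # correspondent gets their own conversation context.
    reply = detect_intent(session_id=sender, text=email_body)
    return send_email(to_address=sender, body=reply)
```

Notice that the UI layer has effectively disappeared: only the middleware (this relay), the conversation layer, and any fulfillment remain.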
In the case of a voice-based app for Google Home, there is no visual interface at all – although some people do refer to the spoken interaction as a "voice user interface".
Why does it matter?
Often, when discussing website chatbots, it helps to understand which layers you (the developer) build yourself, and which can be hosted by other companies or services. The 4 layer approach helps to clarify this.