Doesn’t a larger context lifespan help when the conversation goes off track?
I got this question on my YouTube channel:

In other words, the reader is asking: if the conversation accidentally goes “off track”, doesn’t a higher context lifespan help the user get back on track? This is a really good point, and yes, this ability to “come back on track” is the reason the Dialogflow team has chosen a default value greater than 1.
My opinion, based on having helped many clients build bots that behave in a predictable manner, is that this benefit isn’t worth the cost.
An example using followup intents
You can take a look at this video to see an example of the problem caused by a context lifespan of 2, which is the default for followup intents.
How high should the context lifespan be?
Suppose you think the context lifespan should be greater than 1. The next question is: how high should it be?
2? 5? 50?
The trouble is, the higher your lifespan is, the more candidate intents you will have at every step in the conversation. This can lead to unpredictable behavior (as you saw in the video above).
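If you want to see this for yourself, one option is to log the active contexts on every turn from your fulfillment webhook. Below is a minimal sketch, assuming a Node/TypeScript Express webhook and the Dialogflow ES v2 webhook request format; the /webhook path and port are placeholders.

```typescript
import express, { Request, Response } from "express";

const app = express();
app.use(express.json());

app.post("/webhook", (req: Request, res: Response) => {
  const query = req.body.queryResult;

  // Contexts that are still active on this turn (the request lists them
  // under queryResult.outputContexts in the ES v2 webhook format).
  const active = (query.outputContexts || []).map(
    (ctx: { name: string; lifespanCount?: number }) => ({
      // The short context name is the last segment of the full resource path.
      context: ctx.name.split("/contexts/")[1],
      turnsLeft: ctx.lifespanCount || 0,
    })
  );

  // With a lifespan of 1 this list stays short; with a lifespan of 5 you will
  // see contexts from several turns ago still listed, each keeping extra
  // intents in the candidate pool.
  console.log(`Matched "${query.intent.displayName}". Active contexts:`, active);

  // Echo back the response defined in the console so the conversation continues.
  res.json({ fulfillmentText: query.fulfillmentText });
});

app.listen(3000);
```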
How many steps will you allow the user to recover and get back on track?
Suppose you want to bring the conversation back on track. How many steps are you going to give the user to do it?
Take the reader’s example: if the user says “umm” or makes a typo, a lifespan of 2 only helps if they correct themselves in the very next message. Otherwise it isn’t going to be of much help either.
Well, I just want to handle the common case….
Now you might say:
“Well, I just want to handle the common case. So surely a lifespan of 5 (the default value) would be fine?”
The problem is that this extra lifespan keeps the intent which has already fired as a candidate for 4 more steps in the conversation. The more complex your conversation, the more earlier intents you have to account for, and the harder your chatbot becomes to design.
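If you do find yourself with a context lingering longer than you intended, you can expire it explicitly from your webhook by returning it with a lifespanCount of 0. Here is a minimal sketch, assuming the Dialogflow ES v2 webhook response format; the awaiting-order-confirmation context name is just a placeholder.

```typescript
// Build an outputContexts entry that expires a context immediately.
// Returning a context with lifespanCount 0 removes it, so the intents
// behind it stop being candidates on the next turn.
function clearContext(session: string, contextName: string) {
  return {
    name: `${session}/contexts/${contextName}`,
    lifespanCount: 0,
  };
}

// Usage inside a webhook handler (placeholder context name):
// res.json({
//   fulfillmentText: "Got it, let's continue.",
//   outputContexts: [clearContext(req.body.session, "awaiting-order-confirmation")],
// });
```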
OK, so how about giving the user just one more chance? Surely that is reasonable?
Yes, it is, and you should do that. But you don’t need to set the lifespan to 2 to be able to do that. There is a better way.
Design a context-based fallback intent, and give some hints to the user so they are better able to correct themselves. Generally, you will find that this in fact gives you much more predictability in your overall conversation design.
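As a sketch of what that can look like in fulfillment (assuming the Dialogflow ES v2 webhook format; the handler name and the awaiting-size context are hypothetical), the context-specific fallback intent replies with a hint and re-activates its context with a lifespan of 1, so the user gets one focused retry.

```typescript
import { Request, Response } from "express";

// Handler for a context-specific fallback intent (for example, a fallback
// that only fires while the hypothetical "awaiting-size" context is active).
export function awaitingSizeFallback(req: Request, res: Response): void {
  const session: string = req.body.session;

  res.json({
    // Hint at what the bot expects so the user can correct themselves.
    fulfillmentText:
      "Sorry, I didn't catch that. Please tell me the size you want: small, medium or large.",
    outputContexts: [
      {
        // Re-activate the context for exactly one more turn.
        name: `${session}/contexts/awaiting-size`,
        lifespanCount: 1,
      },
    ],
  });
}
```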
Learning from mistakes
Unless you can build a perfect chatbot, or you have perfectly reasonable users, you will notice that when you first roll out your bot, people will say things to it which you are not yet handling properly. My view, in that case, is to retry once, then exit the conversation (gracefully, of course) with something like “We will get better next time. Please try later.”
When you follow this pattern (that is, a context lifespan of 1, a single retry, and then exiting with an appropriate “unsuccessful” message to the user), you will actually be able to narrow down the issue and resolve it pretty fast.
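Here is a sketch of that pattern in a webhook, again assuming the Dialogflow ES v2 webhook format. The retry count is carried in the fallback context’s parameters, so no server-side session storage is needed; the awaiting-size context name is a placeholder.

```typescript
import { Request, Response } from "express";

// "Retry once, then exit gracefully": the retry count travels in the
// fallback context's parameters from one turn to the next.
export function contextFallbackWithRetry(req: Request, res: Response): void {
  const session: string = req.body.session;
  const contexts: Array<{ name: string; parameters?: { retries?: number } }> =
    req.body.queryResult.outputContexts || [];

  const retryCtx = contexts.find((c) => c.name.endsWith("/contexts/awaiting-size"));
  const retries = retryCtx?.parameters?.retries ?? 0;

  if (retries < 1) {
    // First miss: give a hint and allow exactly one more attempt.
    res.json({
      fulfillmentText: "Sorry, I didn't get that. You can say small, medium or large.",
      outputContexts: [
        {
          name: `${session}/contexts/awaiting-size`,
          lifespanCount: 1,
          parameters: { retries: retries + 1 },
        },
      ],
    });
  } else {
    // Second miss: exit gracefully and expire the context.
    res.json({
      fulfillmentText: "We will get better next time. Please try later.",
      outputContexts: [
        { name: `${session}/contexts/awaiting-size`, lifespanCount: 0 },
      ],
    });
  }
}
```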
In contrast, when you use higher context lifespans, two things will happen:
1. intents that shouldn’t fire will sometimes fire, confusing the user into providing even more unexpected responses
2. you have to manage all the (unnecessarily) active contexts to understand why the wrong intent fired, which makes it harder to diagnose the specific issue the user is facing and slows down your bot training workflow