
Adding automated conversation testing to your Dialogflow agent


Recently I got a question from a client. While I cannot provide full details, here is the gist:

  1. Sometimes Dialogflow maps an intent whose training phrases contain an entity, but the entity is not actually extracted; the webhook receives the intent without the entity value.
  2. Sometimes Dialogflow extracts the entity values and gets the intent right, but the Training tab says it didn’t map the intent correctly.
  3. Sometimes Dialogflow doesn’t map the correct intent in the simulator, even though the actual bot in production is working fine.
  4. Sometimes Dialogflow doesn’t map the correct intent and simply fails.

You don’t really need to worry about 2 and 3, but you want to be notified about 1 and 4.

Is there anything you can do about it?

To the best of my knowledge, these are intermittent Dialogflow issues which rectify themselves automatically, and there isn’t any way to fix them directly. What you can do is defensively create a set of automated conversation tests so you are notified when something goes wrong.

In addition, having a set of conversation tests in place will also help you make sure you don’t accidentally break your existing bot behavior.

How to create automated conversation tests

First, it helps to follow a systematic process. If you cannot predict which intent will be mapped (i.e. your target intents), it is hard to set up useful tests. Also, if you use slot filling, it is unlikely you can set up good conversation tests, because the slot filling feature is inherently unpredictable. Finally, it is very helpful if you can draw a flowchart of your multi-turn dialogs, as that lets you test a full conversation flow rather than a single intent.
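For multi-turn dialogs, the flowchart translates naturally into a scripted conversation: a list of (user phrase, expected intent) turns that you walk through in order. Here is a minimal sketch in Python, with hypothetical intent names and a stubbed `detect()` standing in for the real detectIntent call:

```python
# Each turn in the flowchart: what the user says, and the intent
# we expect Dialogflow to map it to. (Intent names are made up.)
TURNS = [
    ("I want to book a table", "book.table"),
    ("tomorrow at 7pm", "book.table.time"),
    ("yes that's right", "book.table.confirm"),
]

def run_conversation(detect, turns):
    """Walk a scripted multi-turn dialog, stopping at the first turn
    whose mapped intent differs from the flowchart.

    `detect` is any callable that sends a phrase to the agent (e.g. a
    wrapper around the detectIntent REST call, keeping the same
    session id across turns) and returns the mapped intent's name."""
    for i, (text, expected) in enumerate(turns, start=1):
        actual = detect(text)
        if actual != expected:
            return f"turn {i}: expected {expected!r}, got {actual!r}"
    return "all turns passed"

# Stub detect() for illustration only; in practice this would call
# the agent and return the intent display name from the response.
canned = dict(TURNS)
print(run_conversation(lambda text: canned[text], TURNS))
```

Because the stub keys answers off the exact phrase, the sketch passes trivially; the point is the driver loop, which stays the same once `detect()` talks to a real agent.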

Start with some simple tests

Use the REST API, and create some simple scripts which use the training phrases already defined in your agent. Check to see the following:

  • the intent id which was mapped
  • the intent name which was mapped
  • the entity types which were extracted
  • the action which will be triggered
  • the output context(s) set
  • the actual response

That’s a total of 6 values you can test just from the JSON coming back from the REST API call. Some of these (e.g. the intent id) will change over time if you update or restore agents from ZIP files.
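As a sketch, here is one way to pull those six values out of the detectIntent response. The field names follow the Dialogflow ES v2 REST response shape (`queryResult`, `intent.displayName`, `outputContexts`, and so on); the sample response below is trimmed and hypothetical:

```python
def extract_test_values(response_json):
    """Pull the six testable values out of a Dialogflow ES
    detectIntent REST response (v2 JSON shape)."""
    qr = response_json["queryResult"]
    intent = qr.get("intent", {})
    return {
        # the full intent resource name ends in the intent id
        "intent_id": intent.get("name", "").split("/")[-1],
        "intent_name": intent.get("displayName", ""),
        # parameter keys double as the extracted entity names
        "entities": sorted(qr.get("parameters", {}).keys()),
        "action": qr.get("action", ""),
        # context resource names also end in the short context name
        "output_contexts": sorted(
            c["name"].split("/")[-1] for c in qr.get("outputContexts", [])
        ),
        "response_text": qr.get("fulfillmentText", ""),
    }

# A trimmed, made-up sample of what detectIntent returns:
sample = {
    "queryResult": {
        "queryText": "book a table for two",
        "action": "book.table",
        "parameters": {"party-size": 2},
        "fulfillmentText": "Sure, a table for 2.",
        "outputContexts": [
            {"name": "projects/my-agent/agent/sessions/123/contexts/awaiting_time"}
        ],
        "intent": {
            "name": "projects/my-agent/agent/intents/abc-123",
            "displayName": "book.table",
        },
    }
}

print(extract_test_values(sample))
```

Each test case then becomes a user phrase plus an expected dict of these six values.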

So it might be a good idea to

  • show an error message if all 6 values are different
  • show a warning if a few values are different (this is a bit subjective)
  • show a success message if every value matches its expected value
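A minimal grader along those lines might look like this; the error/warning split mirrors the rule above and is just as subjective:

```python
def grade_result(expected, actual):
    """Compare expected vs actual test values and classify the outcome.

    Returns ("success" | "warning" | "error", list of mismatched keys).
    The thresholds are illustrative: everything different is an error,
    a partial mismatch is a warning.
    """
    mismatches = [k for k in expected if expected[k] != actual.get(k)]
    if not mismatches:
        return "success", []
    if len(mismatches) == len(expected):
        return "error", mismatches
    return "warning", mismatches

# Hypothetical example: the intent mapped correctly but the action is empty.
expected = {"intent_name": "book.table", "action": "book.table"}
actual = {"intent_name": "book.table", "action": ""}
print(grade_result(expected, actual))  # → ('warning', ['action'])
```

In practice you would run this over every test case and only page yourself on errors, logging warnings for later review.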

I will be creating more material on this over the next few weeks.
