The biggest difference between GPT chatbots and Generative Dialogflow CX is the following:
Dialogflow CX allows you to combine intent-based question answering with extractive question answering or generative question answering in a way that is friendly for non-programmers.
What is intent-based question answering?
At the bottom left of this website, there is currently a demo intended to showcase the various features of Generative Dialogflow CX.
Intent-based question answering is where the bot provides the response from the Agent Response section in your Route.
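To make this concrete, here is a minimal sketch of intent-based question answering in plain Python: the bot matches the user's message against training phrases and returns the canned Agent Response attached to the matched route. The intent names, training phrases, and responses are hypothetical examples, not taken from the demo bot, and the word-overlap matcher stands in for Dialogflow CX's real NLU.

```python
# Hypothetical intents: each has training phrases and a canned Agent Response.
INTENTS = {
    "store.hours": {
        "training_phrases": ["what are your hours", "when are you open"],
        "agent_response": "We are open 9am-5pm, Monday through Friday.",
    },
    "store.location": {
        "training_phrases": ["where are you located", "what is your address"],
        "agent_response": "We are at 123 Example Street.",
    },
}

def match_intent(user_message):
    """Return the intent whose training phrases best overlap the message.

    Naive word overlap is a stand-in for real intent classification.
    """
    words = set(user_message.lower().split())
    best_intent, best_score = None, 0
    for name, intent in INTENTS.items():
        for phrase in intent["training_phrases"]:
            score = len(words & set(phrase.split()))
            if score > best_score:
                best_intent, best_score = name, score
    return best_intent

def respond(user_message):
    """Answer with the matched intent's canned Agent Response."""
    intent = match_intent(user_message)
    if intent is None:
        return "Sorry, I didn't understand that."
    return INTENTS[intent]["agent_response"]
```

The key property is that the response is fixed text authored by the bot builder; nothing is generated at runtime.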
What is generative question answering?
Generative question answering is based on constructing a prompt and using the generated content in your response.
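A minimal sketch of that flow, with `call_llm` as a placeholder for whatever model endpoint the bot actually uses (the prompt wording and function names here are illustrative assumptions):

```python
def build_prompt(context, question):
    """Construct a prompt from grounding context plus the user's question."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def call_llm(prompt):
    # Placeholder: a real bot would send the prompt to an LLM endpoint here.
    return "(generated answer based on the prompt)"

def generative_answer(context, question):
    """The bot's response is whatever text the model generates."""
    return call_llm(build_prompt(context, question))
```

Unlike the intent-based case, the response text is produced by the model at runtime rather than authored in advance.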
What is extractive question answering?
This has not yet been implemented for the demo bot, but extractive question answering is where Dialogflow CX performs a semantic search over the content you uploaded earlier and “extracts” answers from your articles so it can provide answer “snippets”.
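A rough sketch of the idea, with naive word overlap standing in for real semantic search and a few made-up passages standing in for uploaded content: score each passage against the question and return the best one as the answer snippet.

```python
# Hypothetical uploaded content; a real bot would index whole articles.
ARTICLES = [
    "Our return policy allows returns within 30 days of purchase.",
    "Shipping is free on orders over $50 within the continental US.",
    "Gift cards can be redeemed online or in any retail location.",
]

def extract_snippet(question):
    """Return the passage that best matches the question as a snippet.

    Word overlap is a crude stand-in for embedding-based semantic search.
    """
    q_words = set(question.lower().split())
    def score(passage):
        return len(q_words & set(passage.lower().split()))
    return max(ARTICLES, key=score)
```

The important contrast with generative answering is that the snippet is verbatim text from the source material, not newly generated text.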
This is a use case where GPT does really well.
How Generative Dialogflow CX is different
The biggest advantage of generative Dialogflow CX is that it allows you to combine intent-based question answering with generative question answering and extractive question answering.
For example, in the demo bot, selecting the options will lead to different types of Flows in the Dialogflow CX agent.
Why it matters
This matters because, right now, the GPT API is very code heavy. The tools which make it easy for non-programmers to use the GPT API do not implement a lot of things that you get out-of-the-box with Dialogflow CX:
- ability to customize training phrases for intent-based question answering
- ability to easily see conversation history
- ability to manage state
- ability to add custom entities where it makes sense
- ability to collect user input without providing an input-specific response (this can be very useful for open-ended generative bots)
- ability to define rules based on generative answers
- ability to define rules based on extractive answers
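The last two items in the list can be sketched in a few lines: after the model (or extractive search) produces an answer, a condition on that answer decides the next step in the conversation, much like a route condition in Dialogflow CX. The page names and rules below are hypothetical examples.

```python
def route_on_answer(generated_answer, state):
    """Pick the next conversation step based on the answer text.

    Mimics a route condition evaluated against a generative answer.
    """
    text = generated_answer.lower()
    if "i don't know" in text or "not sure" in text:
        state["next_page"] = "human_handoff"   # escalate uncertain answers
    elif "price" in text:
        state["next_page"] = "pricing_flow"    # branch into a dedicated flow
    else:
        state["next_page"] = "end_session"
    return state["next_page"]
```

With a bare LLM API, this kind of branching logic is something you have to write and maintain yourself.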
You will likely notice all these limitations only once you start building complex multi-turn conversations using the GPT API.
Unfortunately, at the moment, the workaround in GPT for nearly every such problem is a more complicated, and often more expensive, prompt. In addition to creating a latency issue, this means you are often sending a lot of repetitive information to the GPT API to achieve tasks which come “batteries included” with Generative Dialogflow CX (a good example is conversation state management).
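The repetition is easy to see in code. Because the model itself is stateless, every turn has to resend the full conversation history inside the prompt, so the prompt grows with the conversation; this is a simplified sketch of that pattern, not any particular library's API.

```python
def build_stateless_prompt(history, user_message):
    """Rebuild the entire conversation as one prompt, every single turn."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"User: {user_message}")
    lines.append("Bot:")
    return "\n".join(lines)

# Each new turn repeats the whole earlier exchange.
history = [("User", "Hi"), ("Bot", "Hello! How can I help?")]
prompt = build_stateless_prompt(history, "What are your hours?")
```

Dialogflow CX tracks this state for you in session parameters, so none of it has to be resent by hand.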
The creators of the spaCy NLP library call this approach LLM maximalism, and it is a poor option for non-programmers.