“Writing an OpenAI chatbot is hard! 2 months in and still unsuccessful”
I want to conclude this course by pointing out a recent post I saw in the OpenAI community forum.

You can go and read the full post; it is a very good example of everything we have discussed so far.
Once you move past simple Q&A, things become more complex. And most real-world chatbot scenarios do need to go past Q&A.
For example, it is not easy to maintain conversation context using GPT.
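GPT is stateless: each API call sees only what you send it, so your own code has to carry the conversation history forward. Here is a minimal sketch of that bookkeeping, assuming the openai Python client; the model name and system prompt are placeholders:

```python
# A minimal sketch of manually maintaining conversation context.
# Assumes the openai Python client (>= 1.0) and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

# GPT does not remember earlier turns; we must resend them every time.
history = [{"role": "system", "content": "You are a helpful support bot."}]

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=history,       # the entire conversation so far
    )
    reply = response.choices[0].message.content
    # Store the assistant's turn too, or the next call forgets this exchange.
    history.append({"role": "assistant", "content": reply})
    return reply
```

And this is only the easy part: a real chatbot also needs logic to truncate or summarize the history before it exceeds the model's context window.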
Some entities, like prices, can be handled deterministically quite easily using Dialogflow. Doing the same thing in GPT is a bit like reinventing the wheel.
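To make that concrete, here is a rough sketch of the wheel you end up rebuilding: a deterministic price extractor of the sort Dialogflow's built-in @sys.unit-currency system entity gives you for free. The regex and the sample text are purely illustrative:

```python
import re

# Matches amounts like "$49.99" or "USD 1,299" -- deterministic by design,
# unlike asking GPT to pull prices out of free text.
PRICE_RE = re.compile(r"(?:\$|USD\s?)(\d+(?:,\d{3})*(?:\.\d{1,2})?)")

def extract_prices(text: str) -> list[float]:
    return [float(m.replace(",", "")) for m in PRICE_RE.findall(text)]

print(extract_prices("The basic plan is $49.99 and the pro plan is USD 1,299"))
# [49.99, 1299.0]
```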
Here is someone’s response to the question:
Can you share your exact prompt and an example of the output that you’re not liking?
I’m assuming that the core issue is that the model doesn’t know anything about the business, so it’s using its world knowledge to answer questions in the context of what it thinks are similar businesses. You have to ground the model with facts to prevent this. That means using semantic search to pull in relevant facts, either scraped from their website or given to you via a super detailed questionnaire.
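What the responder is describing is what we would now call retrieval-augmented generation: embed the business's facts, retrieve the ones closest to the user's question, and put only those into the prompt. Here is a minimal sketch of the idea, again assuming the openai Python client; the model names and the hard-coded fact list stand in for a real knowledge base:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# In practice these facts would come from the client's website or questionnaire.
facts = [
    "We are open Monday to Friday, 9am to 6pm.",
    "Shipping is free on orders over $50.",
    "Returns are accepted within 30 days with a receipt.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

fact_vectors = embed(facts)

def answer(question: str) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every stored fact.
    scores = fact_vectors @ q_vec / (
        np.linalg.norm(fact_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    # Ground the model with the two most relevant facts.
    context = "\n".join(facts[i] for i in scores.argsort()[-2:])
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Answer using ONLY these facts:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```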
As you can see, there is not a whole lot that GPT can provide you out of the box once you go past the Q&A aspect. This means a lot of custom development work. And the tools to automate this, or to wrap it in a more non-programmer-friendly user interface, simply do not exist (yet).
The person who asked the question is a programmer, so I expect they will get all of this sorted out over the next few days.
But it still gives us a clear signal that the amazing Q&A capabilities of GPT are only a small part of a complete chatbot solution.