This will let you do some quick experimentation and find out if this could work for some of your own use cases.
The next step towards generating helpful results is to create a map from the intent to the smart content (Gadgets) you wish to show. The way you do this will depend on the languages and frameworks you are using, but it’s generally a data structure to associate string names from Dialogflow with the response you want to give.
With this system you can key off the intent’s action or name to get the right results in front of the right people. Every intent has a unique name and a corresponding action, and one action can be shared by any number of intents. So, if needed, multiple intents can map to a single action, such as one gadget or one piece of HTML. You can also have intents bind parameters, so that you can give better results for “flights to <<airport>>”, for example.
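As a minimal sketch of that mapping, here is one way it might look in Python. The action names, gadget HTML, and parameter handling below are all hypothetical, not the actual implementation:

```python
# Map Dialogflow action names to response templates. Several intents
# (e.g. "returning laptop", "laptop exchange") can share one action.
# Action names and HTML here are illustrative placeholders.
ACTION_TO_CONTENT = {
    "show_laptop_return_gadget": "<div class='gadget'>Laptop return steps</div>",
    "show_flight_info": "<div class='gadget'>Flights to {airport}</div>",
}

def content_for_match(action, parameters=None):
    """Return the smart content for a matched action, or None.

    `parameters` holds any entities Dialogflow extracted for the
    intent, e.g. {"airport": "SFO"} for the query "flights to SFO".
    """
    template = ACTION_TO_CONTENT.get(action)
    if template is None:
        return None  # no smart content for this action
    return template.format(**(parameters or {}))
```

A plain dictionary keyed on the action is enough here because the set of actions is small and curated by hand; a database or config file works just as well as the team grows.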
What if we want to make the results even stronger and more specialized to our audience?
Tweaking the specifics
Dialogflow ES allows you to tune the intent-matching threshold in its settings screen. If the best match’s confidence falls below that threshold, Dialogflow returns the default fallback intent instead. When you see the default intent in the search context, you simply do nothing extra.
To prevent over-matching (because Dialogflow is primarily designed as a conversational agent, it really wants to find something to tell the user), we’ve found it is helpful to seed the default intent with a lot of common generic terms. For example, if we have an intent for “returning laptop”, it may help to have things like “return”, “return on investment”, “returning intern”, and “c++ return statement” in the default to keep it from over-indexing on common terms like “return”.
This is only necessary if your people are likely to use your search interface for looking for information on other kinds of “returns”. You don’t have to plan for this up front and can adjust incrementally with feedback and testing.
To support debugging and to make updating intents easier, we monitor for near misses and periodically review matches around the triggering threshold. One way to make this faster and help with debugging is to relax Dialogflow’s intent matching threshold.
Instead of setting the confidence threshold at 0.85, for example, we set it to, say, 0.6. However, we still only show the user something if there is an intent match AND the confidence is over the real threshold of 0.85 (Dialogflow reports its confidence in its response, so this is really only one more line of code). This way, we can inspect the results and see the cases where nothing extra was shown, what Dialogflow thought the closest match was, if anything, and how close it came. This helps guide how to tune the training phrases.
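The two-threshold trick can be sketched in a few lines. Dialogflow’s own threshold is relaxed in its settings; the “real” display threshold lives in our code. The function and variable names here are illustrative, and `intent_name` and `confidence` stand in for the fields Dialogflow returns (`query_result.intent.display_name` and `query_result.intent_detection_confidence`):

```python
# The real threshold we enforce in code; Dialogflow's own setting is
# relaxed (e.g. 0.6) so that near misses still come back as matches.
DISPLAY_THRESHOLD = 0.85

# Queries worth reviewing when tuning training phrases.
near_misses = []

def should_show(query, intent_name, confidence,
                default_intent="Default Fallback Intent"):
    """Decide whether to surface smart content for a search query."""
    if intent_name == default_intent:
        return False  # nothing matched at all; show organic results only
    if confidence >= DISPLAY_THRESHOLD:
        return True  # a confident match: show the smart content
    # Matched, but below the real threshold: log it for later review
    # instead of showing it to the user.
    near_misses.append((query, intent_name, confidence))
    return False
```

The user experience is identical to running with a strict threshold, but the `near_misses` log now captures exactly the borderline cases described above.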
Close the feedback loop
To evaluate smart content promoted by our Dialogflow-based system, we simply look at the success rate (or interaction rate) compared to the best result the search produced. We want to provide extra answers that are relevant, which we evaluate based on clicks.
If we are systematically doing better than the organic search results (having higher interaction rates), then providing this content at the top of the page is a clear win. Additionally, we can look at reporting from the support teams who would otherwise have fielded these requests, and verify that we are reducing staffed-support load, for example by cutting the number of tickets filed for help with work-from-home expenses.
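The comparison itself is simple arithmetic. As a sketch (the helper names and counts are hypothetical, and clicks/impressions would come from your search logging):

```python
def interaction_rate(clicks, impressions):
    """Fraction of impressions where the user clicked the result."""
    return clicks / impressions if impressions else 0.0

def smart_content_wins(smart_clicks, smart_impressions,
                       organic_clicks, organic_impressions):
    """True if the promoted smart content outperforms the best
    organic result on interaction rate."""
    return (interaction_rate(smart_clicks, smart_impressions)
            > interaction_rate(organic_clicks, organic_impressions))
```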
We’ve closed the feedback loop!
Start with the first step of the process: identify which issues have high support costs. Look for places where people should be able to solve a problem on their own. And finally, measure improvements in search quality, support load, and user satisfaction.
Regularly review content
After all that, it’s also good to create a process for reviewing the smart content you’re pushing to the top of the search results every few months. A policy may have changed, or results may need updating under new circumstances. You can also watch for a dropping success rate or a rising staffed-support load; both signal that it’s time to review the content again. Another valuable tool is a feedback mechanism that lets searchers explicitly flag smart content as incorrect or a poor match for their query, triggering a review.
Go on, do it yourself!
So how can you put this to use now?
It’s pretty fast to get Dialogflow up and running with a handful of intents, and use the web interface to test out your matching.
Google’s Cloud APIs allow applications to talk to Dialogflow and incorporate its output. Think of each search as a chat interaction, and keep adding new answers and new intents over time. We also found it useful to build a “diff tool” to pass popular queries to a testing agent, and help us track where answers change when we have a new version to deploy.
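Such a diff tool can be quite small. In this sketch, each agent version is modeled as a callable from query to matched intent name (in practice each would wrap a Dialogflow `detect_intent` call against the corresponding agent); the function name and structure are illustrative:

```python
def diff_agents(queries, current_agent, candidate_agent):
    """Replay popular queries against two agent versions and return
    the queries whose matched intent changed, as
    (query, intent_before, intent_after) tuples."""
    changes = []
    for query in queries:
        before = current_agent(query)
        after = candidate_agent(query)
        if before != after:
            changes.append((query, before, after))
    return changes
```

Running this over your most popular queries before each deployment surfaces exactly which answers a new agent version would change, so regressions are caught before users see them.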
The newer edition of Dialogflow, Dialogflow CX, has advanced features for creating more conversational agents and handling more complex use cases. Its visual flow builder makes it easier to create and visualize conversations and handle digressions. It also offers easy ways to test and deploy agents across channels and languages. If you want to build an interactive chat or audio experience, check out Dialogflow CX.
First time using these tools? Try out building your own virtual agent with this quickstart for Dialogflow ES. And start solving more problems faster! If you’d like to read more about how we’re solving problems like these inside Google, check out our collection of Corp Eng posts.