Monitoring your Assistant is essential to continuously improving its performance. Here are a few tips on the key indicators to keep a close eye on.
1. How to monitor your Assistant’s performance?
1.1. Your objectives: a crucial lens for interpretation
First and foremost, we encourage you to analyze every indicator (not only those related to your Assistant) in light of the objectives defined by your company and your conversational strategy.
Strategies can vary greatly in terms of goals, resources deployed, and conversation volume; the exact same score may be good for one strategy and poor for another. Your objectives should always be kept in mind when reading your metrics, as they are the essential key to interpreting your indicators!
1.2. Monitoring frequency
We recommend reviewing these indicators at least once a week, or even more frequently if your Assistant handles a very large volume of conversations.
If you need to update your Assistant or its knowledge to improve performance, we recommend making all changes at once and avoiding multiplying versions of your Assistant, to ensure maximum consistency and clarity in results.
Go to the Reports > Advanced reports > AI Shopping Assistant tab
2. Efficiency
2.1. AI share of conversations
This metric shows what proportion of all your conversations were handled by your Assistant. It should be analyzed alongside the productivity objectives you set at the beginning of your project:
- Do you want your Assistant to handle the majority of conversations?
- Or only a specific subset of interactions with your visitors?
If you have chosen a response strategy involving several bots (some of which may not use generative AI), it may be useful to compare the AI share of conversations with the automated conversation share, available in the Automation report. This metric shows the contribution of your Assistant and other bots, if any, and helps you better assess the value delivered.
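To make this reading concrete, here is a minimal sketch of how such a share can be computed from raw conversation counts. The figures and function name are illustrative only, not fields from the platform's reports:

```python
def conversation_share(handled: int, total: int) -> float:
    """Share (in %) of all conversations handled by a given channel."""
    return 100.0 * handled / total if total else 0.0

# Illustrative figures: 1,250 of 4,000 conversations handled by the Assistant,
# 1,800 by all bots combined (Assistant plus non-generative bots).
ai_share = conversation_share(1250, 4000)         # 31.25 %
automated_share = conversation_share(1800, 4000)  # 45.0 %
```

Comparing the two values shows how much of your automation is delivered by the Assistant itself versus your other bots.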
2.2. Transfer rate
This metric indicates the share of conversations that started with your Assistant and were then transferred to a human advisor. It should also be interpreted in light of your overall response strategy: are you aiming for a fully autonomous Assistant, or do you want to facilitate escalation to human advisors?
In the latter case, a high transfer rate is a sign that your Assistant is fulfilling its role.
However, if your goal is a fully autonomous Assistant, you should aim for a very low, or even zero, transfer rate. In this scenario, if you have configured custom behaviors that redirect to an advisor in edge cases and notice transfers are too frequent, we recommend reviewing your quality indicators.
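As a sketch (with illustrative numbers, not the platform's API), the transfer rate and its reading against your strategy could look like this:

```python
def transfer_rate(transferred: int, started_with_assistant: int) -> float:
    """Share (in %) of conversations started with the Assistant that were
    then transferred to a human advisor."""
    if started_with_assistant == 0:
        return 0.0
    return 100.0 * transferred / started_with_assistant

# Illustrative: 90 of 600 Assistant conversations escalated to an advisor.
rate = transfer_rate(90, 600)  # 15.0 %
# Whether 15% is good depends on your strategy: too high if you aim for full
# autonomy, possibly expected if facilitating escalation is the goal.
```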
3. Response quality
3.1. CSAT
This classic metric provides an immediate snapshot of the overall customer experience related to conversations involving an Assistant, as it is based on a rating given at the end of the conversation.
The report distinguishes CSAT for conversations fully handled by your Assistant from those partially handled by your Assistant (i.e., those that resulted in a transfer to an advisor and typically receive a higher rating).
A low CSAT (below 60% for an Assistant) should prompt you to take a closer look at other quality indicators.
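As an illustration of how such a score is typically derived (this is an assumption about the calculation, to be checked against your platform's actual rating scale), CSAT can be computed as the share of satisfied ratings:

```python
def csat(ratings: list[int], satisfied_from: int = 4) -> float:
    """CSAT as the share (in %) of ratings at or above a satisfaction
    threshold. Assumes a 1-5 scale where 4 and 5 count as satisfied;
    adjust to your platform's actual scale."""
    if not ratings:
        return 0.0
    return 100.0 * sum(r >= satisfied_from for r in ratings) / len(ratings)

# Illustrative end-of-conversation ratings
print(csat([5, 4, 2, 5, 3, 4, 1, 5]))  # 62.5 -> just above the 60% threshold
```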
3.2. Visitor feedback on your Assistant’s responses
In the Shopping Panel, within an open conversation on your site, visitors can provide feedback on each response given by the Assistant. Not every visitor rates every response, which is why feedback rates never come close to 100% (our teams typically observe a feedback rate of 10% at most).
A link below negative feedback allows you to review the responses that received negative ratings. We recommend reading them carefully and comparing them with your knowledge base content to improve it.
Here is the calculation:
- Positive rate = number of positive feedback / number of feedback requests
- Negative rate = number of negative feedback / number of feedback requests
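Here is a minimal sketch of those two formulas with illustrative counts (the variable names are ours, not the report's field names):

```python
def feedback_rates(positive: int, negative: int, requested: int) -> tuple[float, float]:
    """Positive and negative feedback rates, following the formulas above:
    each rate = feedback of that polarity / number of feedback requests."""
    if requested == 0:
        return 0.0, 0.0
    return 100.0 * positive / requested, 100.0 * negative / requested

# Illustrative: feedback requested on 2,000 responses; ~10% of visitors reply.
pos, neg = feedback_rates(positive=150, negative=50, requested=2000)
# pos = 7.5 %, neg = 2.5 % -> the two rates do not sum to 100% because most
# visitors simply leave no feedback at all.
```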
3.3. Response rate
This is the percentage of conversations in which your Assistant was able to provide a response to visitors. Your Assistant will not always have an answer, for several reasons:
- either the information in your knowledge bases is incomplete
- or visitors ask questions that are too individual or contextual (for example, containing a name or an email address), which your Assistant cannot answer using general knowledge
- or visitors ask questions that do not match the intended use case (for example, order tracking questions when your Assistant is not designed to handle them)
- or your Assistant self-censors due to internal settings or configurations you have defined.
A low response rate (60% is considered the minimum target) means that your Assistant is not fully delivering the expected value and may negatively impact CSAT: visitors are rarely satisfied when told that no answer can be provided.
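As a sketch (illustrative numbers and names, not the platform's API), checking the rate against that minimum target could look like this:

```python
RESPONSE_RATE_TARGET = 60.0  # minimum target mentioned above

def response_rate(answered: int, total: int) -> float:
    """Share (in %) of conversations in which the Assistant provided a response."""
    return 100.0 * answered / total if total else 0.0

# Illustrative: the Assistant answered in 540 of 1,000 conversations.
rate = response_rate(540, 1000)  # 54.0 %
if rate < RESPONSE_RATE_TARGET:
    print("Below target: review your knowledge bases and Assistant settings.")
```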
Adjustments are almost systematically required in projects involving an Assistant, especially at the beginning.
To increase your response rate, we provide guidance in this article.