Once the Email AI Agent has been configured and enabled, ongoing management becomes key to unlocking its full potential. This article provides admins with a comprehensive operational guide for testing, monitoring behavior through Analytics, and troubleshooting the Email AI Agent across the support lifecycle.
Test the Email AI Agent
Before rolling out the Email AI Agent to live customers, you can validate its responses with test data. This helps you ensure accuracy, consistency, and customer satisfaction before going live.
To test Email AI Agent responses:
- Log in to your account as an admin.
- Go to Admin > Freddy > Email AI Agent.
- Click Configure.
- Configure the rules for testing:
- Set up trigger rules: Define the conditions for which you want to test AI responses.
- Customize the response template: Use the default template or customize it as needed.
- Specify the languages: Add the responses in languages based on your customer base and available knowledge base content. To add more languages, click Manage Languages.
- Set AI response limit: Define a daily limit for the number of emails the AI Agent can respond to.
- Click Save response.
- Click Save and enable.
- In the Preview Responses tab on the right pane, enter a test email address, a subject, and your question in the email body.
- Click Send.
What happens next?
The preview pane shows how responses will be generated, which includes:
- A response summary based on Knowledge Base content, with reference article links.
- A feedback button for customers:
- Yes, close my ticket: Marks the issue as resolved.
- Not really: Creates a ticket if the response is not satisfactory.
- A feedback button for agents to mark responses as Helpful or Not helpful.
If the Email AI Agent fails to interpret the query, it returns a fallback response indicating that it could not answer. You can retry with the same query or try a different one.
Best practices for testing
- Ensure that you add only test email addresses (e.g., abc@example.com) to prevent the Email AI Agent from replying to real customer tickets during testing.
- For consistent results, structure the email like a real customer query. The Email AI Agent should generate a full reply, including:
- A summary of the customer's request
- One or more related solution articles
- The embedded feedback widget
- To assess accuracy and consistency, send at least 10–15 test emails covering different intents and article matches. Use diverse phrasings and topics to simulate real-world variability.
Note: After testing, ensure that you remove or update the trigger condition that restricts the requester to your test address. If left unchanged, the Email AI Agent will only respond to the test user and remain inactive for real customers.
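To cover the recommended 10–15 test emails with diverse phrasings, you can generate the batch programmatically. This is a minimal sketch; the support address, intents, and phrasings below are invented for illustration and are not part of the product:

```python
# Sketch: generate a batch of diverse test emails for the Email AI Agent.
# The support address, intents, and phrasings are assumptions for illustration.
import itertools

SUPPORT_ADDRESS = "support@example.com"   # assumed inbound support address
TEST_SENDER = "abc@example.com"           # test address used in your trigger rules

INTENTS = {
    "password_reset": "How do I reset my account password?",
    "billing": "Why was I charged twice on my last invoice?",
    "export": "Can I export my data as CSV?",
}
PHRASINGS = ["{q}", "Hi team, {q} Thanks!", "Urgent: {q} Please help."]

def build_test_emails():
    """Cross intents with phrasings to simulate real-world variability."""
    emails = []
    for (intent, question), template in itertools.product(INTENTS.items(), PHRASINGS):
        emails.append({
            "from": TEST_SENDER,
            "to": SUPPORT_ADDRESS,
            "subject": f"[test:{intent}]",
            "body": template.format(q=question),
        })
    return emails

emails = build_test_emails()
print(len(emails))  # 3 intents x 3 phrasings = 9 messages
```

Sending each payload through your test mailbox (e.g., via your mail client or `smtplib`) then exercises every intent/phrasing combination against the configured trigger rules.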
View past queries
The Past Queries option lets you track and analyze how the Email AI Agent handled your test runs. This helps you validate accuracy, identify gaps, and refine your configuration before going live.
Here’s what you can see:
- Query Status: Shows whether the Email AI Agent answered the query or left it unanswered.
- Query preview: Preview of the query asked.
- Feedback given: Shows the feedback given by agents for the response generated.
Email AI Agent Performance Report
The Email AI Agent's effectiveness can be measured using the curated Email AI Agent Performance Report. This dashboard is designed to help admins and leaders track adoption, engagement, and ROI metrics over time.
Tab 1: Absolute Metrics
Understand the number of email tickets the Email AI Agent is handling, how effective its responses are, and where improvements can be made.
This tab gives you a direct view of the Email AI Agent’s contribution to support operations, from engagement volume to helpfulness and deflection trends.
Filters Available:
- Date Range: Focus your analysis on specific weeks or months.
Key Widgets and Their Insights:
- Tickets Given to AI Agent: Track the number of email tickets passed to the Email AI Agent after satisfying trigger conditions, bounded by the configured daily response limit. Use this to measure the Email AI Agent's total reach over the selected time range.
- Tickets Answered by AI Agent: View how many of those tickets were successfully answered by the Email AI Agent. This helps you understand delivery success and configuration gaps. Choose how you want the data displayed: as Numbers or in Tabular format.
- AI Agent Answer Rate: Analyze what percentage of tickets were answered by the Email AI Agent.
- AI Agent Customer Feedback: Review how many Email AI Agent responses were rated Helpful by customers. This metric is key to evaluating the quality and impact of responses. Feedback can originate from two sources: Email, when customers rate responses directly from their inbox, and Answers, when feedback is submitted via the support portal.
- Ticket Deflection Trend: Visualize how many tickets were auto-resolved without agent involvement.
- AI Agent Response Rate Breakdown: Understand the distribution of total tickets received with source as email, tickets given to the Email AI Agent, and tickets answered by the Email AI Agent. The graph highlights areas that may need better templates or content tuning.
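The ratio metrics in these widgets can be reproduced from the raw counts. A minimal sketch with invented numbers (the deflection-rate formula is an assumption for illustration, not the report's documented definition):

```python
# Sketch: derive the report's ratio metrics from raw widget counts.
# All figures below are made up for illustration.
total_email_tickets = 500      # tickets received with source = email
given_to_ai = 320              # passed trigger rules, within the daily limit
answered_by_ai = 256           # tickets the AI Agent responded to
deflected = 180                # closed via "Yes, close my ticket"

answer_rate = answered_by_ai / given_to_ai * 100
deflection_rate = deflected / answered_by_ai * 100   # assumed definition
coverage = given_to_ai / total_email_tickets * 100

print(f"Answer rate: {answer_rate:.1f}%")                 # 80.0%
print(f"Deflection rate: {deflection_rate:.1f}%")         # 70.3%
print(f"AI coverage of email tickets: {coverage:.1f}%")   # 64.0%
```

Comparing coverage against the answer rate helps separate trigger-rule gaps (few tickets reach the AI Agent) from content gaps (tickets reach it but go unanswered).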
Tab 2: Sessions Comparison
Measure how many sessions the Email AI Agent consumes and when. This is useful for quota tracking and usage trend analysis.
Session usage is critical for licensing and planning. Each AI Agent-triggered reply counts as one session, and this tab helps monitor consumption over time.
Filters Available:
- Date Range: Pinpoint usage spikes and surges.
Key Widgets and Their Insights:
- Sessions Consumed: See the total number of Email AI Agent sessions used in the selected time period. Use this to track remaining quota or investigate overages.
- Sessions Consumption Trend: View a trendline of session consumption over weeks or months. Use grouping filters (e.g., language or trigger) to diagnose usage drivers.
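Since each AI Agent-triggered reply counts as one session, the consumption trend can be turned into a simple run-rate projection. A sketch with invented quota and usage figures:

```python
# Sketch: project when the session quota runs out, assuming each
# AI Agent reply consumes one session. Quota and usage are invented.
from datetime import date, timedelta

quota = 5000                   # assumed session allowance for the cycle
used = 3200                    # sessions consumed so far
cycle_start = date(2024, 6, 1)
today = date(2024, 6, 20)

days_elapsed = (today - cycle_start).days or 1
daily_rate = used / days_elapsed                  # average sessions per day
remaining_days = (quota - used) / daily_rate
exhaustion = today + timedelta(days=int(remaining_days))
print(f"~{daily_rate:.0f} sessions/day; quota exhausted around {exhaustion}")
```

If the projected exhaustion date lands before the cycle ends, that is the cue to tighten trigger rules, lower the daily response limit, or plan for a higher allowance.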
Use the Analytics Insights
Utilize the Email AI Agent Performance Report to proactively fine-tune your AI deployment strategy. These insights can help you:
- Identify underperforming templates, languages, or KB articles.
- Detect high-volume automation opportunities through deflection trends.
- Justify ROI and AI investment by comparing engagement vs resolution.
- Monitor whether you're reaching session thresholds earlier than expected.
Note: The performance report is available only on select Pro and Enterprise plans with Freddy Copilot enabled. Data is refreshed every 24 hours.
Troubleshooting Common Issues
Admins may occasionally encounter configuration or runtime issues. Here’s how to identify and resolve them quickly.
- Email AI Agent Not Responding
Check that:
- The Email AI Agent is enabled
- Trigger rules are valid
- The daily limit isn't exhausted: admins receive an email notification when the limit is reached and can increase the limit or disable the Email AI Agent
- The ticket meets the rule conditions
- Templates Not Saving
Common causes:
- Missing feedback widget
- Unresolved placeholders
- Offline connection
- Missing Languages or Placeholder Errors
- Ensure the language is supported and enabled
- Validate all placeholders are correctly formatted with double curly braces
- Email AI Agent Auto-Disabled
Happens when:
- Templates are deleted
- All supported languages are disabled
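The placeholder check mentioned above can be automated before saving a template. This is a minimal sketch: the `{{ticket.subject}}`-style names and the allowed-placeholder set are hypothetical, not a documented schema:

```python
# Sketch: flag malformed or unknown placeholders in a response template.
# The placeholder names below are hypothetical examples.
import re

KNOWN = {"ticket.subject", "requester.name"}  # assumed allowed placeholders

def find_placeholder_errors(template: str):
    errors = []
    # Single braces where doubles are expected, e.g. {ticket.subject}
    for m in re.finditer(r"(?<!\{)\{([\w.]+)\}(?!\})", template):
        errors.append(f"single braces: {m.group(0)}")
    # Well-formed double-brace placeholders with unrecognized names
    for m in re.finditer(r"\{\{([\w.]+)\}\}", template):
        if m.group(1) not in KNOWN:
            errors.append(f"unknown placeholder: {m.group(1)}")
    return errors

print(find_placeholder_errors("Hi {requester.name}, re: {{ticket.subjct}}"))
```

An empty list means every placeholder is double-braced and recognized; anything else points at the exact token to fix before saving.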