Conversations Overview
Learn how to review user interactions with the HFIM chatbot, analyze feedback, and use conversation data to improve the chatbot's performance.
What is the Conversations Section?
The Conversations section lets you review every interaction between users and the chatbot. You can see:
- 📝 Questions users asked
- 💬 Responses the chatbot generated
- 👍👎 User feedback (positive, negative, or none)
- ⏱️ Response times
- 📄 Sources used in responses
- 🔗 Session information
Why Review Conversations?
Reviewing conversations helps you:
1. Identify Problems Quickly
Spot issues before they escalate:
- ❌ Incorrect answers
- ❌ Unhelpful responses
- ❌ Missing information
- ❌ Technical errors
Example: If multiple users give negative feedback about the same topic, you know there's a problem that needs fixing.
2. Improve Cache Entries
Find opportunities to enhance the cache:
- 💡 Common questions that aren't cached
- 💡 Variations users actually type
- 💡 Topics with negative feedback
- 💡 High-quality answers worth caching
Impact: One good conversation can become a cache entry that helps hundreds of future users.
3. Understand User Needs
Learn what users care about:
- What questions do they ask most?
- How do they phrase questions?
- What information are they seeking?
- What time of day/year do they ask?
Result: Data-driven decisions about what to cache and how to improve the chatbot.
4. Measure Chatbot Performance
Track quality metrics:
- Response time trends
- Feedback ratios (positive vs. negative)
- Source accuracy
- Common failure patterns
Goal: Continuously improve the user experience.
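The feedback ratio mentioned above can be computed from exported conversation records. As a minimal sketch (the record fields `id` and `feedback` here are assumptions for illustration, not the actual export schema):

```python
# Sketch: computing feedback counts and the positive share of rated
# conversations. The record structure is assumed for illustration.
def feedback_ratio(conversations):
    """Return per-rating counts and the positive share of all rated
    conversations (None when nothing has been rated)."""
    counts = {"positive": 0, "negative": 0, "none": 0}
    for convo in conversations:
        counts[convo.get("feedback", "none")] += 1
    rated = counts["positive"] + counts["negative"]
    share = counts["positive"] / rated if rated else None
    return counts, share

sample = [
    {"id": 21, "feedback": "positive"},
    {"id": 22, "feedback": "negative"},
    {"id": 23, "feedback": "none"},
    {"id": 24, "feedback": "positive"},
]
counts, share = feedback_ratio(sample)
# counts -> {"positive": 2, "negative": 1, "none": 1}; share -> 2/3
```

Tracking this share week over week makes it easy to see whether fixes are actually reducing negative feedback.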
Conversations Table
The main view shows a table with all conversations:
Table Columns
| Column | What It Shows | Example |
|---|---|---|
| ID | Unique conversation number | 21 |
| Session | Session ID (groups related questions) | abc123... |
| Question | What the user asked | "What is HFIM?" |
| Feedback | User rating (👍 positive, 👎 negative, ➖ none) | 👎 with comment |
| Response Time | How long the response took (milliseconds) | 14716ms (14.7 seconds) |
| Timestamp | When the conversation occurred | Jan 11, 2026 |
| Actions | View details or edit feedback | View, Edit buttons |
- 👍 Positive - User found the answer helpful
- 👎 Negative - User was unsatisfied (may include comment)
- ➖ No feedback - User didn't rate the response
Viewing Conversation Details
Opening the Detail Modal
To see full conversation details:
- Locate the conversation in the table
- Click the "View" button (👁️ icon)
- A modal window opens with complete information
What's in the Detail Modal
Session Information:
- Session ID (identifies related questions from same user)
- Conversation ID
Question:
- Exact text the user typed
- Shown as it was received (no processing)
Answer:
- Full response the chatbot generated
- Formatting preserved (markdown, bullets, etc.)
Feedback:
- Rating (Positive/Negative/None)
- User comment (if provided)
Performance Data:
- Response time in milliseconds
- Timestamp (date and time)
Sources:
- JSON array of documents used to generate the response
- Shows which files and pages were referenced
Understanding Feedback
Types of Feedback
Positive Feedback (👍)
What it means: User found the answer helpful
What to do:
- ✅ Review periodically to confirm quality remains high
- ✅ Consider converting excellent answers to cache entries
- ✅ Use as examples when creating new cache entries
Don't: Spend too much time here; focus on negative feedback instead
Negative Feedback (👎)
What it means: User was dissatisfied with the response
May include: User comment explaining why
Common reasons:
- Answer was incorrect or outdated
- Response didn't address the question
- Information was incomplete
- Answer was too vague or too detailed
- Technical error or formatting issue
What to do:
- 🔴 HIGH PRIORITY - Review immediately
- 🔍 Investigate the issue
- 🛠️ Fix cache entries or system prompts
- 📝 Create new cache entries if needed
Goal: Zero negative feedback by addressing issues proactively
No Feedback (➖)
What it means: User didn't rate the response
Why users don't give feedback:
- They found the answer acceptable (not great, not bad)
- They left before rating
- They didn't notice the feedback buttons
What to do:
- ⚠️ Lower priority than negative feedback
- 📊 Use aggregate data to spot patterns
- 🔍 Spot-check occasionally
Feedback Comments
Users can add optional comments with negative feedback:
Example Comments:
- "This doesn't answer my question"
- "Information is outdated"
- "Too vague, need more details"
- "Wrong course listed"
Value: Comments provide specific, actionable feedback
Action: Always read and act on comments; they tell you exactly what to fix
Understanding Response Time
Response Time = How long it took the chatbot to generate and return the answer
Interpreting Response Times
| Response Time | Performance | Likely Cause |
|---|---|---|
| 50-500ms | 🚀 Excellent | Cache hit (instant) |
| 2,000-5,000ms | ✅ Good | RAG search + generation (normal) |
| 5,000-10,000ms | ⚠️ Acceptable | Complex query or multiple sources |
| 10,000ms+ | ❌ Slow | Potential issue or very complex query |
What Affects Response Time
Fast responses (cache hits):
- Question matched a cache entry
- No document search needed
- Instant retrieval
Medium responses (RAG):
- Document search (vector search)
- Source retrieval
- AI generation
- Normal processing
Slow responses:
- Very long or complex questions
- Multiple follow-up searches
- System load or network latency
- Potential backend issues
Taking Action
If you see consistently slow responses (> 10 seconds):
- Check if specific questions are always slow
- Consider caching those questions
- Contact support if it's a systemic issue
Understanding Sources
The Sources field shows which documents the chatbot used to generate its response.
Source Format
```json
[
  {
    "filename": "HFIM_Handbook_2026.pdf",
    "page": 12,
    "section": "Admission Requirements",
    "relevance": 0.95
  },
  {
    "filename": "Course_Catalog.pdf",
    "page": 34,
    "section": "HFIM 3000",
    "relevance": 0.88
  }
]
```
Why Sources Matter
- Transparency: Users can verify information
- Accuracy: Shows whether the chatbot used reliable sources
- Debugging: Helps identify wrong sources causing bad answers
Reviewing Sources
Check for:
- ✅ Relevant documents (related to question)
- ✅ Current documents (not outdated)
- ✅ Correct page numbers
- ❌ Irrelevant sources (may cause wrong answers)
- ❌ Old/superseded documents
Action: If sources are wrong, the answer is likely wrong too. Create a cache entry with correct information.
Common Conversation Patterns
Pattern 1: High Negative Feedback on Specific Topic
Observation: Multiple conversations about "HFIM 3000 prerequisites" have negative feedback
Interpretation: Current answer (cache or RAG) is inadequate
Action:
- Review the topic in detail
- Verify correct information
- Create or update cache entry
- Monitor for improvement
Pattern 2: Same Question Asked Repeatedly
Observation: 15 conversations ask variations of "What is HFIM?"
Interpretation: Common question, high-value caching opportunity
Action:
- Find the best answer from these conversations
- Create cache entry
- Add variations matching how users ask
- Monitor "Times Served" metric
Pattern 3: Long Response Times
Observation: Questions about "career paths" consistently take 8-12 seconds
Interpretation: Complex query requiring extensive document search
Action:
- Create cache entry for common career-related questions
- Provide comprehensive answer upfront
- Reduce response time from 10s to 0.5s
Pattern 4: Positive Feedback on Specific Answer
Observation: Conversation #45 has positive feedback and excellent response
Interpretation: High-quality answer worth preserving
Action:
- View conversation details
- Click "Convert to Cache"
- Save as cache entry
- Help future users get this great answer instantly
Typical Workflow
Weekly Conversation Review (15-30 minutes)
Step 1: Filter by Negative Feedback (5 minutes)
- Identify all conversations with 👎 ratings
- Read user comments
- Note common issues
Step 2: Investigate Issues (10 minutes)
- Review problematic answers
- Identify root causes (wrong info, outdated data, missing context)
- Decide on fixes
Step 3: Take Action (10 minutes)
- Update cache entries
- Create new cache entries
- Note system issues for support
- Document findings
Step 4: Monitor (ongoing)
- Check if negative feedback decreases
- Verify fixes worked
- Adjust as needed
What to Look For
Red Flags 🚨
Immediate attention needed:
- Multiple negative feedbacks on same topic
- Responses with no sources
- Consistently slow response times (> 15 seconds)
- Error messages in responses
- Blank or cut-off responses
Yellow Flags ⚠️
Investigate when time allows:
- Questions asked repeatedly (5+ times)
- Mixed feedback on same topic
- Responses from outdated sources
- Questions that seem relevant but have no answer
Green Flags ✅
Good signs (but still monitor):
- Positive feedback
- Fast response times
- Diverse questions with good answers
- Appropriate sources cited
Privacy and Data Handling
What Data is Collected
Stored:
- ✅ Questions (anonymous)
- ✅ Responses
- ✅ Feedback ratings
- ✅ Response times
- ✅ Timestamps
- ✅ Session IDs (for context)
NOT stored:
- ❌ User names or personal info (beyond session ID)
- ❌ User IP addresses
- ❌ Identifying information
Purpose: Improve chatbot quality, not track individual users
Conversations may be retained for analysis purposes. Check with your institution's data retention policies for specific timelines.
Next Steps
Now that you understand conversations:
- Learn to view conversations - Navigate the interface
- Filter by feedback - Find specific conversations
- Edit incorrect feedback - Fix mistakes
- Convert good conversations to cache - Preserve excellent answers
- Analyze interaction patterns - Use data strategically
- Troubleshoot issues - Solve common problems
Remember: Conversations are a goldmine of insights. Spend 15-30 minutes weekly reviewing them, and you'll dramatically improve your chatbot's performance!