AI Chat Experience

Better understand whether a student is asking a new question or a follow-up, in order to deliver high-quality responses
This project was about solving a technical limitation through design. It was crucial for me, as a designer, to fully understand the problem. And rather than focusing solely on the business and technical goals, it was important to devise a solution that did not compromise the user experience. To achieve this, I collaborated closely with engineers and leveraged user testing to strike the right balance between business objectives and usability.
OUTCOME
Improved AI classifier accuracy by 1.36%
Increased button CTR by 6%
PLATFORM
Responsive Web
ROLE
Design Lead
TEAM
1 Product Manager
1 Front-End Engineer
4 Analysts
2 ML Engineers
TIMELINE
Apr 24 – Jun 24
CONTEXT
Chegg offers an educational AI chat where students ask homework-help questions, get step-by-step solutions, and ask follow-up questions.
Chegg is an EdTech company with 5M+ paid subscribers globally. One of Chegg's core products is a Q&A platform where an AI chatbot provides step-by-step solutions when students ask homework help questions. Additionally, based on the solutions they receive, students can ask follow-up questions, which are answered instantly by Chegg AI.
PROBLEM
To deliver high-quality responses, the AI classifier — which determines whether a user is asking a new question or a follow-up — was crucial. However, the classifier's accuracy was below industry standards.
To provide the correct response, it was important to understand the student's intent: were they asking a new homework question or a follow-up? We used an AI classifier to determine this intent. Based on the result, the system would either search archived solutions or generate a new one instantly. However, our classifier's accuracy fell short of industry standards, which was a serious issue because misclassification degraded both answer quality and the user experience. The goal of this project was therefore to improve the classifier's accuracy to ensure answer quality.
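The routing described above can be sketched as a small function: classify the message, then dispatch to archive search or instant generation. This is an illustrative sketch only; the function names, the heuristic classifier, and the confidence values are my assumptions, not Chegg's implementation.

```python
from dataclasses import dataclass

@dataclass
class IntentResult:
    label: str         # "new_question" or "follow_up"
    confidence: float  # 0.0 to 1.0

def classify_intent(message: str, has_prior_turns: bool) -> IntentResult:
    # Stand-in for the ML classifier: a trivial keyword heuristic so the
    # sketch is runnable. The real model scores the message against the
    # full conversation context.
    if not has_prior_turns:
        return IntentResult("new_question", 0.99)
    followup_cues = ("why", "explain", "step", "what about", "can you")
    if any(cue in message.lower() for cue in followup_cues):
        return IntentResult("follow_up", 0.8)
    return IntentResult("new_question", 0.6)

def route(message: str, has_prior_turns: bool) -> str:
    # Dispatch based on the predicted intent: new questions go to archive
    # search / solution generation, follow-ups are answered in context.
    intent = classify_intent(message, has_prior_turns)
    if intent.label == "new_question":
        return "search_archive_or_generate_solution"
    return "answer_follow_up_in_context"
```

A misclassification here sends a follow-up through archive search (or vice versa), which is exactly the quality problem the project targeted.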
PROCESS
I initiated a workshop with ML engineers to understand the problem and come up with multiple ideas.
I conducted user testing to evaluate the three most promising ideas from the users' perspective.
To make this goal actionable, I first needed to fully understand the problem, so I initiated a design workshop with ML engineers. As a result, I reframed our goal into something more concrete: How might we get an explicit signal from students about their intent? I then generated five tangible ideas, compared the pros and cons of each, and selected three for user testing.
SOLUTION
Increased the prominence of the ‘New question’ button to nudge students into explicitly labeling their intent, which reduces the classifier's error rate while also generating labeled training data.
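The mechanism behind this solution is that an explicit button click both overrides the classifier and yields a ground-truth label for retraining. A minimal sketch of that idea, with all names and the logging format being illustrative assumptions rather than the actual system:

```python
# Labeled examples collected from explicit button clicks, to be used
# as training data for the intent classifier (illustrative sketch).
training_log: list[dict] = []

def resolve_intent(message: str, classifier_label: str,
                   clicked_new_question: bool) -> str:
    if clicked_new_question:
        # Explicit signal: trust the student over the model, and keep
        # the (message, label) pair as ground truth for retraining.
        training_log.append({"message": message, "label": "new_question"})
        return "new_question"
    # No explicit signal: fall back to the model's prediction.
    return classifier_label
```

This is why the button's click-through rate mattered as a metric: each click is both an avoided misclassification and a new training example.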
During user testing, I evaluated each design against four criteria in a decision matrix: reaction, frequency, visibility, and value. Based on these metrics, I selected the most promising design.
IMPACT
Improved AI classifier accuracy by 1.36%.
Increased the button's click-through rate by 6%.
AI Accuracy
+ 1.36%
Button Click Through Rate
+ 6.0%