Large online programs depend on stable platforms, but demand for support spikes at unpredictable moments. Instructors request help minutes before class. Learners hit submission issues right at deadlines. Human staff cannot respond instantly, even when the answers already exist.
I built an AI-powered support system that provided **immediate, reliable guidance** while preserving human oversight. The system resolved common issues instantly and reduced the load on support staff. More importantly, it was designed to improve with use rather than decay over time.
- **Zero-minute response time** for common technical questions
- **Reduced repetitive requests** to human support staff
- **Progressively improving answer quality**, driven by real user feedback
- **Preserved human control**, rather than replacing support staff
### The Problem
Technical support requests often carry a sense of urgency. The answers already exist—in FAQs, help articles, or guides—but finding them in the moment is slow and frustrating. Fully generative chatbots introduce a different risk. At the time of this project, hallucinations were common, and incorrect technical guidance is unacceptable when instructors and learners depend on accuracy.
_How do you provide instant support without sacrificing reliability or human accountability?_
### Constraints
- Requests were time-sensitive and high-stakes
- Answers needed to be accurate and sourceable
- Support staff had to remain in the loop
- The system needed to improve over time, not stagnate
These constraints ruled out open-ended generation. I needed a system that retrieved known answers instead of inventing new ones.
### Design Decisions
#### 1. Prioritizing retrieval over generation
Instead of generating responses from scratch, I designed a **retrieval-based chatbot** grounded in the existing documentation for Canvas, Zoom, and Turnitin.
User queries were matched against a structured knowledge base derived from official guides and FAQs. Each response was grounded in source material, reducing the risk of hallucination.
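The write-up doesn't show the actual schema, so the sketch below is one plausible shape for a grounded entry, with illustrative field names and a placeholder URL. The key property is that every answer carries a pointer back to its source.

```python
from dataclasses import dataclass

@dataclass
class KBEntry:
    """One answer in the knowledge base, always tied to a source document."""
    question: str    # canonical phrasing of the issue
    answer: str      # short, actionable resolution
    source_url: str  # the official guide or FAQ the answer came from
    tool: str        # e.g. "Canvas", "Zoom", "Turnitin"

entry = KBEntry(
    question="How do I resubmit an assignment in Canvas?",
    answer="Open the assignment, choose 'New Attempt', and upload your file again.",
    source_url="https://example.com/canvas-resubmission-guide",  # placeholder URL
    tool="Canvas",
)
```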
#### 2. Optimizing for speed and relevance
The system indexed documentation using classic search techniques (inverted index, TF-IDF, cosine similarity). Queries were ranked by relevance, and the highest-confidence results were returned first.
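The original index isn't reproduced here, but the ranking step can be sketched with scikit-learn's `TfidfVectorizer` and cosine similarity; the corpus, confidence threshold, and function name below are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: the "question" field of each knowledge-base entry.
docs = [
    "How do I resubmit an assignment in Canvas?",
    "Why is my Zoom recording not appearing?",
    "How do I interpret a Turnitin similarity report?",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(docs)  # TF-IDF index over the corpus

def top_answers(query: str, k: int = 3, min_score: float = 0.2):
    """Rank entries by cosine similarity to the query; drop low-confidence hits."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = sorted(enumerate(scores), key=lambda pair: pair[1], reverse=True)
    return [(docs[i], score) for i, score in ranked[:k] if score >= min_score]

print(top_answers("Canvas assignment resubmission"))
```

The `min_score` cutoff is what makes the behavior predictable: below it, the system returns nothing rather than guessing.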
By favoring **predictability over novelty**, users received concise, actionable guidance rather than exhaustive documentation or speculative explanations.
#### 3. Making answers usable in the moment
Raw documentation is rarely written for urgency and is often overwhelming. I added an AI summarization layer to convert dense help articles into short, instructional responses optimized for immediate action.
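The write-up doesn't name the summarization model or API, so the sketch below assumes an OpenAI-style chat completions client; the model name, prompt, and function are placeholders. The prompt constrains the model to the retrieved article, so the summarization layer doesn't reintroduce the hallucination risk that retrieval was chosen to avoid.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_for_action(article_text: str) -> str:
    """Compress a dense help article into a few imperative steps."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the original system's model is not stated
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the following help article as 3-5 short, imperative "
                    "steps a stressed user can follow immediately. Do not add "
                    "information that is not in the article."
                ),
            },
            {"role": "user", "content": article_text},
        ],
        temperature=0.2,  # keep output predictable rather than creative
    )
    return response.choices[0].message.content
```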
#### 4. Designing a human-in-the-loop feedback system
Accuracy mattered, but the system also needed to learn from real failures. Every interaction with the AI was logged, and users rated each response. Low-rated answers were automatically escalated to human support staff, who resolved the issue and fed the corrected response back into the system.
Over time, the chatbot accumulated higher-quality answers tied directly to real user queries. Human expertise became training data rather than lost effort.
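A minimal sketch of that loop, assuming a 1-5 rating scale and a simple query-to-answer store (the threshold, names, and escalation stub are all illustrative):

```python
from dataclasses import dataclass

ESCALATION_THRESHOLD = 3  # ratings below this go to a human (illustrative cutoff)

@dataclass
class Interaction:
    query: str
    answer: str
    rating: int  # user rating: 1 (unhelpful) to 5 (resolved)

interaction_log: list[Interaction] = []

def escalate_to_human(query: str, rejected_answer: str) -> str:
    """Stand-in for the ticketing hand-off: in production, support staff
    resolve the issue and their corrected response is returned here."""
    print(f"Escalated: {query!r} (rejected: {rejected_answer!r})")
    return "corrected answer written by support staff"

def record_feedback(query: str, answer: str, rating: int, kb: dict[str, str]) -> None:
    """Log every interaction; fold human corrections back into the knowledge base."""
    interaction_log.append(Interaction(query, answer, rating))
    if rating < ESCALATION_THRESHOLD:
        # The human-written fix becomes the retrieval target for future queries,
        # so staff effort turns into training data instead of a closed ticket.
        kb[query] = escalate_to_human(query, answer)
```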
The system reframed technical support as a **living knowledge layer** rather than a ticket queue. AI handled speed and retrieval. Humans handled judgment and edge cases.
This pattern of retrieval first, generation second, and humans always in the loop proved far more reliable than fully autonomous support and scaled cleanly across platforms and programs.