Conversational AI for Enterprise

Case Study 2

Sentry: Humanizing Enterprise Support with Warmth, Precision, and Flexibility.

Sentry is a human-centered AI platform built for intelligent enterprise support.

The vision for Sentry was to design and deploy a conversational AI system that wouldn’t just handle tickets, but foster trust – even in moments of complexity. We aimed to deliver highly effective, human-like conversations across internal and external use cases: from IT and HR support to customer-facing queries and proactive engagement.

Navigating Complexity

Project Overview

Sentry was built to reimagine enterprise support—less like a help desk, more like a trusted teammate.

The product vision was to design a conversational AI system that could handle ambiguity, carry context across channels, and earn user trust in high-stakes environments. From HR to IT to customer operations, we aimed to replace rigid chatbot flows with intelligent, multi-turn interactions that felt natural—without losing the precision enterprises demand.

Sentry wasn’t just another bot. It was a responsive, multi-modal assistant that could triage, guide, and gracefully step aside when human help was needed.

Outcome & Project Impact

Sentry launched across 3 departments (HR, IT, Customer Ops) and was immediately embraced by users and agents alike. Within 90 days, it had reduced internal support backlog by 38%, while AI-handled tickets outperformed human-only tickets on CSAT for Tier 1 issues.

Where legacy bots stumbled, Sentry delivered clarity, empathy, and efficiency—at scale.


"Aaargh!! %#*# a simple ticket request feel like navigating an endless maze?".  

My Role: Lead Product Designer / Manager

As Lead Product Designer, I owned the end-to-end UX for Sentry—from vision to polish:

  • Product Direction: Collaborated with Product and AI leadership to define a scalable design system across enterprise surfaces
  • User Research & Personas: Interviewed and synthesized input from 15 customers, internal support staff, HR/IT leads, and agents
  • Conversation Design Leadership: Created nuanced conversation flows across multiple domains (text + voice), including fallback, error, and escalation handling
  • Multi-Modal Design: Crafted interfaces for web chat, Slack/Teams, mobile, IVR systems, and embedded widgets
  • Agent Assist Tools: Designed a handoff-aware dashboard for support agents with full AI context visibility, fallback override, and AI co-pilot suggestions
  • Prototyping & Testing: Built rich conversation prototypes in Figma, Bolt, Vercel (v0), and Lovable; ran iterative usability tests with 10 participants
  • Trust & Explainability: Focused on signaling AI presence, clarifying boundaries, and building recoverable experiences
  • Collaboration with NLP/NLU: Worked closely with engineers to define intents, disambiguation flows, and improve AI comprehension
  • Accessibility: Ensured all channels met WCAG 2.1 AA standards, including visual chat and voice-first interfaces

Framing the Challenge

In enterprise environments, users rarely ask simple questions. They cite invoices, reference contracts, jump between systems, and ask compound questions. Legacy tools forced them into narrow flows or returned generic results.

Sentry needed to bridge the gap between functional automation and conversations that felt intelligent, relevant, and human.

Most chatbots either oversimplify or overwhelm. They script surface-level replies, expose backend jargon, or force users to rephrase into machine-speak. That’s how trust erodes. Support gets slower. Frustration builds.

"Smart" doesn’t mean verbose. It means the agent anticipates the next move, offers relevant shortcuts, and gets out of the way once resolution begins.".  

The Challenge

Design an interface that recognizes enterprise intent, adapts to real-world constraints, and still feels like a capable human partner—not a rule-based form with a personality.

Key pain points:

  1. Ambiguous & Complex Intent – Users asked layered, compound questions that rigid intent matching misread or rejected
  2. Rigid Scripts – Scripted flows forced users to compress real requests into narrow templates and start over when the template didn’t fit
  3. Lost Context – Details mentioned earlier (a file name, a timestamp, a prior step) were forgotten, so users had to repeat themselves
  4. Poor Handoffs – Human agents had no visibility into prior AI interaction, creating rework and frustration


Operational Complexity, Not Casual Queries
  1. Handle layered asks without forced simplification
  2. Read documents and map to live context
  3. Track continuity across chats and teammates

User Needs + Research Insights

What they asked for

The ask was short and deceptively simple: "We want a conversational interface for support." But when we listened between the lines, the actual need was less about conversation and more about coordination.

  • Users weren’t “chatting” — they were triaging.
    The dialogue wasn’t casual. It was clause validation, vendor status, or policy lookup—all deeply contextual, often buried in documents or tied to upstream systems.
  • Legacy chatbots added friction and obstacles between users and their desired outcome.
    Users were asked either to repeat themselves or to simplify complex requests into rigid templates. The result? Abandonment or escalation.
  • Teams didn’t want a “friendlier” interface.
    They wanted one that could think in threads, understand documents on the fly, and keep the operational context alive without needing handholding.

In essence, they weren’t asking for a chatbot. They needed a system that could:

  • Automate common intake flows—without forcing users to write prompts
  • Extract contract terms on the fly—without schema training
  • Respond in context—without breaking security or protocol



Bottom line: they were asking for a teammate, one who reads the brief, understands the file, and knows what to do next.

"We want a conversational interface for support".  
Designed for Enterprise Muscle, not Small Talk.
  • 1
    Auto-triage support queries with document aware logic
  • 2
    Parse contract terms without custom schema setup
  • 3
    Reduce agent load while improving user confidence

Smart + Human

Interface Logic & Agent Design

Designing the agent meant balancing two tensions: machine intelligence and human intuition.

The agent couldn’t just seem smart; it had to act smart. That meant interpreting intent, handling ambiguity, and knowing when to defer. Just as important, it needed to earn trust: no uncanny mimicry, no overreach, no pretense.

We anchored the agent experience around three working principles:

  1. Conversational Clarity – Skip the jargon. Every message, prompt, and output needed to feel legible and purposeful—no mental gymnastics required.
  2. Contextual Memory – Whether it was a file name from earlier or a timestamp mentioned in passing, the agent kept track—so users didn’t have to repeat themselves.
  3. Fail-Safe Logic – No dead ends. Users could switch to a human, get a summary, or redirect a thread—without starting from scratch, as sketched below
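
To make the fail-safe principle concrete, here is a minimal TypeScript sketch of how a turn-level decision might work. The names (AgentDecision, decideTurn) and the 0.55 confidence threshold are illustrative assumptions, not Sentry’s production logic:

    // Sketch only: a fail-safe turn decision. Thresholds and names are
    // assumptions for illustration, not Sentry's actual implementation.
    type AgentDecision =
      | { kind: "answer"; text: string }
      | { kind: "clarify"; question: string }
      | { kind: "handoff"; summary: string };

    interface TurnInput {
      utterance: string;
      intentConfidence: number;   // 0..1, from the NLU layer
      userRequestedHuman: boolean;
      transcript: string[];       // prior turns, used for the handoff recap
    }

    function decideTurn(input: TurnInput, draftAnswer: string): AgentDecision {
      // "No dead ends": an explicit request for a human always wins.
      if (input.userRequestedHuman) {
        return { kind: "handoff", summary: recap(input.transcript) };
      }
      // Low confidence means ask, don't guess.
      if (input.intentConfidence < 0.55) {
        return { kind: "clarify", question: "Just to confirm: which request should I act on first?" };
      }
      return { kind: "answer", text: draftAnswer };
    }

    function recap(transcript: string[]): string {
      // Placeholder: a real system would generate a structured summary here.
      return transcript.slice(-5).join("\n");
    }

The point of the shape is that escalation is a first-class outcome, not an error state.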

Visually, we opted for a hybrid layout: user queries on the left, agent replies on the right. Actions like clause extraction or escalation routing were embedded directly into the thread—so users didn’t have to pause, switch context, or dig for options. The goal was continuity, not choreography.
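
As a rough sketch of that thread model (hypothetical types, not the production schema), each message carries its own inline actions so the next step always lives inside the conversation:

    // Hypothetical data model for the hybrid layout: actions are embedded
    // in the message they belong to, not in a separate menu.
    type Sender = "user" | "agent";

    interface InlineAction {
      id: string;
      label: string;            // e.g. "Extract clauses", "Escalate"
      onInvoke: () => void;
    }

    interface ThreadMessage {
      sender: Sender;           // user renders left, agent renders right
      text: string;
      actions?: InlineAction[]; // present only when relevant to this turn
    }

    // Each message carries everything it needs, so users never leave
    // the thread to find the next step.
    function renderLine(msg: ThreadMessage): string {
      const side = msg.sender === "user" ? "left" : "right";
      const actions = (msg.actions ?? []).map(a => `[${a.label}]`).join(" ");
      return `${side}: ${msg.text} ${actions}`.trim();
    }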


Enterprise Guardrails

Compliance without the Coldness

In enterprise environments, compliance isn’t a feature—it’s table stakes. But most tools treat it like a tax: rigid, brittle, and bolted on after the fact.
We designed compliance into the conversational layer from the start—not to restrict users, but to give them clarity and control mid-dialogue.



Our agent respected:

  • Role-based access – Users only saw and triggered what their role permitted.
  • Audit traceability – Every document, prompt, and action logged and time-stamped.
  • Data boundaries – No bleed across departments, no hallucinated access outside allowed scopes.


Instead of burying these mechanics deep in backend logs, we surfaced them:

  • Agents quoted their sources directly in chat.
  • Users could inspect logic chains mid-conversation.
  • Controls existed to pause, override, or redirect sensitive queries.

By making guardrails visible—but never disruptive—we built an agent that felt accountable, not cold.
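
A minimal sketch of that pattern, assuming a simple role-to-scope map and an in-memory audit log (names like checkScope and answerWithGuardrails are hypothetical, not Sentry’s actual API):

    // Sketch: check scope first, log everything, surface the source.
    interface AuditEvent {
      actor: string;
      action: string;
      resource: string;
      timestamp: string;        // ISO-8601, for traceability
    }

    const auditLog: AuditEvent[] = [];

    // Placeholder policy: each role sees only its department's resources.
    const scopes: Record<string, string[]> = {
      "hr-admin": ["hr/"],
      "it-agent": ["it/"],
    };

    function checkScope(role: string, resource: string): boolean {
      return (scopes[role] ?? []).some(prefix => resource.startsWith(prefix));
    }

    function answerWithGuardrails(role: string, actor: string, resource: string, answer: string): string {
      if (!checkScope(role, resource)) {
        // Data boundary: refuse rather than bleed across departments.
        return "You don't have access to that resource. Want me to route this to someone who does?";
      }
      auditLog.push({ actor, action: "read", resource, timestamp: new Date().toISOString() });
      // The citation travels with the reply, so the guardrail stays visible.
      return `${answer}\n\nSource: ${resource}`;
    }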


In Context

System Feedback & States

AI tools often crumble under context drift—forgetting users, misplacing threads, or resetting logic mid-task.

In fast-moving enterprise settings, that’s not a UX bug—it’s an operational cost.

So, we designed the agent to be state-aware and state-retentive. At any given moment, it knew:

  • Who initiated the interaction
  • What the ongoing task thread was
  • Which file, query, or system entity was being referenced

The interface made this visible: context summaries, embedded variables, and inline previews let users confirm what the agent was using and why. No guesswork. No memory holes. No toggling between tabs to re-educate the system.

We embedded live context markers across the UI—showing active variables, recall triggers, and linked task metadata. Instead of pulling users out of the moment to rephrase or reorient, the agent kept the thread warm.

This meant fewer resets, less rework, and smoother handoffs—even across departments or time zones.
The result: a thread that stayed coherent—even when the user switched topics, paused mid-task, or resumed later from another device.
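
Here is a sketch of what that retained state might look like; field names are assumed for illustration rather than taken from Sentry’s schema:

    // Illustrative shape of the state the agent keeps "warm" across turns.
    interface ConversationState {
      initiator: string;                       // who started the interaction
      taskThreadId: string;                    // the ongoing task, not just the chat
      activeEntities: Record<string, string>;  // e.g. { file: "msa_v3.pdf" }
      lastUpdated: string;                     // lets a resumed session pick up cleanly
    }

    // The UI can render this directly as the inspectable context summary,
    // so "what the agent is using and why" is never hidden.
    function contextSummary(state: ConversationState): string {
      const refs = Object.entries(state.activeEntities)
        .map(([name, value]) => `${name}: ${value}`)
        .join(", ");
      return `Working with ${refs} for ${state.initiator} (thread ${state.taskThreadId}).`;
    }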


Conversational DNA

Voice & Tone Layering

Tone isn’t cosmetic—it’s operational.

In high-stakes environments, a misfired response can trigger confusion, mistrust, or escalations. So we didn’t aim for one voice. We built a tone system—modular, adaptive, and precise.


It was situational, adaptive, and branded:

  • Onboarding interactions were warm but direct. Just enough charm to lower friction, without feeling scripted.
  • Escalation threads used clipped syntax and minimal surface language—prioritizing clarity and auditability.
  • Task transitions leaned instructional, with focused verbs and no empty pleasantries.


To avoid the “chatbot glaze,” we mapped tone to intent—not just the surface domain. That meant scripting fallback responses for ambiguity, modeling confidence thresholds, and distinguishing what needed to sound human from what needed to sound official.

We also built in tone toggles—letting teams switch between formal, neutral, or friendly registers depending on their org culture or compliance needs.

This wasn’t about personality. It was about precision: sounding like a partner when appropriate, and like an SLA policy when necessary.
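
One way such a tone system might be wired, sketched minimally; the register names, intents, and the escalation override are illustrative assumptions:

    // Sketch: registers keyed to intent, with an org-level toggle on top.
    type Register = "formal" | "neutral" | "friendly";
    type Intent = "onboarding" | "escalation" | "task_transition";

    // Defaults per intent, following the principles above.
    const intentRegister: Record<Intent, Register> = {
      onboarding: "friendly",     // warm but direct
      escalation: "formal",       // clipped, auditable
      task_transition: "neutral", // instructional, no pleasantries
    };

    function resolveRegister(intent: Intent, orgToggle?: Register): Register {
      // Escalations stay formal regardless of org preference:
      // auditability beats brand voice in high-stakes threads.
      if (intent === "escalation") return "formal";
      return orgToggle ?? intentRegister[intent];
    }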

Tone is not decoration—it’s operational trust. It tells the user: "This system speaks your language."
  1. Adaptive, task-aware tone shifts
  2. Formal vs. friendly toggles
  3. Structured, not scripted

Results

Designing for Trust

This wasn’t about showing off what the AI could do. It was about removing the burden from the user—especially when things got messy.

Trust came from precision. From transparency. From tone. But also from the way the interface behaved under pressure—in moments of ambiguity, escalation, or reset.

What made the design successful wasn’t a single feature. It was a feeling:

  • Of clarity, when a user uploaded a complex document and saw key entities surfaced in seconds.
  • Of relief, when an agent reply resolved an issue in one turn instead of five.
  • Of confidence, when every action was traceable and reversible.

It worked because it felt less like a chatbot, and more like an operational teammate with common sense and boundaries.


Learnings

Realizations + Reframes

  • Invisible complexity is still complexity: Just because users can’t see the logic scaffolding doesn’t mean it isn’t critical. Systems need to be both smart and explainable.
  • Agentic design ≠ automation: This wasn’t about replacing humans. It was about augmenting ops teams—giving them a thinking layer that scaled without getting in the way.
  • Voice is a design surface: Tone isn’t a copywriting afterthought—it’s part of the interface, the trust model, and the emotional contract with the user.
  • Enterprise UX is operational design: Every pixel has business weight. Design choices had to balance speed, traceability, and psychological safety.

More Work

Other Projects

Shaping the future of Enterprise AI.
