
Timeline

June 2023 - Present

Scotiabank

Search got 30% smarter (accuracy) and 5% more loved (adoption).
It now knows when you need answers… and when you just need a
human.

Problem

Customers often turned to the in-app Help & Support search to solve issues, but were met with generic, rule-based results that didn’t reflect what they were actually asking.


For example, a query like “my debit card isn’t working abroad” would return an article on “how to apply for a new card.”


Because the system couldn’t interpret natural language or intent, customers lost trust in search, abandoned self-serve, and flooded the contact centre for support that should’ve been automated.

Solution

As the Product Manager, I led the design and rollout of an LLM-powered search understanding model that improved how Scotiabank interprets customer intent.


We developed a semantic search model that used vectorized embeddings to understand meaning beyond exact keywords.


This model powered a Retrieval-Augmented Generation (RAG) framework that:


  • Retrieved relevant results from Scotiabank’s internal FAQs and knowledge base,

  • Generated precise responses grounded in approved content, and

  • Determined the next best step — whether to display an answer or seamlessly escalate to chat for more complex support.


This created a single intelligent routing layer that connected search → chat → live support, ensuring customers always reached the right place without friction.
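
The routing decision at the heart of that layer can be sketched as a simple policy over a retrieval confidence score. This is a minimal illustration with made-up threshold values, not the production logic:

```python
def route(score: float, has_grounded_answer: bool) -> str:
    """Pick the next step for a support query.

    score: retrieval confidence in [0, 1]; thresholds here are illustrative.
    """
    if has_grounded_answer and score >= 0.75:
        # Confident, grounded answer: show it directly in search.
        return "show_answer"
    if score >= 0.40:
        # Plausible matches: show results, but offer chat as a fallback.
        return "show_results_with_chat_option"
    # Low confidence: hand off to chat with the query as context.
    return "escalate_to_chat"
```

The single policy function is what makes the hand-off feel seamless: search, chat, and live support all consume the same decision.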

Impact

The new system improved search accuracy by 30% and drove a 5% month-over-month increase in adoption.


Customers found the experience faster, more relevant, and genuinely helpful — reducing unnecessary calls and making self-serve feel natural.


Beyond the numbers, it became a foundation for Scotiabank’s conversational AI strategy, setting a new standard for how support channels understand and assist customers.

1. Research + Problem Definition

1. Generic, Rule-Based Search

Customers relied on the Help & Support search to resolve issues — but the rule-based system could only match keywords, not meaning.

This led to irrelevant results (e.g., “how to apply for a card” instead of “why my card isn’t working abroad”) and increased call volumes as users lost trust in self-serve.

The search engine was built for data precision, not human expression — leading to broken intent detection and higher call volumes.

62% of customer queries were classified as “irrelevant or partially matched”

1 in 4 users abandoned search and escalated to phone support within minutes

<50% of users reported satisfaction with search accuracy

2. Broken Connection Between Search and Chat

Search and chat operated as two disconnected systems, creating friction when customers moved between them.

Users had to restate their issue once redirected to chat, losing context and trust in digital support.

The disconnect between systems erased continuity and trust — customers expected escalation, not déjà vu.

30% of chat sessions began with users repeating the same query

18% of users dropped off during hand-off from search to chat

12 seconds average delay before a chat session started, increasing frustration

3. Limited Intent Understanding

Without a semantic layer, the system couldn’t distinguish between information requests, troubleshooting, or transactional tasks.

This made it difficult to prioritize high-intent queries or guide users efficiently.

The lack of semantic understanding created a loop of trial-and-error for customers just trying to get help.

41% of “high-effort” searches could have been resolved automatically

23% of queries were misclassified, sending users to the wrong help path

15% of support calls came from customers unable to find answers online

70% of agents cited unclear intent as the main cause of chat escalation


2. Ideation

Core Principles


  •  Understand meaning

  •  Connect journeys

  •  Ground in trust

  •  Reduce effort

Choosing the right AI model + architecture

Before landing on the LLM + RAG approach, I evaluated multiple options for making search smarter.
Each option was assessed for accuracy, scalability, data governance, and maintenance cost.

We approached the solution in two phases — each building toward a smarter, more context-aware search experience.
Phase 1 focused on retrieval. Phase 2 focused on reasoning.

Rule-Based / Keyword Search (Baseline)

What it is:
Static keyword matching and ranking logic — retrieves pre-tagged FAQ results.

Why not:
Too rigid for natural language; can’t interpret intent or context.

Accuracy: Low
Scalability: High
Compliance: Strong
Context Understanding: None

Fine-Tuned LLM

What it is:
Trains a proprietary LLM on Scotiabank-specific data (FAQs, chat logs).

Why not:
Expensive to retrain, hard to control for compliance, and prone to “drift” when content changes.


Accuracy: High
Scalability: Low
Compliance: Risky
Context Understanding: Strong

RAG with LLM

What it is:
Combines semantic retrieval (from internal KBs) with contextual generation via an LLM.

Why we chose it for Phase 2:

Delivers grounded, contextual answers without retraining the model, making it scalable, explainable, and trustworthy.

Accuracy: High
Scalability: High
Compliance: Strong
Context Understanding: Strong

Vector Search

What it is:
Encodes queries and documents as embeddings to find similar meanings, not exact words.


Why we chose it for Phase 1:
Improves relevance immediately, though on its own it can’t generate responses or decide next steps.


Accuracy: Medium
Scalability: High
Compliance: Strong
Context Understanding: None

Agent-Based Workflow (Agentic AI)

What it is:
Uses an orchestrated set of LLM agents: one retrieves info, another reasons, another decides routing.


Why not:
Adds complexity and latency; better suited to a future state.


Accuracy: High
Scalability: Low (complex orchestration)
Compliance: Weaker
Context Understanding: Strong

Phase 1 - Semantic Vector Search (Retrieval Layer)

Goal: Improve relevance and accuracy without changing the existing chatbot or search UI.

 

We replaced rule-based keyword matching with vector embeddings, allowing the system to understand semantic similarity between queries and articles.
This let search find results that meant the same thing, even if the words didn’t match exactly.
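
Assuming the embeddings are already computed by some model and stored as rows of a matrix, the retrieval step reduces to cosine-similarity ranking. A minimal sketch (the production embedding model and index are not shown):

```python
import numpy as np

def top_k(query_vec: np.ndarray, article_vecs: np.ndarray, k: int = 3):
    """Rank knowledge-base articles by cosine similarity to the query embedding.

    article_vecs: one embedding per row; returns (indices, scores), best first.
    """
    q = query_vec / np.linalg.norm(query_vec)
    a = article_vecs / np.linalg.norm(article_vecs, axis=1, keepdims=True)
    scores = a @ q                        # cosine similarity per article
    order = np.argsort(scores)[::-1][:k]  # best matches first
    return order, scores[order]
```

Because similarity is computed in embedding space, “my debit card isn’t working abroad” can rank a troubleshooting article first even when it shares few exact keywords with the query.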


Outcome:

  • Search results became significantly more relevant to customer phrasing.

  • Early validation showed a 30% boost in accuracy and increased user trust in self-serve search.


Phase 2 - LLM + RAG Integration (Reasoning Layer)

Goal: Make search understand, generate, and guide.

Building on the vector foundation, we added a Retrieval-Augmented Generation (RAG) layer powered by an LLM.


This allowed the system not only to retrieve relevant documents but to synthesize responses, explain steps, and decide routing based on query intent.
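
The grounding step can be sketched as prompt assembly: the model is constrained to the retrieved, approved passages and told to escalate when they don’t cover the question. A simplified illustration; the real prompt wording and escalation signal are internal:

```python
def build_rag_prompt(query: str, passages: list) -> str:
    """Assemble a grounded prompt: the model may only answer from approved passages."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the customer's question using ONLY the approved content below.\n"
        "If the content does not cover the question, reply with ESCALATE.\n\n"
        f"Approved content:\n{context}\n\n"
        f"Customer question: {query}\n"
    )
```

Numbering the passages lets generated answers cite their source, which keeps responses explainable and auditable.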

 


Outcome:

  • Created an end-to-end intelligent support flow connecting search → chat → advisor.

  • Customers received contextual, verified answers faster — and adoption rose 5% MoM.

  • Became a reusable AI foundation for future Scotiabank conversational tools.


4. Prototype + Testing + Validation

User Stories + Acceptance Criteria

Chat Entry for Transaction Limit Change Errors

 

User Story:
As a mobile banking customer, I want to receive an option to chat with support when I encounter an error changing my e-Transfer limit, so I can quickly understand the issue and resolve it.


Acceptance Criteria:

  •  If a user encounters an error when changing their e-Transfer limit, display a “Chat for Help” button on the error screen.

  •  The chat entry point should only appear for errors related to fraud flags or exceeding the 12-change limit in 90 days.

  •  Clicking the chat button should open the chat interface within the app.

  •  Pre-fill chat with relevant context (e.g., “I am trying to change my e-Transfer limit and received an error”).

  •  Users should be placed in a queue and shown an estimated wait time if no advisor is available immediately.
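
The entry-point rule above can be sketched as a simple eligibility check plus a pre-filled message. The error codes here are hypothetical placeholders for the app’s internal identifiers:

```python
# Hypothetical error codes; the real identifiers are internal to the app.
CHAT_ELIGIBLE_ERRORS = {"FRAUD_FLAG", "CHANGE_LIMIT_EXCEEDED"}

def show_chat_entry(error_code: str) -> bool:
    """Show the 'Chat for Help' button only for the escalation-worthy errors."""
    return error_code in CHAT_ELIGIBLE_ERRORS

def prefill_message(error_code: str) -> str:
    """Seed the chat with context so the customer doesn't restate the issue."""
    return ("I am trying to change my e-Transfer limit and received an error "
            f"({error_code}).")
```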

Advisor Awareness of User's Error Context

 

User Story:
As a chat advisor, I want to see relevant details about the user’s transaction limit error, so I can quickly diagnose and provide a solution without asking redundant questions.

 

Acceptance Criteria:

  •  Advisors should receive a session summary that includes:

    • User’s attempted new limit.

The exact error message encountered.

  •  Advisors should be able to access the customer’s account history related to e-Transfer changes.

  •  Advisors should have predefined responses or troubleshooting steps to guide users effectively.

Queue & Wait Time Transparency


User Story:
As a customer waiting in line for chat support, I want to see my estimated wait time so that I can decide whether to stay in the queue.


Acceptance Criteria:

  •  Show a wait time estimate (e.g., “Your expected wait time is 5 minutes”).

  •  If the queue time exceeds 10 minutes, indicate that agents are unavailable and offer an alternative support channel.

  •  If the queue is too long, provide an option to request a callback instead.

  •  Users should receive a notification if their position in the queue changes significantly (e.g., faster than expected).
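
The wait-time behaviour above reduces to a small mapping from the estimate to the message shown. The 10-minute cutoff comes from the criteria; the message wording is illustrative:

```python
def queue_message(estimated_wait_min: float) -> str:
    """Map the wait estimate to the queue message (10-minute cutoff per the criteria)."""
    if estimated_wait_min > 10:
        # Past the cutoff: treat agents as unavailable and offer alternatives.
        return ("Agents are currently unavailable. "
                "You can request a callback or call us instead.")
    return f"Your expected wait time is {round(estimated_wait_min)} minutes."
```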

Chat Usability & Accessibility Enhancements


User Story:
As a user with accessibility needs, I want to ensure that chat support is easy to read and interact with so that I can get help without difficulty.


Acceptance Criteria:

  •  The chat interface should support screen readers and text-to-speech.

  •  Font sizes and contrast should meet WCAG accessibility standards.

  •  Users should be able to resize the chat window for better readability.

5. Reflection

What I enjoyed

Working on this initiative to enhance the chat experience within the banking app was both exciting and impactful. I particularly enjoyed the strategic problem-solving aspect—digging into customer pain points, identifying where users were getting stuck, and collaborating with different teams to create a seamless support experience. Seeing the direct impact of our efforts, with increased chat adoption and reduced call center reliance, was incredibly rewarding.

 

What I learned

  1. Meeting customers where they are drives engagement
    Customers don’t always seek help proactively; surfacing chat support at the right moments significantly increased adoption. Designing around actual user behavior, rather than expecting customers to find support on their own, was key.

  2. Balancing business goals with customer needs is critical
    While reducing call center costs was a core business objective, the real success came from delivering a better customer experience. Ensuring that chat support was helpful, context-aware, and informed by what advisors could actually do (instead of becoming just another generic support channel) made all the difference.

  3. Cross-functional collaboration is essential for execution
    This initiative required tight collaboration between product, design, engineering, and support teams. Aligning on priorities, user stories, and technical feasibility early helped streamline implementation and avoid roadblocks. Most importantly, solid documentation and good processes kept everyone aligned.


About the Designs

 

While this case study focuses on the product strategy, problem definition, and impact, the actual interface designs were created by an amazing product designer on my team. Their work brought these ideas to life and ensured a seamless, user-friendly experience in the app.

 

This project reinforced the value of customer-centric problem-solving and teamwork, and I’m excited to apply these learnings to future initiatives. 
