case study 01 — UX + Research

Global Navigation Redesign & Search A/B Testing

The help center's navigation was designed for a smaller product suite — when Webex grew to dozens of products, I redesigned the structure to scale and ran an A/B test showing the new search pattern was 38% faster.

Role: Lead — Interaction Design + Research
Duration: ~2 quarters · FY25 Q1–Q2
Stack: Figma · Momentum Design System · Amplitude
Outcome: 38% faster search completion · 84% misclick reduction

The Situation

help.webex.com's global navigation had 8 items. The Webex product suite had grown well beyond what 8 items could represent — AI Agent, Contact Center, Devices, Control Hub, and a constellation of sub-products all competing for discoverability. Users landing on the help center had to already know what they were looking for, because the navigation wasn't structured to help them browse.

Search was similarly constrained. The existing pattern was a text input field embedded in the global nav bar — functional, but it navigated users away from their current page to a dedicated results page. For a help center where users are often deep in product-specific documentation, that context switch was friction. And the suggested queries were based on common search strings, not actual article titles — so the suggestions pointed users toward popularity, not relevance.

Competitive research across 10+ enterprise help sites confirmed these were solvable problems. 100% of competitors maintained search as a primary or secondary item in their global navigation. 70% provided categorized, filterable search results. 80% paginated results to prevent cognitive overload. The patterns existed — we just needed to apply them within our design system and content architecture.

Annotated Figma spec of the redesigned global navigation bar in the signed-out state — nav items (Get started, Help by product, Administration, What's new, Resources, Search, Ctrl+K, English) are measured with spacing callouts; a sticky note references the RKB page for desktop/mobile breakpoints

The redesigned global navigation — signed-out state annotated with layout and spacing properties for engineering handoff

My Role & Scope

I led the interaction design for both the global navigation restructure and the search experience redesign, including competitive research, Figma design specifications, interactive prototyping, A/B test design and facilitation, data analysis, and the stakeholder presentation of findings. I worked alongside a colleague who owned the front-of-house areas like the homepage and landing pages — my scope was the search experience and article pages specifically. Duration: roughly FY25 Q1 through Q2, with the A/B test results presented in January 2025.

The Approach

Competitive research

I audited 10+ competitor help sites to establish baseline patterns and identify opportunities. Key findings:

  • Search visibility: 100% maintained search as a primary/secondary nav item — persistent and always accessible
  • Result categorization: 70% provided multiple categories of search results tied to content structure, with category-based filtering
  • Pagination: 80% segmented results into paginated lists to prevent cognitive overload (though explicit page numbering was split 50/50)
  • Rich media in results: Only 20% surfaced visual assets alongside result items — an outlier worth noting but not prioritizing
  • Sorting vs. filtering: Only 20% surfaced sorting options separate from filtering — another outlier that confirmed filtering was the higher-value pattern

This research established that our search experience was behind baseline expectations, not ahead of them. The redesign wasn't aspirational — it was catching up to table stakes.

Navigation restructure: 8 to 23 options

The global nav redesign expanded available navigation options from 8 to 23 — a 188% increase — while maintaining usability. The key structural changes:

  • Addition of the Webex logo in the header for consistent brand identity
  • "Help by product" mega-menu — a dedicated dropdown exposing the full product suite (Webex App, Meetings, Calling, Webinars, Slido, Webex Suite, Events, Devices, Contact Center, Control Hub) with product-specific iconography I worked with our director of product design to define
  • "For administrators" → "Administration" — relabeling to align with actual user mental models
  • "Adoption" → "Resources" menu — restructured to house adoption materials, support links, and community resources under a clearer umbrella
  • Search button placement and interaction change — repositioned search and redesigned the interaction pattern (which led to the A/B test)
  • "What's new" placement change — adjusted positioning for better discoverability

The challenge was fitting 23 options into a navigation that didn't feel overwhelming. The mega-menu pattern solved this by organizing products into a visual grid with iconography — users could scan by icon or label without reading a long vertical list.

The redesigned Webex Help Center 'Help by product' page at first load — dark-themed page with 'What product can we help you with?' heading, three category buttons (Webex Suite, Devices, Customer Experience), and a product grid showing Meetings, Messaging, Calling, Events, Webinars, Video Messaging, Polling, and Whiteboarding with product icons

The new 'Help by product' destination page — products organized into a scannable icon grid, exposing the full Webex suite as structured navigation rather than a flat list

The Webex Help Center homepage with the Resources dropdown menu open — three columns labeled Support (Join a meeting, Developer Tools, Contact Support), Learn (Webex Academy, Live Events and Webinars, Webex Blog), and Programs (Webex Community, Webex Insider, App Hub), alongside a Cisco AI Assistant promotional card; sticky notes indicate the resources menu will always use light theme and reference an interactive prototype

The redesigned Resources menu — adoption materials, support links, and community resources reorganized under a single umbrella with three distinct content columns

Search interaction A/B test

With the navigation structure defined, the open question was the search interaction pattern. I designed and ran an A/B test comparing two approaches that differed in how and where search results were surfaced:

Stakeholder slide titled 'Background and setup' — describes two search interaction patterns tested: Design A (search button reveals text input field, suggested queries based on common entries) and Design B (search button or Ctrl+K/Cmd+K reveals a pop-over menu, suggested queries are actual article page search results). Lists four metrics captured: Search Completion Rate, Time to Access Search, Time to First Search Result, and user engagement. Notes test facilitation: 4 users, 6 tasks, 3 per design, self-driven with interactive Figma prototype.

A/B test background and setup — two interaction patterns, four metrics, four participants, six tasks

Design A — Search via input field (existing pattern)

A search button reveals a text input field within the global navigation. Users type their query and are navigated to a separate search results page. Suggested queries are based on common search strings — popular terms, not actual articles.

Design A results slide from the stakeholder presentation — titled 'Design A results: Search via input field'. Four metric cards: Search completion rate 45 seconds average (highlighted in pink); Time to access search 3.5 seconds; Time to first search result 4.8 seconds; User engagement with suggestions showing 1.2 average interactions per Design A task, with 25% of those interactions being misclicks. A screenshot of the existing nav with a cursor arrow is shown at left. A footnote defines misclicks as any recorded user click that did not bring them closer to completing their search task.

Design A results — search via input field. 45-second average completion time, 3.5-second time to access search, 25% misclick rate.

Design B — Search via popover (proposed pattern)

A search button with a visible keyboard shortcut (Ctrl+K / Cmd+K) reveals a popover menu. Users type directly into the popover and see results immediately without page navigation. Suggested queries are actual article page search results — real content, not popularity metrics.
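The shortcut trigger described above can be sketched in a few lines. This is an illustrative model only, not the shipped implementation; `isSearchShortcut`, `handleKeydown`, and the handler names are hypothetical:

```typescript
// Sketch of the Ctrl+K (Windows/Linux) / Cmd+K (macOS) popover trigger.
// All names here are illustrative, not the production code.
type Keydown = {
  key: string;
  ctrlKey: boolean;
  metaKey: boolean;
  preventDefault(): void;
};

// Pure predicate: does this keydown match the search shortcut?
function isSearchShortcut(
  e: Pick<Keydown, "key" | "ctrlKey" | "metaKey">
): boolean {
  return (e.ctrlKey || e.metaKey) && e.key.toLowerCase() === "k";
}

// Wire-up sketch: in the browser this would be attached via
// document.addEventListener("keydown", e => handleKeydown(e, open)).
function handleKeydown(e: Keydown, openPopover: () => void): void {
  if (isSearchShortcut(e)) {
    e.preventDefault(); // suppress the browser's own Ctrl+K binding
    openPopover();
  }
}
```

Keeping the match logic in a pure predicate makes the behavior easy to unit test without a DOM, which matters when the same shortcut must work identically across macOS and Windows modifier keys.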

Design B's search popover in the active state — overlaid on the Webex Help Center 'Help by product' page. A centered popover contains a 'Search documentation...' input field with an 'Esc' keyboard shortcut badge at right. Below the input, a 'Recommended for you' section lists five article suggestions: Getting Started with Control Hub, Join a Webex Meeting, Sign in Issues with Webex, Use a virtual background in Webex Meetings and Webex Webinars, and Webex Calling Dedicated Instance Best Practices. A pink annotation callout at the top reads 'Search button active state reveals a popover menu — see interactive prototype example'.

Design B's search popover — triggered by the search button or Ctrl+K, surfacing article-based suggestions in-place without navigating away from the current page

Design B results slide from the stakeholder presentation — titled 'Design B results: Search via popover'. Four metric cards: Search completion rate 28 seconds average (highlighted in green); Time to access search 2.3 seconds; Time to first search result 3.2 seconds; User engagement with suggestions showing 2.5 average interactions per Design B task, with 4% of those interactions being misclicks. A screenshot shows the popover open with 'Search...' input and the ⌘K keyboard shortcut badge visible. A footnote notes that despite a greater number of interactions recorded, there are far fewer misclicks being made by users in Design B tasks on average.

Design B results — search via popover. 28-second average completion time, 2.3-second time to access search, 4% misclick rate — every metric ahead of Design A.

Test methodology

4 Webex Experts recorded themselves completing 6 search-related tasks — 3 using Design A, 3 using Design B. Tests were self-paced, guided by task prompts in an interactive Figma prototype. I captured screen recordings and analyzed them against four metrics:

  • Search completion rate — Average time from first accessing the interface to reaching the intended destination
  • Time to access search — Time to find and access the search interface element
  • Time to first search result — Time from query submission to first result displayed
  • User engagement with suggestions — Number of interactions (clicks, misclicks, keypresses) with suggested queries

Results

Design B (popover) won across every metric:

Metric                         Design A      Design B      Delta
Search completion time         45 seconds    28 seconds    38% faster
Time to access search          3.5 seconds   2.3 seconds   34% faster
Time to first search result    4.8 seconds   3.2 seconds   33% faster
Avg interactions per task      1.2           2.5           More engagement
Misclick rate                  25%           4%            84% reduction
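The deltas follow directly from the raw averages in the two results slides. As a quick sketch of the arithmetic (the function name is illustrative):

```typescript
// Check the reported deltas against the raw A/B averages.
// percentImprovement is an illustrative helper, not project code.
function percentImprovement(designA: number, designB: number): number {
  // Percentage reduction going from Design A's value to Design B's
  return Math.round(((designA - designB) / designA) * 100);
}

const completion = percentImprovement(45, 28);    // 38% faster completion
const access = percentImprovement(3.5, 2.3);      // 34% faster time to access
const firstResult = percentImprovement(4.8, 3.2); // 33% faster time to first result
const misclicks = percentImprovement(25, 4);      // 84% misclick reduction
```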

The interaction data told a clear story: users engaged more with Design B (2.5 interactions vs. 1.2) but made far fewer errors (4% misclick rate vs. 25%). More engagement with less friction — the popover pattern was both more inviting and more forgiving.

Comparative bar chart slide titled "Measuring Design A against Design B" — two side-by-side charts. Left: average time required for users to complete their search task: Design A ~45 seconds (tall pink bar), Design B ~28 seconds (shorter teal bar). Right: average time needed to first access search (per task): Design A ~3.5 seconds (tall pink bar), Design B ~2.3 seconds (shorter teal bar).

Completion time and time-to-access-search — Design B outperforms Design A on both

Comparative bar chart slide titled "Measuring Design A against Design B" — two side-by-side charts. Left: average time to first search result per task: Design A ~4.8 seconds (tall pink bar), Design B ~3.2 seconds (shorter teal bar). Right: average count of user interactions recorded per task with misclick percentage overlay: Design A 1.2 interactions, 25% misclicks (pink bar); Design B 2.5 interactions, 4% misclicks (teal bar). Annotation notes that despite more interactions, Design B has far fewer misclicks on average.

Time to first result and interaction quality — Design B users engaged more and erred less

Participant feedback

The qualitative feedback reinforced the numbers:

Stakeholder quote slide labeled 'Test participant feedback on Design A: Search via input field' — large display quote reading: '[the search input pattern] feels like you have to go one step further, because you will come to a completely different page versus seeing the search results in the pop-up.' Key phrase 'you have to go one step further' is highlighted in pink.

Test participant feedback on Design A — the page-navigation penalty was the most consistently cited friction point

Stakeholder quote slide labeled 'Test participant feedback on Design B: Search via popover menu' — large display quote reading: 'I like sites where you can just start writing and it will begin a search, and here we can just press CTRL + K.' Key phrase 'you can just start writing and it will begin a search' is highlighted in teal/green.

Test participant feedback on Design B — participants explicitly called out the keyboard shortcut as a valued efficiency affordance

Multiple users specifically called out the keyboard shortcut as a valued feature — a signal that the Webex Experts audience (power users, administrators) appreciated efficiency affordances. The popover pattern met an expectation that participants already brought from other tools they used daily.

Design specifications

The full interaction spec and visual design were delivered in Figma with detailed annotations covering:

  • Interactive states — hover, focus, active, and keyboard navigation states for every nav element
  • Search popover behavior — open/close triggers, keyboard shortcut registration, suggested query population, result item interaction
  • Responsive behavior — how the navigation collapses and adapts across viewport sizes
  • Visual design — updated typography aligned with the Momentum Design System, product iconography, color treatments for the hero sections
  • Layout and spacing properties — precise padding, margin, and alignment values for engineering handoff
  • Accessibility — keyboard navigation paths, focus ring styles, ARIA attributes, screen reader announcement patterns
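The keyboard behavior in the spec (opening moves focus into the input; Esc dismisses the popover and returns focus to the trigger) can be modeled as a small state machine. This is a hypothetical sketch, with `PopoverState`, `onOpen`, and `onEscape` as illustrative names; the actual behavior is defined in the Figma annotations:

```typescript
// Hypothetical state model of the popover's open/close + focus-restore
// behavior described in the accessibility spec. Names are illustrative.
type PopoverState = {
  open: boolean;
  focusTarget: "trigger" | "input"; // which element should hold focus
};

function onOpen(_state: PopoverState): PopoverState {
  // Opening moves focus into the search input so typing starts immediately
  return { open: true, focusTarget: "input" };
}

function onEscape(state: PopoverState): PopoverState {
  // Esc dismisses the popover and returns focus to the search button;
  // pressing Esc while closed is a no-op
  return state.open ? { open: false, focusTarget: "trigger" } : state;
}
```

Modeling focus as part of the state makes the "return focus to the trigger on dismiss" rule explicit, which is the detail screen-reader and keyboard users most often lose in popover implementations.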

Search button interaction states spec for light and dark themes — six nav bar variants shown in two rows of three. Top row (light theme): default state showing 'Search... Ctrl+K' button; hover state with pink annotation callout 'Search button hover state'; pressed state with pink annotation callout 'Search button pressed state'. Bottom row (dark theme): same three states mirrored. Each variant shows the full nav bar context with the search button highlighted.

Search button interaction states — default, hover, and pressed states specified for both light and dark themes

Resources menu layout and spacing specification — annotated Figma frame showing the full homepage with the Resources dropdown open. Pink measurement annotations call out: global navigation container height, resources menu first grid container height, resources menu category column widths (1fr 128px 128px 128px), inter-column gaps (1rem, 8px), and CTA tile widths. A sticky note references the RKB page for resources menu grid logic across desktop breakpoints. Additional annotations specify typography: primary section header (font-size 14/Team, justify-content flex-start, flex-shrink: 0), category caption (font properties), category row item, and CTA tile typography properties.

Resources menu layout specification — column widths, spacing values, and typography properties annotated for engineering handoff

Outcomes

  • Navigation expanded from 8 to 23 options (188% increase) while maintaining usability — validated through the A/B test showing users could access search faster with the new structure
  • Search popover pattern validated with 38% faster task completion, 33% faster time to first result, and 84% reduction in misclicks — clear enough signal to move to implementation
  • Keyboard shortcut (Ctrl+K) validated as a valued power-user feature by the Webex Experts participant group
  • Article-based search suggestions (Design B) outperformed popularity-based suggestions (Design A) — surfacing real content trumped surfacing common queries
  • Competitive parity established — the redesign brought help.webex.com's navigation and search patterns in line with enterprise help site best practices
  • The search engagement data from Amplitude (882k searches Aug–Nov, +22.1% QoQ) reflects the cumulative impact of these navigation and search improvements alongside the AI search features documented in case study 02
Stakeholder slide titled 'Main findings: Wrapping up' — summary of the A/B test data. Body text reads: 'The data collected from these 4 users across the competing design patterns suggests that the Popover interaction pattern significantly outperforms the Input Field interaction pattern in terms of: search completion rates (28 seconds vs. 45 seconds), time to access search (2.3 seconds vs. 3.5 seconds), time to first search result (3.2 seconds vs. 4.8 seconds), number of interactions relative to % of misclicks (1.2, 25% vs. 2.5, 4%).' Four finding cards: Users preferred the popover pattern; Popover lends to higher engagement with search suggestions; Popover pattern helped prevent user error; Keyboard shortcuts were appreciated.

Main findings — the popover pattern outperformed the input field pattern across all four measured dimensions; keyboard shortcuts were a standout qualitative finding

Reflection

The test confirmed what the competitive research had already made plain: we weren't solving a novel UX problem. We were closing the gap between how Webex behaved and how users already expected navigation to work, based on every other enterprise tool they opened every day.

That's a quieter kind of win. But users don't experience a product in isolation. They bring expectations from everything else they use. Removing the friction between those expectations and what Webex actually offered turned out to be more valuable than any novel design pattern could have been.
