Index of Investigations

B2B E-Commerce • Taxonomy & Trust Architecture

OptimaSurg: Redefining Medical Procurement

The Brief (Context)

OptimaSurg, a B2B e-commerce platform for surgical tools, was experiencing a highly fragmented user journey. Medical professionals were abandoning carts at high rates, and overall engagement was stagnating. The objective of this investigation was to uncover the behavioral root causes of this friction and architect a data-driven, trustworthy solution.

The Evidence (Quantitative Discovery)

To establish a baseline, I deployed targeted surveys to the existing user base. The data revealed a stark reality:

  • 70% rated the current platform's functionality as inadequate for their workflow.
  • 90% required highly specific technical data (specs/demo videos) to make purchasing decisions.
  • 85% indicated a strong desire for personalized workflows and recommendations.

The numbers showed what was failing: users couldn't find the information they needed, and the site offered no personalization. Next, I needed to understand why.

The Interrogation (Qualitative Deep-Dive)

To uncover the psychological friction, I conducted in-depth user interviews with surgeons and procurement officers. Using empathy mapping, I synthesized the qualitative data into three core behavioral insights:

  • Mismatched mental model: clinicians navigate by medical specialty, not by generic product category.
  • Information overload: dense product pages buried the technical specs and demos that actually drive purchasing decisions.
  • Trust deficit: without visible certifications and security cues, buyers hesitated to commit to high-value orders.

The Intervention (Design Strategy)

Guided by the mixed-methods data, I led the redesign of the platform's architecture:

  • Taxonomy Overhaul: Grouped tools logically by medical specialty to match the user's mental model.
  • Progressive Disclosure: Cleaned up product pages, surfacing critical data first and allowing users to expand technical specs as needed.
  • Trust Architecture: Prominently integrated verifiable certifications and secure checkout indicators.
  • Smart Search: Advocated for a search engine logic update to support autocomplete and medical terminology filters.
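The search logic advocated in the last bullet can be illustrated with a small sketch: prefix autocomplete combined with a specialty filter. The catalog entries and field names below are hypothetical stand-ins, chosen only to show the shape of the filtering, not OptimaSurg's actual data model.

```python
# Minimal sketch of autocomplete plus a medical-specialty filter.
# Catalog contents and field names are hypothetical illustrations.

CATALOG = [
    {"name": "laparoscopic grasper", "specialty": "general surgery"},
    {"name": "laparoscopic scissors", "specialty": "general surgery"},
    {"name": "kerrison rongeur", "specialty": "neurosurgery"},
]

def autocomplete(prefix, specialty=None):
    """Return catalog names matching the typed prefix, optionally
    restricted to a single medical specialty."""
    prefix = prefix.lower()
    return [
        item["name"]
        for item in CATALOG
        if item["name"].startswith(prefix)
        and (specialty is None or item["specialty"] == specialty)
    ]

print(autocomplete("lapa"))                            # both laparoscopic tools
print(autocomplete("lapa", specialty="neurosurgery"))  # []
```

In a real deployment, the prefix match would run against a search index (e.g. a trie or an engine like Elasticsearch) rather than a linear scan, but the contract — typed prefix in, specialty-scoped suggestions out — is the same.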

The Verdict (Business Impact)

During evaluative research, users praised the logical navigation but hesitated on the complex filters, so I simplified the filtering architecture to further reduce cognitive load. Post-launch, task completion times for browsing and checkout dropped significantly. As one user summarized: "The site finally feels like it understands what surgeons actually need."

Consumer App • Algorithmic Curation

FlavorFind: Solving the Paradox of Choice

The Brief (Context)

FlavorFind aimed to revolutionize food discovery, but initial user analytics showed a massive drop-off at the search screen. Users were overwhelmed by thousands of options and abandoned the app before completing a recipe selection. The mission was to eliminate decision fatigue and create a highly personalized, empathetic discovery loop.

The Evidence (Quantitative Discovery)

Reviewing session durations and click heatmaps revealed that users spent an average of 4.5 minutes scrolling without clicking a single recipe. The bounce rate on the primary, generic "Categories" page was 62%. The data indicated that the abundance of choice was actively paralyzing the user base.

The Interrogation (Qualitative Deep-Dive)

I initiated contextual inquiries, observing users as they attempted to plan a weeknight dinner using the app. The primary pain point wasn't a lack of food options, but a lack of contextual relevance. Users didn't just want "Chicken"; they wanted "Spicy, dairy-free dinner under 30 minutes." The app's existing taxonomy didn't support complex, mood-based queries.

The Intervention (Design Strategy)

To combat decision fatigue, I transitioned the architecture from a "Directory" model to a "Guided Assistant" model:

  • Empathetic Onboarding: Designed a gamified, visual quiz during account creation to establish baseline dietary profiles and flavor preferences.
  • Dynamic Tagging System: Re-architected the database tagging to allow multi-variable filtering (e.g., "Savory" + "Vegan" + "Quick").
  • Algorithmic Curation: Reduced the initial homepage view from 50 generic options to 5 highly personalized "Daily Matches" to dramatically lower cognitive load.
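The tagging and curation mechanics above can be sketched in a few lines: an AND-combined tag filter, plus a ranked shortlist capped at a handful of "Daily Matches." This is a minimal illustration under stated assumptions — the recipe data, tag vocabulary, and overlap-count scoring are all hypothetical, not FlavorFind's production ranking.

```python
# Sketch of multi-variable tag filtering and top-k curation.
# Recipes, tags, and the scoring function are hypothetical.

RECIPES = [
    {"title": "Chickpea Curry", "tags": {"savory", "vegan", "quick"}},
    {"title": "Beef Stew", "tags": {"savory", "slow-cooked"}},
    {"title": "Tofu Stir-Fry", "tags": {"savory", "vegan", "quick", "spicy"}},
]

def filter_recipes(required_tags):
    """AND-combine tags: a recipe must carry every requested tag,
    supporting queries like "Savory" + "Vegan" + "Quick"."""
    return [r for r in RECIPES if required_tags <= r["tags"]]

def daily_matches(preferences, k=5):
    """Rank the catalog by tag overlap with the user's preference
    profile, then keep only the top k to limit cognitive load."""
    ranked = sorted(RECIPES,
                    key=lambda r: len(preferences & r["tags"]),
                    reverse=True)
    return [r["title"] for r in ranked[:k]]
```

The design point is the cap itself: however sophisticated the scoring becomes, the homepage surfaces at most `k` items, which is what converts a paralyzing directory into a guided shortlist.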

The Verdict (Business Impact)

A/B testing the new "Guided Assistant" flow against the old "Directory" flow yielded clear results. The intervention successfully mitigated choice paralysis. Average time-to-selection decreased by 55%, and weekly active retention increased as users felt the app "learned" their specific cravings over time.

Enterprise SaaS • Human-Computer Interaction

Gmail + AI: Engineering Trust in Automation

The Brief (Context)

Integrating Large Language Models (LLMs) into legacy enterprise communication tools like Gmail presents a unique UX challenge. While AI can drastically reduce the time spent triaging emails, early user sentiment showed resistance. Users felt a loss of control and feared AI "hallucinations" in professional communications. The goal was to seamlessly integrate AI without sacrificing user autonomy.

The Evidence (Quantitative Discovery)

Initial telemetry of the prototype AI auto-draft feature showed a low adoption rate (under 15%). Among those who did use it, the pre-send edit rate was nearly 98%: users spent almost as much time fixing AI drafts as they would have spent writing the email from scratch.

The Interrogation (Qualitative Deep-Dive)

Through semi-structured interviews regarding AI sentiment, three major themes emerged:

  • Loss of control: users feared the AI would send or act on their behalf without an explicit approval step.
  • Accuracy anxiety: "hallucinated" content in professional email carried real reputational risk.
  • Data opacity: users were unsure what personal or organizational context the model was reading to generate a draft.

The Intervention (Design Strategy)

I shifted the design paradigm from "AI as a replacement" to "AI as a collaborative assistant."

  • Human-in-the-Loop Safeguards: Removed auto-send capabilities. The UI explicitly places the AI draft into a visual "Review Mode," forcing human approval before execution.
  • Tone Control Sliders: Implemented simple UI toggles allowing users to adjust the generated draft's tone (e.g., "More Formal," "More Concise") in real-time.
  • Privacy Indicators: Added clear, micro-copy tooltips explaining exactly what context the AI was utilizing to generate the draft, fostering systemic trust.
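The human-in-the-loop safeguard in the first bullet can be sketched as a simple state machine: a draft is created in a review state and refuses to send until a human explicitly approves it. The `Draft` class and state names below are hypothetical illustrations of the pattern, not Gmail's actual implementation.

```python
# Sketch of a human-in-the-loop review gate for AI-generated drafts.
# Class and state names are hypothetical, for illustration only.

class Draft:
    def __init__(self, body):
        self.body = body
        self.state = "review"   # every AI draft starts here; no auto-send path

    def approve(self):
        """Explicit human action: the only transition out of review."""
        self.state = "approved"

    def send(self):
        """Refuse to send anything a human has not approved."""
        if self.state != "approved":
            raise PermissionError("draft must be human-approved before sending")
        self.state = "sent"

draft = Draft("Hi team, summary attached.")
# draft.send() here would raise PermissionError
draft.approve()
draft.send()
print(draft.state)  # prints: sent
```

Encoding the safeguard as an unavoidable state transition, rather than a UI convention, is what makes the "Review Mode" trustworthy: there is simply no code path from generation to sending that bypasses the human.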

The Verdict (Business Impact)

By prioritizing user control over total automation, trust in the feature skyrocketed. Adoption rates of the drafting tool increased by over 300%, and the time spent triaging daily emails was reduced by an average of 40% per user. The investigation proved that in AI design, transparency is just as important as capability.
