Designing AI in Healthcare Without Falling for the Hype

How I helped LeanTaaS separate AI excitement from real product value.

Project Snapshot

Impact (a different kind)

This project didn’t produce huge revenue metrics. Its impact was:

Avoided clinical and reputational risk

Shaped company AI direction

Built internal literacy around AI

Validated contained AI use cases

Prevented wasteful over-investment

KEY TAKEAWAY

Sometimes impact is what you ship. Sometimes it’s what you prevent.

Customer feedback

“Ask iQueue pulls it so much faster than me clicking through all my locations… It pinpointed exactly what I needed. I didn’t expect it to be that helpful and it was amazing.”

Operations Leader at UC San Francisco

“Ask iQueue was pretty quick and accurate. If I want to know what was our monthly drop in the census, I could see that without having to go and navigate through metrics. I put the thumbs up on every question and answer that I received.”

Operations Manager at Stanford

Organizational impact

Customer Success teams were initially wary of AI. I led sessions to explain:

What hallucinations are

How we mitigate risk

Where AI fits (and doesn't)

KEY TAKEAWAY

This built internal confidence and reduced fear around AI adoption.

Context

  • Company-wide push to deliver customer-facing AI

Constraints

  • No dedicated AI team

  • 1 PM, 1 designer, 2 engineers

  • One quarter timeline

  • High hallucination risk

My Role

  • Product designer shaping AI direction, scoping, and validation

Core Challenge

  • Deliver AI value without damaging trust or credibility

When AI exploded, every software company rushed to prove they had AI.

LeanTaaS was no exception. Leadership wanted visible AI features fast — something customers could see, demo, and associate with innovation.

The challenge? We specialize in healthcare operations, where wrong answers don’t just look bad — they can undermine trust, ROI, and even patient safety.

KEY TAKEAWAY

This project wasn’t about shipping flashy AI. It was about deciding what AI should and shouldn’t do.

The hype vs reality

Leadership envisioned ChatGPT-level support: conversational, fast, and broadly capable.

But we lacked resources, infrastructure, and time. More importantly, we faced real risk.

If AI gave wrong answers, it could undermine our ROI claims, contradict operational guidance, and confuse customers about best practices.

KEY TAKEAWAY

In healthcare, trust is everything. We couldn't allow AI to undermine that.

Blindly shipping flashy AI could lead to:

Incorrect staffing guidance

Misinterpreted performance metrics

Hospitals questioning our credibility

Customer churn from perceived lack of ROI

Clinical staff fearing replacement or loss of human judgment

KEY TAKEAWAY

The risk wasn’t embarrassment — it was trust erosion. In healthcare, a wrong AI answer isn’t just incorrect — it can change clinical decisions.

My position: Start small, stay contained

Instead of broad AI, I proposed small, contained use cases with low hallucination risk. I created a mini-roadmap outlining:

  • What we could realistically build

  • How risk scaled with complexity

  • How to grow responsibly

The company lacked a long-term AI direction, and I wanted to give us a clear path forward.

The roadmap progressed through three phases:

  1. AI for navigation

  2. AI-generated explanations for nurse recommendations

  3. Automated staffing and assignment requests

We initially launched a modest feature: AI for navigation.

Users could ask metric questions and AI would:

Apply filters

Surface the right view

KEY TAKEAWAY

It wasn’t flashy; it was practical. Some stakeholders initially doubted its value, so I proposed a limited pilot with trusted customers and clear expectations.

A moment of validation: Pilot feedback showed real utility

“Ask iQueue pulls it so much faster than me clicking through all my units. It pinpointed exactly what I needed. I didn’t expect it to be that helpful.”

Operations Leader at UC San Francisco

“Ask iQueue was pretty quick and accurate. If I want to know what was our monthly drop in the census, I could see that without having to go and navigate through metrics. I put the thumbs up on every question and answer that I received.”

Operations Manager at Stanford

KEY TAKEAWAY

It also confirmed something important: AI doesn't need to be magical to be valuable. Reducing friction is real value.

Pushing toward higher value

Next, I proposed AI-generated explanations for nurse recommendations. Why this mattered:

Customers frequently questioned assignments

Our assignment algorithm had clear logic

Hallucination risk was low

Value was high

User testing was very positive. Unfortunately, shifting priorities pulled engineering away before launch.

Users could ask the AI to explain a nurse recommendation, something customers questioned daily. The AI explained each recommendation using the prioritization score and the algorithm's built-in logic, and automated the related manual tasks.

Intended result: improved trust in our recommendations and automated manual workflows.

KEY TAKEAWAY

A good idea doesn’t always mean the right timing.

Shaping AI strategy company-wide

Over time, our approach evolved:

Instead of duplicating AI work across products, we moved toward:
  • Shared AI platform strategy

  • Contained use cases

  • Incremental learning


I worked with our CTO to align on:
  • Responsible scoping

  • Hallucination risk

  • Trust preservation


I led sessions across the company to build confidence and reduce fear of AI:
  • What hallucinations are

  • How we mitigate risk

  • Where AI fits (and doesn't)

What this project says about my approach

I don’t chase trends. I evaluate them and ask:

Is this valuable?

Is this safe?

Is this scalable?

Is this worth building?

KEY TAKEAWAY

Good design isn’t just about shipping more. It’s about protecting users and the business.

Reflection

AI in healthcare isn’t just a capability question. It’s a responsibility question. This project taught me:

Hype fades, trust remains

Small wins beat big risks

Responsible AI design is a product decision, not a feature