Beyond Dashboards: Measuring AI's Human Impact
Key Learnings
- Establish principles and metrics that guard the human experience
- Treat AI models as evolving products and track their behavior
- Embed AI evaluation in design rituals and research rhythms
Speakers
Speaker: Stacy All
Profession: Design & Research Leader
Workplace: Independent Consultant
Description
Teams are shipping AI faster than they know how to measure it. Dashboards look healthy while the human experience quietly degrades. This talk presents a practical approach to measuring AI success in human terms, grounded in real-world case studies and cautionary examples from organizations that got it right and organizations that got it wrong.

Teams that succeed treat measurement differently. They define success up front using experience principles and guardrail metrics, build a shared understanding of what their AI models do well and where they struggle as those models evolve, and embed evaluation into design and research practice over time. Transparency about system limitations and measurement choices turns metrics from scorekeeping into a foundation for trust.

Whether you’re a designer, researcher, or product leader, you’ll leave better equipped to define success earlier, interpret model behavior more realistically, and make human impact part of ongoing practice.
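As a rough illustration of the guardrail idea (a minimal sketch, not material from the talk; the metric names, floors, and values below are hypothetical), a guardrail metric is evaluated alongside the headline dashboard metric, and an apparent win is rejected if any guardrail is breached:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Guardrail:
    """A human-experience metric with a floor the product must not cross."""
    name: str
    floor: float  # minimum acceptable value

def evaluate_release(headline_lift: float, guardrails: dict[Guardrail, float]) -> bool:
    """Return True only if the headline metric improved AND no guardrail is breached.

    headline_lift: change in the dashboard metric (e.g. engagement), as a fraction.
    guardrails: observed value for each guardrail metric.
    """
    breached = [g.name for g, value in guardrails.items() if value < g.floor]
    if breached:
        print(f"Blocked: guardrail(s) breached: {', '.join(breached)}")
        return False
    return headline_lift > 0

# Hypothetical example: engagement is up 4%, but user-reported task success
# has slipped below its floor, so the "healthy dashboard" does not count as a win.
ok = evaluate_release(
    headline_lift=0.04,
    guardrails={
        Guardrail("task_success_rate", floor=0.80): 0.76,
        Guardrail("trust_survey_score", floor=4.0): 4.2,
    },
)
```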