Shaping AI Agent Behavior: Why UX Must Own AI Evaluation Frameworks
Key Learnings
- How to Design Effective AI Evaluators
- How to Integrate AI Evaluation Into the UX Workflow
- How to Lead Teams Through the Shift
Speakers
Speaker: Melissa Wittmayer
Profession: Director, User Experience
Workplace: Ontra
Description
As AI agents move from novelty to core product functionality, UX teams are taking on an unexpected new responsibility: designing and managing the evaluation systems that shape how AI behaves. In this talk, Melissa Wittmayer shares practical insights from building multi-layered evaluators (accuracy, momentum, and sentiment) to monitor and improve AI agent experiences in production. She breaks down how designers can create meaningful evaluation frameworks, integrate them into existing UX workflows, collaborate with cross-functional partners, and support teams through the mindset shift this work requires. Attendees will leave with a clear picture of why UX is uniquely positioned to guide this evolution, how to move past the initial overwhelm, and how these evaluators open entirely new avenues for design impact. The result: a more accountable, measurable, and human-centered AI ecosystem.