Designing Trust in AI & Robotics
Key Learnings
- Design interfaces that communicate AI confidence, uncertainty, and risk without overwhelming users
- Apply UX principles to increase trust and adoption in autonomous and AI-assisted systems
- Identify common failure modes in AI usability that lead to ethical or operational risk
Speakers
Speaker: Nyekachi Wihioka
Profession: Head of Product Design
Workplace: RideScan AI
Description
As AI and robotic systems move into high-risk, real-world environments, usability failures are no longer cosmetic; they become safety, ethical, and governance risks. This session explores how UX and product design principles can be applied to AI and robotics to make system behavior interpretable, reliable, and accountable. Drawing from real-world deep tech systems, the talk reframes UX as a core trust infrastructure for intelligent machines. This session focuses on concepts and methods, not tools or product promotion:
1. Human-centered design for probabilistic and autonomous systems
2. Trust, explainability, and transparency in AI-driven decision-making
3. UX patterns for uncertainty, risk, and system confidence
4. Human-robot interaction design in operational and enterprise contexts
5. Design’s role in AI ethics, governance, and compliance