Evaluating User-Adaptive Systems: Lessons from Experiences with a Personalized Meeting Scheduling Assistant.

This paper was not found in any repository; the policy of its publisher is unknown or unclear.

Full text: Unavailable

Preprint: policy unknown
Postprint: policy unknown
Published version: policy unknown

Abstract

We discuss experiences from evaluating the learning performance of a user-adaptive personal assistant agent. We examine the challenge of designing an adequate evaluation and the tension of collecting adequate data without a fully functional, deployed system. Reflections on negative and positive experiences point to the challenges of evaluating user-adaptive AI systems. Lessons learned concern early consideration of evaluation and deployment, characteristics of AI technology and domains that make controlled evaluations appropriate or not, holistic experimental design, implications of "in the wild" evaluation, and the impact of AI-enabled functionality upon existing tools and work practices.