Published in

2013 IEEE 14th International Conference on Mobile Data Management

DOI: 10.1109/mdm.2013.28

Understanding Sequential Decisions via Inverse Reinforcement Learning

This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

The execution of an agent's complex activities, comprising sequences of simpler actions, sometimes leads to a clash among conflicting functions that must be optimized. These functions represent satisfaction, short-term and long-term objectives, costs, and individual preferences. How these functions are weighted is usually unknown, even to the decision maker. If we could understand individual motivations and compare them across individuals, we could actively change the environment to increase satisfaction and/or improve performance. In this work, we address the problem of providing high-level, intelligible descriptions of an agent's motivations, based on observations of the agent during the fulfillment of a series of complex activities (called sequential decisions in this work). We propose a novel algorithm for the analysis of observational records, and we present a methodology that allows researchers to converge towards a summary description of an agent's behaviors by minimizing an error measure between the current description and the observed behaviors. We validated this work not only on a synthetic dataset representing the motivations of a passenger in a public transportation network, but also on real taxi drivers' behaviors recorded during their trips in an urban network. Our results show that our method is not only useful but also performs much better than previous methods in terms of accuracy, efficiency, and scalability.
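
To make the idea in the abstract concrete, the sketch below illustrates one generic form of inverse reinforcement learning: the reward is modeled as a weighted combination of features, and the weights are iteratively adjusted to shrink an error measure between the behavior induced by the current weights and the observed (expert) behavior. This is a minimal illustration on a toy chain MDP, not the paper's algorithm; the one-hot feature design, the always-move-right expert, the learning rate, and helper names such as feature_expectations are all assumptions made for the example.

import numpy as np

N_STATES, N_ACTIONS, GAMMA, HORIZON = 5, 2, 0.9, 20

# Hypothetical state features: one-hot indicators, so the learned
# weight vector is simply a per-state reward estimate.
features = np.eye(N_STATES)

def step(s, a):
    # Deterministic chain dynamics: action 0 moves left, action 1 moves right.
    return max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)

def greedy_policy(w):
    # Value iteration under the linear reward r(s) = w . phi(s).
    r = features @ w
    V = np.zeros(N_STATES)
    for _ in range(200):
        Q = np.array([[r[s] + GAMMA * V[step(s, a)] for a in range(N_ACTIONS)]
                      for s in range(N_STATES)])
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def feature_expectations(policy, start=0):
    # Discounted feature counts accumulated along a rollout of the policy.
    mu, s = np.zeros(N_STATES), start
    for t in range(HORIZON):
        mu += (GAMMA ** t) * features[s]
        s = step(s, policy[s])
    return mu

# Observed behavior: an "expert" that always moves right, toward state 4.
mu_expert = feature_expectations(np.ones(N_STATES, dtype=int))

# Iteratively adjust the reward weights to reduce the gap between the
# expert's feature counts and those of the currently induced policy.
w = np.zeros(N_STATES)
for _ in range(50):
    mu_learner = feature_expectations(greedy_policy(w))
    error = np.linalg.norm(mu_expert - mu_learner)  # the error measure
    if error < 1e-6:
        break
    w += 0.1 * (mu_expert - mu_learner)

print("recovered reward weights:", np.round(w, 2))
print("final behavior error:", round(error, 4))

On this toy problem the updates converge within a few iterations: once the greedy policy induced by the weights reproduces the expert's discounted feature counts exactly, the error measure drops to zero and the loop stops.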