Published in

Volume 2A: 40th Design Automation Conference

DOI: 10.1115/detc2014-35440

Improving Preference Prediction Accuracy With Feature Learning

Proceedings article published in 2014 by Alex Burnap, Yi Ren, Honglak Lee, Richard Gonzalez, and Panos Y. Papalambros.
This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Motivated by continued interest within the design community to model design preferences, this paper investigates the question of predicting preferences with particular application to consumer purchase behavior: How can we obtain high prediction accuracy in a consumer preference model using market purchase data? To this end, we employ sparse coding and sparse restricted Boltzmann machines, recent methods from machine learning, to transform the original market data into a sparse and high-dimensional representation. We show that these ‘feature learning’ techniques, which are independent of the preference model itself (e.g., logit model), can complement existing efforts toward high-accuracy preference prediction. Using actual passenger car market data, we achieve significant improvement in prediction accuracy on a binary preference task by properly transforming the original consumer variables and passenger car variables into a sparse and high-dimensional representation.
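
To make the pipeline described in the abstract concrete, the sketch below shows one possible reading of it: learn a sparse, higher-dimensional representation of the original variables, then fit a logit model on the learned features instead of the raw ones. This is not the authors' code; it uses scikit-learn's DictionaryLearning as a stand-in for the sparse coding step (the paper also uses sparse restricted Boltzmann machines), LogisticRegression as the logit model, and synthetic data in place of the passenger car market data.

```python
# Minimal sketch (assumed pipeline, not the paper's implementation):
# feature learning that is independent of the preference model, followed
# by a logit model on the learned sparse, high-dimensional representation.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Synthetic stand-in for (consumer, car) attribute vectors and binary choices.
X = rng.randn(500, 10)                                   # 10 original variables
y = (X @ rng.randn(10) + 0.5 * rng.randn(500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: logit (logistic regression) model on the raw variables.
baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("raw-feature accuracy:", baseline.score(X_te, y_te))

# Feature learning: sparse coding with an overcomplete dictionary, so the
# 10 original variables become 50 sparse coefficients per observation.
coder = DictionaryLearning(n_components=50, alpha=1.0, random_state=0).fit(X_tr)
Z_tr, Z_te = coder.transform(X_tr), coder.transform(X_te)

# Same logit model, now fit on the learned sparse features.
model = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("learned-feature accuracy:", model.score(Z_te, y_te))
```

On this toy, linearly generated data the learned features will not necessarily beat the raw baseline; the point of the sketch is the structure of the approach, in which the feature transformation is learned separately from, and then plugged into, an unchanged preference model.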