11 pages, 4 figures, 3 tables

Head-pose estimation has many applications, such as social-event analysis, human-robot and human-computer interaction, and driving assistance. It is challenging because it must cope with changing illumination, variability in face orientation and appearance, partial occlusions of facial landmarks, and bounding-box-to-face alignment errors. We propose a mixture-of-linear-regressions method that learns to map high-dimensional feature vectors, extracted from face bounding boxes, onto both head-pose parameters and bounding-box shifts, so that both are predicted simultaneously at runtime. We describe in detail the mapping method, which combines the merits of manifold learning and of mixtures of linear regressions. We validate our method on three publicly available datasets and thoroughly benchmark four variants of the proposed algorithm against several state-of-the-art head-pose estimation methods.
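To make the core idea concrete, here is a minimal toy sketch of a mixture of linear regressions that jointly predicts pose parameters and bounding-box shifts from one feature vector. Everything here is illustrative, not the paper's algorithm: the two cluster centers, the Gaussian gating, and the per-cluster least-squares fit are all hypothetical stand-ins for the manifold-learning-based mapping the abstract describes.

```python
import numpy as np

# Toy mixture of linear regressions: each "expert" is a linear map from
# features to 5 joint outputs [yaw, pitch, roll, dx, dy]; a soft gate on
# the input decides how much each expert contributes. All names and the
# fitting procedure below are illustrative assumptions.

rng = np.random.default_rng(0)
D, P, N = 4, 5, 400                 # feature dim; outputs; training samples

centers = np.array([[-2.0] * D, [2.0] * D])   # two hypothetical latent regimes
W_true = rng.normal(size=(2, D, P))           # one linear map per regime
z = rng.integers(0, 2, size=N)                # regime label per sample
X = centers[z] + rng.normal(size=(N, D))
Y = np.einsum("nd,ndp->np", X, W_true[z]) + 0.05 * rng.normal(size=(N, P))

# Fit: hard-assign samples to the nearest center, then ordinary least
# squares per expert (with a bias column appended to the features).
Xb = np.hstack([X, np.ones((N, 1))])
assign = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
W_hat = np.stack([
    np.linalg.lstsq(Xb[assign == k], Y[assign == k], rcond=None)[0]
    for k in range(2)
])

def predict(x):
    """Gate softly between experts, then mix their linear predictions."""
    xb = np.append(x, 1.0)
    d2 = ((x - centers) ** 2).sum(-1)           # squared distance to each center
    gate = np.exp(-0.5 * d2)
    gate /= gate.sum()                          # responsibilities sum to 1
    return gate @ np.stack([xb @ W_hat[k] for k in range(2)])

pose_and_shift = predict(X[0])                  # 5 values: 3 angles + 2 shifts
```

A single forward pass thus yields the pose angles and the bounding-box correction together, which is the "simultaneous prediction" property the abstract emphasizes; the actual method replaces the fixed gate and least-squares fit with a learned mapping.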