Wide-Range, Person- and Illumination-Insensitive Head Orientation Estimation

Ying Wu, Kentaro Toyama

Abstract

We present an algorithm for estimating head orientation, given cropped images of a subject's head from any viewpoint. Our algorithm handles dramatic changes in illumination, applies to people with varying appearances without per-user initialization, and covers a wider range of head orientations (including side and back views) than previous algorithms. The algorithm builds an ellipsoidal model of the head, where points on the model maintain probabilistic information about local head characteristics. These characteristics are not specified by our framework, but we find that feature vectors based on edge density are effective. To collect data for each point on the model, local features are extracted from hand-annotated training images and projected onto the model. Each model point learns a probability density function from the training observations. During pose estimation, features are extracted from input images; then, using the maximum {\em a posteriori} criterion, the algorithm finds the pose that is most likely given the current observation. We perform experiments with several training strategies, testing on people with varying appearances. Results confirm the algorithm's robustness to visual perturbations, including scaling, blurring, ghosting, and illumination effects.
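As a minimal sketch of the maximum {\em a posteriori} criterion the abstract refers to (the symbols $\theta$ and $z$ are our notation, not necessarily the paper's), the estimated pose is the maximizer of the posterior:

\[
\hat{\theta} \;=\; \arg\max_{\theta}\, p(\theta \mid z) \;=\; \arg\max_{\theta}\, p(z \mid \theta)\, p(\theta),
\]

where $z$ denotes the feature vector extracted from the input image, $p(z \mid \theta)$ is evaluated from the probability density functions learned at the ellipsoidal model points, and $p(\theta)$ is a prior over head poses (uniform if no orientation is favored a priori).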