From Few to Many: Generative Models for Recognition Under Variable Pose and Illumination

Athinodoros S. Georghiades, Peter N. Belhumeur, David J. Kriegman

Abstract

We present recent work on appearance-based methods for modeling the variability in the images of objects due to pose and illumination. Unlike previous appearance-based approaches, the presented method requires only a small set of training images to generate a rich representation that models this variability. From as few as three images of an object in fixed pose with varying but unknown lighting, a surface and an albedo map are reconstructed, and synthetic images are then rendered under variable pose and lighting. A representation useful for recognition is thus generated. The methods have been tested within the domain of face recognition on a database of 4050 images of 10 faces with variable pose and lighting; their performance exceeds that of popular existing methods.
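The reconstruction step the abstract alludes to is in the spirit of photometric stereo: under a Lambertian model, each pixel's intensity is the dot product of the light direction with the albedo-scaled surface normal, so three images under linearly independent lights determine the normal and albedo at every pixel. The sketch below is a simplified illustration only, assuming the light directions are known (the paper's method handles unknown lighting); the specific arrays and values are invented for the example, not taken from the paper.

```python
import numpy as np

# Three assumed-known light directions (unit vectors), one per image.
L = np.array([
    [0.0, 0.0, 1.0],
    [0.5, 0.0, np.sqrt(1 - 0.25)],
    [0.0, 0.5, np.sqrt(1 - 0.25)],
])  # shape (3, 3): rows are light directions

# Hypothetical ground truth for two pixels: albedo-scaled normals b = albedo * n.
albedo_true = np.array([0.5, 1.0])
normals_true = np.array([[0.0, 0.0, 1.0],
                         [0.0, 0.0, 1.0]]).T      # shape (3, 2)
b_true = normals_true * albedo_true               # shape (3, 2)

# Lambertian image formation: intensity I = L @ b (3 images x 2 pixels).
I = L @ b_true

# Recover b per pixel by least squares, then split into albedo and normal.
b_hat = np.linalg.lstsq(L, I, rcond=None)[0]      # shape (3, 2)
albedo_hat = np.linalg.norm(b_hat, axis=0)        # per-pixel albedo
normals_hat = b_hat / albedo_hat                  # per-pixel unit normals
```

With three images and invertible light directions the least-squares solve recovers the albedo-scaled normals exactly; a surface can then be integrated from the normal field and re-rendered under novel lighting and pose, which is the representation-building step the abstract describes.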