Learning a Generative Model of Images by Factoring Appearance and Shape

  • Nicolas Le Roux,
  • Nicolas Heess,
  • Jamie Shotton

Neural Computation, Vol. 23, pp. 593–650


Computer vision has grown tremendously in the past two decades. Despite all efforts, existing attempts at matching parts of the human visual system's extraordinary ability to understand visual scenes lack either scope or power. By combining the advantages of general low-level generative models and powerful layer-based and hierarchical models, this work aims to be a first step toward richer, more flexible models of images. After comparing various types of restricted Boltzmann machines (RBMs) able to model continuous-valued data, we introduce our basic model, the masked RBM, which explicitly models occlusion boundaries in image patches by factoring the appearance of any patch region from its shape. We then propose a generative model of larger images using a field of such RBMs. Finally, we discuss how masked RBMs could be stacked to form a deep model able to generate more complicated structures and suitable for various tasks such as segmentation or object recognition.
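As a rough sketch of the factoring of appearance from shape described above (not the paper's exact energy-based formulation; the patch size and the Gaussian appearance draws below are purely illustrative), a per-pixel binary mask plays the role of shape and selects which of two independently modelled appearances owns each pixel of a patch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative patch size and per-pixel binary mask. The mask ("shape")
# decides which appearance is visible at each pixel, so occlusion
# boundaries are represented explicitly rather than blended.
P = 8                                    # patch side length (assumed)
mask = rng.integers(0, 2, size=(P, P))   # shape: 1 = foreground, 0 = background
fg = rng.normal(0.7, 0.1, size=(P, P))   # stand-in foreground appearance sample
bg = rng.normal(0.2, 0.1, size=(P, P))   # stand-in background appearance sample

# Composition step: each region's pixels come entirely from whichever
# appearance model the mask assigns to them.
patch = mask * fg + (1 - mask) * bg
print(patch.shape)  # (8, 8)
```

In the model itself, both the mask and the region appearances would be sampled from RBMs rather than drawn from fixed Gaussians; the sketch only illustrates the composition that keeps appearance and shape factored.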