Gates

Advances in Neural Information Processing Systems 21

(This is a shortened version of “Gates: A graphical notation for mixture models”, available at https://www.microsoft.com/en-us/research/publication/gates-a-graphical-notation-for-mixture-models/)

Gates are a new notation for representing mixture models and context-sensitive independence in factor graphs. Factor graphs provide a natural representation for message-passing algorithms, such as expectation propagation. However, message passing in mixture models is not well captured by factor graphs unless the entire mixture is represented by one factor, because the message equations have a containment structure. Gates capture this containment structure graphically, allowing both the independences and the message-passing equations for a model to be readily visualized. Different variational approximations for mixture models can be understood as different ways of drawing the gates in a model. We present general equations for expectation propagation and variational message passing in the presence of gates.
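
To make the containment structure concrete, here is a minimal worked example following the gate semantics defined in the full paper (the symbols c, μ_k, and σ_k² are illustrative and do not appear in this summary). A gate encloses a set of factors and switches them on only when a selector variable c takes the gate's key value; formally, the enclosed factors are raised to the power δ(c = key). A two-component Gaussian mixture, drawn as two gates keyed by c = 1 and c = 2, then has the joint

p(x, c) \;=\; p(c)\,\mathcal{N}(x \mid \mu_1, \sigma_1^2)^{\delta(c=1)}\,\mathcal{N}(x \mid \mu_2, \sigma_2^2)^{\delta(c=2)}

When c = 1 the first gate is on and the second contributes a factor of 1, and vice versa. The general expectation propagation and variational message passing equations presented in the paper handle exactly this on/off structure when messages cross gate boundaries.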