Neural Network Languages

Established: January 1, 2013

The goal of this project is to develop a neural network language that:

  • is easy to use and understand
  • can be compiled to very efficient code
  • allows derivatives of any order (see the sketch after this list)
  • makes it easy for the end user to keep track of complex tensor expressions
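
As a concrete illustration of "derivatives of any order," here is a minimal sketch in JAX, standing in for the project's own language (which is not shown here): because the differentiation operator composes, second and higher derivatives are just repeated applications of it.

    import jax
    import jax.numpy as jnp

    def f(x):
        return jnp.sin(x) * x**2      # a simple scalar function

    df  = jax.grad(f)                 # first derivative
    d2f = jax.grad(df)                # second derivative
    d3f = jax.grad(d2f)               # third derivative, and so on

    x = 1.5
    print(f(x), df(x), d2f(x), d3f(x))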

In particular, the derivatives of matrix expressions in deep neural networks result in complex tensor expressions. These can be difficult for the average programmer to understand and manipulate, and they can also be difficult to fully optimize. As a consequence, many existing neural network libraries hide complex sequences of operations inside larger functions, e.g., combining a backward gradient computation with some aspect of the optimization algorithm. This makes it more difficult for end users to quickly and easily test novel optimization strategies.
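
To make the contrast concrete, here is a minimal sketch in JAX (again standing in for the project's language; the function and variable names are illustrative, not from the source) of the difference between a fused backward-plus-update black box and an interface that exposes the gradient as a first-class value:

    import jax
    import jax.numpy as jnp

    def loss(W, x, y):
        return jnp.mean((x @ W - y) ** 2)   # least-squares loss for a linear model

    x = jnp.ones((4, 3))
    y = jnp.zeros((4, 2))
    W = jnp.ones((3, 2))
    lr = 0.1

    # Black-box style: the backward computation is fused with the update,
    # so the gradient itself never reaches the user.
    def fused_sgd_step(W, x, y, lr):
        return W - lr * jax.grad(loss)(W, x, y)

    # Exposed style: the gradient is an ordinary value, so trying a novel
    # update rule is a one-line change rather than a library modification.
    g = jax.grad(loss)(W, x, y)
    W_sgd  = W - lr * g               # plain SGD
    W_sign = W - lr * jnp.sign(g)     # e.g., a sign-based update, swapped in freely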

The first version of the neural network language formed the basis of CNTK, the Computational Network Toolkit. That language allowed only gradient computation, and certain aspects of the gradient computation were unnecessarily entwined with aspects of the optimization. The new language is being designed to expose every aspect of the computation to the end user, so that nothing is hidden in black-box code.