Recursive Neural Network
Recursive neural networks represent yet another generalization of recurrent networks, with a different kind of computational graph, structured as a deep tree rather than the chain-like structure of RNNs. The typical computational graph for a recursive network is illustrated in the figure below. Recursive neural networks were introduced by Pollack (1990), and their potential use for learning to reason was described by Bottou (2011). Recursive networks have been successfully applied to processing data structures as input to neural nets (Frasconi et al., 1997, 1998), in natural language processing (Socher et al., 2011a,c, 2013a), as well as in computer vision (Socher et al., 2011b). One clear advantage of recursive nets over recurrent nets is that for a sequence of the same length $\tau$, the depth (measured as the number of compositions of nonlinear operations) can be drastically reduced from $\tau$ to $O(\log \tau)$, which might help deal with long-term dependencies. An open question is h...
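The depth reduction can be made concrete with a minimal sketch of a recursive net over a balanced binary tree: a single shared composition function merges two child vectors into a parent vector, and applying it level by level reduces a sequence of $\tau$ leaves to one root vector in $O(\log \tau)$ composition steps. The dimensions, weight initialization, and the `compose`/`encode` names below are illustrative assumptions, not part of any particular published model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                        # hidden dimension (illustrative choice)
W = rng.standard_normal((d, 2 * d)) * 0.1    # shared composition weights (assumed init)
b = np.zeros(d)

def compose(left, right):
    """Merge two child representations into one parent representation."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

def encode(leaves):
    """Reduce a list of leaf vectors pairwise, halving the list at each level.

    A sequence of length tau therefore passes through only O(log tau)
    nonlinear compositions in depth, versus tau for a chain-structured
    recurrent net. An odd leftover vector is carried up unchanged.
    """
    depth = 0
    while len(leaves) > 1:
        merged = [compose(leaves[i], leaves[i + 1])
                  for i in range(0, len(leaves) - 1, 2)]
        if len(leaves) % 2:                  # odd element: promote to next level
            merged.append(leaves[-1])
        leaves = merged
        depth += 1
    return leaves[0], depth

tau = 8
x = [rng.standard_normal(d) for _ in range(tau)]
root, depth = encode(x)
print(root.shape, depth)   # depth is 3, i.e. log2(8), rather than tau = 8
```

Here the tree structure is fixed to a balanced binary bracketing; in practice the tree may instead be supplied externally, e.g. by a parser, which is one way the structure question above is resolved.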