Deep neural networks usually act on fixed-dimensional inputs. However, many real-world problems are naturally formulated as learning mappings from sets of items to outputs. Such problems include multiple-instance learning, visual scene understanding, few-shot classification, and even generic Bayesian inference procedures. Recently, several methods have been proposed for constructing neural networks that take sets as inputs. The key properties required of such networks are permutation invariance and permutation equivariance: an invariant network produces the same output regardless of the order in which the items in a set are processed, while an equivariant network produces per-item outputs that reorder consistently with any reordering of the input. This talk discusses recent advances in permutation invariant and equivariant neural networks and their theoretical properties, especially their universality. The latter part of the talk also introduces interesting applications of permutation invariant/equivariant neural networks.
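
To make the invariance property concrete, below is a minimal Deep Sets-style sketch (Zaheer et al., 2017), assuming PyTorch; the class and parameter names are illustrative, not the speaker's implementation. Each item is embedded independently, the embeddings are summed over the set axis (a symmetric operation), and the pooled vector is mapped to the output, so the result cannot depend on item order.

```python
import torch
import torch.nn as nn

class InvariantSetNet(nn.Module):
    """Sketch of a permutation invariant set network (Deep Sets style)."""

    def __init__(self, dim_in, dim_hidden, dim_out):
        super().__init__()
        # phi: applied to each item independently (order preserved here)
        self.phi = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.ReLU(),
                                 nn.Linear(dim_hidden, dim_hidden))
        # rho: applied after pooling, to the order-free set representation
        self.rho = nn.Sequential(nn.Linear(dim_hidden, dim_hidden), nn.ReLU(),
                                 nn.Linear(dim_hidden, dim_out))

    def forward(self, x):        # x: (batch, set_size, dim_in)
        h = self.phi(x)          # per-item embeddings: (batch, set_size, dim_hidden)
        pooled = h.sum(dim=1)    # sum over the set axis erases item order
        return self.rho(pooled)  # (batch, dim_out)

# Sanity check: permuting the items of each input set leaves the output unchanged.
net = InvariantSetNet(dim_in=3, dim_hidden=64, dim_out=1)
x = torch.randn(2, 5, 3)
perm = torch.randperm(5)
assert torch.allclose(net(x), net(x[:, perm]), atol=1e-5)
```

An equivariant layer, by contrast, would skip the pooling step and return one output per item, arranged so that permuting the input items permutes the outputs in the same way.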