Partial differential equations such as the heat equation have traditionally been our main tool for studying physical systems. However, physical systems are affected by randomness (noise), so stochastic partial differential equations have gained popularity as an alternative. In this talk, we first consider what “noise” means mathematically and then study stochastic heat equations perturbed by space-time white noise, such as the parabolic Anderson model and stochastic reaction-diffusion equations (e.g., KPP or Allen-Cahn equations). These stochastic heat equations share many properties with deterministic heat equations, but they also exhibit distinct behavior such as intermittency and dissipation, especially as time increases. We investigate how the long-time behavior of stochastic heat equations differs from that of their deterministic counterparts.
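For concreteness, the two classes of equations mentioned above can be written as follows; this is a standard one-dimensional formulation with $\dot{W}$ denoting space-time white noise, and the exact normalization used in the talk may differ.

```latex
% Parabolic Anderson model: multiplicative space-time white noise
\partial_t u(t,x) = \tfrac{1}{2}\,\partial_x^2 u(t,x) + u(t,x)\,\dot{W}(t,x)

% Stochastic reaction-diffusion equation
\partial_t u(t,x) = \tfrac{1}{2}\,\partial_x^2 u(t,x)
  + f\bigl(u(t,x)\bigr) + \sigma\bigl(u(t,x)\bigr)\,\dot{W}(t,x)
% e.g. KPP: f(u) = u(1-u);  Allen--Cahn: f(u) = u - u^3
```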
Recently, deep learning approaches have become the main research frontier for image reconstruction and enhancement problems thanks to their high performance and ultra-fast inference times. However, because matched reference data for supervised learning are difficult to obtain, there has been increasing interest in unsupervised learning approaches that do not need paired reference data. In particular, self-supervised learning and generative models have been successfully used for various inverse problem applications. In this talk, we overview these approaches from a coherent perspective in the context of classical inverse problems and discuss their various applications. Specifically, the cycleGAN approach and the recent Noise2Score approach for unsupervised learning will be explained in detail using optimal transport theory and Tweedie’s formula with score matching.
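The key identity behind Noise2Score is Tweedie’s formula; a sketch of its standard Gaussian form (the talk may use a more general exponential-family version) is:

```latex
% Tweedie's formula: posterior mean of the clean image x given a
% noisy observation y = x + n, with n ~ N(0, sigma^2 I):
\mathbb{E}[x \mid y] = y + \sigma^2 \,\nabla_y \log p(y)
% Noise2Score estimates the score \nabla_y \log p(y) from noisy data
% alone via score matching, then applies this formula to denoise.
```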
Many real-world datasets, e.g., the VGGFace2 dataset, which is a collection of multiple portraits of individuals, come with nested structures due to grouped observations. The Ornstein auto-encoder (OAE) is an emerging framework for representation learning from nested data, based on an optimal transport distance between random processes. An attractive feature of the OAE is its ability to generate new variations nested within an observational unit, whether or not the unit is known to the model. A previously proposed algorithm for the OAE, termed the random-intercept OAE (RIOAE), showed impressive performance in learning nested representations, yet lacks theoretical justification. In this work, we show that RIOAE minimizes a loose upper bound of the employed optimal transport distance. After identifying several issues with RIOAE, we present the product-space OAE (PSOAE), which minimizes a tighter upper bound of the distance and achieves orthogonality in the representation space. PSOAE alleviates the instability of RIOAE and provides a more flexible representation of nested data. We demonstrate the high performance of PSOAE in the three key tasks of generative models: exemplar generation, style transfer, and new concept generation. This is joint work with Dr. Youngwon Choi (UCLA) and Sungdong Lee (SNU).