Department Seminars & Colloquia






Partial differential equations such as the heat equation have traditionally been our main tool for studying physical systems. However, physical systems are affected by randomness (noise), so stochastic partial differential equations have gained popularity as an alternative. In this talk, we first consider what “noise” means mathematically and then consider stochastic heat equations perturbed by space-time white noise, such as the parabolic Anderson model and stochastic reaction-diffusion equations (e.g., KPP or Allen-Cahn equations). These stochastic heat equations share many properties with the deterministic heat equation, but exhibit different behavior, such as intermittency and dissipation, especially as time increases. We investigate how the long-time behavior of stochastic heat equations differs from that of the deterministic heat equation.
Contact: Stochastic Analysis and Applications Research Center (042-350-8111/8117)     To be announced     2021-10-01 11:17:28
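For orientation, a standard form of the equations referred to above (a sketch in common SPDE notation, not necessarily the exact setting of the talk) is

\partial_t u(t,x) = \tfrac{1}{2}\,\partial_x^2 u(t,x) + f\big(u(t,x)\big) + \sigma\big(u(t,x)\big)\,\dot{W}(t,x),

where \dot{W} denotes space-time white noise. Taking f = 0 and \sigma(u) = u gives the parabolic Anderson model, while f(u) = u(1-u) (KPP) or f(u) = u - u^3 (Allen-Cahn) gives a stochastic reaction-diffusion equation. Intermittency roughly means that the moments \mathbb{E}[u(t,x)^k] grow exponentially in t at rates that increase faster than linearly in k, so the solution concentrates on rare, tall peaks.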
Recently, deep learning approaches have become the main research frontier for image reconstruction and enhancement problems thanks to their high performance and ultra-fast inference times. However, because matched reference data for supervised learning are difficult to obtain, there has been increasing interest in unsupervised learning approaches that do not require paired reference data. In particular, self-supervised learning and generative models have been used successfully in various inverse problem applications. In this talk, we give an overview of these approaches from a coherent perspective in the context of classical inverse problems and discuss their various applications. In particular, the cycleGAN approach and the recent Noise2Score approach for unsupervised learning will be explained in detail using optimal transport theory and Tweedie’s formula with score matching.
Contact: Stochastic Analysis and Applications Research Center (042-350-8111/8117)     To be announced     2021-09-23 18:23:27
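As background for the Noise2Score part, the standard statement of Tweedie’s formula for Gaussian noise (a sketch, not necessarily the speaker’s exact formulation) is: for a measurement y = x + n with n \sim \mathcal{N}(0, \sigma^2 I), the posterior-mean denoiser is

\hat{x} = \mathbb{E}[x \mid y] = y + \sigma^2 \,\nabla_y \log p(y),

so estimating the score \nabla_y \log p(y) of the noisy data distribution, e.g., by a network trained with score matching on unpaired noisy images, suffices to denoise without clean reference data.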
Much real-world data, e.g., the VGGFace2 dataset, a collection of multiple portraits of individuals, comes with a nested structure due to grouped observation. The Ornstein auto-encoder (OAE) is an emerging framework for representation learning from nested data, based on an optimal transport distance between random processes. An attractive feature of OAE is its ability to generate new variations nested within an observational unit, whether or not the unit is known to the model. A previously proposed algorithm for OAE, termed the random-intercept OAE (RIOAE), showed impressive performance in learning nested representations, yet lacks theoretical justification. In this work, we show that RIOAE minimizes a loose upper bound of the employed optimal transport distance. After identifying several issues with RIOAE, we present the product-space OAE (PSOAE), which minimizes a tighter upper bound of the distance and achieves orthogonality in the representation space. PSOAE alleviates the instability of RIOAE and provides a more flexible representation of nested data. We demonstrate the high performance of PSOAE in the three key tasks of generative models: exemplar generation, style transfer, and new concept generation. This is joint work with Dr. Youngwon Choi (UCLA) and Sungdong Lee (SNU).
Contact: Stochastic Analysis and Applications Research Center (042-350-8111/8117)     To be announced     2021-09-07 15:10:29
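For context, the general pattern of minimizing an upper bound on an optimal transport distance with an auto-encoder can be illustrated by the Wasserstein auto-encoder objective of Tolstikhin et al. (shown only as an illustration of the idea, not the OAE objective used in the talk):

\min_{Q(Z \mid X)} \; \mathbb{E}_{P_X}\,\mathbb{E}_{Q(Z \mid X)}\big[c\big(X, G(Z)\big)\big] + \lambda\, D\big(Q_Z, P_Z\big),

where c is a reconstruction cost, G is the decoder, Q_Z is the aggregated posterior, P_Z is the prior, and D is a divergence penalty relaxing the constraint Q_Z = P_Z; RIOAE and PSOAE differ in how tight the analogous bound on the process-level transport distance is.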