Department Seminars & Colloquia




[Calendar: one seminar each on 2022-11-11, 2022-11-25, and 2022-12-09]


In this talk we shall first review our recent results on the equivalence of nonlinear Fokker-Planck equations and McKean-Vlasov SDEs. Then we shall recall our results on the existence of weak solutions to both such equations in the singular case, where the measure dependence of the coefficients is of Nemytskii type. The main new results to be presented concern the weak uniqueness of solutions to both nonlinear Fokker-Planck equations and the corresponding McKean-Vlasov SDEs in the case of (possibly) degenerate diffusion coefficients. As a consequence, one obtains that the laws on path space of the solutions to the McKean-Vlasov SDEs form a nonlinear Markov process in the sense of McKean.
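For orientation, the correspondence behind the talk can be sketched in generic notation; the following is the standard pairing, not a statement of the speaker's precise assumptions. The McKean-Vlasov SDE and the nonlinear Fokker-Planck equation satisfied (in the distributional sense) by its time marginals read:

```latex
% McKean-Vlasov SDE: the coefficients depend on the law of the solution.
\[
  dX_t = b(X_t,\mu_t)\,dt + \sigma(X_t,\mu_t)\,dW_t,
  \qquad \mu_t = \operatorname{Law}(X_t).
\]
% Nonlinear Fokker-Planck equation for the time marginals, with a = sigma sigma^T:
\[
  \partial_t \mu_t
    = \tfrac{1}{2}\sum_{i,j}\partial_i\partial_j\bigl(a_{ij}(x,\mu_t)\,\mu_t\bigr)
    - \operatorname{div}\bigl(b(x,\mu_t)\,\mu_t\bigr).
\]
```

Here "Nemytskii-type" measure dependence means the coefficients see $\mu_t$ only pointwise through its density, e.g. $b(x,\mu_t) = \tilde b\bigl(x, \tfrac{d\mu_t}{dx}(x)\bigr)$.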
Host: Stochastic Analysis and Application Research Center     Contact: Stochastic Analysis and Application Research Center (042-350-8111/8117)     Language: English     2022-09-21 16:13:23
Deep generative models (DGMs) lie at the intersection of the probabilistic modeling and machine learning communities. In particular, DGMs have shaped the field through VAEs, GANs, normalizing flows, and, most recently, diffusion models, owing to their capability to learn the data density of a dataset. While there are many model variations among DGMs, there are also common fundamental theories, assumptions, and limitations to study from a theoretical perspective. This seminar presents these general and fundamental challenges in DGMs, and then focuses in detail on the key developments in diffusion models and their mathematical properties.
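As background for the last point, the mathematics of diffusion models is often phrased via the score-based SDE formulation; the sketch below uses generic, standard notation and is not specific to this seminar. A fixed forward SDE gradually noises the data, and generation runs a reverse-time SDE driven by the score of the marginals:

```latex
% Forward (noising) SDE with drift f and diffusion scale g:
\[
  dx = f(x,t)\,dt + g(t)\,dW_t .
\]
% Reverse-time (generative) SDE; p_t is the marginal density at time t:
\[
  dx = \bigl[f(x,t) - g(t)^{2}\,\nabla_x \log p_t(x)\bigr]\,dt + g(t)\,d\bar{W}_t .
\]
% In practice the intractable score \nabla_x \log p_t(x) is approximated
% by a neural network s_\theta(x,t) trained with score matching.
```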
Host: Stochastic Analysis and Application Research Center     Contact: Stochastic Analysis and Application Research Center (042-350-8111/8117)     Language: Korean     2022-09-21 16:12:19
Stochastic finite-sum optimization problems are ubiquitous in areas such as machine learning, and stochastic optimization algorithms for solving these finite-sum problems are actively studied in the literature. However, there is a major gap between practice and theory: practical algorithms shuffle and iterate through the component indices, while most theoretical analyses assume the indices are sampled uniformly at random. In this talk, we discuss recent research efforts to close this theory-practice gap, namely recent developments in the theoretical convergence analysis of shuffling-based optimization methods. We first consider minimization algorithms, focusing mainly on stochastic gradient descent (SGD) with shuffling; we then briefly discuss new progress on minimax optimization methods.
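To make the theory-practice gap concrete, here is a minimal sketch in Python; the toy quadratic objective and all function names are illustrative assumptions, not the speaker's code. The first routine is the with-replacement sampling most analyses assume; the second is the shuffling-based (random reshuffling) scheme used in practice:

```python
import numpy as np

def sgd_uniform(grads, x0, lr, n_steps, rng):
    """SGD as usually analyzed: each step samples a component index
    uniformly at random, with replacement."""
    x = x0.copy()
    n = len(grads)
    for _ in range(n_steps):
        i = rng.integers(n)           # i.i.d. uniform index
        x -= lr * grads[i](x)
    return x

def sgd_reshuffle(grads, x0, lr, n_epochs, rng):
    """SGD as usually implemented (random reshuffling): each epoch
    visits every component exactly once, in a fresh random order."""
    x = x0.copy()
    n = len(grads)
    for _ in range(n_epochs):
        for i in rng.permutation(n):  # new shuffle every epoch
            x -= lr * grads[i](x)
    return x

# Toy finite sum: f(x) = (1/n) * sum_i 0.5*(x - a_i)^2, minimized at mean(a).
rng = np.random.default_rng(0)
a = rng.normal(size=10)
grads = [lambda x, ai=ai: x - ai for ai in a]  # component gradients
x0 = np.zeros(1)

print(sgd_uniform(grads, x0, lr=0.1, n_steps=100, rng=rng))
print(sgd_reshuffle(grads, x0, lr=0.1, n_epochs=10, rng=rng))
```

The two routines differ only in how the indices are drawn, which is precisely the gap between the analyzed and the implemented algorithm that the talk addresses.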
Host: Stochastic Analysis and Application Research Center     Contact: Stochastic Analysis and Application Research Center (042-350-8111/8117)     Language: Korean (English if requested)     2022-09-21 16:10:55