# Department Seminars and Colloquia

Subscribe via Google Calendar, iPhone, or a similar calendar app to receive a notification before each seminar begins.

In mathematics, every mathematical object is generated along with a set of processes that set up boundaries and relationships, as recently emphasized in Prof. June Huh's public lecture (July 13, 2022) commemorating his Fields Medal. We now live in the era of the fourth industrial revolution, in which "the era of expanding technological super-gap on a global scale" is expected to arrive. Even more than in the era of Gauss (German: Gauß; 30 April 1777 – 23 February 1855), who declared that "Mathematics is the queen of sciences, often condescending to render service to other sciences, but in all relations she is entitled to the first rank," the role of mathematics is growing ever more important in the era of the digital revolution. The importance of raising awareness of this cannot be overemphasized.
Accordingly, this talk introduces three concrete examples showing how mathematics can practically contribute to the improvement of human digital civilization, viewed through these processes of setting up boundaries and relationships: 1) mathematics and "the smallest object" in physics, 2) first principles (ab initio) in physics and mathematics, and 3) building up and utilizing our own first principles that allow us to flexibly cross boundaries between academic fields, which often makes various important problems much easier to handle. As practical examples, some of our recent works are briefly introduced as well, including a mathematical conceptualization of the metaverse and the construction of a "physical system for linguistic data" with its ab initio-based utilization; one might say that a sort of "academic continuation" (analogous to analytic continuation) is applied in each case. The takeaway is that we should boldly seek out useful mathematical connections that cross boundaries, further enriching the digital revolution: various academic and theoretical fields considered different from each other in fact share many common or similar mathematical structures.

Unlike Green's functions for elliptic equations in divergence form, Green's functions for elliptic operators in non-divergence form do not possess nice pointwise bounds, even when the coefficients are uniformly continuous.
In this talk, I will describe how to construct Green's functions and obtain pointwise estimates for elliptic PDEs in non-divergence form with coefficients satisfying the so-called Dini mean oscillation condition.
I will also mention the parallel result for parabolic equations in non-divergence form.

We study the problem of maximizing a continuous DR-submodular function that is not necessarily smooth. We prove that the continuous greedy algorithm achieves a [(1-1/e)OPT-ε] guarantee when the function is monotone and Hölder-smooth, meaning that it admits a Hölder-continuous gradient. For functions that are non-differentiable or non-smooth, we propose a variant of the mirror-prox algorithm that attains a [(1/2)OPT-ε] guarantee. We apply our algorithmic frameworks to robust submodular maximization and distributionally robust submodular maximization under Wasserstein ambiguity. In particular, the mirror-prox method applies to robust submodular maximization to obtain a single feasible solution whose value is at least [(1/2)OPT-ε]. For distributionally robust maximization under Wasserstein ambiguity, we deduce and work over a submodular-convex maximin reformulation whose objective function is Hölder-smooth, for which we may apply both the continuous greedy method and the mirror-prox method. This is joint work with Duksang Lee, a fifth-year Ph.D. student at KAIST Math, and Nam Ho-Nguyen from the University of Sydney.
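The continuous greedy algorithm mentioned above can be sketched as a Frank-Wolfe-type loop: repeatedly take a small step toward the polytope vertex that best aligns with the current gradient. The minimal sketch below uses a toy monotone DR-submodular objective f(x) = Σ log(1 + x_i) and a simple cardinality polytope; both are illustrative assumptions, not the setting of the talk.

```python
def grad(x):
    # Toy objective f(x) = sum_i log(1 + x_i): monotone and DR-submodular,
    # since each partial derivative 1/(1 + x_i) is nonincreasing in x.
    return [1.0 / (1.0 + xi) for xi in x]

def lmo(g, k):
    # Linear maximization oracle over P = {x in [0,1]^n : sum_i x_i <= k}:
    # the maximizer puts 1 on the k coordinates with the largest gradient.
    top = sorted(range(len(g)), key=lambda i: -g[i])[:k]
    v = [0.0] * len(g)
    for i in top:
        v[i] = 1.0
    return v

def continuous_greedy(n, k, steps=100):
    # Start at 0 and move toward the LMO vertex with step size 1/steps;
    # for monotone (Hölder-)smooth f this yields a (1 - 1/e)OPT - eps
    # guarantee, per the talk's setting.
    x = [0.0] * n
    for _ in range(steps):
        v = lmo(grad(x), k)
        x = [xi + vi / steps for xi, vi in zip(x, v)]
    return x
```

The output is a fractional point in the polytope; in discrete applications it would then be rounded to an integral solution.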

Order types are a combinatorial classification of finite point sets used in discrete and computational geometry. This talk will give an introduction to these objects and their analogue for the projective plane, with an emphasis on their symmetry groups.
This is joint work with Emo Welzl:
https://arxiv.org/abs/2003.08456
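Concretely, the order type of a planar point set records, for every ordered triple of points, whether the triple turns counterclockwise, turns clockwise, or is collinear. A minimal sketch of this encoding (an illustration of the definition, not code from the paper):

```python
from itertools import permutations

def orientation(p, q, r):
    # Sign of the determinant det(q - p, r - p):
    # +1 = counterclockwise, -1 = clockwise, 0 = collinear.
    d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (d > 0) - (d < 0)

def order_type(points):
    # The order type is the map sending each ordered triple of indices to
    # its orientation; point sets with the same map (up to relabeling) are
    # combinatorially equivalent.
    return {t: orientation(points[t[0]], points[t[1]], points[t[2]])
            for t in permutations(range(len(points)), 3)}
```

Symmetries of a point set then correspond to relabelings that leave this map unchanged.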

This lecture explores a list of topics and areas that have guided my research in computational mathematics and deep learning in recent years. Numerical approaches in computational science are crucial for understanding real-world phenomena, and deep neural networks have achieved state-of-the-art performance in a variety of fields. Deep learning and scientific computing have grown rapidly and found application across a multitude of disciplines. In this lecture, I will focus on recent advancements in scientific computing and deep learning such as adversarial examples, nanophotonics, and numerical PDEs.

This series of talks is intended to be a gentle introduction to random walk theory on infinite groups and hyperbolic spaces. We will touch upon keywords including hyperbolicity, stationary measures, boundaries, and limit laws. Those who are interested in geometric group theory or random walks are welcome to join.
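As a concrete illustration (a toy simulation of my own, not material from the course), one can watch the linear drift of word length under the simple random walk on the free group F_2, a first instance of the limit laws mentioned above:

```python
import random

def free_group_walk(steps, seed=0):
    # Simple random walk on F_2 = <a, b>: at each step multiply on the
    # right by a uniformly random generator a, b, a^-1, b^-1 (capital
    # letters denote inverses) and freely reduce the word.
    rng = random.Random(seed)
    gens = ["a", "b", "A", "B"]
    inv = {"a": "A", "A": "a", "b": "B", "B": "b"}
    word = []
    for _ in range(steps):
        g = rng.choice(gens)
        if word and word[-1] == inv[g]:
            word.pop()      # free reduction: g cancels the last letter
        else:
            word.append(g)
    return word

# The reduced word length grows linearly, with drift about steps/2:
# each step extends the word with probability 3/4 and cancels with 1/4.
```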

This is a casual seminar among TARGET students, but other graduate students are also welcome.


We consider a deep generative model for nonparametric distribution estimation problems. The true data-generating distribution is assumed to possess a certain low-dimensional structure. Under this assumption, we study convergence rates of estimators obtained by likelihood approaches and by generative adversarial networks (GANs). The convergence rate depends only on the noise level, the intrinsic dimension, and the smoothness of the underlying structure. The true distribution may or may not possess a Lebesgue density, depending on the underlying structure. In the singular case (no Lebesgue density), the convergence rate of the GAN is strictly better than that of the likelihood approaches. Our lower bound on the minimax optimal rate shows that the convergence rate of the GAN is close to optimal. If the true distribution admits a smooth Lebesgue density, an estimator obtained by a likelihood approach achieves the minimax optimal rate.

A family of surfaces is called a mean curvature flow (MCF) if the velocity of the surface at each point equals its mean curvature there. Even when starting from a smooth surface, the MCF typically encounters singularities, and various generalized notions of MCF have been proposed to extend existence past them: level set flow, Brakke flow, and BV flow, to name a few. In my talk I will explain a recent global-in-time existence result for a particular generalized solution that has some desirable properties, and describe a basic outline of how the solution is constructed.
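For concreteness, the defining condition can be written as follows (a standard formulation; the notation here is my choice, not the speaker's):

```latex
% A family of surfaces (\Sigma_t)_{t \ge 0} is a mean curvature flow if
% the velocity of each point equals the mean curvature vector there:
\frac{\partial x}{\partial t} = \vec{H}_{\Sigma_t}(x),
\qquad x \in \Sigma_t .
```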

Over recent years, data science and machine learning have been the center of attention in both the scientific community and the general public. Closely tied to the ‘AI-hype’, these fields are enjoying expanding scientific influence as well as a booming job market. In this talk, I will first discuss why mathematical knowledge is important for becoming a good machine learner and/or data scientist, by covering various topics in modern deep learning research. I will then introduce my recent efforts in utilizing various deep learning methods for statistical analysis of mathematical simulations and observational data, including surrogate modeling, parameter estimation, and long-term trend reconstruction. Various scientific application examples will be discussed, including ocean diffusivity estimation, WRF-hydro calibration, AMOC reconstruction, and SIR calibration.

Polarization is a technique in algebra that provides combinatorial tools to study algebraic invariants of monomial ideals. A depolarization of a square-free monomial ideal is a monomial ideal whose polarization is the original ideal. In this talk, we briefly introduce depolarization and related problems, and present a new method using hypergraph coloring.
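As a quick illustration of the operation itself (a toy encoding I chose, not code from the talk), polarization replaces each power x^a by a product of a distinct new variables, yielding a square-free monomial:

```python
def polarize(monomial):
    # A monomial is encoded as {variable: exponent}; the power x^a
    # becomes the square-free product x_1 * ... * x_a of fresh
    # variables, returned here as a set of variable names.
    return {f"{v}_{j}" for v, a in monomial.items() for j in range(1, a + 1)}

# Example: x^2 * y  ->  x_1 * x_2 * y_1
```

Applying this to every generator of a monomial ideal produces its polarization, a square-free monomial ideal sharing many invariants with the original.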

We define the notions of infinity-category and Kan complex using observations from the previous talk. The nerve construction, a process producing infinity-categories from ordinary categories, will be introduced, and we will set up a dictionary between the two settings. Infinity-categories of functors will be introduced as well.