Department Seminars & Colloquia
Modern machine learning (ML) has achieved unprecedented empirical success in many application areas. However, much of this success involves trial and error and numerous tricks, which results in a lack of robustness and reliability in ML. Foundational research is needed for the development of robust and reliable ML. This talk consists of two parts. The first part will present the first mathematical theory of physics-informed neural networks (PINNs), one of the most popular deep learning frameworks for solving PDEs. Linear second-order elliptic and parabolic PDEs are considered. I will show the consistency of PINNs by adapting the Schauder approach and the maximum principle.
The second part will focus on some recent mathematical understanding and development of neural network training.
Specifically, two ML phenomena are analyzed: the "Plateau Phenomenon" and "Dying ReLU." New algorithms are developed, based on the insights gained from the mathematical analysis, to improve neural network training.
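For readers unfamiliar with PINNs, the sketch below illustrates the standard physics-informed loss on a toy 1D Poisson problem in PyTorch. It is only a generic illustration under assumed choices (network size, collocation points, optimizer), not the elliptic/parabolic setting analyzed in the talk.

```python
# Minimal PINN sketch for -u''(x) = pi^2 sin(pi x) on (0,1) with u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x). Generic illustration only.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x):
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    f = (math.pi ** 2) * torch.sin(math.pi * x)   # forcing term
    return d2u + f                                # residual of -u'' = f

x_int = torch.rand(256, 1)                        # interior collocation points
x_bdy = torch.tensor([[0.0], [1.0]])              # boundary points, where u should vanish

for step in range(5000):
    opt.zero_grad()
    loss = pde_residual(x_int).pow(2).mean() + net(x_bdy).pow(2).mean()
    loss.backward()
    opt.step()
```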
Zoom ID: 832 222 6176 (password: saarc)
The purpose of this reading seminar is to study the following:
(1) Bourgain's invariant measure argument in stochastic PDE,
(2) Uniqueness of the invariant measure (Gibbs measure) and its ergodicity,
(3) Exponential convergence to the Gibbs equilibrium.
This seminar is mainly based on [1, 2, 3].
Thursday, February 18, 2021 - 10:00 to 12:00
Exponential convergence to the Gibbs equilibrium, and the Poincaré inequality for Gaussian measures (Gibbs measures).
The purpose of this reading seminar is to study the following:
(1) Bourgain's invariant measure argument in stochastic PDE,
(2) Uniqueness of the invariant measure (Gibbs measure) and its ergodicity,
(3) Exponential convergence to the Gibbs equilibrium.
This seminar is mainly based on [1, 2, 3].
Tuesday, February 16, 2021 - 10:00 to 12:00
Main structure theorems of the set of invariant measures, the uniqueness of the invariant measure, and its ergodicity.
Recent deep learning research increasingly applies mathematical methodology to designing efficient algorithms, improving performance, and analyzing how algorithms work, yet the area is still somewhat unfamiliar to many mathematicians.
This seminar, aimed at students and researchers in mathematics, gives a general introduction to the topics, objects of study, and research methods of interest in deep learning research and surveys recent research trends, with the goal of understanding deep learning research and reflecting on how mathematics can contribute to it. In particular, it introduces algorithms and methodologies for processing image data, research on designing faster and more accurate image recognition algorithms, and the research topics currently attracting attention in the field.
Natural Sciences Building (E6-1), Room 4415
Algebraic Geometry
Jinhyung Park (Sogang University)
Positivity properties of double point divisors
Natural Sciences Building (E6-1), Room 4415
Algebraic Geometry
The double point divisor of an embedded smooth projective variety is an effective divisor that is (the divisorial component of) the non-isomorphic locus of a general projection to a hypersurface. Positivity properties of double point divisors were studied by Mumford, Ilic, Noma, and others in a variety of flavors. In this talk, we study the very ampleness of the double point divisor from outer projection and the bigness of the double point divisor from inner projection.
Introduction: In this lecture series, we'll discuss the algebro-geometric study of fundamental problems concerning tensors via higher secant varieties. We start by recalling the definition of tensors, their basic properties, and small examples, and proceed to a discussion of tensor rank, decomposition, and X-rank for any nondegenerate variety $X$ in a projective space. Higher secant varieties of Segre (resp. Veronese) embeddings will be regarded as a natural parameter space of general (resp. symmetric) tensors in the lectures. We also review known results on dimensions of secants of Segre and Veronese varieties, and consider various techniques for providing equations on the secants. In the end, we'll finish the lectures by introducing some open problems related to the theme, such as syzygy structures and singularities of higher secant varieties.
The purpose of this reading seminar is to study the following:
(1) Bourgain's invariant measure argument in stochastic PDE,
(2) Uniqueness of the invariant measure (Gibbs measure) and its ergodicity,
(3) Exponential convergence to the Gibbs equilibrium.
This seminar is mainly based on [1, 2, 3].
Tuesday, February 9, 2021 - 14:00 to 16:00
Bourgain's invariant measure argument in stochastic PDE and almost sure global existence for the stochastic Gross-Pitaevskii equation.
A famous theorem of Green and Tao says that there are arbitrarily long arithmetic progressions consisting of prime numbers. In their 2008 paper, they predicted that similar statements should hold for prime elements of other number fields, and the case of the Gaussian integers $Z[i]$ was subsequently settled by Tao.
In the first of my two talks, I would like to share my (limited) knowledge about the background and history underlying their work.
Zoom ID: 352 730 6970, PW: 9999. All times are in KST = UTC+9 (identical to Japan Standard Time).
In the latter one hour, I will explain our generalization of their work to general number fields, based on joint work with my Tohoku colleagues Masato Mimura, Akihiro Munemasa, Shin-ichiro Seki and Kiyoto Yoshino (arxiv:2012.15669).
Time permitting, I will also touch upon its positive characteristic analog (arxiv:2101.00839). The case of the polynomial rings had also been conjectured by Green-Tao in the same paper and settled by Lê in 2011.
Zoom ID: 352 730 6970, PW: 9999. All times are in KST = UTC+9 (identical to Japan Standard Time).
The purpose of this reading seminar is to study the following:
(1) Bourgain's invariant measure argument in stochastic PDE,
(2) Uniqueness of the invariant measure (Gibbs measure) and its ergodicity,
(3) Exponential convergence to the Gibbs equilibrium.
This seminar is mainly based on [1, 2, 3].
Monday, February 8, 2021 - 14:00 to 16:00
Parabolic stochastic quantization, canonical stochastic quantization, Gaussian fields, and Gibbs measures.
Zoom ID: 958 459 198, PW: 098359
For a scheme with a group scheme action and an equivariant perfect obstruction theory we shall outline a proof of a virtual equivariant Riemann-Roch theorem. This is an extension of a result of Fantechi-Göttsche to the equivariant context. We shall further discuss a non-abelian virtual localization theorem, relating the virtual classes of the stack and virtual classes of the associated inertia stack for a naturally induced perfect obstruction theory. This talk is based on joint work with Charanya Ravi.
The zoom ID is 352 730 6970 and the password is 9999. All times in KST=UTC+9.
Geometric and topological structures can aid statistics in several ways. In high dimensional statistics, geometric structures can be used to reduce dimensionality. High dimensional data entails the curse of dimensionality, which can be avoided if there are low dimensional geometric structures. On the other hand, geometric and topological structures also provide useful information. Structures may carry scientific meaning about the data and can be used as features to enhance supervised or unsupervised learning.
In this talk, I will explore how statistical inference can be done on geometric and topological structures. First, given a manifold assumption, I will explore the minimax rate for estimating the dimension of the manifold. Second, also under the manifold assumption, I will explore the minimax rate for estimating the reach, a regularity quantity measuring how smooth a manifold is and how far it is from self-intersecting. Third, I will investigate inference on the persistent homology of a density function, where persistent homology quantifies salient topological features that appear at different resolutions of the data. Fourth, I will explore how persistent homology can be further applied in machine learning.
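As a concrete illustration of the last point, the snippet below computes a Vietoris-Rips persistence diagram for a noisy circle using the third-party ripser package (an assumed dependency for this sketch); it only shows what persistent homology of a point cloud looks like, not the inference procedures of the talk.

```python
# Persistent homology sketch: the dominant 1-dimensional bar recovers the circle's loop.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))

dgms = ripser(X, maxdim=1)['dgms']                  # persistence diagrams in degrees 0 and 1
births, deaths = dgms[1][:, 0], dgms[1][:, 1]
print("most persistent 1-cycle lifetime:", (deaths - births).max())
```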
We will discuss recent work in which we study the properties of the spectral sequence induced by the birational tower introduced in the first talk. In particular, we will show that this spectral sequence is strongly convergent.
Zoom ID: 352 730 6970, PW: 9999. All times are in Korean Standard Time (KST = UTC+9).
We consider the incompressible fluid equations, including the Euler and SQG equations, in critical Sobolev spaces, which are Sobolev spaces with the same scaling as the Lipschitz norm of the velocity. We show that the initial value problem for these equations is ill-posed at critical regularity. Such an ill-posedness result can be used to prove enhanced dissipation for the dissipative counterpart.
This is based on joint works with Tarek Elgindi, Tsuyoshi Yoneda, and Junha Kim.
We model, simulate, and control the guiding problem for a herd of evaders under the action of repulsive drivers. This is part of the herding problem, which considers the relationship between shepherd dogs and sheep. The problem is formulated in an optimal control framework, where the drivers (controls) aim to guide the evaders (states) to a desired region. Numerical simulations of such models quickly become unfeasible for a large number of interacting agents, as the number of interactions grows as $O(N^2)$ for $N$ agents. To reduce the computational cost to $O(N)$, we use the Random Batch Method (RBM), which provides a computationally feasible approximation of the dynamics. In this approximated dynamics, the corresponding optimal control can be computed efficiently using classical gradient descent. The resulting control is not optimal for the original system but for the reduced RBM model. We therefore adopt a Model Predictive Control (MPC) strategy to handle the error in the dynamics. This leads to a semi-feedback control strategy, where the control is applied to the original system only for a short time interval, and the optimal control for the next time interval is then computed from the state of the (controlled) original dynamics.
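To make the cost reduction concrete, here is a minimal sketch of one random batch time step for a generic repulsive interacting particle system; the kernel, batch size, and parameters are illustrative assumptions, not the herding model of the talk. Interactions are evaluated only within randomly drawn batches, so each step costs O(N) rather than O(N^2).

```python
# Random Batch Method (RBM) sketch: shuffle agents into small batches each time step and
# evaluate pairwise interactions only inside each batch (illustrative repulsive kernel).
import numpy as np

def rbm_step(x, dt, rng, batch_size=2):
    n = len(x)
    idx = rng.permutation(n)
    for start in range(0, n, batch_size):
        batch = idx[start:start + batch_size]
        diff = x[batch, None, :] - x[None, batch, :]      # pairwise differences within the batch
        dist2 = (diff ** 2).sum(axis=-1, keepdims=True) + 1e-6
        force = (diff / dist2).sum(axis=1)                # repulsion ~ (x_i - x_j) / |x_i - x_j|^2
        x[batch] += dt * force / max(len(batch) - 1, 1)   # batch-averaged interaction
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 2))                         # 100 agents in the plane
for _ in range(50):
    x = rbm_step(x, dt=0.01, rng=rng)
```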
In this talk, we introduce continuous-time deterministic optimal control problems with entropy regularization. Applying the dynamic programming principle, we derive a novel class of Hamilton-Jacobi-Bellman (HJB) equations and prove that the optimal value function of the maximum entropy control problem corresponds to the unique viscosity solution of the HJB equation. Our maximum entropy formulation is shown to enhance the regularity of the viscosity solution and to be asymptotically consistent as the effect of entropy regularization diminishes. A salient feature of the HJB equations is computational tractability. Generalized Hopf-Lax formulas can be used to solve the HJB equations in a tractable grid-free manner without the need for numerically optimizing the Hamiltonian. We further show that the optimal control is uniquely characterized as Gaussian in the case of control affine systems and that, for linear-quadratic problems, the HJB equation is reduced to a Riccati equation, which can be used to obtain an explicit expression of the optimal control. Lastly, we discuss how to extend our results to continuous-time model-free reinforcement learning by taking an adaptive dynamic programming approach.
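For orientation, the snippet below solves the classical (unregularized) LQR problem via the continuous-time algebraic Riccati equation with SciPy. The abstract's entropy-regularized LQ problem reduces to a Riccati equation in a similar spirit, with a Gaussian optimal control, but its exact form is not reproduced here; the system matrices are illustrative assumptions.

```python
# Classical LQR via the continuous-time algebraic Riccati equation (CARE).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (illustrative system)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                            # state cost
R = np.array([[1.0]])                    # control cost

P = solve_continuous_are(A, B, Q, R)     # stabilizing solution of A'P + PA - PBR^{-1}B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)          # optimal state feedback: u(x) = -K x
print("LQR gain:", K)
```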
Distributed convex optimization has received a lot of interest from many researchers since it is widely used in various applications, including wireless sensor networks and machine learning. Recently, A. S. Berahas et al. (2018) introduced a variant of distributed gradient descent called Near DGD+, which combines nested communications and gradient descent steps. They proved that this scheme finds the optimum using a constant step size when the target function is strongly convex and smooth. In the first part, we show that the scheme attains an O(1/t) convergence rate for convex and smooth functions. In addition, we obtain a convergence result of the scheme for quasi-strongly convex functions. In the second part, we use the idea of Near DGD+ to design a variant of the push-sum gradient method on directed graphs. This talk is based on joint work with Doheon Kim and Seok-bae Yun.
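The sketch below shows a generic consensus-plus-gradient iteration on a ring network, with several communication (averaging) rounds per gradient step in the spirit of nested communications. The graph, objective, and step sizes are illustrative assumptions; the exact Near DGD+ update and step-size rules from the talk are not reproduced.

```python
# Consensus-based distributed gradient descent sketch on a ring of agents.
import numpy as np

n, d = 5, 3
rng = np.random.default_rng(0)
targets = rng.standard_normal((n, d))        # agent i minimizes f_i(x) = 0.5 * ||x - targets[i]||^2

# doubly stochastic mixing matrix for a ring graph
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))                          # local iterates, one row per agent
alpha, rounds = 0.1, 3                        # step size and communication rounds per iteration
for k in range(200):
    grads = x - targets                       # local gradients
    x = np.linalg.matrix_power(W, rounds) @ x - alpha * grads

print("consensus error:", np.abs(x - x.mean(axis=0)).max())
print("distance to optimum:", np.abs(x.mean(axis=0) - targets.mean(axis=0)).max())
```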
Deep neural networks have achieved state-of-the-art performance in a variety of fields. The rapid growth of machine learning models and the remarkable success of deep learning have led to applications across a multitude of disciplines. Recent works observe that a class of widely used neural networks can be viewed as the Euler method of numerical discretization. From the numerical discretization perspective, Total Variation Diminishing (TVD) Runge-Kutta methods are more advanced techniques than the explicit Euler method and produce both accurate and stable solutions. Motivated by the TVD property and a generalized Runge-Kutta method, we propose new networks that improve robustness against adversarial attacks. If time permits, we explore a deep learning methodology that can be applied to the data-driven discovery of numerical PDEs.
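As a toy illustration of this numerical-discretization viewpoint, the PyTorch sketch below contrasts a standard residual block (one forward-Euler step) with a block built from the two-stage Shu-Osher SSP/TVD Runge-Kutta update. This is a generic sketch of the idea, not the networks proposed in the talk.

```python
# Residual block as forward Euler vs. a TVD/SSP second-order Runge-Kutta step.
import torch.nn as nn

class EulerBlock(nn.Module):            # standard ResNet block: x + f(x)
    def __init__(self, width):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(width, width), nn.ReLU(), nn.Linear(width, width))
    def forward(self, x):
        return x + self.f(x)

class TVDRK2Block(nn.Module):           # Shu-Osher SSP RK2: convex combination of Euler stages
    def __init__(self, width):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(width, width), nn.ReLU(), nn.Linear(width, width))
    def forward(self, x):
        x1 = x + self.f(x)              # first Euler stage
        return 0.5 * x + 0.5 * (x1 + self.f(x1))
```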
Since the groundbreaking work of Cazenave-Lions in 1982, showing uniqueness (up to symmetries) of variationally constructed solutions to Hamiltonian PDEs has played an indispensable role in verifying their orbital stability. In this talk, we discuss how to obtain the uniqueness of a family of binary star solutions to the Euler-Poisson equations, variationally constructed by McCann in 2006. The main methodology is based on perturbation arguments relying crucially on the exact asymptotic behaviors of solutions.
In the context of Voevodsky’s triangulated category of motives, we will describe a tower of triangulated functors which induce a finite filtration on the Chow groups. For smooth projective varieties, this finite filtration is a good candidate for the (still conjectural) Bloch-Beilinson filtration.
Zoom ID: 352 730 6970, PW: 9999. All times are in Korean Standard Time (KST = UTC+9).
We consider the Cauchy problem of the self-dual Chern-Simons-Schrödinger equation (CSS) under equivariance symmetry. It is $L^2$-critical, has the pseudoconformal symmetry, and admits a soliton $Q$ for each equivariance index $m \geq 0$. An application of the pseudoconformal transform to $Q$ yields an explicit finite-time blow-up solution $S(t)$ which contracts at the pseudoconformal rate $|t|$. In the high equivariance case $m \geq 1$, the pseudoconformal blow-up for smooth finite energy solutions in fact occurs in a codimension one sense; it is stable under a codimension one perturbation, but also exhibits an instability mechanism. In the radial case $m=0$, however, $S(t)$ is no longer a finite energy blow-up solution. Interestingly enough, there are smooth finite energy blow-up solutions, but their blow-up rates differ from the pseudoconformal rate by a power of logarithm. We will explore these interesting blow-up dynamics (with more focus on the latter) via modulation analysis. This talk is based on my joint works with Soonsik Kwon and Sung-Jin Oh.