Department Seminars & Colloquia





When you're logged in, you can subscribe to seminars via e-mail

Many types of diffusion equations have been used to describe diverse natural phenomena. The classical heat equation describes heat propagation in homogeneous media, while the heat equation with a fractional time derivative describes anomalous diffusion, especially sub-diffusion, caused by particle sticking and trapping effects. On the other hand, space-fractional diffusion equations are related to the diffusion of particles with long-range jumps. In this talk, I will introduce the following: (1) elementary notions of stochastic parabolic equations; (2) stochastic processes with jumps and their related PDEs and stochastic PDEs; (3) some regularity results for PDEs and stochastic PDEs with non-local operators.
Contact: Stochastic Analysis and Application Research Center (042-350-8111/8117)     To be announced     2021-03-22 10:18:13
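For reference, the three regimes mentioned in this abstract can be written schematically (with $\partial_t^\alpha$ a Caputo time derivative of order $\alpha \in (0,1)$ and $(-\Delta)^s$, $s \in (0,1)$, the fractional Laplacian generating long-range jumps):

$$\partial_t u = \Delta u \quad \text{(classical diffusion)}, \qquad \partial_t^{\alpha} u = \Delta u \quad \text{(sub-diffusion)}, \qquad \partial_t u = -(-\Delta)^{s} u \quad \text{(long-range jumps)}.$$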
Introduction: In this lecture series, we'll discuss the algebro-geometric study of fundamental problems concerning tensors via higher secant varieties. We start by recalling the definition of tensors, basic properties, and small examples, and proceed to a discussion of tensor rank, decomposition, and X-rank for any nondegenerate variety $X$ in a projective space. Higher secant varieties of Segre (resp. Veronese) embeddings will be regarded as a natural parameter space of general (resp. symmetric) tensors in the lectures. We also review known results on the dimensions of secants of Segre and Veronese varieties, and consider various techniques for finding equations of the secants. In the end, we'll finish the lectures by introducing some open problems related to the theme, such as syzygy structures and singularities of higher secant varieties.
Host: 곽시종     Contact: 김윤옥 (5745)     To be announced     2021-03-17 14:17:41
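For orientation, the central object of the series can be written in one line: for a nondegenerate variety $X \subseteq \mathbb{P}^N$, the $r$-th secant variety is

$$\sigma_r(X) \;=\; \overline{\bigcup_{p_1,\dots,p_r \in X} \langle p_1,\dots,p_r \rangle} \;\subseteq\; \mathbb{P}^N,$$

and the $X$-rank of a point $q$ is the least $r$ such that $q$ lies in the span of $r$ points of $X$; for Segre (resp. Veronese) embeddings this recovers the rank (resp. symmetric rank) of a tensor.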
Abstract: Millions of individuals track their steps, heart rate, and other physiological signals through wearables. This scale of data is unprecedented; I will describe several of our apps and ongoing studies, each of which collects wearable and mobile data from thousands of users, in some cases in more than 100 countries. This data is so noisy that it often seems unusable, and it is in desperate need of new mathematical techniques to extract the key signals used in the ODE mathematical modeling typically done in mathematical biology. I will describe several techniques we have developed to analyze this data and simulate models, including gap orthogonalized least squares; a new ansatz for coupled oscillators, similar to the popular ansatz by Ott and Antonsen but giving better fits to biological data; and a new level-set Kalman filter that can be used to simulate population densities. My focus applications will be determining the phase of circadian rhythms, the scoring of sleep, and the detection of COVID with wearables.
Host: 김재경     English     2021-03-16 16:40:02
Deep neural networks have shown amazing success in various domains of artificial intelligence (e.g. vision, speech, language, medicine, and game playing). However, classical tools for analyzing these models and their learning algorithms are not sufficient to explain this success. Recently, the infinite-width limit of neural networks has become one of the key breakthroughs in our understanding of deep learning. This limit is unique in giving an exact theoretical description of large-scale neural networks. Because of this, we believe it will continue to play a transformative role in deep learning theory. In this talk, we will first review some of the interesting theoretical questions in the deep learning community. Then we will review recent progress in the study of the infinite-width limit of neural networks, centered around the Neural Network Gaussian Process (NNGP) and the Neural Tangent Kernel (NTK). This correspondence allows us to understand wide neural networks as kernel-based machine learning models and provides (1) exact Bayesian inference without ever initializing or training a network and (2) a closed-form solution of the network function under gradient descent training. We will discuss recent advances, applications, and remaining challenges of the infinite-width limit of neural networks.
To be announced     2021-03-17 11:15:26
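To make the NNGP side concrete, here is a small hedged sketch (a standard construction, not code from the talk): the infinite-width kernel of a fully connected ReLU network via the layer-wise arc-cosine recursion, followed by exact GP regression with it. The parameters `depth`, `sw2` (weight variance), and `sb2` (bias variance) are illustrative.

```python
import numpy as np

def nngp_relu(X, Y, depth=3, sw2=2.0, sb2=0.1):
    """NNGP kernel of an infinitely wide fully connected ReLU network.

    The layer-0 covariance is a scaled inner product of the inputs; each
    hidden layer applies the closed-form Gaussian expectation (arc-cosine
    kernel) of the ReLU to the previous layer's covariance.
    """
    Z = np.vstack([X, Y])                    # joint Gram matrix for X and Y
    K = sw2 * (Z @ Z.T) / Z.shape[1] + sb2   # layer-0 covariance
    for _ in range(depth):
        d = np.sqrt(np.diag(K))
        c = np.clip(K / np.outer(d, d), -1.0, 1.0)   # correlations
        th = np.arccos(c)
        K = sw2 * np.outer(d, d) * (np.sin(th) + (np.pi - th) * c) / (2 * np.pi) + sb2
    n = len(X)
    return K[:n, n:]   # cross-covariance block, usable for exact GP inference

# Exact Bayesian "training": GP regression with the NNGP kernel.
Xtr = np.random.default_rng(0).normal(size=(20, 5))
ytr = np.sin(Xtr.sum(1))
Xte = np.random.default_rng(1).normal(size=(5, 5))
Ktt = nngp_relu(Xtr, Xtr) + 1e-6 * np.eye(20)        # jitter for stability
mean = nngp_relu(Xte, Xtr) @ np.linalg.solve(Ktt, ytr)   # posterior mean
```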
Infinity-category theory is a generalization of ordinary category theory, where we extend the categorical perspective into a homotopical one. Put differently, we study objects of interest and "mapping spaces" between them. This theory goes back to Boardman and Vogt, and more recently, Joyal, Lurie, and many others laid its foundation. Despite its relatively short history, it has found applications in many fields of mathematics, for example number theory, mathematical physics, algebraic K-theory, and derived/spectral algebraic geometry: more concretely, p-adic Hodge theory, Geometric Langlands, the cobordism hypothesis, topological modular forms, deformation quantization, and topological quantum field theory, just to name a few. The purpose of this series of talks on infinity-categories is to make the subject accessible to researchers who are interested in the topic. We'll start from scratch and try to avoid (sometimes inevitable) technical details in developing the theory. That said, a bit of familiarity with ordinary category theory is more or less necessary. Overall, this series has an eye toward derived/spectral algebraic geometry, but a lack of experience in algebraic geometry will hardly matter. Therefore, everyone is welcome to join us. This is the first talk in the series. We'll catch a glimpse of infinity-category theory through some motivational examples.
Zoom ID: 352 730 6970, password: 9999
Host: 박진현     Contact: 박진현 (2734)     Korean (English if requested)     2021-02-23 15:48:18
This is a gentle introduction to the mean curvature flow and its application to knot theory, aimed at undergraduate students. J. W. Alexander discovered a knotted sphere embedded in 3-dimensional Euclidean space in 1924. This example has provoked curiosity about finding simple conditions under which embedded spheres are unknotted. In this talk we will sketch theorems and conjectures on the mean curvature flow for knot theory, in analogy with the Ricci flow for the smooth 4-dimensional Poincare conjecture.
Host: 백형렬     Contact: 김규식 (042-350-2702)     Korean (English if requested)     2021-03-03 19:02:33
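For reference, the mean curvature flow moves a family of embeddings $F_t$ with normal velocity equal to the mean curvature,

$$\partial_t F = -H\,\nu,$$

where $H$ is the mean curvature and $\nu$ the outward unit normal; round spheres, for instance, shrink self-similarly to a point in finite time.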
We will discuss "Synthetic multistability in mammalian cells", Zhu et al., bioRxiv (2021). In multicellular organisms, gene regulatory circuits generate thousands of molecularly distinct, mitotically heritable states through the property of multistability. Designing synthetic multistable circuits would provide insight into natural cell fate control circuit architectures and allow engineering of multicellular programs that require interactions among cells in distinct states. Here we introduce MultiFate, a naturally inspired, synthetic circuit that supports long-term, controllable, and expandable multistability in mammalian cells. MultiFate uses engineered zinc finger transcription factors that transcriptionally self-activate as homodimers and mutually inhibit one another through heterodimerization. Using model-based design, we engineered MultiFate circuits that generate up to seven states, each stable for at least 18 days. MultiFate permits controlled state-switching and modulation of state stability through external inputs, and can easily be expanded with additional transcription factors. Together, these results provide a foundation for engineering multicellular behaviors in mammalian cells.
Host: 김재경     Korean     2021-04-14 07:44:54
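The circuit logic (self-activation plus mutual inhibition) is what generates multiple stable states. As a minimal, hedged illustration of that principle, the classical two-gene toggle switch below (a textbook caricature, not the MultiFate model itself) settles into one of two stable expression states depending on the initial condition:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical mutual-inhibition toggle switch (Gardner et al. 2000 style):
# da/dt = beta/(1 + b^n) - a,  db/dt = beta/(1 + a^n) - b.
def toggle(t, z, beta=4.0, n=2):
    a, b = z
    return [beta / (1 + b**n) - a, beta / (1 + a**n) - b]

for a0, b0 in [(2.0, 0.1), (0.1, 2.0)]:
    sol = solve_ivp(toggle, (0, 50), [a0, b0], rtol=1e-8)
    print(f"start ({a0}, {b0}) -> steady state {sol.y[:, -1].round(2)}")
# Two distinct stable states: (high a, low b) and (low a, high b).
```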
In this talk, I start by giving a brief overview of the practice of deep learning, with a focus on learning (optimization) and model selection (hyperparameter optimization). In particular, I will describe and discuss black-box optimization approaches to model selection, followed by a discussion of how these two stages in deep learning can be collapsed into a single optimization problem, often referred to as bilevel optimization. This allows us to extend the applicability of gradient-based optimization to model selection, although existing gradient-based model selection, or hyperparameter optimization, approaches have been limited because they require an extensive number of so-called roll-outs. I will then explain how we can view gradient-based optimization as a recurrent network and how this enables us to view hyperparameter optimization as training a recurrent network. This insight leads to a novel paradigm of online hyperparameter optimization which does not require any simulated roll-outs.
Contact: Stochastic Analysis and Application Research Center (042-350-8111/8117)     To be announced     2021-03-11 11:44:01
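To make the bilevel viewpoint concrete, here is a small hedged sketch (my own toy example, not the speaker's method) that tunes a ridge-regression regularizer by gradient descent on the validation loss, using implicit differentiation of the inner solution rather than unrolled roll-outs:

```python
import numpy as np

# Bilevel toy problem: inner problem = ridge regression (closed form),
# outer problem = choose lam to minimize the validation loss.
rng = np.random.default_rng(0)
w_true = rng.normal(size=10)
Xtr = rng.normal(size=(80, 10)); ytr = Xtr @ w_true + 0.5 * rng.normal(size=80)
Xva = rng.normal(size=(40, 10)); yva = Xva @ w_true + 0.5 * rng.normal(size=40)

lam = 1.0
for step in range(200):
    A = Xtr.T @ Xtr + lam * np.eye(10)
    w = np.linalg.solve(A, Xtr.T @ ytr)       # inner solution w*(lam)
    r = Xva @ w - yva                         # validation residual
    dw_dlam = -np.linalg.solve(A, w)          # d w*/d lam via implicit differentiation
    g = 2 * r @ Xva @ dw_dlam / len(yva)      # hypergradient dL_val/dlam
    lam = max(lam - 0.1 * g, 1e-6)            # projected outer gradient step
print(f"selected lam ~ {lam:.3f}")
```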
Abstract: The large deviation problem for the spectrum of random matrices has attracted immense interest. It was first studied for GUE and GOE, which are exactly solvable, and subsequently studied for Wigner matrices with general distributions. Once sparsity is introduced (i.e., each entry is multiplied by an independent Bernoulli random variable, Ber(p)), the eigenvalues can exhibit drastically different behavior. For a large class of Wigner matrices, including Gaussian ensembles and the adjacency matrices of Erdos-Renyi graphs, the dense behavior ceases to hold near the constant-average-degree level of sparsity, $p \sim 1/n$ (up to a poly-logarithmic factor). In this talk, I will discuss the spectral large deviations of Gaussian ensembles at sparsity $p = 1/n$. Joint work with Shirshendu Ganguly.
Zoom meeting info: https://zoom.us/j/94727585394?pwd=QlBSRUNTQi9UWXNLSTlPOTgrRnhhUT09, meeting ID: 947 2758 5394, password: saarc
Host: 이지운     Contact: 이슬기 (8111)     To be announced     2021-03-05 14:30:10
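As a quick numerical companion (my own illustration, with hypothetical sizes and sample counts), one can sample this model and watch the top eigenvalue: entries are i.i.d. Gaussians multiplied by independent Ber(p) masks, symmetrized and scaled so that the dense case has spectral edge near 2.

```python
import numpy as np

# Sparse GOE-like matrix: H_ij = g_ij * b_ij / sqrt(n p), g ~ N(0,1), b ~ Ber(p).
def top_eig(n, p, rng):
    G = rng.normal(size=(n, n))
    B = (rng.random((n, n)) < p).astype(float)
    U = np.triu(G * B)                      # upper triangle incl. diagonal
    H = (U + np.triu(U, 1).T) / np.sqrt(n * p)
    return np.linalg.eigvalsh(H)[-1]

rng = np.random.default_rng(0)
n = 500
for p in [1.0, 0.1, np.log(n) / n]:
    vals = [top_eig(n, p, rng) for _ in range(3)]
    print(f"p = {p:.4f}: top eigenvalue ~ {np.mean(vals):.2f}")
# The dense case concentrates near the semicircle edge 2; fluctuations
# grow as the average degree n*p shrinks toward the sparse regime.
```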
Introduction: In this lecture series, we'll discuss the algebro-geometric study of fundamental problems concerning tensors via higher secant varieties. We start by recalling the definition of tensors, basic properties, and small examples, and proceed to a discussion of tensor rank, decomposition, and X-rank for any nondegenerate variety $X$ in a projective space. Higher secant varieties of Segre (resp. Veronese) embeddings will be regarded as a natural parameter space of general (resp. symmetric) tensors in the lectures. We also review known results on the dimensions of secants of Segre and Veronese varieties, and consider various techniques for finding equations of the secants. In the end, we'll finish the lectures by introducing some open problems related to the theme, such as syzygy structures and singularities of higher secant varieties.
Host: 곽시종     Contact: 김윤옥 (5745)     To be announced     2021-03-04 17:30:35
The compressible Euler (CE) system was first formulated by Euler in 1752, and was complemented by Laplace and Clausius in the 19th century through the introduction of the energy conservation law and the concept of entropy based on thermodynamics. The most important feature of the Euler system is the finite-time breakdown of smooth solutions; in particular, the appearance of a shock wave as a severe singularity exhibiting irreversibility (in time) and discontinuity (in space). Therefore, a fundamental question (since Riemann 1858) is what happens after a shock occurs. This is the problem of well-posedness (that is, existence, uniqueness, stability) of weak solutions satisfying the 2nd law of thermodynamics, which are called entropy solutions. This issue has been formulated as the following conjecture: well-posedness of entropy solutions for CE can be obtained in a class of vanishing viscosity limits of solutions to the Navier-Stokes system. This conjecture on the fundamental issue remains wide open even for the one-dimensional CE system. This talk will give an overview of the conjecture and recent progress on it.
Host: 백형렬     Contact: 김규식 (042-350-2702)     Korean (English if requested)     2021-03-03 18:59:34
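For reference, the CE system in conservative form (density $\rho$, velocity $u$, pressure $p$, total energy $E$) reads

$$\rho_t + \operatorname{div}(\rho u) = 0, \qquad (\rho u)_t + \operatorname{div}(\rho u \otimes u) + \nabla p = 0, \qquad E_t + \operatorname{div}\big((E + p)u\big) = 0.$$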
The hybridizable discontinuous Galerkin (HDG) methods retain the advantages of the discontinuous Galerkin (DG) methods, such as flexible meshing and local conservation of physical quantities, while overcoming the shortcomings of DG by reducing the globally coupled degrees of freedom. I will design a multiscale method within the HDG framework. The main concept of the multiscale HDG method is to derive an upscaled structure of the method and to generate multiscale spaces, defined on the coarse edges, that provide a reduced-dimensional approximation of the numerical traces. Eigenvalue problems play a significant role in generating the multiscale space. Error analysis and a representative number of numerical examples will also be given.
Host: 이창옥     To be announced     2021-03-03 14:19:53
Introduction: In this lecture series, we'll discuss the algebro-geometric study of fundamental problems concerning tensors via higher secant varieties. We start by recalling the definition of tensors, basic properties, and small examples, and proceed to a discussion of tensor rank, decomposition, and X-rank for any nondegenerate variety $X$ in a projective space. Higher secant varieties of Segre (resp. Veronese) embeddings will be regarded as a natural parameter space of general (resp. symmetric) tensors in the lectures. We also review known results on the dimensions of secants of Segre and Veronese varieties, and consider various techniques for finding equations of the secants. In the end, we'll finish the lectures by introducing some open problems related to the theme, such as syzygy structures and singularities of higher secant varieties.
Host: 곽시종     Contact: 김윤옥 (5745)     To be announced     2021-02-26 15:55:38
Modern machine learning (ML) has achieved unprecedented empirical success in many application areas. However, much of this success involves trial and error and numerous tricks, which result in a lack of robustness and reliability in ML. Foundational research is needed for the development of robust and reliable ML. This talk consists of two parts. The first part presents the first mathematical theory of physics-informed neural networks (PINNs), one of the most popular deep learning frameworks for solving PDEs. Linear second-order elliptic and parabolic PDEs are considered. I will show the consistency of PINNs by adapting the Schauder approach and the maximum principle. The second part focuses on some recent mathematical understanding and development of neural network training. Specifically, two ML phenomena are analyzed: the "plateau phenomenon" and "dying ReLU". New algorithms are developed based on the insights gained from the mathematical analysis to improve neural network training.
Zoom ID: 832 222 6176 (password: saarc)
Contact: 이슬기 (8111)     To be announced     2021-02-05 16:05:50
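As a minimal illustration of what a PINN is (a generic sketch assuming PyTorch, not the speaker's code), the network below is trained so that its second derivative satisfies a 1D Poisson equation with zero boundary values; the exact solution is $\sin(\pi x)$.

```python
import math
import torch

# Minimal PINN for u''(x) = -pi^2 sin(pi x) on (0,1) with u(0) = u(1) = 0.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xb = torch.tensor([[0.0], [1.0]])                      # boundary points
for it in range(3000):
    x = torch.rand(128, 1, requires_grad=True)         # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + math.pi**2 * torch.sin(math.pi * x)
    loss = (residual**2).mean() + (net(xb)**2).mean()  # PDE residual + boundary loss
    opt.zero_grad(); loss.backward(); opt.step()
print(float(net(torch.tensor([[0.5]]))))               # should approach sin(pi/2) = 1
```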
The purpose of this reading seminar is to study the following: (1) Bourgain's invariant measure argument in stochastic PDE, (2) uniqueness of the invariant measure (Gibbs measure) and its ergodicity, (3) exponential convergence to the Gibbs equilibrium. This seminar is mainly based on [1, 2, 3]. Session: Thursday, February 18, 2021, 10:00 to 12:00. Topic: exponential convergence to the Gibbs equilibrium, and the Poincare inequality for Gaussian measures (Gibbs measures).
Contact: 이슬기 (8111)     To be announced     2021-02-05 10:21:49
The purpose of this reading seminar is to study the following: (1) Bourgain's invariant measure argument in stochastic PDE, (2) uniqueness of the invariant measure (Gibbs measure) and its ergodicity, (3) exponential convergence to the Gibbs equilibrium. This seminar is mainly based on [1, 2, 3].
Contact: 이슬기 (8111)     To be announced     2021-02-05 10:20:16
The purpose of this reading seminar is to study the following: (1) Bourgain's invariant measure argument in stochastic PDE, (2) uniqueness of the invariant measure (Gibbs measure) and its ergodicity, (3) exponential convergence to the Gibbs equilibrium. This seminar is mainly based on [1, 2, 3]. Session: Tuesday, February 16, 2021, 10:00 to 12:00. Topic: main structure theorems for the set of invariant measures, and the uniqueness of the invariant measure and its ergodicity.
Contact: 이슬기 (8111)     To be announced     2021-02-05 10:18:57
Recent deep learning research increasingly tries to apply mathematical methodology to designing efficient algorithms, obtaining higher performance, and analyzing how algorithms work, but the field is still somewhat unfamiliar territory for many mathematicians. In this seminar, aimed at students and researchers in mathematics, I will introduce, in general terms, the topics, objects, and methods of interest in deep learning research, and survey recent research trends, so that the audience can develop an understanding of deep learning research and think about how mathematics can contribute to it. In particular, I will introduce algorithms and methodologies for processing image data, as well as research on designing faster and more accurate image recognition algorithms, and describe research topics of current interest in related areas.
Host: 곽시종     Contact: 김윤옥 (5745)     To be announced     2021-02-11 00:53:21
The double point divisor of an embedded smooth projective variety is an effective divisor that is (the divisorial component of) the non-isomorphic locus of a general projection to a hypersurface. Positivity properties of double point divisors were studied by Mumford, Ilic, Noma, etc. in a variety of flavors. In this talk, we study the very ampleness of double point divisors from outer projections and the bigness of double point divisors from inner projections.
Host: 곽시종     Contact: 김윤옥 (5745)     To be announced     2021-02-11 00:55:48
Introduction: In this lecture series, we'll discuss the algebro-geometric study of fundamental problems concerning tensors via higher secant varieties. We start by recalling the definition of tensors, basic properties, and small examples, and proceed to a discussion of tensor rank, decomposition, and X-rank for any nondegenerate variety $X$ in a projective space. Higher secant varieties of Segre (resp. Veronese) embeddings will be regarded as a natural parameter space of general (resp. symmetric) tensors in the lectures. We also review known results on the dimensions of secants of Segre and Veronese varieties, and consider various techniques for finding equations of the secants. In the end, we'll finish the lectures by introducing some open problems related to the theme, such as syzygy structures and singularities of higher secant varieties.
Host: 곽시종     Contact: 김윤옥 (5745)     To be announced     2021-02-05 14:29:16
The purpose of this reading seminar is to study the following: (1) Bourgain's invariant measure argument in stochastic PDE, (2) uniqueness of the invariant measure (Gibbs measure) and its ergodicity, (3) exponential convergence to the Gibbs equilibrium. This seminar is mainly based on [1, 2, 3]. Session: Tuesday, February 9, 2021, 14:00 to 16:00. Topic: Bourgain's invariant measure argument in stochastic PDE and almost sure global existence for the stochastic Gross-Pitaevskii equation.
Contact: 이슬기 (8111)     To be announced     2021-02-05 10:17:30
A famous theorem of Green and Tao says there are arbitrarily long arithmetic progressions consisting of prime numbers. In that 2008 paper, they predicted that similar statements should hold for prime elements of other number fields, and the case of the Gaussian integers $\mathbb{Z}[i]$ was subsequently settled by Tao. In the first of my two talks, I would like to share my (limited) knowledge of the background and history underlying their work.
Zoom ID: 352 730 6970, PW: 9999. All times are in KST = UTC+9, which is identical to Japan Standard Time.
Host: 박진현     Contact: 박진현 (2734)     English     2021-01-25 23:26:07
In the latter hour, I will explain our generalization of their work to general number fields, based on joint work with my Tohoku colleagues Masato Mimura, Akihiro Munemasa, Shin-ichiro Seki and Kiyoto Yoshino (arXiv:2012.15669). Time permitting, I will also touch upon its positive-characteristic analog (arXiv:2101.00839). The case of polynomial rings had also been conjectured by Green-Tao in the same paper and was settled by Lê in 2011.
Zoom ID: 352 730 6970, PW: 9999. All times are in KST = UTC+9, which is identical to Japan Standard Time.
Host: 박진현     Contact: 박진현 (2734)     English     2021-01-25 23:28:22
The purpose of this reading seminar is to study the following: (1) Bourgain's invariant measure argument in stochastic PDE, (2) uniqueness of the invariant measure (Gibbs measure) and its ergodicity, (3) exponential convergence to the Gibbs equilibrium. This seminar is mainly based on [1, 2, 3]. Session: Monday, February 8, 2021, 14:00 to 16:00. Topic: parabolic stochastic quantization, canonical stochastic quantization, Gaussian fields and Gibbs measures.
Zoom ID: 958 459 198, Pw: 098359
Contact: 이슬기 (8111)     To be announced     2021-02-05 10:15:59
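Schematically, the link between this session's topics is the standard one: parabolic stochastic quantization realizes a Gibbs measure $\mu(du) \propto e^{-E(u)}\,du$ as the invariant measure of the Langevin dynamics

$$dX_t = -\nabla E(X_t)\,dt + \sqrt{2}\,dW_t,$$

where, in the field-theoretic setting, $E$ is an energy functional and $W$ a (cylindrical) Wiener process.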
For a scheme with a group scheme action and an equivariant perfect obstruction theory, we shall outline a proof of a virtual equivariant Riemann-Roch theorem. This is an extension of a result of Fantechi-Göttsche to the equivariant context. We shall further discuss a non-abelian virtual localization theorem, relating the virtual classes of the stack and the virtual classes of the associated inertia stack for a naturally induced perfect obstruction theory. This talk is based on joint work with Charanya Ravi.
The Zoom ID is 352 730 6970 and the password is 9999. All times are in KST = UTC+9.
Host: 박진현     Contact: 박진현 (2734)     English     2021-01-26 21:52:38
Geometric and topological structures can aid statistics in several ways. In high-dimensional statistics, geometric structures can be used to reduce dimensionality: high-dimensional data entails the curse of dimensionality, which can be avoided if there are low-dimensional geometric structures. On the other hand, geometric and topological structures also provide useful information. Structures may carry scientific meaning about the data and can be used as features to enhance supervised or unsupervised learning. In this talk, I will explore how statistical inference can be done on geometric and topological structures. First, given a manifold assumption, I will explore the minimax rate for estimating the dimension of the manifold. Second, also under the manifold assumption, I will explore the minimax rate for estimating the reach, a regularity quantity measuring how smooth a manifold is and how far it is from self-intersecting. Third, I will investigate inference on the persistent homology of a density function, where persistent homology quantifies salient topological features that appear at different resolutions of the data. Fourth, I will explore how persistent homology can be further applied in machine learning.
Host: Ji Oon Lee     English     2021-02-01 18:18:30
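To make the persistent homology point tangible, here is a small hedged sketch (assuming the third-party `ripser` package) that computes the persistence diagram of points sampled near a circle; the single long bar in degree 1 is the salient loop.

```python
import numpy as np
from ripser import ripser   # assumes the `ripser` package is installed

# Sample a noisy circle and compute its persistent homology.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t)] + 0.1 * rng.normal(size=(200, 2))

dgms = ripser(X)['dgms']               # dgms[0]: H0 diagram, dgms[1]: H1 diagram
pers = dgms[1][:, 1] - dgms[1][:, 0]   # lifetimes of 1-dimensional features
print("longest H1 bar:", pers.max())   # one dominant bar: the circle's loop
```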
We will discuss recent work in which we study the properties of the spectral sequence induced by the birational tower introduced in the first talk. In particular, we will show that this spectral sequence is strongly convergent.
Zoom ID: 352 730 6970, PW: 9999. All times are in Korean Standard Time, KST = UTC+9.
Host: 박진현     Contact: 박진현 (2734)     English     2021-01-20 19:08:33
We consider incompressible fluid equations, including the Euler and SQG equations, in critical Sobolev spaces, which are Sobolev spaces with the same scaling as the Lipschitz norm of the velocity. We show that the initial value problems for these equations are ill-posed at critical regularity. Such an ill-posedness result can be used to prove enhanced dissipation for the dissipative counterparts. This is based on joint works with Tarek Elgindi, Tsuyoshi Yoneda, and Junha Kim.
Host: 강문진     Korean     2021-02-03 10:16:18
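For concreteness, the SQG equation referred to here transports an active scalar $\theta$ by the velocity $u = \nabla^{\perp}(-\Delta)^{-1/2}\theta$:

$$\partial_t \theta + u \cdot \nabla \theta = 0, \qquad u = \nabla^{\perp}(-\Delta)^{-1/2}\theta.$$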
We model, simulate, and control the guiding problem for a herd of evaders under the action of repulsive drivers. This is part of the herding problem, which considers the relationship between shepherd dogs and sheep. The problem is formulated in an optimal control framework, where the drivers (controls) aim to guide the evaders (states) to a desired region. Numerical simulation of such models quickly becomes unfeasible for a large number of interacting agents, as the number of interactions grows as $O(N^2)$ for $N$ agents. To reduce the computational cost to $O(N)$, we use the Random Batch Method (RBM), which provides a computationally feasible approximation of the dynamics. In this approximated dynamics, the corresponding optimal control can be computed efficiently using classical gradient descent. The resulting control is not optimal for the original system, but for the reduced RBM model. We therefore adopt a Model Predictive Control (MPC) strategy to handle the error in the dynamics. This leads to a semi-feedback control strategy, where the control is applied to the original system only for a short time interval, and the optimal control for the next time interval is then computed from the state of the (controlled) original dynamics.
Host: 강문진     Korean     2021-01-31 22:16:19
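To illustrate the $O(N)$ idea, here is a hedged generic sketch of one RBM time step for pairwise repulsion (not the authors' herding code; the $1/r$-type kernel, batch size, and step size are illustrative): interactions are evaluated only inside random small batches, with the usual mean-field rescaling.

```python
import numpy as np

# One Random Batch Method step: shuffle the agents, split them into small
# batches, and evaluate interactions only within each batch -- O(N) work
# per time step instead of O(N^2).
def rbm_step(x, dt, batch=2, rng=np.random.default_rng(0)):
    idx = rng.permutation(len(x))
    for k in range(0, len(x), batch):
        g = idx[k:k + batch]                        # one random batch
        diff = x[g][:, None] - x[g][None, :]        # pairwise differences
        r2 = (diff ** 2).sum(-1) + 1e-12
        force = (diff / r2[..., None]).sum(axis=1)  # repulsion (x_i - x_j)/|x_i - x_j|^2
        x[g] = x[g] + dt * force / max(len(g) - 1, 1)   # mean-field rescaling in the batch
    return x

x = np.random.default_rng(1).normal(size=(100, 2))  # 100 evaders in the plane
for _ in range(50):
    x = rbm_step(x, dt=0.01)                        # agents spread under repulsion
```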
In this talk, we introduce continuous-time deterministic optimal control problems with entropy regularization. Applying the dynamic programming principle, we derive a novel class of Hamilton-Jacobi-Bellman (HJB) equations and prove that the optimal value function of the maximum entropy control problem corresponds to the unique viscosity solution of the HJB equation. Our maximum entropy formulation is shown to enhance the regularity of the viscosity solution and to be asymptotically consistent as the effect of entropy regularization diminishes. A salient feature of these HJB equations is their computational tractability: generalized Hopf-Lax formulas can be used to solve them in a tractable, grid-free manner without the need to numerically optimize the Hamiltonian. We further show that the optimal control is uniquely characterized as Gaussian in the case of control-affine systems and that, for linear-quadratic problems, the HJB equation reduces to a Riccati equation, which can be used to obtain an explicit expression for the optimal control. Lastly, we discuss how to extend our results to continuous-time model-free reinforcement learning by taking an adaptive dynamic programming approach.
Host: 강문진     Korean     2021-01-31 22:19:35
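Schematically (in a generic maximum-entropy form, not necessarily the normalization used in the talk), entropy regularization with temperature $\tau > 0$ replaces the hard minimum over controls in the HJB equation by a soft minimum: for dynamics $\dot x = f(x,u)$ and running cost $\ell$,

$$\partial_t V + \min_u\big\{\ell(x,u) + \nabla V \cdot f(x,u)\big\} = 0 \quad\longrightarrow\quad \partial_t V - \tau \log \int \exp\Big(-\frac{\ell(x,u) + \nabla V \cdot f(x,u)}{\tau}\Big)\,du = 0,$$

with optimal relaxed control $\pi(u \mid x) \propto \exp\big(-(\ell + \nabla V \cdot f)/\tau\big)$; for control-affine dynamics with quadratic control cost this density is Gaussian, consistent with the abstract.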
Distributed convex optimization has received a lot of interest from many researchers, since it is widely used in various applications, including wireless sensor networks and machine learning. Recently, A. S. Berahas et al. (2018) introduced a variant of distributed gradient descent called Near DGD+ which combines nested communication and gradient descent steps. They proved that this scheme finds the optimum using a constant step size when the target function is strongly convex and smooth. In the first part, we show that the scheme attains an O(1/t) convergence rate for convex, smooth functions. In addition, we obtain a convergence result for quasi-strongly convex functions. In the second part, we use the idea of Near DGD+ to design a variant of the push-sum gradient method on directed graphs. This talk is based on joint work with Doheon Kim and Seok-bae Yun.
Host: 강문진     Korean     2021-01-31 22:21:29
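As a toy illustration of the "nested communications + gradient step" idea (a schematic sketch, not the authors' exact scheme), the agents below minimize the sum of quadratics $f_i(x) = \|x - a_i\|^2/2$ over a ring graph, with the number of mixing rounds growing over the iterations:

```python
import numpy as np

n, d = 8, 5
W = np.zeros((n, n))                 # doubly stochastic weights on a ring graph
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

a = np.random.default_rng(0).normal(size=(n, d))   # local data: f_i(x) = |x - a_i|^2 / 2
X = np.zeros((n, d))                               # row i = agent i's iterate
alpha = 0.5                                        # constant step size
for k in range(1, 40):
    X = X - alpha * (X - a)          # local gradient step
    for _ in range(k):               # nested communication: k mixing rounds
        X = W @ X
print(np.abs(X - a.mean(0)).max())   # all agents close to the optimum, mean(a_i)
```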
Deep neural networks have achieved state-of-the-art performance in a variety of fields. The exponential growth of machine learning models and the extreme success of deep learning have led to applications across a multitude of disciplines. Recent works observe that a class of widely used neural networks can be viewed as the explicit Euler method of numerical discretization. From the numerical discretization perspective, Total Variation Diminishing (TVD) Runge-Kutta methods are more advanced techniques than the explicit Euler method, producing solutions that are both accurate and stable. Motivated by the TVD property and a generalized Runge-Kutta method, we propose new networks which improve robustness against adversarial attacks. If time permits, we also explore a deep learning methodology that can be applied to the data-driven discovery of numerical PDEs.
Host: 강문진     Korean     2021-01-31 22:23:37
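To illustrate the discretization view (a schematic with a made-up vector field `f` standing in for a layer): a residual block is one explicit Euler step of $x' = f(x)$, and a TVD/SSP Runge-Kutta-2 style block replaces it by a convex combination of Euler stages, which is the source of the stability claim.

```python
import numpy as np

def f(x, W):                    # a simple parametric vector field (toy "layer")
    return np.tanh(W @ x)

def euler_block(x, W, h=1.0):
    return x + h * f(x, W)      # residual block: x_{n+1} = x_n + h f(x_n)

def ssp_rk2_block(x, W, h=1.0):
    u1 = x + h * f(x, W)                          # first Euler stage
    return 0.5 * x + 0.5 * (u1 + h * f(u1, W))    # convex combination (SSP/TVD property)

rng = np.random.default_rng(0)
W, x = rng.normal(size=(4, 4)) / 2, rng.normal(size=4)
print(euler_block(x, W), ssp_rk2_block(x, W))
```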