Department Seminars & Colloquia






"Introduction to Oriented Matroids" Series, Thursdays 14:30-15:45
Host: Andreas Holmsen     English     2023-09-13 18:00:56
Host: 곽시종     Contact: 김윤옥 (5745)     To be announced     2023-11-25 15:01:01

Thesis committee chair: 이지운; committee members: 남경식, 황강욱, 양홍석 (School of Computing), Paul Jung (Fordham University)
To be announced     2023-11-14 11:15:31
"Introduction to Oriented Matroids" Series, Thursdays 14:30-15:45
Host: Andreas Holmsen     English     2023-09-13 18:00:09
In the analysis of singularities, uniqueness of limits often arises as an important question: that is, whether the geometry depends on the scales one takes to approach the singularity. In his seminal work, Simon demonstrated that Łojasiewicz inequalities, originally known in real algebraic geometry in finite dimensions, can be applied to show uniqueness of limits in geometric analysis in infinite-dimensional settings. We will discuss some instances of this very successful technique and its applications.
Host: 백형렬     English     2023-09-08 16:05:34
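For orientation, the finite-dimensional Łojasiewicz gradient inequality alluded to above takes the following standard form (our paraphrase, not a quote from the talk): for a real-analytic $f$ near a critical point $x^*$ there exist $C > 0$ and $\theta \in [1/2, 1)$ with

\[
|\nabla f(x)| \,\ge\, C\,|f(x) - f(x^*)|^{\theta}.
\]

Simon's insight was that an infinite-dimensional analogue holds for suitable energy functionals; integrating the inequality along the gradient flow is what pins down a unique limit.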
Finite path integral is a finite version of Feynman's path integral, a mathematical methodology for constructing TQFTs (topological quantum field theories) from finite gauge theory. It was introduced by Dijkgraaf and Witten in 1990. We study a finite path integral model in which finite gauge theory is replaced with homological algebra based on bicommutative Hopf algebras. It turns out that Mayer-Vietoris functors such as homology theories extend to TQFTs which preserve compositions up to a scalar. This talk concerns the second cohomology class of cobordism (more generally, cospan) categories induced by such scalars. In particular, we will explain that the obstruction class is described purely by homological algebra, not via the finite path integral.
Contact: 백형렬 ()     To be announced     2023-11-17 15:21:45
Zeta functions and zeta values play a central role in modern number theory and are connected to practical applications in coding theory and cryptography. The significance of these objects is demonstrated by the fact that two of the seven Clay Mathematics Millennium Prize Problems concern them, namely the Riemann hypothesis and the Birch and Swinnerton-Dyer conjecture. We first recall results and well-known conjectures concerning these objects over number fields. If time permits, we will present recent developments in the setting of function fields. This is joint work with Im Bo-Hae and Kim Hojin, among others.
There will be a tea time at 15:30 before the lecture.
Contact: Professor Bo-Hae Im ()

https://mathsci.kaist.ac.kr/bk21four/index.php/boards/view/board_seminar/3/
Host: 임보해     Contact: 김윤옥 (5745)     To be announced     2023-11-08 10:17:24

Thesis committee chair: 김용정; committee members: 권순식, 강문진, 김재경, 윤창욱 (Chungnam National University)
To be announced     2023-11-13 10:12:34
The mapping class group Map(S) of a surface S is the group of isotopy classes of diffeomorphisms of S. When S is a finite-type surface, the classical mapping class group Map(S) is well understood. On the other hand, there have been recent developments on mapping class groups of infinite-type surfaces. In this talk, we discuss mapping class groups of finite-type and infinite-type surfaces and elements of these groups. We also define surface Houghton groups, which are subgroups of mapping class groups of certain infinite-type surfaces, and discuss their finiteness properties. This is joint work with Aramayona, Bux, and Leininger.
Host: 백형렬     To be announced     2023-11-17 15:19:23
In this talk, we consider a group-sparse matrix estimation problem. This problem can be solved with existing compressed sensing techniques, which, however, either suffer from high computational complexity or lack robustness. To overcome these limitations, we propose a novel algorithm unrolling framework based on deep neural networks that simultaneously achieves low computational complexity and high robustness. Specifically, we map the original iterative shrinkage thresholding algorithm (ISTA) onto an unrolled recurrent neural network (RNN), improving the convergence rate and computational efficiency through end-to-end training. Moreover, the proposed algorithm unrolling approach inherits the structure and domain knowledge of ISTA, thereby maintaining robustness; in particular, it can handle the non-Gaussian preamble sequence matrices that arise in massive access. We further simplify the unrolled network structure, with rigorous theoretical analysis, by reducing the redundant training parameters, and we prove that the simplified unrolled deep neural network structures enjoy a linear convergence rate. Extensive simulations based on various preamble signatures show that the proposed unrolled networks outperform existing methods in convergence rate, robustness, and estimation accuracy.
Host: 김동환 (Donghwan Kim)     Contact: 설윤창 (Yunchang Seol) (010-8785-5872)     To be announced     2023-11-08 12:35:19
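As background for the unrolling idea, here is a minimal NumPy sketch of the classical ISTA iteration that the unrolled network is built from; the matrix sizes, sparsity weight lam, and iteration count are illustrative assumptions, not the speaker's settings.

# Minimal ISTA sketch for sparse recovery (illustrative, not the talk's code).
import numpy as np

def soft_threshold(x, tau):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=200):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Unrolling ("LISTA"-style): each iteration becomes a network layer whose
# matrices and threshold are trained end-to-end instead of being fixed:
#   x_{k+1} = soft_threshold(W1 @ y + W2 @ x_k, theta_k)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
print(np.round(x_hat[:8], 3))              # first entries should be near 1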
"Introduction to Oriented Matroids" Series, Thursdays 14:30-15:45
Host: Andreas Holmsen     English     2023-09-13 17:59:23

Thesis committee chair: 백상훈; committee members: 곽시종, 김완수, 이용남, Joachim König (Korea National University of Education)
To be announced     2023-10-25 09:36:24

Thesis committee chair: 임보해; committee members: 김완수, 박진형, Joachim König (Korea National University of Education), 조재현 (UNIST)
To be announced     2023-11-01 14:49:30
In this talk, I will introduce the use of deep neural networks (DNNs) to solve high-dimensional evolution equations. Unlike some existing methods (e.g., the least squares method and physics-informed neural networks) that treat the time and space variables simultaneously, we propose a deep adaptive basis approximation structure. On the one hand, orthogonal polynomials are employed to form the temporal basis, achieving high accuracy in time. On the other hand, DNNs are employed to create the adaptive spatial basis for high dimensions in space. Numerical examples, including high-dimensional linear parabolic and hyperbolic equations and a nonlinear Allen–Cahn equation, demonstrate that the proposed DABG method outperforms existing DNN-based methods.
Zoom link: https://kaist.zoom.us/j/3844475577 (Meeting ID: 384 447 5577)
Host: Youngjoon Hong     Contact: Youngjoon Hong ()     English     2023-10-27 10:59:05
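A schematic form of the space-time ansatz described in the abstract (our notation, inferred rather than quoted):

\[
u(x,t) \,\approx\, u_\theta(x,t) \,=\, \sum_{k=0}^{K} p_k(t)\, N_k(x;\theta_k),
\]

where the $p_k$ are orthogonal polynomials (e.g., Legendre) in time and each $N_k$ is a DNN in the spatial variable whose parameters $\theta_k$ are trained to minimize the equation residual.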
"Introduction to Oriented Matroids" Series, Thursdays 14:30-15:45
Host: Andreas Holmsen     English     2023-09-13 17:58:35
In this talk, we address the question of whether a mean-field approach is always a good approximation for a large particle system. For definiteness, we consider an infinite Kuramoto model for a countably infinite set of Kuramoto oscillators and study its emergent dynamics for two classes of network topologies. For a class of symmetric and row- (or column-) summable network topologies, we show that a homogeneous ensemble exhibits complete synchronization and the infinite Kuramoto model can be cast as a gradient flow, whereas for a heterogeneous ensemble we obtain a weak synchronization estimate, namely practical synchronization. Unlike in the finite Kuramoto model, the phase diameter can be constant for some classes of network topologies, which is a novel feature of the infinite model. We also consider a second class of network topologies (so-called sender networks) in which coupling strengths are proportional to a constant that depends only on the sender's index. For this network topology, we have better control of the emergent dynamics. For a homogeneous ensemble, there are only two possible asymptotic states, complete phase synchrony or a bi-cluster configuration, for any positive coupling strength. In contrast, for a heterogeneous ensemble, complete synchronization occurs exponentially fast for a class of initial configurations confined to a quarter arc. This is joint work with Euntaek Lee (SNU) and Woojoo Shim (Kyungpook National University).
Host: 강문진     Contact: 김규식 (T2702) ()     English     2023-09-08 15:29:31
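For reference, a standard form of the infinite Kuramoto model consistent with the abstract (our notation): phases $\theta_i$ with natural frequencies $\nu_i$ evolve by

\[
\dot{\theta}_i \,=\, \nu_i + \sum_{j=1}^{\infty} \kappa_{ij} \sin(\theta_j - \theta_i), \qquad i \in \mathbb{N},
\]

where row-summability $\sum_{j} |\kappa_{ij}| < \infty$ makes the infinite sum well defined; in a sender network, $\kappa_{ij}$ depends only on the sender index $j$.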

Thesis committee chair: 이지운; committee members: 남경식, 황강욱, 윤세영 (Kim Jaechul Graduate School of AI), 서성미 (Chungnam National University)
To be announced     2023-09-07 13:20:48
"Introduction to Oriented Matroids" Series, Thursdays 14:30-15:45
Host: Andreas Holmsen     English     2023-09-13 17:57:50
(KAI-X Distinguished Lecture Series) We have multiple approaches to vanishing theorems for the cohomology of Shimura varieties, via either algebraic geometry or automorphic forms. Such theorems have been of interest with either complex or torsion coefficients. Recently, results have been obtained under various genericity hypotheses by Caraiani-Scholze, Koshikawa, Hamann-Lee et al. I will survey different approaches. If time permits, I may discuss an ongoing project with Koshikawa to understand the non-generic case.
Host: 김완수     English     2023-10-17 14:36:40
For a translation surface, the associated saddle connection graph has saddle connections as vertices, and edges connecting pairs of non-crossing saddle connections. This can be viewed as an induced subgraph of the arc graph of the surface. In this talk, I will discuss both the fine and coarse geometry of the saddle connection graph. We show that the isometry type is rigid: any isomorphism between two such graphs is induced by an affine diffeomorphism between the underlying translation surfaces. However, the situation is completely different when one considers the quasi-isometry type: all saddle connection graphs form a single quasi-isometry class. We will also discuss the Gromov boundary in terms of foliations. This is based on joint work with Valentina Disarlo, Huiping Pan, and Anja Randecker.
To be announced     2023-10-27 16:02:46

Thesis committee chair: 김동환; committee members: 이창옥, 홍영준, 곽도영, 오덕순 (Chungnam National University)
To be announced     2023-10-06 14:11:17

The Gauss-Bonnet theorem implies that the two-dimensional torus does not admit a metric of nonnegative Gauss curvature unless it is flat, and that the two-dimensional sphere does not admit a metric whose Gaussian curvature is bounded below by one and which is bounded below by the standard round metric. Gromov proposed a series of conjectures on generalizing the Gauss-Bonnet theorem in his four lectures. I will report my work with Gaoming Wang (now at Tsinghua) on Gromov's dihedral rigidity conjecture in hyperbolic 3-space and on scalar curvature comparison of rotationally symmetric convex bodies with some simple singularities.
Host: 박지원     To be announced     2023-09-12 15:21:33
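The torus claim is a direct consequence of the Gauss-Bonnet theorem: for a closed surface $S$ with Gaussian curvature $K$,

\[
\int_{S} K \, dA \,=\, 2\pi \chi(S),
\]

and $\chi(\text{torus}) = 0$, so a metric with $K \ge 0$ must have $K \equiv 0$.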

Thesis committee chair: 변재형; committee members: 강문진, 김용정, 배명진, 권오상 (Chungbuk National University)
To be announced     2023-09-13 13:42:03
With the success of deep learning technologies in many scientific and engineering applications, neural network approximation methods have emerged as an active research area in numerical partial differential equations. However, these new approximation methods still need further validation of their accuracy, stability, and efficiency before they can serve as alternatives to classical approximation methods. In this talk, we first introduce neural network approximation methods for partial differential equations, where a neural network function is introduced to approximate the PDE (partial differential equation) solution and its parameters are optimized to minimize a cost function derived from the differential equation. We then present the behavior of the approximation error and the optimization error in the neural network approximate solution. To reduce the approximation error, a neural network function with a larger number of parameters is often employed, but when optimizing such a large number of parameters the optimization error usually pollutes the solution accuracy. In addition, gradient-based parameter optimization requires computing the cost function gradient over a tremendous number of epochs, which makes obtaining a neural network solution very expensive. To deal with these problems, a partitioned neural network function can be formed to approximate the PDE solution, where localized neural network functions combine to form the global neural network solution. The parameters in each local neural network function are then optimized to minimize the corresponding cost function. To further enhance training efficiency, iterative algorithms for the partitioned neural network function can be developed. We finally discuss the potential of this new approach for enhancing the accuracy, stability, and efficiency of neural network solutions by utilizing classical domain decomposition algorithms and their convergence theory. Some interesting numerical results are presented to show the performance of the partitioned neural network approximation and the iterative algorithms.
Host: 이창옥     Contact: 설윤창 (010-8785-5872)     To be announced     2023-10-03 20:07:53
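As a generic example of the cost functions referred to above (a standard residual least-squares form, not necessarily the talk's exact choice), for a PDE $\mathcal{L}u = f$ in $\Omega$ with boundary condition $\mathcal{B}u = g$ on $\partial\Omega$ one minimizes

\[
J(\theta) \,=\, \|\mathcal{L} u_\theta - f\|_{L^2(\Omega)}^2 + \beta\, \|\mathcal{B} u_\theta - g\|_{L^2(\partial\Omega)}^2
\]

over the network parameters $\theta$, with $\beta > 0$ a boundary penalty weight; in the partitioned setting, each local network minimizes a cost of this form on its own subdomain.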
"Introduction to Oriented Matroids" Series, Thursdays 14:30-15:45
Host: Andreas Holmsen     English     2023-09-13 17:56:08
Maximal functions of various forms have played crucial roles in harmonic analysis. Various outstanding open problems are related to the L^p boundedness (estimates) of the associated maximal functions. In this talk, we discuss the L^p boundedness of maximal functions given by averages over curves.
Host: 권순식     English     2023-09-08 15:22:49
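A model operator of the type discussed (a standard formulation, not necessarily the talk's exact object): for a curve $\gamma$ in $\mathbb{R}^d$, e.g. the parabola $\gamma(s) = (s, s^2)$, set

\[
M_\gamma f(x) \,=\, \sup_{t>0} \frac{1}{t} \int_0^t \big|f\big(x - \gamma(s)\big)\big| \, ds,
\]

and ask for which $p$ one has $\|M_\gamma f\|_{L^p} \le C \|f\|_{L^p}$.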
"어떻게 하면 더 좋은 제품을 더 빠르게 개발할 수 있을까?"라는 문제는 모든 제조업이 안고 있는 숙제입니다. 최근 DX를 통해 많은 데이터들이 디지털화되고, AI의 급격한 발전을 통해 제품개발프로세스를 혁신하려는 시도가 일어나고 있습니다. 과거의 시뮬레이션 기반 설계에서 AI 기반 설계로의 패러다임 전환을 통해 제품개발 기간을 단축함과 동시에 제품의 품질을 향상시킬 수 있습니다. 본 세미나는 딥러닝을 통해 제품 설계안을 생성/탐색/예측/최적화/추천할 수 있는 생성형 AI 기반의 설계 프로세스(Deep Generative Design)를 소개하고, 모빌리티를 비롯한 제조 산업에 적용된 다양한 사례들을 소개합니다.
Host: 홍영준     Contact: 설윤창 (010-8785-5872)     To be announced     2023-09-24 22:03:17
In this talk, we discuss the Neural Tangent Kernel (NTK). The NTK is closely related to the dynamics of a neural network during training via gradient flow (or gradient descent). However, since the NTK is random at initialization and varies during training, it is quite delicate to understand the dynamics of the neural network. In relation to this issue, we introduce an interesting result: in the infinite-width limit, the NTK converges to a deterministic kernel at initialization and remains constant during training. We provide a brief proof of the result for the simplest case.
Presented in three sessions: September 14, October 4, and October 5.
Host: 홍영준     Contact: 이명수 ()     Korean     2023-09-25 15:35:17
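For reference, the standard definitions behind the result above (our notation): for a network $f_\theta$ trained by gradient flow on a loss $\mathcal{L}$ over samples $x_1, \dots, x_n$,

\[
\Theta_\theta(x, x') \,=\, \nabla_\theta f_\theta(x)^{\top} \nabla_\theta f_\theta(x'),
\qquad
\frac{d}{dt} f_{\theta_t}(x) \,=\, -\sum_{i=1}^{n} \Theta_{\theta_t}(x, x_i)\, \frac{\partial \mathcal{L}}{\partial f(x_i)},
\]

so when $\Theta$ becomes deterministic and constant in the infinite-width limit, training reduces to linear kernel dynamics.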
"Introduction to Oriented Matroids" Series, Thursdays 14:30-15:45
Host: Andreas Holmsen     English     2023-09-13 17:54:08
In this talk, we discuss the Neural Tangent Kernel (NTK). The NTK is closely related to the dynamics of a neural network during training via gradient flow (or gradient descent). However, since the NTK is random at initialization and varies during training, it is quite delicate to understand the dynamics of the neural network. In relation to this issue, we introduce an interesting result: in the infinite-width limit, the NTK converges to a deterministic kernel at initialization and remains constant during training. We provide a brief proof of the result for the simplest case.
One of three sessions (September 14, October 4, and October 5); this session mainly reviews the September 14 material.
Host: 홍영준     Contact: 이명수 ()     Korean     2023-09-25 15:29:33