Department Seminars & Colloquia
In the analysis of singularities, uniqueness of limits often arises as an important question: that is, whether the limiting geometry depends on the scales one takes to approach the singularity. In his seminal work, Simon demonstrated that Lojasiewicz inequalities, originally known in finite-dimensional real algebraic geometry, can be applied to show uniqueness of limits in infinite-dimensional settings in geometric analysis. We will discuss some instances of this very successful technique and its applications.
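For context, a common form of the inequality Simon used (the Lojasiewicz-Simon gradient inequality) reads as follows; the notation, an analytic energy functional E near a critical point u_infinity, is supplied here for illustration and is not taken from the abstract:

$$\bigl| E(u) - E(u_\infty) \bigr|^{\,1-\theta} \;\le\; C \,\bigl\| \nabla E(u) \bigr\|, \qquad \theta \in \bigl(0, \tfrac12\bigr], \quad \|u - u_\infty\| < \varepsilon.$$

Uniqueness of limits then follows because this inequality forces the gradient flow of E to have finite length near the critical point.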
The finite path integral is a finite version of Feynman's path integral, a mathematical methodology for constructing TQFTs (topological quantum field theories) from finite gauge theory; it was introduced by Dijkgraaf and Witten in 1990. We study the finite path integral model by replacing finite gauge theory with homological algebra based on bicommutative Hopf algebras. It turns out that Mayer-Vietoris functors, such as homology theories, extend to TQFTs which preserve compositions up to a scalar. This talk concerns the second cohomology class of cobordism (more generally, cospan) categories induced by such scalars. In particular, we will explain that the obstruction class is described purely by homological algebra, not via the finite path integral.
Zeta functions and zeta values play a central role in modern number theory and are connected to practical applications in coding theory and cryptography. The significance of these objects is demonstrated by the fact that two of the seven Clay Mathematics Institute Millennium Prize Problems are related to them, namely the Riemann hypothesis and the Birch and Swinnerton-Dyer conjecture. We first recall results and well-known conjectures concerning these objects over number fields. If time permits, we will present recent developments in the setting of function fields. This is joint work with Im Bo-Hae and Kim Hojin, among others.
There will be a tea time at 15:30 before the lecture.
Contact: Professor Bo-Hae Im ()
https://mathsci.kaist.ac.kr/bk21four/index.php/boards/view/board_seminar/3/
The mapping class group Map(S) of a surface S is the group of isotopy classes of diffeomorphisms of S. When S is a finite-type surface, the classical mapping class group Map(S) is well understood. On the other hand, there have been recent developments on mapping class groups of infinite-type surfaces. In this talk, we discuss mapping class groups of finite-type and infinite-type surfaces and elements of these groups. We also define surface Houghton groups, which are subgroups of mapping class groups of certain infinite-type surfaces, and discuss their finiteness properties. This is joint work with Aramayona, Bux, and Leininger.
Industrial Engineering & Management Building (E2-1), Seminar Room 2216
ACM Seminars
Hayoung Choi (Dept. of Mathematics, Kyungpook National Univ.)
Solving group-sparse problems via deep neural networks with theoretical guarantee
Industrial Engineering & Management Building (E2-1), Seminar Room 2216
ACM Seminars
In this talk, we consider a group-sparse matrix estimation problem. This problem can be solved with existing compressed sensing techniques, which, however, either suffer from high computational complexity or lack algorithm robustness. To overcome this, we propose a novel algorithm unrolling framework based on deep neural networks that simultaneously achieves low computational complexity and high robustness. Specifically, we map the original iterative shrinkage thresholding algorithm (ISTA) into an unrolled recurrent neural network (RNN), thereby improving the convergence rate and computational efficiency through end-to-end training. Moreover, the proposed algorithm unrolling approach inherits the structure and domain knowledge of ISTA, thereby maintaining robustness and handling non-Gaussian preamble sequence matrices in massive access. We further simplify the unrolled network structure, with rigorous theoretical analysis, by reducing redundant training parameters. Furthermore, we prove that the simplified unrolled deep neural network structures enjoy a linear convergence rate. Extensive simulations based on various preamble signatures show that the proposed unrolled networks outperform existing methods in convergence rate, robustness, and estimation accuracy.
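To make the unrolling idea concrete, here is a minimal sketch of classical ISTA for a plain l1-regularized (not group-sparse) least-squares problem in NumPy; unrolling then fixes a small number of these iterations as network layers and learns each layer's step size and threshold end-to-end. The matrix, data, and parameters below are illustrative placeholders, not the speaker's massive-access model.

```python
# Minimal ISTA sketch for min_x 0.5*||A x - y||^2 + lam*||x||_1 (illustrative only).
import numpy as np

def soft_threshold(x, tau):
    # Elementwise soft-thresholding: the proximal map of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iters=100):
    # Classical iteration: x <- soft_threshold(x - (1/L) A^T (A x - y), lam/L),
    # where L is the Lipschitz constant of the smooth part's gradient.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 100))      # toy sensing matrix
    x_true = np.zeros(100)
    x_true[:5] = 1.0                        # sparse ground truth
    y = A @ x_true
    x_hat = ista(A, y, lam=0.1)
    print(np.round(x_hat[:8], 2))
```

An unrolled network replaces the fixed scalars 1/L and lam/L in each iteration with per-layer trainable parameters (and, for group sparsity, replaces the elementwise soft-threshold with a blockwise one), which is what allows end-to-end training to accelerate convergence.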
In this talk, I will introduce the use of deep neural networks (DNNs) to solve high-dimensional evolution equations. Unlike some existing methods (e.g., the least squares method/physics-informed neural networks) that treat the time and space variables simultaneously, we propose a deep adaptive basis approximation structure. On the one hand, orthogonal polynomials are employed to form the temporal basis to achieve high accuracy in time. On the other hand, DNNs are employed to create the adaptive spatial basis for high dimensions in space. Numerical examples, including high-dimensional linear parabolic and hyperbolic equations and a nonlinear Allen-Cahn equation, are presented to demonstrate that the proposed DABG method performs better than existing DNN-based methods.
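One plausible way to write such a space-time separated ansatz (the notation below is illustrative and not taken from the abstract) is

$$u(x,t) \;\approx\; u_\theta(x,t) \;=\; \sum_{j=0}^{m} p_j(t)\,\phi_j(x;\theta_j),$$

where the p_j are orthogonal polynomials in time (for example, Legendre polynomials on the time interval) and each phi_j(.;theta_j) is a DNN serving as an adaptive spatial basis function in high dimensions.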
zoom link:
https://kaist.zoom.us/j/3844475577
zoom ID: 384 447 5577
https://kaist.zoom.us/j/3844475577 Meeting ID: 384 447 5577
In this talk, we address the question of whether a mean-field approach is always a good approximation for a large particle system. For definiteness, we consider an infinite Kuramoto model for a countably infinite set of Kuramoto oscillators and study its emergent dynamics for two classes of network topologies. For a class of symmetric and row (or column) summable network topologies, we show that a homogeneous ensemble exhibits complete synchronization and that the infinite Kuramoto model can be cast as a gradient flow, whereas for a heterogeneous ensemble we obtain only a weak synchronization estimate, namely practical synchronization. Unlike in the finite Kuramoto model, the phase diameter can be constant for some classes of network topologies, which is a novel feature of the infinite model. We also consider a second class of network topology (a so-called sender network) in which coupling strengths are proportional to a constant depending only on the sender's index. For this network topology, we have better control on the emergent dynamics. For a homogeneous ensemble, there are only two possible asymptotic states, complete phase synchrony or a bi-cluster configuration, for any positive coupling strength. In contrast, for a heterogeneous ensemble, complete synchronization occurs exponentially fast for a class of initial configurations confined in a quarter arc. This is joint work with Euntaek Lee (SNU) and Woojoo Shim (Kyungpook National University).
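As a reminder of the setting, the countably infinite Kuramoto model has the standard form (the symbols below are the usual ones and are not taken verbatim from the abstract)

$$\dot\theta_i(t) \;=\; \nu_i \;+\; \sum_{j=1}^{\infty} a_{ij}\,\sin\bigl(\theta_j(t) - \theta_i(t)\bigr), \qquad i \in \mathbb{N},$$

where the nu_i are natural frequencies and the coupling matrix (a_ij) is assumed row (or column) summable so that the infinite sum is well defined.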
(KAI-X Distinguished Lecture Series)
We have multiple approaches to vanishing theorems for the cohomology of Shimura varieties, via either algebraic geometry or automorphic forms. Such theorems have been of interest with either complex or torsion coefficients. Recently, results have been obtained under various genericity hypotheses by Caraiani-Scholze, Koshikawa, Hamann-Lee et al. I will survey different approaches. If time permits, I may discuss an ongoing project with Koshikawa to understand the non-generic case.
For a translation surface, the associated saddle connection graph has saddle connections as vertices, and edges connecting pairs of non-crossing saddle connections. This can be viewed as an induced subgraph of the arc graph of the surface. In this talk, I will discuss both the fine and coarse geometry of the saddle connection graph. We show that the isometry type is rigid: any isomorphism between two such graphs is induced by an affine diffeomorphism between the underlying translation surfaces. However, the situation is completely different when one considers the quasi-isometry type: all saddle connection graphs form a single quasi-isometry class. We will also discuss the Gromov boundary in terms of foliations. This is based on joint work with Valentina Disarlo, Huiping Pan, and Anja Randecker.
The Gauss-Bonnet theorem implies that the two-dimensional torus does not admit a metric of nonnegative Gauss curvature unless it is flat, and that the two-dimensional sphere does not admit a metric which has Gaussian curvature bounded below by one and which is bounded below by the standard round metric.
Gromov proposed a series of conjectures generalizing the Gauss-Bonnet theorem in his four lectures. I will report on my work with Gaoming Wang (now at Tsinghua) on Gromov's dihedral rigidity conjecture in hyperbolic 3-space and on scalar curvature comparison for rotationally symmetric convex bodies with some simple singularities.
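For reference, the Gauss-Bonnet theorem for a compact surface M with boundary states

$$\int_M K \, dA \;+\; \int_{\partial M} k_g \, ds \;=\; 2\pi\,\chi(M),$$

where K is the Gaussian curvature, k_g the geodesic curvature of the boundary, and chi(M) the Euler characteristic; since chi = 0 for the closed torus and chi = 2 for the sphere, this is what forces the curvature constraints mentioned above.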
With the success of deep learning technologies in many scientific and engineering applications, neural network approximation methods have emerged as an active research area in numerical partial differential equations. However, these new approximation methods still need further validation of their accuracy, stability, and efficiency before they can serve as alternatives to classical approximation methods. In this talk, we first introduce neural network approximation methods for partial differential equations, where a neural network function is introduced to approximate the PDE (partial differential equation) solution and its parameters are optimized to minimize a cost function derived from the differential equation. We then present the behavior of the approximation error and the optimization error in the neural network approximate solution. To reduce the approximation error, a neural network function with a larger number of parameters is often employed, but when optimizing such a large number of parameters the optimization error usually pollutes the solution accuracy. In addition, gradient-based parameter optimization usually requires computing the cost function gradient over a tremendous number of epochs, which makes obtaining a neural network solution very expensive. To deal with these problems, a partitioned neural network function can be formed to approximate the PDE solution, where localized neural network functions are combined to form the global neural network solution. The parameters in each local neural network function are then optimized to minimize the corresponding cost function. To further enhance the parameter training efficiency, iterative algorithms for the partitioned neural network function can be developed. We finally discuss the possibilities of this new approach as a way of enhancing the accuracy, stability, and efficiency of the neural network solution by utilizing classical domain decomposition algorithms and their convergence theory. Some interesting numerical results are presented to show the performance of the partitioned neural network approximation and the iterative algorithms.
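As an illustration of the kind of cost function referred to above, for a model boundary value problem -Delta u = f in Omega with u = g on the boundary (a generic example, not necessarily the one used in the talk), a collocation-type cost for a network u_theta is

$$J(\theta) \;=\; \frac{1}{N}\sum_{i=1}^{N} \bigl| \Delta u_\theta(x_i) + f(x_i) \bigr|^2 \;+\; \frac{\lambda}{M}\sum_{j=1}^{M} \bigl| u_\theta(y_j) - g(y_j) \bigr|^2,$$

with interior collocation points x_i in Omega, boundary points y_j on the boundary, and a penalty weight lambda > 0. In a partitioned approach, each local network would minimize a cost of this form restricted to its subdomain, typically together with interface matching terms.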
Maximal functions of various forms have played crucial roles in harmonic analysis. Various outstanding open problems are related to the Lp boundedness (estimates) of the associated maximal functions. In this talk, we discuss the Lp boundedness of maximal functions given by averages over curves.
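A model example of the operators in question (the specific curve here is only illustrative) is the maximal function of averages along a curve gamma, for instance gamma(s) = (s, s^2) in the plane:

$$\mathcal{M}_\gamma f(x) \;=\; \sup_{t>0} \frac{1}{t} \int_0^t \bigl| f\bigl(x - \gamma(s)\bigr) \bigr| \, ds,$$

and the basic question is for which p one has the estimate $\|\mathcal{M}_\gamma f\|_{L^p} \le C \|f\|_{L^p}$.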
Industrial Engineering & Management Building (E2-1), Seminar Room 2216
ACM Seminars
Namwoo Kang (KAIST)
Generative AI-based Product Design and Development
Industrial Engineering & Management Building (E2-1), Seminar Room 2216
ACM Seminars
"어떻게 하면 더 좋은 제품을 더 빠르게 개발할 수 있을까?"라는 문제는 모든 제조업이 안고 있는 숙제입니다. 최근 DX를 통해 많은 데이터들이 디지털화되고, AI의 급격한 발전을 통해 제품개발프로세스를 혁신하려는 시도가 일어나고 있습니다. 과거의 시뮬레이션 기반 설계에서 AI 기반 설계로의 패러다임 전환을 통해 제품개발 기간을 단축함과 동시에 제품의 품질을 향상시킬 수 있습니다. 본 세미나는 딥러닝을 통해 제품 설계안을 생성/탐색/예측/최적화/추천할 수 있는 생성형 AI 기반의 설계 프로세스(Deep Generative Design)를 소개하고, 모빌리티를 비롯한 제조 산업에 적용된 다양한 사례들을 소개합니다.
In this talk, we discuss the Neural Tangent Kernel (NTK). The NTK is closely related to the dynamics of a neural network during training via gradient flow (or gradient descent). However, since the NTK is random at initialization and varies during training, it is quite delicate to understand the dynamics of the neural network. In relation to this issue, we introduce an interesting result: in the infinite-width limit, the NTK converges to a deterministic kernel at initialization and remains constant during training. We provide a brief proof of the result for the simplest case.
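For reference, for a network f(x; theta) the NTK and the resulting gradient-flow dynamics of the squared loss over training data (x_i, y_i) take the standard form (notation supplied here, not taken from the abstract):

$$\Theta_{\theta}(x, x') \;=\; \bigl\langle \nabla_\theta f(x;\theta),\, \nabla_\theta f(x';\theta) \bigr\rangle, \qquad \frac{d}{dt} f(x;\theta_t) \;=\; -\sum_{i} \Theta_{\theta_t}(x, x_i)\,\bigl(f(x_i;\theta_t) - y_i\bigr),$$

so that if Theta stays constant in t, as in the infinite-width limit, the training dynamics become linear in function space and can be solved explicitly.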
This lecture will be given in three sessions: September 14, October 4, and October 5.
This lecture is given in three sessions (September 14, October 4, and October 5); this session will mainly review the September 14 material.