Department Seminars & Colloquia
Mr. Saqib Mushtaq Shah, a KAIX visiting graduate student from ISI Bangalore who will stay at KAIST for 8 weeks, is going to give a series of weekly talks on Milnor K-theory, starting from the basics. This is part of his KAIX summer internship work.
It is well known that any closed, orientable 3-manifold can be obtained by performing Dehn surgery on a link in S^3. One of the most prominent problems in 3-manifold topology is to list all the lens spaces that can be obtained by Dehn surgery along a knot in S^3, which was solved by Greene. A natural generalization of this problem is to list all the lens spaces that can be obtained by Dehn surgery from other lens spaces. Surgeries between lens spaces are also motivated by DNA topology. In this talk, we will discuss distance one surgeries between lens spaces L(n, 1) with n ≥ 5 odd and lens spaces L(s, 1) for nonzero s, and the corresponding band surgeries from T(2, n) to T(2, s), using our Heegaard Floer d-invariant surgery formula, which is deduced from the Heegaard Floer mapping cone formula. We give an almost complete classification of the above surgeries.
This is a one-day workshop with young geometric topologists. Follow the link below for more details.
https://sites.google.com/site/hrbaik85/workshop-and-conferences-at-kaist/yggt-at-kaist?authuser=0
In this talk we present homogeneous nonprime ideals that can be used to produce, via an unprojection process, homogeneous prime ideals of high Castelnuovo-Mumford regularity. We thus provide counterexamples to the Eisenbud-Goto regularity conjecture other than those given by the Rees-like algebra method of J. McCullough and I. Peeva. Their construction was inspired by G. Caviglia (2004), J. Beder et al. (2011), and K. Borna-A. Mohajer (2015, arXiv).
Let G be a numerical semigroup. We prove an upper bound for the Betti numbers of the semigroup ring of G which depends only on the width of G, that is, the difference between the largest and the smallest generators of G. In this way, we make progress towards a conjecture of Herzog and Stamate. Moreover, for 4-generated numerical semigroups, the first significant open case, we prove the Herzog-Stamate bound for all but finitely many values of the width.
This is a joint work with A. Moscariello and A. Sammartano.
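As a quick illustration of the notion of width used above (an example not taken from the talk): the 4-generated numerical semigroup
\[
G = \langle 5, 7, 9, 11 \rangle
\]
has smallest generator 5 and largest generator 11, hence width 11 - 5 = 6.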
A major trajectory in the development of statistical learning has been the expansion of mathematical spaces underlying observed data, extending from numbers to vectors, functions, and beyond. This expansion has fostered significant theoretical and computational breakthroughs. One notable direction involves analyzing sets where each set becomes an object of interest for inference. This perspective accommodates the intrinsic and non-ignorable heterogeneity inherent in data-generating processes. Among various theoretical frameworks to analyze sets, a principled approach is viewing a set as an empirical measure. In this talk, I revisit the concept of the median - a robust alternative to the mean as a centroid - and introduce a novel extension of this concept within the space of probability measures under the framework of optimal transport. I will present theoretical results and a generic computational pipeline that leverages existing algorithmic developments in the field, with examples. Furthermore, the potential benefits of this novel approach for scalable inference and scientific discovery will be explored.
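A minimal sketch of the "median of measures" idea in the simplest setting, assuming hypothetical one-dimensional data sets and using the medoid (the member measure minimizing the total Wasserstein cost to the others) as a stand-in for the Wasserstein median discussed in the talk; this is an illustration only, not the speaker's algorithm.

# Medoid of a collection of empirical measures under 1-D Wasserstein distance.
# The data sets below are hypothetical.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# Ten hypothetical data sets, each viewed as an empirical measure on the real line.
sets = [rng.normal(loc=rng.uniform(-1, 1), scale=1.0, size=200) for _ in range(10)]

# Pairwise 1-D Wasserstein (earth mover's) distances between the empirical measures.
n = len(sets)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = wasserstein_distance(sets[i], sets[j])

# Medoid: the empirical measure with the smallest total transport cost to the rest,
# a robust centroid in the spirit of the median.
median_idx = int(np.argmin(D.sum(axis=1)))
print("Wasserstein medoid is set", median_idx)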
This is part of an informal seminar series to be given by Mr. Jaehong Kim, who has been studying the book "Hodge theory and Complex Algebraic Geometry Vol 1 by Claire Voisin" for a few months. There will be 6-8 seminars during Spring 2024, and it will summarize about 70-80% of the book.
Pressure functions are key ideas in the thermodynamic formalism of dynamical systems. McMullen used the convexity of the pressure function to construct a metric, called a pressure metric, on the Teichmuller space and showed that it is a constant multiple of the Weil-Petersson metric. In the spirit of Sullivan's dictionary, McMullen applied the same idea to define a metric on the space of Blaschke products.
In this talk, we will discuss Bridgeman-Taylor and McMullen's earlier works on the pressure metric, as well as recent developments in more generic settings. Then we will talk about pressure metrics on hyperbolic components in complex dynamics, as well as unsolved problems.
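For reference, a standard formulation of the pressure function mentioned above is the variational principle: for a dynamical system f and a potential \varphi,
\[
P(\varphi) \;=\; \sup_{\mu \in \mathcal{M}(f)} \Big( h_\mu(f) + \int \varphi \, d\mu \Big),
\]
where \mathcal{M}(f) is the set of f-invariant Borel probability measures and h_\mu(f) is the measure-theoretic entropy. In particular, P is a supremum of affine functionals of \varphi and hence convex, which is the convexity exploited in McMullen's construction.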
When does a topological branched self-covering of the sphere enjoy a holomorphic structure? William Thurston answered this question in the 1980s by using a holomorphic self-map of the Teichmuller space known as Thurston's pullback map. About 30 years later, Dylan Thurston took a different approach to the same question, reducing it to a one-dimensional dynamical problem. We will discuss both characterizations and their applications to various questions in complex dynamics.
Deep learning has shown remarkable success in various fields, and efforts continue to develop investment methodologies using deep learning in the financial sector. Despite numerous successes, the revolutionary results seen in areas such as image processing and natural language processing have not yet appeared in finance. There are two reasons why deep learning has not led to disruptive change in finance. First, the scarcity of financial data leads to overfitting in deep learning models, so excellent backtesting results do not translate into actual outcomes. Second, there is a lack of methodological development for optimizing dynamic control models under general conditions. Therefore, I aim to overcome the first problem by artificially augmenting market data through an integration of Generative Adversarial Networks (GANs) and the Fama-French factor model, and to address the second problem by enabling optimal control even under complex conditions using policy-based reinforcement learning. The methods of this study have been shown to significantly outperform traditional linear financial factor models such as the CAPM and value-based approaches such as the HJB equation.
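A minimal, generic GAN training sketch for augmenting factor-return data, to illustrate the data-augmentation idea mentioned above. The factor dimension, network sizes, and synthetic "market" data are hypothetical, and the integration with the Fama-French factor model described in the talk is not reproduced here.

# Generic GAN for tabular factor-return data (illustrative sketch only).
import torch
import torch.nn as nn

torch.manual_seed(0)
n_factors, latent_dim = 5, 16

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_factors))
D = nn.Sequential(nn.Linear(n_factors, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Hypothetical stand-in for scarce historical factor returns.
real_returns = 0.02 * torch.randn(512, n_factors)

for step in range(2000):
    real = real_returns[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, latent_dim))

    # Discriminator: distinguish real factor returns from generated ones.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: produce samples the discriminator accepts as real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Generated samples can then augment the scarce historical data set.
augmented = G(torch.randn(1000, latent_dim)).detach()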
This talk presents a uniform framework for computational fluid dynamics in porous media based on finite element velocity and pressure spaces with minimal degrees of freedom. The velocity space consists of linear Lagrange polynomials enriched by a discontinuous, piecewise linear, and mean-zero vector function per element, while piecewise constant functions approximate the pressure. Since the fluid model in porous media can be seen as a combination of the Stokes and Darcy equations, different conformities of finite element spaces are required depending on the viscous parameters, making it challenging to develop a robust numerical solver that performs uniformly for all viscous parameters. Therefore, we propose a pressure-robust method by utilizing a velocity reconstruction operator and replacing the velocity functions with a reconstructed velocity. The robust method leads to error estimates independent of the pressure term and shows uniform performance for all viscous parameters, preserving minimal degrees of freedom. We prove well-posedness and error estimates for the robust method while comparing it with a standard method requiring an impractical mesh condition. We finally confirm the theoretical results through numerical experiments with two- and three-dimensional examples and compare the methods' performance to support the need for our robust method.
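As a point of reference (and not necessarily the exact scaling used in the talk), one standard Brinkman-type model combining the Stokes and Darcy regimes reads
\[
-\nu\,\Delta u \;+\; \sigma\,u \;+\; \nabla p \;=\; f, \qquad \nabla\cdot u = 0 \quad \text{in } \Omega,
\]
where \nu is the effective (Brinkman) viscosity and \sigma an inverse-permeability coefficient; taking \sigma \to 0 recovers the Stokes equations and \nu \to 0 the Darcy equations, which is why the required conformity of the finite element spaces depends on the viscous parameters.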
Deep learning has emerged as a dominant approach in machine learning and has achieved remarkable success in various domains such as computer vision and natural language processing. Its influence has progressively extended to numerous research areas within the fields of science and engineering. In this presentation, I will outline our work on the design and training of a foundation model, named PDEformer, which aims to serve as a flexible and efficient solver across a spectrum of parametric PDEs. PDEformer is specifically engineered to facilitate a range of downstream tasks, including but not limited to parameter estimation and system identification. Its design is tailored to accommodate applications necessitating repetitive solving of PDEs, where a balance between efficiency and accuracy is sought.
This is a joint workshop with the Serabol program.
We discuss how optimal transport, a theory for matching different distributions in a cost-effective way, is applied to the supercooled Stefan problem, a free boundary problem that describes the interface dynamics of supercooled water freezing into ice. This problem exhibits highly unstable behaviour; its mathematical study has been limited mostly to one space dimension and remains widely open in the multi-dimensional case. We consider a version of the optimal transport problem that involves stopping Brownian motion, whose solution is then translated into a solution to the supercooled Stefan problem in general dimensions.
In this talk, we consider the Boltzmann equation in general 3D toroidal domains with a specular reflection boundary condition. Obtaining low-regularity solutions of the Boltzmann equation in general non-convex domains is a well-known open problem because of grazing trajectories, such as inflection grazing, so it is important to analyze the trajectories that cause grazing. We will provide a new analysis to handle these trajectories in general 3D toroidal domains.
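For reference, the setting described above concerns, in standard notation, the equation
\[
\partial_t F + v\cdot\nabla_x F \;=\; Q(F,F), \qquad (x,v)\in \Omega\times\mathbb{R}^3,
\]
posed on a 3D toroidal domain \Omega, together with the specular reflection boundary condition
\[
F(t,x,v) \;=\; F\big(t,x,\,v - 2\,(n(x)\cdot v)\,n(x)\big), \qquad x\in\partial\Omega,
\]
where n(x) is the outward unit normal and Q is the Boltzmann collision operator.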
Post-critically finite (PCF) rational maps are a fascinating class of dynamical systems with rich mathematical structures. In this minicourse, we explore the interplay between topology, geometry, and dynamics in the study of PCF rational maps.
[Lecture 3: Geometry of PCF rational maps]
The topological models for PCF rational maps we discuss define canonical quasi-symmetric classes of metrics on their Julia sets. We investigate the conformal dimensions of Julia sets, which measure their geometric complexity and provide insights into the underlying dynamics. Through this exploration, we uncover the intricate relationship between the topology, geometry, and dynamics of PCF rational maps.
The finite quotient groups of étale fundamental groups of algebraic curves in positive characteristic are precisely determined, but without explicit construction of quotient maps, by well-known results of Raynaud, Harbater and Pop, previously known as Abhyankar's conjecture. Katz, Rojas León and Tiep have been studying the constructive side of this problem using certain "easy to remember" local systems. In this talk, I will discuss the main results and methods of this project in the case of a specific type of local systems called hypergeometric sheaves.
Post-critically finite (PCF) rational maps are a fascinating class of dynamical systems with rich mathematical structures. In this minicourse, we explore the interplay between topology, geometry, and dynamics in the study of PCF rational maps.
[Lecture 2: Topology of PCF rational maps]
W. Thurston's and D. Thurston's characterizations provide powerful frameworks for understanding the topological dynamics of rational maps. We delve into these characterizations, exploring their implications for the dynamics of PCF rational maps. Additionally, we discuss finite subdivision rules and topological surgeries, such as matings, tunings, and decompositions, as tools for constructing and analyzing PCF rational maps in topological ways.
Quantum embedding is a fundamental prerequisite for applying quantum machine learning techniques to classical data, and has substantial impacts on performance outcomes. In this study, we present Neural Quantum Embedding (NQE), a method that efficiently optimizes quantum embedding beyond the limitations of positive and trace-preserving maps by leveraging classical deep learning techniques. NQE enhances the lower bound of the empirical risk, leading to substantial improvements in classification performance. Moreover, NQE improves robustness against noise. To validate the effectiveness of NQE, we conduct experiments on IBM quantum devices for image data classification, resulting in a remarkable accuracy enhancement. In addition, numerical analyses highlight that NQE simultaneously improves the trainability and generalization performance of quantum neural networks, as well as of the quantum kernel method.
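A toy, fully classical simulation illustrating the general idea of trainable quantum embeddings: a small neural network maps each input to a single-qubit rotation angle and is trained so that the two class ensembles of embedded states become as distinguishable as possible (large trace distance). This is only a schematic sketch of the concept under hypothetical data, not the NQE algorithm from the talk.

# Classical network optimizing a single-qubit angle embedding (toy illustration).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical two-class 1-D data.
x0 = torch.randn(200, 1) - 0.5
x1 = torch.randn(200, 1) + 0.5

net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

def class_density(x):
    # Average density matrix of R_y(theta)|0> embeddings over one class.
    theta = net(x).squeeze(-1)
    psi = torch.stack([torch.cos(theta / 2), torch.sin(theta / 2)], dim=-1)  # (N, 2)
    rho = psi.unsqueeze(-1) * psi.unsqueeze(-2)                              # (N, 2, 2)
    return rho.mean(dim=0)

for step in range(300):
    delta = class_density(x0) - class_density(x1)
    trace_dist = 0.5 * torch.linalg.eigvalsh(delta).abs().sum()
    loss = -trace_dist            # maximize distinguishability of the two ensembles
    opt.zero_grad(); loss.backward(); opt.step()

print("final ensemble trace distance:", trace_dist.item())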
We provide general upper and lower bounds for the Gromov–Hausdorff distance d_GH(S^m, S^n) between spheres S^m and S^n (endowed with the round metric) for 0 <= m < n <= \infty. Some of these lower bounds are based on certain topological ideas related to the Borsuk–Ulam theorem. Via explicit constructions of (optimal) correspondences, we prove that our lower bounds are tight in the cases of d_GH(S^0, S^n), d_GH(S^m, S^\infty), d_GH(S^1, S^2), d_GH(S^1, S^3), and d_GH(S^2, S^3). We also formulate a number of open questions.
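For reference, the Gromov–Hausdorff distance admits the standard reformulation via correspondences alluded to above:
\[
d_{GH}(X,Y) \;=\; \tfrac{1}{2}\,\inf_{R}\,\mathrm{dis}(R), \qquad
\mathrm{dis}(R) \;=\; \sup\big\{\, |d_X(x,x') - d_Y(y,y')| : (x,y),\,(x',y') \in R \,\big\},
\]
where R ranges over correspondences between X and Y, i.e. subsets of X \times Y whose projections onto X and onto Y are both surjective.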
This is part of an informal seminar series to be given by Mr. Jaehong Kim, who has been studying the book "Hodge theory and Complex Algebraic Geometry Vol 1 by Claire Voisin" for a few months. There will be 6-8 seminars during Spring 2024, and it will summarize about 70-80% of the book.
Reinforcement learning (RL) has become one of the most central problems in machine learning, showcasing remarkable success in recommendation systems, robotics, and superhuman-level game play. Yet, existing literature predominantly focuses on (almost) fully observable environments, overlooking the complexities of real-world scenarios where crucial information remains hidden.
In this talk, we consider reinforcement learning in partially observable systems through the proposed framework of the Latent Markov Decision Process (LMDP). In LMDPs, an MDP is randomly drawn from a set of possible MDPs at the beginning of the interaction, but the context -- the latent factors identifying the chosen MDP -- is not revealed to the agent. This opacity poses new challenges for decision-making, particularly in scenarios like recommendation systems without sensitive user data, or medical treatments for undiagnosed illnesses. Despite the significant relevance of LMDPs to real-world problems, existing theories rely on restrictive separation assumptions -- an unrealistic constraint in practical applications. We present a series of new results addressing this gap: from leveraging higher-order information to develop sample-efficient RL algorithms, to establishing lower bounds and improved results under more realistic assumptions within Latent MDPs.
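A minimal sketch of the Latent MDP interaction protocol described above: at the start of each episode one MDP is drawn from a finite set, and the agent observes states and rewards but never the latent index. The tabular sizes and random dynamics below are hypothetical.

# Latent MDP environment: the context m is drawn per episode and hidden from the agent.
import numpy as np

class LatentMDP:
    def __init__(self, n_contexts=3, n_states=5, n_actions=2, horizon=10, seed=0):
        rng = np.random.default_rng(seed)
        self.horizon = horizon
        # One transition kernel P[m, s, a, s'] and reward table R[m, s, a] per latent context.
        self.P = rng.dirichlet(np.ones(n_states), size=(n_contexts, n_states, n_actions))
        self.R = rng.uniform(size=(n_contexts, n_states, n_actions))
        self.rng = rng
        self.n_contexts = n_contexts

    def reset(self):
        self.m = self.rng.integers(self.n_contexts)   # latent context, never revealed
        self.t, self.s = 0, 0
        return self.s                                  # only the state is observed

    def step(self, a):
        r = self.R[self.m, self.s, a]
        self.s = self.rng.choice(self.P.shape[-1], p=self.P[self.m, self.s, a])
        self.t += 1
        return self.s, r, self.t >= self.horizon

env = LatentMDP()
s, done = env.reset(), False
while not done:
    s, r, done = env.step(np.random.randint(2))        # a random policy, for illustration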
Post-critically finite (PCF) rational maps are a fascinating class of dynamical systems with rich mathematical structures. In this minicourse, we explore the interplay between topology, geometry, and dynamics in the study of PCF rational maps.
[Lecture 1: What are PCF rational maps?]
We begin by introducing PCF rational maps, highlighting their significance in complex dynamics.
1. Understanding data analysis work (김준범) - An introduction to the role of a data analyst
2. Trends in large language models (김정섭) - Trends in the large language models, from GPT-3 to Llama-3, that have already become deeply embedded in our lives
3. From data analyst to public service, and plans for the future (심규석) -
An overview of data analysis and AI modeling work at Samsung Fire & Marine Insurance, and of planning and managing data analysis projects and supporting civil servants' data analysis capabilities at the Ministry of the Interior and Safety, together with the motivation for applying to each organization, how to apply, and what to prepare
The qualitative theory of dynamical systems mainly provides a mathematical framework for analyzing the long-time behavior of systems without necessarily finding solutions for the given ODEs. The theory of dynamical systems could be related to deep learning problems from various perspectives such as approximation, optimization, generalization, and explainability.
In this talk, we first introduce the qualitative theory of dynamical systems. Then, we present numerical results as the application of the qualitative theory of dynamical systems to deep learning problems.
This is part of an informal seminar series to be given by Mr. Jaehong Kim, who has been studying the book "Hodge theory and Complex Algebraic Geometry Vol 1 by Claire Voisin" for a few months. There will be 6-8 seminars during Spring 2024, and it will summarize about 70-80% of the book.
EO strata are subvarieties in the moduli space of g-dimensional abelian varieties in characteristic p which classify points with a given isomorphism type of p-torsion subgroup.
We are interested in how the automorphism groups of points vary in supersingular EO strata. We show that when g is even and p > 3, there is an open dense subset of the maximal supersingular EO stratum in which every point has automorphism group {±1}, and we prove Oort's conjecture in this case.
This is joint work in progress with Valentijn Karemaker.
This lecture explores the topics and areas that have guided my research in computational mathematics and machine learning in recent years. Numerical methods in computational science are essential for comprehending real-world phenomena, and deep neural networks have achieved state-of-the-art results in a range of fields. The rapid expansion and outstanding success of deep learning and scientific computing have led to their application across multiple disciplines, ranging from fluid dynamics to materials science. In this lecture, I will focus on bridging machine learning with applied mathematics, specifically discussing topics such as scientific machine learning, numerical PDEs, and mathematical approaches to machine learning, including generative models and adversarial examples.
The Tomas-Stein inequality is a fundamental inequality in Fourier analysis. It bounds the L^4 norm of the Fourier transform of a density against the sphere's surface measure in terms of its L^2 norm, and it holds because the sphere has positive Gaussian curvature. In this talk we will explain the extremizer problem for this inequality and describe the progress that has been made on it.
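For reference, the inequality in question can be written (in its adjoint, or extension, form on the sphere S^2 \subset \mathbb{R}^3) as
\[
\big\| \widehat{f\,d\sigma} \big\|_{L^4(\mathbb{R}^3)} \;\le\; C\,\| f \|_{L^2(S^2,\,d\sigma)},
\]
where \sigma denotes the surface measure on S^2. An extremizer is a nonzero f \in L^2(S^2) attaining the optimal constant C.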
We introduce a general equivalence problems for geometric structures arising from minimal rational curves on uniruled complex projective manifolds. To study these problems, we need approaches fusing differential geometry and algebraic geometry. Among such geometric structures, those associated to homogeneous manifolds are particularly accessible to differential-geometric methods of Cartan geometry. But even in these cases, only a few cases have been worked out so far. We review some recent developments.
Deep learning techniques are increasingly applied to scientific problems, where the precision of networks is crucial. Despite being deemed universal function approximators, neural networks in practice struggle to reduce prediction errors below O(10^{-5}), even with large network sizes and extended training iterations. To address this issue, we developed multi-stage neural networks, which divide the training process into stages, with each stage using a new network that is optimized to fit the residue from the previous stage. Across successive stages, the residue magnitudes decrease substantially and follow an inverse power-law relationship with the residue frequencies. The multi-stage neural networks effectively mitigate the spectral biases associated with regular neural networks, enabling them to capture the high-frequency features of target functions. We demonstrate that the prediction error of multi-stage training, for both regression problems and physics-informed neural networks, can nearly reach the machine precision O(10^{-16}) of double-precision floating-point arithmetic within a finite number of iterations. Such levels of accuracy are rarely attainable using single neural networks alone.
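A minimal sketch of the staged residual-fitting idea described above: each stage trains a fresh network on the residue left by the previous stages, and the final prediction is the sum of all stages. The target function, network sizes, and training schedule are hypothetical, and the rescaling tricks used in the actual multi-stage method are omitted.

# Multi-stage residual fitting for a simple regression target (illustrative sketch).
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-1, 1, 512).unsqueeze(-1)
y = torch.sin(6 * torch.pi * x)              # hypothetical regression target

def train_stage(target, steps=2000):
    net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                        nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        loss = ((net(x) - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return net

stages, residual = [], y.clone()
for k in range(3):                            # three stages for illustration
    net = train_stage(residual)
    with torch.no_grad():
        residual = residual - net(x)          # the next stage fits what is left over
    stages.append(net)
    print(f"stage {k}: max residual {residual.abs().max().item():.2e}")

# Final model: the sum of the stage networks.
with torch.no_grad():
    prediction = sum(net(x) for net in stages)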
Link prediction (LP), inferring the connectivity between nodes, is a significant research area in graph data, where a link represents essential information on relationships between nodes. Although graph neural network (GNN)-based models have achieved high performance in LP, understanding why they perform well is challenging because most comprise complex neural networks. We employ persistent homology (PH), a topological data analysis method that helps analyze the topological information of graphs, to explain the reasons for the high performance. We propose a novel method that employs PH for LP (PHLP), focusing on how the presence or absence of target links influences the overall topology. PHLP utilizes the angle hop subgraph and a new node labeling, called degree double radius node labeling (Degree DRNL), which distinguishes the information of graphs better than DRNL does. Using only a classifier, PHLP performs similarly to state-of-the-art (SOTA) models on most benchmark datasets. Incorporating the outputs calculated using PHLP into the existing GNN-based SOTA models improves performance across all benchmark datasets. To the best of our knowledge, PHLP is the first method to apply PH to LP without GNNs. The proposed approach, employing PH while not relying on neural networks, enables the identification of crucial factors for improving performance.
https://arxiv.org/abs/2404.15225
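A small illustrative sketch related to the abstract above (not the PHLP pipeline itself): extract an enclosing subgraph around a candidate link and assign simplified double-radius node labels from the two shortest-path distances to the target nodes, in the spirit of DRNL. The graph, the hop count, and the simplifications are assumptions; the Degree DRNL variant proposed in the talk is not reproduced here.

# Enclosing-subgraph extraction and a simplified double-radius node labeling.
import networkx as nx

def enclosing_subgraph(G, u, v, k=2):
    # Union of the k-hop neighborhoods of the two endpoints of the candidate link.
    nodes = set(nx.ego_graph(G, u, radius=k)) | set(nx.ego_graph(G, v, radius=k))
    return G.subgraph(nodes).copy()

def drnl_labels(sub, u, v):
    du = nx.single_source_shortest_path_length(sub, u)
    dv = nx.single_source_shortest_path_length(sub, v)
    labels = {}
    for n in sub.nodes:
        if n in (u, v):
            labels[n] = 1
        elif n in du and n in dv:
            dx, dy = du[n], dv[n]
            d = dx + dy
            labels[n] = 1 + min(dx, dy) + (d // 2) * (d // 2 + d % 2 - 1)
        else:
            labels[n] = 0          # unreachable from one of the target nodes
    return labels

G = nx.karate_club_graph()          # a standard toy graph, for illustration
sub = enclosing_subgraph(G, 0, 33, k=1)
print(drnl_labels(sub, 0, 33))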
I tell a personal story of how a mathematician working in complex algebraic geometry came to discover the relevance of Cartan geometry, a subject in differential geometry, to an old problem in algebraic geometry: the deformation of Grassmannians as projective manifolds, which originated from the work of Kodaira and Spencer. In my joint work with Ngaiming Mok, we used the theory of minimal rational curves to study such deformations, and this reduced the question to a problem in Cartan geometry.