Department Seminars and Colloquia
We discuss fractional weighted Sobolev spaces with degenerate weights and related weighted nonlocal integrodifferential equations. We provide embeddings and Poincaré inequalities for these spaces and show robust convergence as the parameter of fractional differentiability tends to $1$. Moreover, we prove local Hölder continuity and Harnack inequalities for solutions to the corresponding nonlocal equations. The regularity results naturally extend those for degenerate linear elliptic equations obtained by Fabes, Kenig, and Serapioni [Comm. Partial Differential Equations 7 (1982), no. 1, 77–116] to the nonlocal setting. This is joint work with Linus Behn, Lars Diening, and Julian Rolfes from Bielefeld.
This presentation considers Sobolev regularity for solutions to space-time non-local equations. The spatial non-local operators in this presentation are infinitesimal generators of Lévy processes. The relation between the fundamental solution and the transition density of the corresponding process that generates the operator allows one to obtain estimates of solutions in $L^p$-spaces. Several results will be presented together with their assumptions. The main ingredients are a representation of solutions, heat kernel estimates, and some properties of (singular) integral operators.
This lecture explores the mathematical foundations underlying neural network approximation, focusing on the development of rigorous theories that explain how and why neural networks approximate functions effectively. We cover key topics such as error estimation, convergence analysis, and the role of activation functions in enhancing network performance. Additionally, the lecture will demonstrate convergence analysis in the context of scientific machine learning, further bridging the gap between empirical success and theoretical understanding. Our goal is to provide deeper insight into the mechanisms driving neural network efficiency and reliability, and into their applications in scientific computing.
(E6-1) Room 1401
SAARC Seminar
Olivier Hénot (École Polytechnique)
Computer-assisted proofs in nonlinear analysis
The objective of the tutorial Computer-Assisted Proofs in Nonlinear Analysis is to introduce participants to fundamental concepts of a posteriori validation techniques. This mini-course will cover topics ranging from finite-dimensional problems, such as finding periodic orbits of maps, to infinite-dimensional problems, including solving the Cauchy problem, proving the existence of periodic orbits, and computing invariant manifolds of equilibria. Each session will last 2 hours: 1 hour of theory followed by 1 hour of hands-on practical applications. The practical exercises will focus on implementing computer-assisted proofs using the Julia programming language.
Please bring your laptop to the tutorial, and see the attached syllabus for more information (download: https://saarc.kaist.ac.kr/boards/view/seminars/32).
(E6-1) Room 1410
SAARC Seminar
Olivier Hénot (École Polytechnique)
Computer-assisted proofs in nonlinear analysis
The objective of the tutorial Computer-Assisted Proofs in Nonlinear Analysis is to introduce participants to fundamental concepts of a posteriori validation techniques. This mini-course will cover topics ranging from finite-dimensional problems, such as finding periodic orbits of maps, to infinite-dimensional problems, including solving the Cauchy problem, proving the existence of periodic orbits, and computing invariant manifolds of equilibria. Each session will last 2 hours: 1 hour of theory followed by 1 hour of hands-on practical applications. The practical exercises will focus on implementing computer-assisted proofs using the Julia programming language.
Please bring your laptop to the tutorial, and see the attached syllabus for more information (download: https://saarc.kaist.ac.kr/boards/view/seminars/32).
(E2) Room 1225
SAARC Seminar
Olivier Hénot (École Polytechnique)
Computer-assisted proofs in nonlinear analysis
The objective of the tutorial Computer-Assisted Proofs in Nonlinear Analysis is to introduce participants to fundamental concepts of a posteriori validation techniques. This mini-course will cover topics ranging from finite-dimensional problems, such as finding periodic orbits of maps, to infinite-dimensional problems, including solving the Cauchy problem, proving the existence of periodic orbits, and computing invariant manifolds of equilibria.
Each session will last 2 hours: 1 hour of theory followed by 1 hour of hands-on practical applications. The practical exercises will focus on implementing computer-assisted proofs using the Julia programming language.
Please bring your laptop to the tutorial, and see the attached syllabus for more information (download: https://saarc.kaist.ac.kr/boards/view/seminars/32).
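As a small taste of the a posteriori validation techniques the tutorial covers, the sketch below runs one interval Newton step to certify an enclosure of a root of $f(x) = x^2 - 2$. This is a simplified illustration in Python rather than the Julia used in the tutorial, and it ignores the directed rounding of floating-point operations that a genuine computer-assisted proof must control:

```python
class Interval:
    """Closed interval [lo, hi]. NOTE: a real computer-assisted proof must
    round lo downward and hi upward at every operation; this sketch uses
    ordinary floating point and is illustrative only."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0, "divisor interval contains 0"
        p = [self.lo / other.lo, self.lo / other.hi,
             self.hi / other.lo, self.hi / other.hi]
        return Interval(min(p), max(p))
    def subset(self, other):
        return other.lo <= self.lo and self.hi <= other.hi

def newton_step(f, df, X):
    """Interval Newton operator N(X) = m - f(m)/f'(X), m = midpoint of X.
    If N(X) lies inside X, then X contains a unique zero of f."""
    m = Interval(0.5 * (X.lo + X.hi))
    return m - f(m) / df(X)

f  = lambda x: x * x - Interval(2.0)   # f(x)  = x^2 - 2
df = lambda x: Interval(2.0) * x       # f'(x) = 2x

X = Interval(1.4, 1.43)                # candidate enclosure of sqrt(2)
N = newton_step(f, df, X)              # N inside X certifies a unique root in X
```

Here the computer does a finite, checkable computation (the inclusion N ⊂ X) whose success implies a genuine existence and uniqueness statement; this is the a posteriori pattern the tutorial develops for far harder infinite-dimensional problems.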
Wave turbulence refers to the statistical theory of weakly nonlinear dispersive waves. In the weakly turbulent regime of a system of dispersive waves, its statistics can be described via a coarse-grained dynamics governed by the kinetic wave equation. Remarkably, kinetic wave equations admit exact power-law solutions, called Kolmogorov-Zakharov (K-Z) spectra, which resemble the Kolmogorov spectrum of hydrodynamic turbulence and are often interpreted as a transient equilibrium between excitation and dissipation. In this talk, we will outline a local well-posedness result for the kinetic wave equation of a toy model for wave turbulence. The result includes well-posedness near K-Z spectra and demonstrates a surprising smoothing effect of the kinetic wave equation. The talk is based on joint work with Pierre Germain (ICL) and Katherine Zhiyuan Zhang (Northeastern).
Modern machine learning methods such as multi-layer neural networks often have millions of parameters achieving near-zero training errors. Nevertheless, they maintain strong generalization capabilities, challenging traditional statistical theories based on the uniform law of large numbers. Motivated by this phenomenon, we consider high-dimensional binary classification with linearly separable data. For Gaussian covariates, we characterize linear classification problems for which the minimum norm interpolating prediction rule, namely the max-margin classification, has near-optimal generalization error.
In the second part of the talk, we consider max-margin classification with non-Gaussian covariates. In particular, we leverage universality arguments to characterize the generalization error of the non-linear random features model, a two-layer neural network with random first-layer weights. In the wide-network limit, where the number of neurons tends to infinity, we show how non-linear max-margin classification with random features collapses to a linear classifier with a soft-margin objective.
We study large random matrices with i.i.d. entries conditioned to have prescribed row and column sums (margins). This problem has rich connections to relative entropy minimization, the Schrödinger bridge, the enumeration of contingency tables, and random graphs with given degree sequences. Such a margin-constrained random matrix turns out to be sharply concentrated around a certain deterministic matrix, which we call the "typical table". Typical tables have dual characterizations: (1) the expectation of the random matrix ensemble with minimum relative entropy from the base model constrained to have the expected target margins, and (2) the expectation of the maximum likelihood model obtained by rank-one exponential tilting of the base model. The structure of the typical table is dictated by two dual variables, which give the maximum likelihood estimates of the tilting parameters. Based on these results, for a sequence of "tame" margins that converges in $L^{1}$ to a limiting continuum margin as the size of the matrix diverges, we show that the sequence of margin-constrained random matrices converges in cut norm to a limiting kernel, which is the $L^{2}$-limit of the corresponding rescaled typical tables. The rate of convergence is controlled by how fast the margins converge in $L^{1}$. We also propose a Sinkhorn-type alternating minimization algorithm for computing typical tables, which specializes to the classical Sinkhorn algorithm for the Poisson base measure. We derive several new results for random contingency tables from our general framework.
This talk is based on joint work with Sumit Mukherjee (Columbia).
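The classical Sinkhorn algorithm mentioned in the abstract alternately rescales the rows and columns of a positive matrix until the prescribed margins are met. A minimal sketch in plain Python (illustrative only, not the margin-constrained typical-table computation of the talk itself):

```python
def sinkhorn(A, row_sums, col_sums, n_iters=500):
    """Alternately rescale rows and columns of a positive matrix A so that
    its row/column sums approach the prescribed margins.
    Assumes all entries of A are positive and sum(row_sums) == sum(col_sums)."""
    A = [row[:] for row in A]  # work on a copy
    m, n = len(A), len(A[0])
    for _ in range(n_iters):
        for i in range(m):                       # scale row i to match row_sums[i]
            s = sum(A[i])
            A[i] = [a * row_sums[i] / s for a in A[i]]
        for j in range(n):                       # scale column j to match col_sums[j]
            s = sum(A[i][j] for i in range(m))
            for i in range(m):
                A[i][j] *= col_sums[j] / s
    return A

# Example: scale a positive 2x2 matrix to be doubly stochastic
# (all row and column margins equal to 1).
B = sinkhorn([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0], [1.0, 1.0])
```

Each half-step matches one set of margins exactly while perturbing the other, and for positive matrices the iteration converges to the unique rescaling with both margins; this is the matrix-scaling fixed point that the talk's alternating minimization generalizes beyond the Poisson base measure.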