# Department Seminars & Colloquia

In this talk, we discuss the Neural Tangent Kernel (NTK). The NTK is closely related to the dynamics of a neural network during training via gradient flow (or gradient descent). However, since the NTK is random at initialization and varies during training, the dynamics of the neural network are delicate to understand. In relation to this issue, we introduce an interesting result: in the infinite-width limit, the NTK converges to a deterministic kernel at initialization and remains constant during training. We provide a brief proof of the result for the simplest case.
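For context, the standard formulation (not quoted from the talk) is as follows: for a network $f(x;\theta)$ trained on data $\{(x_i, y_i)\}$ by gradient flow on the squared loss $\tfrac12\sum_j (f(x_j;\theta) - y_j)^2$, the NTK and the induced function-space dynamics are

```latex
\Theta_\theta(x, x') \;=\; \nabla_\theta f(x;\theta)^{\top} \nabla_\theta f(x';\theta),
\qquad
\frac{d}{dt} f(x_i;\theta_t) \;=\; -\sum_{j} \Theta_{\theta_t}(x_i, x_j)\,\bigl(f(x_j;\theta_t) - y_j\bigr).
```

When $\Theta$ is deterministic and constant in $t$, as in the infinite-width limit, the second equation becomes linear, so training reduces to kernel regression with kernel $\Theta$.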

This talk is part of a three-session series on September 14, October 4, and October 5; this session mainly reviews the September 14 material.

#### Generative AI-based Product Design and Development
Namwoo Kang (KAIST)
ACM Seminars
Industrial Engineering & Management Building (E2-1), Seminar Room 2216

"어떻게 하면 더 좋은 제품을 더 빠르게 개발할 수 있을까?"라는 문제는 모든 제조업이 안고 있는 숙제입니다. 최근 DX를 통해 많은 데이터들이 디지털화되고, AI의 급격한 발전을 통해 제품개발프로세스를 혁신하려는 시도가 일어나고 있습니다. 과거의 시뮬레이션 기반 설계에서 AI 기반 설계로의 패러다임 전환을 통해 제품개발 기간을 단축함과 동시에 제품의 품질을 향상시킬 수 있습니다. 본 세미나는 딥러닝을 통해 제품 설계안을 생성/탐색/예측/최적화/추천할 수 있는 생성형 AI 기반의 설계 프로세스(Deep Generative Design)를 소개하고, 모빌리티를 비롯한 제조 산업에 적용된 다양한 사례들을 소개합니다.

With the success of deep learning in many scientific and engineering applications, neural network approximation methods have emerged as an active research area in numerical partial differential equations. However, these new methods still need further validation of their accuracy, stability, and efficiency before they can serve as alternatives to classical approximation methods. In this talk, we first introduce neural network approximation methods for partial differential equations, where a neural network function approximates the PDE (Partial Differential Equation) solution and its parameters are optimized to minimize a cost function derived from the differential equation. We then present the behavior of the approximation error and the optimization error in the neural network approximate solution.

To reduce the approximation error, a neural network with a larger number of parameters is often employed, but when optimizing that many parameters the optimization error usually pollutes the solution accuracy. In addition, gradient-based parameter optimization requires computing the cost function gradient over a tremendous number of epochs, which makes a neural network solution very expensive to obtain. To deal with these problems, a partitioned neural network function can be formed to approximate the PDE solution: localized neural network functions are combined into a global neural network solution, and the parameters of each local network are optimized to minimize the corresponding cost function. To further enhance training efficiency, iterative algorithms for the partitioned neural network function can be developed.

We finally discuss the potential of this new approach to enhance the accuracy, stability, and efficiency of neural network solutions by utilizing classical domain decomposition algorithms and their convergence theory. Some interesting numerical results are presented to show the performance of the partitioned neural network approximation and the iterative algorithms.
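As a concrete toy illustration of the cost-minimization formulation above, here is a minimal sketch in JAX, assuming the model problem $-u''(x) = f(x)$ on $(0,1)$ with $u(0) = u(1) = 0$; the single-network setup and all names are illustrative, not the partitioned method or the iterative algorithms from the talk.

```python
# Toy PDE residual minimization with a single network for -u'' = f on (0,1).
import jax
import jax.numpy as jnp

def init_params(key, sizes=(1, 32, 32, 1)):
    """Random MLP parameters: a (weights, biases) pair per layer."""
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def u(params, x):
    """Network approximation of the solution at a scalar point x.
    The factor x*(1-x) enforces the boundary conditions exactly."""
    h = jnp.atleast_1d(x)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return x * (1.0 - x) * (h @ W + b)[0]

def cost(params, xs):
    """Mean squared PDE residual -u'' - f over collocation points xs."""
    f = lambda x: jnp.pi ** 2 * jnp.sin(jnp.pi * x)   # exact solution: sin(pi x)
    u_xx = jax.vmap(jax.grad(jax.grad(u, argnums=1), argnums=1), (None, 0))
    return jnp.mean((-u_xx(params, xs) - f(xs)) ** 2)

key = jax.random.PRNGKey(0)
params = init_params(key)
xs = jnp.linspace(0.0, 1.0, 64)      # collocation points
lr = 1e-3

@jax.jit
def step(p):
    # One step of plain gradient descent on the residual cost.
    g = jax.grad(cost)(p, xs)
    return jax.tree_util.tree_map(lambda w, gw: w - lr * gw, p, g)

for _ in range(2000):
    params = step(params)
print(cost(params, xs))              # residual should have decreased
```

The factor $x(1-x)$ hard-codes the boundary conditions so the cost contains only the interior residual; the partitioned approach of the talk would instead split $(0,1)$ into subdomains, each with its own local network.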

The Gauss-Bonnet theorem implies that the two-dimensional torus does not admit a metric of nonnegative Gaussian curvature unless it is flat, and that the two-dimensional sphere does not admit a metric whose Gaussian curvature is bounded below by one and which is bounded below by the standard round metric, unless it is the round metric itself.
Gromov proposed a series of conjectures generalizing the Gauss-Bonnet theorem in his four lectures on scalar curvature. I will report on my work with Gaoming Wang (now at Tsinghua) on Gromov's dihedral rigidity conjecture in hyperbolic 3-space and on scalar curvature comparison for rotationally symmetric convex bodies with some simple singularities.
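To spell out the torus case (a standard argument, added for context): by the Gauss-Bonnet theorem,

```latex
\int_{T^2} K \, dA \;=\; 2\pi \,\chi(T^2) \;=\; 0,
```

so $K \ge 0$ forces $K \equiv 0$, i.e. the metric is flat. For the sphere, $\int_{S^2} K \, dA = 4\pi$: the bound $K \ge 1$ forces $\operatorname{Area} \le 4\pi$, while $g \ge g_{\mathrm{round}}$ forces $\operatorname{Area} \ge 4\pi$, and the equality case pins down the round metric.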

In the analysis of singularities, uniqueness of limits often arises as an important question: that is, whether the geometry depends on the scales one takes to approach the singularity. In his seminal work, Simon demonstrated that Lojasiewicz inequalities, originally known in real algebraic geometry in finite dimensions, can be applied to show uniqueness of limits in geometric analysis in infinite dimensional settings. We will discuss some instances of this very successful technique and its applications.
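For reference, the inequality at the heart of Simon's argument (a standard statement, not quoted from the abstract) reads

```latex
|E(u) - E(u_\infty)|^{1-\theta} \;\le\; C \,\|\nabla E(u)\|, \qquad \theta \in (0, \tfrac12],
```

for $u$ near a critical point $u_\infty$ of an analytic energy $E$; integrating it along a gradient flow shows the trajectory has finite length and hence converges to a single limit, independent of the scales chosen.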