Friday, October 6, 2023

2023-10-06 / 16:00 ~ 17:00
Department Seminar/Colloquium - Algebraic Geometry
by ()

2023-10-13 / 11:00 ~ 12:00
Department Seminar/Colloquium - Applied and Computational Mathematics Seminar
by 김혜현 (Kyung Hee University)
With the success of deep learning technologies in many scientific and engineering applications, neural network approximation methods have emerged as an active research area in numerical partial differential equations. However, these new approximation methods still need further validation of their accuracy, stability, and efficiency before they can serve as alternatives to classical approximation methods.

In this talk, we first introduce neural network approximation methods for partial differential equations, where a neural network function is introduced to approximate the PDE (partial differential equation) solution and its parameters are then optimized to minimize a cost function derived from the differential equation. We then present the behavior of the approximation error and the optimization error in the neural network approximate solution. To reduce the approximation error, a neural network with a larger number of parameters is often employed, but optimizing such a large number of parameters usually lets the optimization error pollute the solution accuracy. In addition, gradient-based parameter optimization requires computing the cost-function gradient over a tremendous number of epochs, which makes obtaining a neural network solution very expensive.

To address these problems, a partitioned neural network function can be formed to approximate the PDE solution, where localized neural network functions are combined to form the global neural network solution. The parameters in each local neural network function are then optimized to minimize the corresponding cost function. To further enhance the efficiency of parameter training, iterative algorithms for the partitioned neural network function can be developed.

We finally discuss the possibilities of this new approach as a way of enhancing the accuracy, stability, and efficiency of the neural network solution by utilizing classical domain decomposition algorithms and their convergence theory. Some interesting numerical results are presented to show the performance of the partitioned neural network approximation and the iterative algorithms.
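As a rough illustration of the cost-function formulation described in the abstract (not the speaker's actual method), the sketch below evaluates the mean squared PDE residual for a one-hidden-layer tanh network on a toy problem u''(x) = f(x). The toy problem, network size, and all names here are assumptions made for illustration only.

```python
import numpy as np

# Toy problem: u''(x) = f(x) on [0, 1], with f chosen so that
# u(x) = sin(pi x) is an exact solution.
def f(x):
    return -np.pi**2 * np.sin(np.pi * x)

def network_u_and_u2(x, w, b, c):
    """One-hidden-layer tanh network u(x) = sum_j c_j tanh(w_j x + b_j),
    together with its second derivative in closed form."""
    z = np.outer(x, w) + b                 # shape (n_points, n_hidden)
    t = np.tanh(z)
    u = t @ c                              # u_theta(x)
    # d^2/dx^2 tanh(w x + b) = -2 w^2 t (1 - t^2)
    u2 = (-2.0 * t * (1.0 - t**2) * w**2) @ c
    return u, u2

def pde_cost(x, w, b, c):
    """Mean squared PDE residual over the collocation points:
    J(theta) = mean (u_theta''(x_i) - f(x_i))^2, the quantity a
    gradient-based optimizer would drive toward zero."""
    _, u2 = network_u_and_u2(x, w, b, c)
    return np.mean((u2 - f(x))**2)

rng = np.random.default_rng(0)
n_hidden = 8
w = rng.normal(size=n_hidden)
b = rng.normal(size=n_hidden)
c = rng.normal(size=n_hidden)
x = np.linspace(0.0, 1.0, 50)              # collocation points

print(pde_cost(x, w, b, c))                # nonnegative scalar J(theta)
```

In the partitioned variant sketched in the abstract, one such cost would be formed per subdomain's local network, and the local parameter sets optimized (possibly iteratively) rather than all parameters at once.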
2023-10-06 / 11:00 ~ 12:00
Department Seminar/Colloquium - Applied and Computational Mathematics Seminar: Generative AI-based Product Design
by 강남우 (KAIST)
"How can we develop better products faster?" is a challenge every manufacturing industry faces. Recently, with much data being digitized through digital transformation (DX) and with the rapid advances in AI, there have been attempts to innovate the product development process. A paradigm shift from simulation-based design to AI-based design can shorten the product development period while improving product quality. This seminar introduces Deep Generative Design, a generative-AI-based design process that can generate, explore, predict, optimize, and recommend product design candidates using deep learning, and presents various cases of its application in manufacturing industries, including mobility.
2023-10-10 / 16:30 ~ 17:30
IBS-KAIST Seminar - Discrete Mathematics: Effective bounds for induced size-Ramsey numbers of cycles
by Domagoj Bradač(ETH Zürich)
The k-color induced size-Ramsey number of a graph H is the smallest number of edges a (host) graph G can have such that for any k-coloring of its edges, there exists a monochromatic copy of H which is an induced subgraph of G. In 1995, in their seminal paper, Haxell, Kohayakawa and Łuczak showed that for cycles these numbers are linear for any constant number of colors, i.e., for some C=C(k), there is a graph with at most Cn edges such that every k-edge-coloring of it contains a monochromatic induced cycle of length n. The value of C comes from the use of the sparse regularity lemma and has a tower-type dependence on k. In this work, we obtain nearly optimal bounds for the required value of C. Joint work with Nemanja Draganić and Benny Sudakov.
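In one standard notation (an assumption here, not taken from the abstract itself), the quantity defined above can be written as follows, where the arrow denotes "every k-coloring of E(G) yields a monochromatic induced copy of H":

```latex
% k-color induced size-Ramsey number of H:
\[
  \hat{r}_{\mathrm{ind}}^{\,(k)}(H)
  \;=\; \min\bigl\{\, |E(G)| \;:\; G \xrightarrow{\ \mathrm{ind}\ } (H)_{k} \,\bigr\},
\]
% so the linearity result for cycles reads: for some constant C = C(k),
% \[
%   \hat{r}_{\mathrm{ind}}^{\,(k)}(C_n) \;\le\; C\,n .
% \]
```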
2023-10-12 / 14:30 ~ 15:45
Department Seminar/Colloquium - Other
by ()
(Information) "Introduction to Oriented Matroids" lecture series, Thursdays 14:30-15:45
2023-10-12 / 16:15 ~ 17:15
Department Seminar/Colloquium - Colloquium
by 이상혁 (Seoul National University)
Maximal functions of various forms have played crucial roles in harmonic analysis. Many outstanding open problems are related to the Lp boundedness (estimates) of the associated maximal functions. In this talk, we discuss the Lp boundedness of maximal functions given by averages over curves.
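For context, the prototypical object in this area is the Hardy-Littlewood maximal function, alongside a curve-averaging variant of the kind the abstract refers to; the notation below is standard but chosen here as an assumption, not taken from the talk:

```latex
% Hardy--Littlewood maximal function:
\[
  Mf(x) \;=\; \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\, dy,
\]
% and, for a curve \gamma, a maximal function given by averages over the curve:
\[
  \mathcal{M}_{\gamma} f(x) \;=\; \sup_{t>0} \frac{1}{t} \int_{0}^{t} |f(x - \gamma(s))|\, ds .
\]
% The Lp boundedness question asks for which p one has
% \| \mathcal{M}_{\gamma} f \|_{L^p} \lesssim \| f \|_{L^p}.
```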
2023-10-10 / 16:00 ~ 17:00
SAARC Seminar - Colloquium: Quantum-Classical Correspondence from an Analytic Point of View
by 홍영훈 (Department of Mathematics, Chung-Ang University)
In physics, Bohr’s correspondence principle asserts that the theory of quantum mechanics can be reduced to that of classical mechanics in the limit of large quantum numbers. This rather vague statement can be formulated explicitly in various ways. In this talk, focusing on an analytic point of view, we discuss the correspondence between basic inequalities and that between measures. Then, as an application, we present the convergence from quantum to kinetic white dwarfs.