Department Seminars and Colloquia



1st lecture: Understanding material microstructure

Abstract: Under temperature changes or loading, alloys can form beautiful patterns of microstructure that largely determine their macroscopic behaviour. These patterns result from phase transformations involving a change of shape of the underlying crystal lattice, together with the requirement that such changes in different parts of the crystal fit together geometrically. Similar considerations apply to plastic slip. The lecture will explain both the successes in explaining such microstructure mathematically and how resolving deep open questions of the calculus of variations could lead to a better understanding.

2nd lecture: Monodromy and nondegeneracy conditions in viscoelasticity

Abstract: For certain models of one-dimensional viscoelasticity, there are infinitely many equilibria representing phase mixtures. To prove convergence of solutions to a single equilibrium as time tends to infinity, one must impose a nondegeneracy condition on the constitutive equation for the stress; interesting recent work of Park and Pego has shown that this condition is in fact necessary. The talk will explain this and show how, in some cases, the nondegeneracy condition can be proved using the monodromy group of a holomorphic function. This is joint work with Inna Capdeboscq and Yasemin Şengül.
Host: 변재형     Language: English     2024-11-04 17:07:27
Semi-supervised domain adaptation (SSDA) is a statistical learning problem that involves learning from a small portion of labeled target data and a large portion of unlabeled target data, together with many labeled source data, to achieve strong predictive performance on the target domain. Since the source and target domains exhibit distribution shifts, the effectiveness of SSDA methods relies on assumptions that relate the source and target distributions. In this talk, we develop a theoretical framework based on structural causal models to analyze and compare the performance of SSDA methods. We introduce fine-tuning algorithms under various assumptions about the relationship between source and target distributions and show how these algorithms enable models trained on source and unlabeled target data to perform well on the target domain with low target sample complexity. When such relationships are unknown, as is often the case in practice, we propose the Multi-Start Fine-Tuning (MSFT) algorithm, which selects the best-performing model from fine-tuning with multiple initializations. Our analysis shows that MSFT achieves optimal target prediction performance with significantly fewer labeled target samples compared to target-only approaches, demonstrating its effectiveness in scenarios with limited target labels.
Host: 이지운     Contact: saarc (042-350-8117)     Language: TBD     2024-09-06 13:40:49
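The selection step of Multi-Start Fine-Tuning described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: a linear model and plain gradient descent stand in for a source-pretrained model and fine-tuning, and all names (`fine_tune`, `msft`, the toy data) are assumptions made for the example.

```python
# Hypothetical sketch of Multi-Start Fine-Tuning (MSFT): fine-tune from
# several initializations and keep the model that does best on the few
# labeled target samples. Not the authors' code; illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def fine_tune(w_init, X, y, lr=0.1, steps=500):
    """Gradient descent on squared loss for a linear model (a stand-in
    for fine-tuning a pretrained model on labeled target data)."""
    w = w_init.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def msft(inits, X_train, y_train, X_val, y_val):
    """Fine-tune from each initialization and select the candidate with
    the lowest loss on held-out labeled target samples."""
    best_w, best_loss = None, np.inf
    for w0 in inits:
        w = fine_tune(w0, X_train, y_train)
        loss = np.mean((X_val @ w - y_val) ** 2)
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w, best_loss

# Toy target domain with only a handful of labeled samples.
d = 5
w_true = rng.normal(size=d)
X = rng.normal(size=(20, d))
y = X @ w_true + 0.01 * rng.normal(size=20)
X_tr, y_tr, X_val, y_val = X[:12], y[:12], X[12:], y[12:]

# In the paper's setting, each initialization would come from a model
# trained under a different assumed source-target relationship; here
# they are just random starting points.
inits = [rng.normal(size=d) for _ in range(3)]
w_hat, val_loss = msft(inits, X_tr, y_tr, X_val, y_val)
print(f"selected model validation loss: {val_loss:.4f}")
```

The key point the sketch captures is that model selection uses only the small labeled target set, so the number of target labels needed scales with the number of candidate starts rather than with full target-only training.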