Wednesday, December 4, 2024

2024-12-10 / 10:00 ~ 11:00
Department Seminar/Colloquium - PhD Dissertation Defense: Trajectory Analysis and Prediction in Video Surveillance Applications
by 권용진()

2024-12-04 / 15:00 ~ 16:30
Department Seminar/Colloquium - Topology Seminar:
by 최인혁()
Recently, Bowden-Hensel-Webb introduced the notion of the fine curve graph as an analogue of the classical curve graph. They used this to construct nontrivial quasi-morphisms (in fact, infinitely many independent ones) on Homeo_0(S). Their method crucially uses independent pseudo-Anosov conjugacy classes, whose existence follows from the WPD-ness of pseudo-Anosov mapping classes on the curve graph. Meanwhile, the WPD-ness of pseudo-Anosov maps on the fine curve graph is not achievable, as Homeo_0(S) is a simple group. In this talk, I will explain my ongoing work regarding an analogue of WPD-ness for point-pushing pseudo-Anosov maps on the fine curve graph. If time allows, I will explain how this is related to the construction of independent pseudo-Anosov conjugacy classes in Homeo_0(S).
2024-12-11 / 15:00 ~ 16:00
Department Seminar/Colloquium - PhD Dissertation Defense: On Arithmetic Properties of Period Polynomials of Modular Forms
by Hojin Kim()

2024-12-05 / 11:50 ~ 12:40
Graduate Student Seminar - Graduate Student Seminar:
by 이우주()
TBA
2024-12-10 / 16:00 ~ 17:00
SAARC Seminar - SAARC Seminar:
by 하우석()
Semi-supervised domain adaptation (SSDA) is a statistical learning problem that involves learning from a small portion of labeled target data and a large portion of unlabeled target data, together with abundant labeled source data, to achieve strong predictive performance on the target domain. Since the source and target domains exhibit distribution shifts, the effectiveness of SSDA methods relies on assumptions that relate the source and target distributions. In this talk, we develop a theoretical framework based on structural causal models to analyze and compare the performance of SSDA methods. We introduce fine-tuning algorithms under various assumptions about the relationship between source and target distributions and show how these algorithms enable models trained on source and unlabeled target data to perform well on the target domain with low target sample complexity. When such relationships are unknown, as is often the case in practice, we propose the Multi-Start Fine-Tuning (MSFT) algorithm, which selects the best-performing model from fine-tuning with multiple initializations. Our analysis shows that MSFT achieves optimal target prediction performance with significantly fewer labeled target samples compared to target-only approaches, demonstrating its effectiveness in scenarios with limited target labels.
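The MSFT idea in the abstract — fine-tune from several initializations, then keep the model that does best on the few labeled target samples — can be sketched as follows. This is a minimal toy illustration, not the speaker's implementation: the 1-D linear model, the data, and the names `finetune` and `msft` are all assumptions made for the sake of the example.

```python
def finetune(theta0, data, lr=0.1, steps=100):
    """Gradient descent on squared error for a toy 1-D linear model y = theta * x."""
    theta = theta0
    for _ in range(steps):
        grad = sum(2.0 * (theta * x - y) * x for x, y in data) / len(data)
        theta -= lr * grad
    return theta

def msft(inits, labeled_target, finetune_fn):
    """Multi-Start Fine-Tuning (toy sketch): fine-tune from each initialization,
    then select the candidate with the lowest loss on the small labeled target set."""
    candidates = [finetune_fn(theta0) for theta0 in inits]

    def target_loss(theta):
        return sum((theta * x - y) ** 2 for x, y in labeled_target) / len(labeled_target)

    return min(candidates, key=target_loss)

# The initializations stand in for models pre-trained under different assumed
# source-target relationships; selection uses only the few labeled target samples.
labeled_target = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # target relation y = 2x
best = msft([0.0, 5.0, -3.0], labeled_target,
            lambda t: finetune(t, labeled_target))
```

In the actual SSDA setting each initialization would come from training on source and unlabeled target data under a different structural assumption; the point of MSFT is that the final selection step needs only a small labeled target sample.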
2024-12-11 / 16:00 ~ 17:00
IBS-KAIST Seminar - IBS-KAIST Seminar:
by ()
TBA