Wednesday, May 17, 2023

2023-05-23 / 16:00 ~ 17:00
SAARC Seminar - SAARC Seminar
by 하우석
Domain adaptation (DA) is a statistical learning problem that arises when the distribution of the source data used to train a model differs from that of the target data used to test the model. While many DA algorithms have demonstrated considerable empirical success, the unavailability of target labels in DA makes it challenging to determine their effectiveness in new datasets without a theoretical basis. Therefore, it is essential to clarify the assumptions required for successful DA algorithms and quantify the corresponding guarantees. In this work, we focus on the assumption that conditionally invariant components (CICs) useful for prediction exist across the source and target data. Under this assumption, we demonstrate that CICs found via conditional invariant penalty (CIP) play three essential roles in providing guarantees for DA algorithms. First, we introduce a new CIC-based algorithm called importance-weighted conditional invariant penalty (IW-CIP), which has target risk guarantees beyond simple settings like covariate shift and label shift. Second, we show that CICs can be used to identify large discrepancies between source and target risks of other DA algorithms. Finally, we demonstrate that incorporating CICs into the domain invariant projection (DIP) algorithm helps to address its known failure scenario caused by label-flipping features. We support our findings via numerical experiments on synthetic data, MNIST, CelebA, and Camelyon17 datasets.
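For illustration, below is a minimal numpy sketch of one possible conditional invariance penalty: it matches class-conditional feature means across domains. The function name cip_penalty and the mean-matching discrepancy are assumptions for exposition, not necessarily the exact penalty used in the talk; IW-CIP additionally reweights source samples by estimated target label proportions.

    import numpy as np

    def cip_penalty(features, labels, domains):
        # Hypothetical instantiation of a conditional invariance penalty:
        # for each class, penalize the squared distance between the
        # class-conditional feature means of every pair of domains.
        penalty = 0.0
        for y in np.unique(labels):
            means = [features[(labels == y) & (domains == d)].mean(axis=0)
                     for d in np.unique(domains)
                     if np.any((labels == y) & (domains == d))]
            for i in range(len(means)):
                for j in range(i + 1, len(means)):
                    penalty += float(np.sum((means[i] - means[j]) ** 2))
        return penalty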
2023-05-19 / 11:00 ~ 12:00
Department Seminar/Colloquium - Applied and Computational Mathematics Seminar
In this talk, we consider the problem of minimizing multi-modal loss functions with a large number of local optima. Since the local gradient points in the direction of the steepest slope in an infinitesimal neighborhood, an optimizer guided by the local gradient is often trapped in a local minimum. To address this issue, we develop a novel nonlocal gradient that skips small local minima by capturing the major structures of the loss landscape in black-box optimization. The nonlocal gradient is defined by a directional Gaussian smoothing (DGS) approach. The key idea is to conduct 1D long-range exploration with a large smoothing radius along d orthogonal directions, each of which defines a nonlocal directional derivative as a 1D integral. Such long-range exploration enables the nonlocal gradient to skip small local minima. We use the Gauss-Hermite quadrature rule to approximate the d 1D integrals and obtain an accurate estimator. We also provide a theoretical analysis of the convergence of the method on nonconvex landscapes. In this work, we investigate the scenario where the objective function is a convex function perturbed by highly oscillating, deterministic noise. We provide a convergence theory under which the iterates converge to a tightened neighborhood of the solution, whose size is characterized by the noise frequency. Furthermore, if the noise level decays to zero when approaching the global minimum, we prove that DGS optimization converges to the exact global minimum at a linear rate, similarly to standard gradient-based methods for optimizing convex functions. We complement our theoretical analysis with numerical experiments illustrating the performance of this approach.
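To make the construction concrete, the sketch below implements a DGS gradient estimator as described above: along each of d orthogonal directions (here simply the coordinate axes), the derivative of the 1D Gaussian-smoothed slice is approximated by Gauss-Hermite quadrature via Stein's identity. The parameter defaults and the usage example are illustrative assumptions.

    import numpy as np

    def dgs_gradient(f, x, sigma=1.0, n_quad=7):
        # Nonlocal gradient by directional Gaussian smoothing (DGS).
        # For g(s) = f(x + s*xi) and G(s) = E[g(s + sigma*u)], u ~ N(0,1),
        # Stein's identity gives G'(0) = E[u * g(sigma*u)] / sigma, which
        # Gauss-Hermite quadrature approximates with n_quad nodes.
        d = len(x)
        t, w = np.polynomial.hermite.hermgauss(n_quad)  # weight exp(-t^2)
        basis = np.eye(d)                               # orthogonal directions
        grad = np.zeros(d)
        for i in range(d):
            vals = np.array([f(x + sigma * np.sqrt(2.0) * tm * basis[i])
                             for tm in t])
            grad[i] = np.sum(w * np.sqrt(2.0) * t * vals) / (sigma * np.sqrt(np.pi))
        return grad

    # Usage: descend a convex quadratic perturbed by high-frequency noise.
    f = lambda z: np.sum(z ** 2) + 0.1 * np.sum(np.sin(30.0 * z))
    x = np.ones(5)
    for _ in range(200):
        x = x - 0.1 * dgs_gradient(f, x, sigma=0.5)

With a large sigma, the estimator follows the underlying quadratic rather than the local wiggles introduced by the oscillating term.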
2023-05-23 / 13:00 ~ 14:00
Department Seminar/Colloquium - PhD Thesis Defense: Analysis and Computation of Resonance Phenomena in Boundary Value Problems with and without Geometric Singularities
by 홍지호 (KAIST)

2023-05-17 / 13:00 ~ 14:00
Department Seminar/Colloquium - PhD Thesis Defense: Arithmetic of Automorphic Forms for Certain Fuchsian Groups Including Gamma_0+(2) and Gamma_0+(3)

2023-05-19 / 10:00 ~ 11:00
SAARC Seminar - SAARC Seminar
by 강문진 (KAIST)
The compressible Euler system (CE) is one of the oldest PDE models in fluid dynamics, a representative model describing the flow of compressible fluids with singularities such as shock waves. However, CE is an idealized model for inviscid gas, and it is physically meaningful only as the limiting case of the corresponding Navier-Stokes system (NS) as the viscosity and heat conductivity become negligibly small. Therefore, any stable physical solution of CE should be constructed as an inviscid limit of solutions of NS. This is known as one of the most challenging open problems in mathematical fluid dynamics (even in the incompressible case). In this talk, I will present my recent works that tackle this open problem using new methods: the (so-called) weighted relative entropy method with shifts (for controlling shocks) and the viscous wave-front tracking method (for handling general solutions with small total variation).
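For reference, the compressible Euler system discussed above in its standard conservation form (written in LaTeX, under the usual conventions: rho is the density, u the velocity, p the pressure, and E = e + |u|^2/2 the specific total energy); the Navier-Stokes system adds small viscous and heat-flux terms on the right-hand sides:

    \begin{aligned}
    \partial_t \rho + \nabla \cdot (\rho u) &= 0, \\
    \partial_t (\rho u) + \nabla \cdot (\rho u \otimes u) + \nabla p &= 0, \\
    \partial_t (\rho E) + \nabla \cdot \bigl( (\rho E + p)\, u \bigr) &= 0.
    \end{aligned}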
2023-05-24 / 16:00 ~ 17:00
IBS-KAIST Seminar - Mathematical Biology
Stochasticity in gene expression is an important source of cell-to-cell variability (or noise) in clonal cell populations. So far, this phenomenon has been studied using the Gillespie algorithm, or the chemical master equation, which implicitly assumes that cells are independent and neither grow nor divide. This talk will discuss recent developments in modelling populations of growing and dividing cells through agent-based approaches. I will show how the lineage structure affects gene expression noise over time, which leads to a straightforward interpretation of cell-to-cell variability in population snapshots. I will also illustrate how cell cycle variability shapes extrinsic noise across lineage trees. Finally, I will outline how to construct effective chemical master equation models based on dilution reactions and extrinsic variability that provide surprisingly accurate approximations of the noise statistics across growing populations. The results highlight that it is crucial to consider cell growth and division when quantifying cellular noise.
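As background for the baseline the talk builds on, here is a minimal Gillespie (stochastic simulation algorithm) sketch of constitutive gene expression in a single, non-growing and non-dividing cell; the rate constants and the simple birth-death model are illustrative assumptions.

    import numpy as np

    def gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_end=100.0, rng=None):
        # Gillespie simulation of: 0 -> X at rate k_prod (production) and
        # X -> 0 at rate k_deg * x (degradation/dilution). The cell is
        # treated as isolated and non-dividing, the assumption the talk relaxes.
        if rng is None:
            rng = np.random.default_rng()
        t, x = 0.0, x0
        times, counts = [t], [x]
        while t < t_end:
            rates = np.array([k_prod, k_deg * x])
            total = rates.sum()
            if total == 0.0:
                break
            t += rng.exponential(1.0 / total)              # waiting time
            x += 1 if rng.random() < rates[0] / total else -1
            times.append(t)
            counts.append(x)
        return np.array(times), np.array(counts)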
2023-05-18 / 11:50 ~ 12:40
Graduate Student Seminar - Graduate Student Seminar
by 안정호 (KAIST)
We introduce concepts of parameterized complexity, in particular kernelization. A kernelization is a polynomial-time preprocessing algorithm that converts a given instance of a problem into a smaller instance while preserving the answer. A careful kernelization often greatly speeds up solving the problem. We explain standard kernelization techniques, for instance the sunflower lemma. Many optimization problems can be reformulated as instances of the Hitting Set problem, and the sunflower lemma gives a simple yet beautiful kernelization for it. We further introduce our recent work on the Hitting Set problem on sparse graph classes.
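As a concrete illustration of kernelization, consider the simplest Hitting Set instance, Vertex Cover, where every set has exactly two elements. The sketch below shows the classic Buss kernel (a standard textbook reduction, shown here instead of the more general sunflower-based kernel): high-degree vertices are forced into the solution, after which a yes-instance has at most k^2 edges.

    def buss_kernel(edges, k):
        # Buss kernelization for Vertex Cover (Hitting Set with 2-sets).
        # Rule 1: a vertex whose degree exceeds the remaining budget must
        #         belong to every cover of size <= k, so take it.
        # Rule 2: afterwards each vertex covers <= budget edges, so a
        #         yes-instance has at most budget^2 edges left.
        # Returns (kernel_edges, remaining_budget, forced_vertices),
        # or None if the instance provably has no cover of size <= k.
        edges = {frozenset(e) for e in edges}
        forced = set()
        changed = True
        while changed:
            changed = False
            budget = k - len(forced)
            deg = {}
            for e in edges:
                for v in e:
                    deg[v] = deg.get(v, 0) + 1
            for v, d in deg.items():
                if d > budget:
                    forced.add(v)
                    edges = {e for e in edges if v not in e}
                    changed = True
                    break
        budget = k - len(forced)
        if budget < 0 or len(edges) > budget * budget:
            return None
        return edges, budget, forced

    # Example: a star with three leaves and k = 1 forces the center vertex.
    print(buss_kernel([(1, 2), (1, 3), (1, 4)], k=1))  # (set(), 0, {1})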