Department Seminars & Colloquia






The linear bandit problem has received a lot of attention in the past decade due to its applications in recommendation systems and online ad placements, where the feedback is binary, such as thumbs up/down or click/no click. Linear bandits, however, assume the standard linear regression model and are thus not well-suited for binary feedback. While logistic linear bandits, the logistic regression counterpart of linear bandits, are more attractive for these applications, developments have been slow, and practitioners often end up using linear bandits for binary feedback -- this corresponds to using linear regression for classification tasks. In this talk, I will present recent breakthroughs in logistic linear bandits leading to tight performance guarantees and lower bounds. These developments are based on self-concordant analysis, improved fixed-design concentration inequalities, and novel methods for the design of experiments. I will also discuss open problems and conjectures on concentration inequalities. This talk is based on our recent paper accepted to ICML'21 (https://arxiv.org/abs/2011.11222).
Contact: Stochastic Analysis and Application Research Center (042-350-8111/8117)     To be announced     2021-06-07 10:38:20
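As a toy illustration of the logistic bandit setting described above, the sketch below simulates binary feedback through a logistic model and updates a parameter estimate with one stochastic-gradient step per round. The arm features, constants, and the epsilon-greedy exploration rule are illustrative assumptions, not the confidence-bound method of the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d, n_arms, T = 3, 5, 2000
theta_true = rng.normal(size=d)          # unknown parameter of the logistic model
arms = rng.normal(size=(n_arms, d))      # fixed arm feature vectors (made up)

theta_hat = np.zeros(d)
lr = 0.05
total_reward = 0
for t in range(T):
    # epsilon-greedy on the estimated click probability (a placeholder
    # for the confidence-bound exploration used in the actual literature)
    if rng.random() < 0.1:
        a = rng.integers(n_arms)
    else:
        a = int(np.argmax(sigmoid(arms @ theta_hat)))
    x = arms[a]
    # binary feedback: reward ~ Bernoulli(sigmoid(x . theta_true))
    r = rng.binomial(1, sigmoid(x @ theta_true))
    total_reward += r
    # one SGD step on the logistic log-likelihood of this observation
    theta_hat += lr * (r - sigmoid(x @ theta_hat)) * x
```

Using linear bandits here would instead fit `x @ theta` directly to the 0/1 rewards, which is exactly the "linear regression for classification" mismatch the abstract points out.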
One famous conjecture in quantum chaos and random matrix theory is the so-called phase transition conjecture for random band matrices. It predicts that the eigenvectors' localization-delocalization transition occurs at some critical bandwidth $W_c(d)$, which depends on the dimension $d$. The well-known Anderson model and Anderson conjecture exhibit a similar phenomenon. It is widely believed that $W_c(d)$ matches $1/\lambda_c(d)$ in the Anderson conjecture, where $\lambda_c(d)$ is the critical coupling constant. Furthermore, this eigenvector phase transition coincides with the phase transition in the local eigenvalue statistics, which matches the Bohigas-Giannoni-Schmit conjecture in quantum chaos theory. We proved the eigenvector delocalization property for general random band matrices in dimensions $d\ge 8$, as long as the size of the matrix does not grow faster than polynomially in its bandwidth -- in other words, as long as the bandwidth $W$ is larger than $L^\epsilon$ for some $\epsilon>0$, where $L$ is the matrix size. This is joint work with H.T. Yau (Harvard) and F. Yang (UPenn).
Contact: Stochastic Analysis and Application Research Center (042-350-8111/8117)     To be announced     2021-05-18 10:00:33
We show that log-concavity is the weakest power concavity preserved by the Dirichlet heat flow in convex domains in ${\bf R}^N$, where $N\ge 2$. Together with what we already know -- that log-concavity is the strongest power concavity preserved by the heat flow -- we conclude that log-concavity is the only power concavity preserved by the Dirichlet heat flow. This is joint work with Paolo Salani (Univ. of Florence) and Asuka Takatsu (Tokyo Metropolitan Univ.).
Contact: Stochastic Analysis and Application Research Center (042-350-8111/8117)     To be announced     2021-04-20 10:07:51
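For readers unfamiliar with the terminology, the standard notion of power concavity used in statements like the one above is (background definitions, not material from the talk itself):

```latex
% power concavity of a nonnegative function u on a convex domain
\[
  u \ \text{is $p$-concave} \iff
  \begin{cases}
    u^{p} \ \text{is concave}, & p > 0,\\[2pt]
    \log u \ \text{is concave}, & p = 0,\\[2pt]
    u^{p} \ \text{is convex}, & p < 0.
  \end{cases}
\]
% The scale is monotone: if u is p-concave and q <= p, then u is q-concave.
% Log-concavity is the p = 0 case, so "weakest and strongest preserved
% power concavity" pins it down as the unique preserved one.
```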
Multiple view geometry studies how the image of a scene changes as the camera viewpoint changes, and develops techniques for reconstructing a 3-D model from 2-D images taken from multiple viewpoints. This talk explains the basic concepts and techniques of multiple view geometry, and introduces their application to various industrial fields such as medical imaging, autonomous driving, and smart healthcare.
Contact: Stochastic Analysis and Application Research Center (042-350-8111/8117)     To be announced     2021-05-04 14:16:53
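A minimal numerical sketch of one core technique in multiple view geometry is triangulation: recovering a 3-D point from its 2-D projections in two views via the standard linear (DLT) method. The camera matrices and the point below are made-up examples.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: 2-D image points.
    Each image coordinate gives one linear constraint on the homogeneous
    3-D point X, e.g. x1[0] * (P1[2] . X) - (P1[0] . X) = 0.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A = homogeneous solution
    return X[:3] / X[3]        # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# two cameras: identity pose, and a unit translation along the x-axis
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With exact (noise-free) projections the null space of `A` is one-dimensional and the reconstruction is exact; with noisy detections the same SVD gives the least-squares solution.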
The standard machine learning paradigm optimizing average-case performance performs poorly under distributional shift. For example, image classifiers have low accuracy on underrepresented demographic groups, and their performance degrades significantly on domains that are different from what the model was trained on. We develop and analyze a distributionally robust stochastic optimization (DRO) framework over shifts in the data-generating distribution. Our procedure efficiently optimizes the worst-case performance, and guarantees a uniform level of performance over subpopulations. We characterize the trade-off between distributional robustness and sample complexity, and prove that our procedure achieves this optimal trade-off. Empirically, our procedure improves tail performance, and maintains good performance on subpopulations even over time.
Contact: Stochastic Analysis and Application Research Center (042-350-8111/8117)     To be announced     2021-03-22 10:23:11
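A minimal sketch of the worst-case idea above, with made-up data and a plain subgradient method rather than the paper's procedure: at each step, take a gradient step on the loss of whichever subpopulation currently has the worst loss, so the final predictor trades average-case fit for uniform performance across groups.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_losses(w, groups):
    # squared loss of the linear predictor on each subpopulation
    return np.array([np.mean((X @ w - y) ** 2) for X, y in groups])

# two subpopulations with conflicting input-output relations;
# the second group is underrepresented (50 vs 500 samples)
X1 = rng.normal(size=(500, 2)); y1 = X1 @ np.array([1.0, 0.0])
X2 = rng.normal(size=(50, 2));  y2 = X2 @ np.array([0.0, 1.0])
groups = [(X1, y1), (X2, y2)]

# subgradient descent on the worst-group loss (a simple min-max sketch)
w = np.zeros(2)
lr = 0.05
for _ in range(500):
    losses = group_losses(w, groups)
    X, y = groups[int(np.argmax(losses))]        # pick the worst group
    w -= lr * 2 * X.T @ (X @ w - y) / len(y)     # step on its loss only
worst = group_losses(w, groups).max()
```

Plain ERM on the pooled data would be dominated by the 500-sample group and leave the minority group with a large loss; the worst-group objective instead drives `w` toward a compromise where neither subpopulation is badly served.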