With the goal of reducing the amount of annotated data required by current deep learning (DL) algorithms, semi-supervised learning (SSL) algorithms use unlabeled data, which is vastly more accessible than its labeled counterpart, to enhance the performance of deep neural networks (DNNs) trained on a small amount of labeled data. As an example, state-of-the-art SSL algorithms can achieve up to ~84% accuracy on the CIFAR10 dataset using only 1 labeled image per class, provided that the single image is of “prototypical” quality. This session will introduce common SSL settings considered in recent works and cover DL-based SSL algorithms in chronological fashion. While existing SSL algorithms are mainly heuristics that lack theoretical justification, the intuition underlying such algorithms will also be discussed in relation to the emerging consensus in DL-based generalization theory and studies.