Department Seminars and Colloquia
While deep neural networks (DNNs) have been widely used in numerous applications over the past few decades, their underlying theoretical mechanisms remain incompletely understood. In this presentation, we propose a geometric and topological approach to understanding how deep ReLU networks work on classification tasks. Specifically, we provide lower and upper bounds on the widths of neural networks in terms of the geometric and topological features of the given data manifold. We also prove that, irrespective of whether the mean squared error (MSE) loss or the binary cross-entropy (BCE) loss is employed, the loss landscape has no local minimum.
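For reference, the objects appearing in these statements can be written in standard form; the notation below ($f_\theta$ for the network, $\sigma$ for the ReLU activation, samples $(x_i, y_i)$) is chosen here for illustration and is not taken from the talk. A depth-$L$ ReLU network with parameters $\theta = (W_1, b_1, \dots, W_L, b_L)$ is
\[
f_\theta(x) = W_L\,\sigma\!\bigl(W_{L-1}\,\sigma(\cdots\sigma(W_1 x + b_1)\cdots) + b_{L-1}\bigr) + b_L,
\qquad \sigma(t) = \max(t, 0)\ \text{entrywise},
\]
and, for binary labels $y_i \in \{0,1\}$ on samples $x_1, \dots, x_n$, the two losses are
\[
\mathcal{L}_{\mathrm{MSE}}(\theta) = \frac{1}{n}\sum_{i=1}^{n}\bigl(f_\theta(x_i) - y_i\bigr)^2,
\qquad
\mathcal{L}_{\mathrm{BCE}}(\theta) = -\frac{1}{n}\sum_{i=1}^{n}\Bigl[y_i \log \hat{p}_i + (1 - y_i)\log(1 - \hat{p}_i)\Bigr],
\quad \hat{p}_i = \frac{1}{1 + e^{-f_\theta(x_i)}}.
\]
In this notation, the widths are the hidden-layer dimensions, i.e., the numbers of rows of $W_1, \dots, W_{L-1}$.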
In this talk, we will primarily discuss the theoretical analysis of knowledge-distillation-based federated learning algorithms. Before exploring the main topics, we will introduce the basic concepts of federated learning and knowledge distillation. Subsequently, we will present a nonparametric view of knowledge-distillation-based federated learning algorithms and introduce a generalization analysis of these algorithms based on the theory of regularized kernel regression methods.
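To fix ideas, two standard ingredients mentioned above can be recalled explicitly; the symbols below (teacher and student outputs $z^{T}, z^{S}$, temperature $\tau$, kernel $K$, regularization parameter $\lambda$) are generic placeholders rather than notation from the talk. In knowledge distillation, a student model is trained to match the teacher's softened predictions, e.g. by minimizing
\[
\mathrm{KL}\!\Bigl(\mathrm{softmax}\bigl(z^{T}/\tau\bigr)\,\big\|\,\mathrm{softmax}\bigl(z^{S}/\tau\bigr)\Bigr),
\]
while regularized kernel regression estimates a function from data $(x_i, y_i)_{i=1}^{n}$ by
\[
\hat f = \operatorname*{arg\,min}_{f \in \mathcal{H}_K}\; \frac{1}{n}\sum_{i=1}^{n}\bigl(f(x_i) - y_i\bigr)^2 + \lambda\,\|f\|_{\mathcal{H}_K}^2,
\]
where $\mathcal{H}_K$ is the reproducing kernel Hilbert space of $K$; by the representer theorem, $\hat f(x) = \sum_{i=1}^{n} \alpha_i K(x, x_i)$ with $\alpha = (\mathbf{K} + n\lambda I)^{-1} y$ for the kernel matrix $\mathbf{K}_{ij} = K(x_i, x_j)$.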
In this talk, I will introduce twistor theory, which connects complex geometry, Riemannian geometry, and algebraic geometry by producing a complex manifold, called the twistor space, from a quaternionic Kähler manifold. First, I will explain why quaternionic Kähler manifolds should be studied from the viewpoint of holonomy theory in Riemannian geometry, and how twistor theory enables us to use algebraic geometry in studying their geometry. Next, based on the realization of homogeneous twistor spaces as adjoint varieties, I will present a description of the compactified spaces of conics in adjoint varieties, which is motivated by twistor theory.
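As standard background for this construction (the examples below are classical and chosen here for illustration): for a quaternionic Kähler manifold $M^{4n}$ of positive scalar curvature, the twistor space $Z$ is the total space of a fibration
\[
S^2 \;\cong\; \mathbb{CP}^1 \;\longrightarrow\; Z \;\longrightarrow\; M,
\]
and $Z$ carries a natural complex structure, a holomorphic contact structure, and a Kähler–Einstein metric. For instance, $M = \mathbb{HP}^n$ has twistor space $Z = \mathbb{CP}^{2n+1}$, and the case $M = S^4 = \mathbb{HP}^1$ recovers the classical twistor fibration $\mathbb{CP}^3 \to S^4$. For the compact quaternionic symmetric spaces (Wolf spaces), the twistor spaces are precisely the adjoint varieties, i.e., the closed orbits of the adjoint action in $\mathbb{P}(\mathfrak{g})$ for a complex simple Lie algebra $\mathfrak{g}$.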