In recent years, machine learning techniques based on neural networks have achieved remarkable success across various fields, and they have demonstrated a notable ability to represent solutions to inverse problems. From a mathematical perspective, this success rests largely on their strong ability to approximate target functions, which underscores the importance of understanding their approximation properties. Since wavelet systems offer notable advantages in approximation, this talk focuses on neural network approximations that employ such systems. We will begin by studying the fundamental structure and basic properties of wavelet systems, then introduce the main approximation theories based on wavelet frames. Finally, we will explore recent studies on neural networks that incorporate these wavelet systems.