# Department Seminars & Colloquia


In this presentation, we discuss comprehensive frequency domain methods for estimating and inferring the second-order structure of spatial point processes. The main focus is on utilizing the discrete Fourier transform (DFT) of the point pattern and its tapered counterpart. Under second-order stationarity, we show that both the DFTs and the tapered DFTs are asymptotically jointly independent Gaussian even when the DFTs share the same limiting frequencies. Based on these results, we establish an α-mixing central limit theorem for a statistic formulated as a quadratic form of the tapered DFT. As applications, we derive the asymptotic distribution of the kernel spectral density estimator and establish a frequency domain inferential method for parametric stationary point processes. For the latter, the resulting model parameter estimator is computationally tractable and yields meaningful interpretations even in the case of model misspecification. We investigate the finite sample performance of our estimator through simulations, considering scenarios of both correctly specified and misspecified models. Joint work with Yongtao Guan (CUHK-Shenzhen).
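For orientation, the (tapered) DFT of a point pattern that the abstract refers to is commonly defined as follows; the exact normalization used in the talk may differ:

```latex
% DFT of a point pattern X observed on a window W \subset \mathbb{R}^d,
% evaluated at frequency \omega (one common normalization):
J(\omega) = |W|^{-1/2} \sum_{x \in X \cap W} e^{-i\,\omega^{\top} x},
% and its tapered counterpart, with a taper h supported on W,
% normalized so that \int_W h(x)^2\,dx = 1:
J_h(\omega) = \sum_{x \in X \cap W} h(x)\, e^{-i\,\omega^{\top} x}.
```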

In nonstationary bandit learning problems, the decision-maker must continually gather information and adapt their action selection as the latent state of the environment evolves. In each time period, some latent optimal action maximizes expected reward under the environment state. We view the optimal action sequence as a stochastic process, and take an information-theoretic approach to analyze attainable performance. We bound per-period regret in terms of the entropy rate of the optimal action process. The bound applies to a wide array of problems studied in the literature and reflects the problem’s information structure through its information-ratio.
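One schematic rendering of the kind of bound described above (not the exact statement from the talk): with Γ denoting the information ratio and H̄ the entropy rate of the optimal action process, the bound takes the shape

```latex
% Schematic only: \Gamma is the information ratio, \bar{H} the entropy
% rate of the optimal action process (A^*_t); constants and precise
% definitions follow the talk, not this sketch.
\frac{1}{T}\,\mathbb{E}\big[\mathrm{Regret}(T)\big] \;\lesssim\; \sqrt{\Gamma\,\bar{H}}.
```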

This is part of an informal seminar series to be given by Mr. Jaehong Kim, who has been studying the book "Hodge Theory and Complex Algebraic Geometry, Vol. 1" by Claire Voisin for a few months. There will be 6-8 seminars during Spring 2024, and the series will cover about 70-80% of the book.

In the past decade, machine learning methods (MLMs) for solving partial differential equations (PDEs) have gained significant attention as a novel numerical approach. Indeed, a tremendous number of research projects have surged that apply MLMs to various applications, ranging from geophysics to biophysics. This surge in interest stems from the ability of MLMs to rapidly predict solutions for complex physical systems, even those involving multi-physics phenomena, uncertainty, and real-world data assimilation. This trend has led many to regard MLMs as a potential game-changer in PDE solving. However, despite this optimism, there are still significant challenges to overcome. These include limitations compared to conventional numerical approaches, a lack of thorough analytical understanding of their accuracy, and the potentially long training times involved. In this talk, I will first assess the current state of MLMs for solving PDEs. Following this, we will explore what roles MLMs should play to become a conventional numerical scheme.

I will discuss some recent progress on the freeness problem for groups of 2x2 rational matrices generated by two parabolic matrices. In particular, I will discuss recent progress on determining the structural properties of such groups (beyond freeness) and when they have finite index in the finitely presented group SL(2,Z[1/m]), for appropriately chosen m.

In this talk, we focus on the global existence of volume-preserving mean curvature flows. In the isotropic case, leveraging the gradient flow framework, we demonstrate the convergence of solutions to a ball for star-shaped initial data. On the other hand, for anisotropic and crystalline flows, we establish the global-in-time existence for a class of initial data with the reflection property, utilizing explicit discrete-in-time approximation methods.

Using the invariant splitting principle, we construct an infinite family of exotic pairs of contractible 4-manifolds which survive one stabilization. We argue that some of them are potential candidates for surviving two stabilizations.

The size and complexity of recent deep learning models continue to increase exponentially, causing a serious amount of hardware overheads for training those models. Contrary to inference-only hardware, neural network training is very sensitive to computation errors; hence, training processors must support high-precision computation to avoid a large performance drop, severely limiting their processing efficiency. This talk will introduce a comprehensive design approach to arrive at an optimal training processor design. More specifically, the talk will discuss how we should make important design decisions for training processors in more depth, including i) hardware-friendly training algorithms, ii) optimal data formats, and iii) processor architecture for high precision and utilization.

We begin the first talk by introducing the concept of an h-principle that is mostly accessible through the two important methods. One of the methods is the convex integration that was successfully used by Mueller and Sverak and has been applied to many important PDEs. The other is the so-called Baire category method that was mainly studied by Dacorogna and Marcellini. We compare these methods in applying to a toy example.

In the second talk of the series, we exhibit several examples of application of convex integration to important PDE problems. In particular, we shall sketch some ideas of proof such as in the p-Laplace equation and its parabolic analogue, Euler-Lagrange equation of a polyconvex energy, gradient flow of a polyconvex energy and polyconvex elastodynamics.

After a brief review of the history, some applications of these models will be reviewed. This will include descriptions of rogue waves, tsunami propagation, internal waves and blood flow. Some of the theory emanating from these applications will then be sketched.

One of the classical and most fascinating problems at the intersection between combinatorics and number theory is the study of the parity of the partition function. Even though p(n) is widely believed to be equidistributed modulo 2, progress in the area has proven exceptionally hard. The best results available today, obtained incrementally over several decades by Serre, Soundararajan, Ono and many others, do not even guarantee that, asymptotically, p(n) is odd for √x values of n ≤ x.
In this talk, we present a new, general conjectural framework that naturally places the parity of p(n) into the much broader, number-theoretic context of eta-quotients. We discuss the history of this problem as well as recent progress on our "master conjecture," which includes novel results on multi- and regular partitions. We then show how seemingly unrelated classes of eta-quotients carry surprising (and surprisingly deep) connections modulo 2 to the partition function. One instance is the following striking result: if any t-multipartition function, with t ≢ 0 (mod 3), is odd with positive density, then so is p(n). (Note that proving either fact unconditionally seems entirely out of reach with current methods.) Throughout this talk, we will give a sense of the many interesting mathematical techniques that come into play in this area. They will include a variety of algebraic and combinatorial ideas, as well as tools from modular forms and number theory.
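The equidistribution conjecture mentioned above is easy to probe empirically: Euler's pentagonal number theorem gives a recurrence for p(n), and working over GF(2) the signs in the recurrence disappear. A minimal sketch:

```python
# Empirical look at the parity of the partition function p(n), using
# Euler's pentagonal number theorem recurrence computed modulo 2
# (over GF(2) the alternating signs are irrelevant, so XOR suffices).
def partition_parities(N):
    """Return [p(0) mod 2, ..., p(N-1) mod 2]."""
    p = [0] * N
    p[0] = 1
    for n in range(1, N):
        k, acc = 1, 0
        while True:
            g1 = k * (3 * k - 1) // 2  # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            acc ^= p[n - g1]
            if g2 <= n:
                acc ^= p[n - g2]
            k += 1
        p[n] = acc
    return p

parities = partition_parities(2000)
odd_fraction = sum(parities) / len(parities)  # empirically close to 1/2
```

For the first 2000 values the odd fraction is indeed near 1/2, consistent with the conjectured equidistribution; proving anything of this strength is, as the abstract notes, far out of reach.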

In this talk, we consider some polynomials which define Gaussian graphical models in algebraic statistics. First, we briefly introduce background material and some preliminaries on this topic. Next, we consider a conjecture due to Sturmfels and Uhler concerning the generation of the prime ideal of the variety associated to the Gaussian graphical model of any cycle graph and explain how to prove it. We also report a result on linear syzygies of any model coming from block graphs. The former work was done jointly with A. Conner and M. Michalek and the latter with J. Choe.

We introduce bordered Floer theory and its involutive version, as well as their applications to knot complements. We will sketch the proof that invariant splittings of CFK and those of CFD correspond to each other under the Lipshitz-Ozsvath-Thurston correspondence, via invariant splitting principle, which is an ongoing work with Gary Guth.

Geometric and topological structures can aid statistics in several ways. In high dimensional statistics, geometric structures can be used to reduce dimensionality. High dimensional data entails the curse of dimensionality, which can be avoided if there are low dimensional geometric structures. On the other hand, geometric and topological structures also provide useful information. Structures may carry scientific meaning about the data and can be used as features to enhance supervised or unsupervised learning.
In this talk, I will explore how statistical inference can be done on geometric and topological structures. First, given a manifold assumption, I will explore the minimax rate for estimating the dimension of the manifold. Second, also under the manifold assumption, I will explore the minimax rate for estimating the reach, a regularity quantity measuring how smooth a manifold is and how far it is from self-intersecting. Third, I will investigate inference on cluster trees, the hierarchy trees of high-density clusters of a density function. Fourth, I will investigate inference on persistent homology, which quantifies salient topological features that appear at different resolutions of the data.
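To make the manifold-dimension estimation problem concrete, here is an illustrative local-PCA estimator (not the minimax procedure from the talk): for each point, run PCA on its nearest neighbors and count how many principal components are needed to explain most of the local variance.

```python
import numpy as np

def estimate_dimension(points, k=10, var_threshold=0.95):
    """Illustrative local-PCA intrinsic-dimension estimate: for each point,
    PCA on its k nearest neighbors, count the components needed to explain
    var_threshold of the local variance, and return the median count."""
    dims = []
    for i in range(len(points)):
        d2 = np.sum((points - points[i]) ** 2, axis=1)
        nbrs = points[np.argsort(d2)[1 : k + 1]]   # k nearest neighbors
        centered = nbrs - nbrs.mean(axis=0)
        s = np.linalg.svd(centered, compute_uv=False) ** 2  # local variances
        ratios = np.cumsum(s) / np.sum(s)
        dims.append(int(np.searchsorted(ratios, var_threshold) + 1))
    return int(np.median(dims))

# Points on a circle embedded in R^3: intrinsic dimension 1.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
dim_circle = estimate_dimension(circle)
```

Because neighborhoods on a smooth curve are locally nearly linear, the first principal component dominates and the estimate is 1; the minimax theory in the talk quantifies how well any such estimator can possibly do.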

The Kudla-Rapoport conjecture predicts a relation between the arithmetic intersection numbers of special cycles on a unitary Shimura variety and the derivative of representation densities for hermitian forms at a place of good reduction. In this talk, I will present a variant of the Kudla-Rapoport conjecture at a place of bad reduction. Additionally, I will discuss a proof of the conjecture in several new cases in any dimension. This is joint work with Qiao He and Zhiyu Zhang.

Scientific knowledge, written in the form of differential equations, plays a vital role in various deep learning fields. In this talk, I will present a graph neural network (GNN) design based on reaction-diffusion equations, which addresses the notorious oversmoothing problem of GNNs. Since the self-attention of Transformers can also be viewed as a special case of graph processing, I will present how we can enhance Transformers in a similar way. I will also introduce a spatiotemporal forecasting model based on neural controlled differential equations (NCDEs). NCDEs were designed to process irregular time series in a continuous manner; for spatiotemporal processing, they need to be combined with a spatial processing module, i.e., a GNN. I will show how this can be done.
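For reference, the NCDE mentioned above evolves a hidden state by integrating against a continuous path built from the data (this is the standard formulation of Kidger et al.; the talk's variant may differ):

```latex
% Hidden state z driven by a control path X:
z(t) = z(t_0) + \int_{t_0}^{t} f_{\theta}\big(z(s)\big)\, dX(s),
% where X is a continuous path (e.g. a spline) interpolating the
% irregularly sampled observations, and f_\theta is a neural network.
```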

In dimension 4, the works of Freedman and Donaldson led us to the striking discovery that the smooth category is drastically different from the topological category, compared to other dimensions. Since then, research has been extraordinarily successful in investigating this difference in various contexts. In contrast, our understanding of when the smooth and topological categories exhibit similarity in dimension 4 has remained, at best, minimal. In this talk, we will introduce some recent progress on new “topological = smooth” results in dimension 4, focusing on embedded disks.

Motivated by the Cohen-Lenstra heuristics, Friedman and Washington studied the distribution of the cokernels of random matrices over the ring of p-adic integers. This has been generalized in many directions, as well as some applications to the distribution of random algebraic objects. In this talk, first we give an overview of random matrix theory over the ring of p-adic integers, together with their connections to conjectures in number theory. After that, we investigate the distribution of the cokernels of random p-adic matrices with given zero entries. The second part of this talk is based on work in progress with Gilyoung Cheong, Dong Yeap Kang and Myungjun Yu.

The Julia set of a (hyperbolic) rational map naturally comes embedded in the Riemann sphere, and thus has a Hausdorff dimension. But the Hausdorff dimension varies if we tweak the parameters slightly. Is there a "best" representative or more invariant dimension? One answer comes from looking at quasi-symmetries; the \emph{conformal dimension} of the Julia set is the minimum Hausdorff dimension of any metric quasi-symmetric to the original. We characterize the Ahlfors-regular conformal dimension of Julia sets of rational maps using graphical energies arising from a natural combinatorial description. (Ahlfors-regular is a dynamically natural extra condition on the metric.) This is joint work with Kevin Pilgrim.

This talk presents mathematical modeling, numerical analysis and simulation using the finite element method in the field of electromagnetics at various scales, from analyzing quantum mechanical effects to calculating the scattering of electromagnetic waves in free space. First, we discuss and analyze the Schrödinger-Poisson system of the quantum transport model to calculate electron states in three-dimensional heterostructures. Second, the electromagnetic vector wave scattering problem is solved to analyze the field characteristics in the presence of a stealth platform. This talk also introduces several challenging issues in these applications and proposes their solutions through mathematical analysis.

A rational map, like f(z) = (1+z^2)/(1-z^2), gives a map from the (extended) complex plane to itself. Studying the dynamics under iteration yields beautiful Julia set fractals with intricate nested structure. How can that structure be best understood? One approach is combinatorial or topological, giving concrete models for the Julia set and tools for cataloguing the possibilities.
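The iteration being described is easy to play with directly. A minimal sketch for the map quoted above, tracking orbits on the extended complex plane (this is just illustration, not the combinatorial model from the talk):

```python
def f(z):
    """The rational map f(z) = (1 + z^2) / (1 - z^2) from the abstract,
    viewed on the extended complex plane (returning inf at the pole)."""
    denom = 1 - z * z
    if denom == 0:
        return complex("inf")
    return (1 + z * z) / denom

def orbit(z0, n):
    """First n iterates of z0 under f, handling the point at infinity:
    f(inf) = lim_{z->inf} (1+z^2)/(1-z^2) = -1."""
    zs = [z0]
    for _ in range(n):
        z = zs[-1]
        zs.append(complex(-1, 0) if z == complex("inf") else f(z))
    return zs

# Example orbit: 0 -> 1 -> inf -> -1 -> inf -> ... (a cycle through infinity)
cycle = orbit(0, 4)
```

Even this tiny experiment shows structure (the orbit of 0 falls into the 2-cycle -1 ↔ ∞); the combinatorial models in the talk organize all such behavior systematically.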

The Optimal Transport (OT) problem seeks a transport map that bridges two distributions while minimizing a specified cost function. OT theory has been widely utilized in generative modeling. Initially, the OT-based Wasserstein metric served as a measure for assessing the distance between data and generated distributions. More recently, the OT transport map, connecting data and prior distributions, has emerged as a new approach for generative models. In this talk, we will introduce generative models based on Optimal Transport. Specifically, we will present our work on a generative model utilizing Unbalanced Optimal Transport. We will also discuss our subsequent efforts to address the challenges associated with this approach.
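For reference, the standard formulations underlying the abstract (the unbalanced variant in the talk relaxes the marginal constraints, which is not shown here):

```latex
% Monge formulation: find a map T pushing the source \mu onto the target \nu
% while minimizing the expected cost c:
\inf_{T \,:\, T_{\#}\mu = \nu} \int c\big(x, T(x)\big)\, d\mu(x),
% and its Kantorovich relaxation over couplings \pi of (\mu, \nu):
\inf_{\pi \in \Pi(\mu,\nu)} \int c(x, y)\, d\pi(x, y).
```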

List flow is a geometric flow for a pair $(g,u)$, where $g$ is a Riemannian metric and $u$ a smooth function. This extended Ricci flow system has applications to static vacuum solutions of the Einstein equations and to Ricci flow on warped products. The coupling induces additional difficulties compared to Ricci flow, which we overcome by proving an improved bound on the Hessian of the function $u$. This allows us to prove a convergence result, a singularity classification result and a surgery result in three dimensions.
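For reference, List's extended Ricci flow system for the pair $(g,u)$ takes the following schematic form (the coupling constant and sign conventions may differ from those used in the talk):

```latex
% List flow: Ricci flow coupled to a heat flow for u,
% with \alpha_n a dimensional coupling constant:
\partial_t g = -2\,\mathrm{Ric}(g) + 2\alpha_n\, du \otimes du,
\qquad
\partial_t u = \Delta_{g} u.
```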