Department Seminars & Colloquia
Analyses of molecular phenotypes, such as gene expression, transcription factor binding, chromatin accessibility, and translation, are an important part of understanding the molecular basis of gene regulation and, eventually, organismal-level phenotypes such as human disease susceptibility. The development of inexpensive high-throughput sequencing (HTS) technologies, together with the corresponding experimental protocols, has increased the use of HTS data as measurements of these molecular phenotypes (e.g., RNA-seq, ChIP-seq, and ATAC-seq). HTS data provide high-resolution measurements that capture how the molecular phenotypes vary along the whole genome. We develop multiple statistical methods that better exploit this high-resolution information and apply them to different biological questions in genomics. In this talk, I will briefly introduce two projects: 1) wavelet-based methods for identifying genetic variants associated with chromatin accessibility, and 2) mixtures of hidden Markov models for inferring translated coding sequences.
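As a rough illustration of the general idea behind the first project (not the speaker's actual method or software), one can decompose each individual's accessibility signal in a region into wavelet coefficients and test each coefficient for association with genotype at a nearby variant. The function, parameter names, and the simple per-coefficient regression below are hypothetical; PyWavelets and SciPy are assumed to be available.

# Hypothetical sketch: multi-scale association between genotype and a
# chromatin accessibility signal via a wavelet decomposition.
import numpy as np
import pywt
from scipy import stats

def wavelet_association(counts, genotypes, wavelet="haar", level=4):
    """counts: (n_individuals, n_bases) accessibility signal in one region;
    genotypes: (n_individuals,) genotype coded 0/1/2 at a nearby variant."""
    # Decompose each individual's signal into wavelet coefficients (all scales).
    coeffs = [np.concatenate(pywt.wavedec(c, wavelet, level=level)) for c in counts]
    coeffs = np.asarray(coeffs)
    # Test each coefficient for association with genotype (simple linear regression).
    pvals = []
    for j in range(coeffs.shape[1]):
        slope, intercept, r, p, se = stats.linregress(genotypes, coeffs[:, j])
        pvals.append(p)
    return np.array(pvals)

Coefficients at coarse scales capture broad shifts in accessibility across the region, while fine-scale coefficients capture localized changes, which is what makes a wavelet representation a natural fit for high-resolution HTS signals.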
Hyperbolic dynamical systems are nowadays fairly well understood from the topological and ergodic points of view. In this talk, we discuss some recent and ongoing work on dynamics beyond hyperbolicity. In the first part, we provide a characterization of robustly shadowable chain transitive sets for C^1 vector fields on compact smooth manifolds. In the second part, we extend the concepts of topological stability and the pseudo-orbit tracing property from homeomorphisms to Borel measures, and prove that every expansive measure with the pseudo-orbit tracing property is topologically stable. This is a measurable version of the stability theorem of Peter Walters. The first part is joint work with M. Reza, and the second part is joint work with C.A. Morales.
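For readers unfamiliar with the terminology, the classical point-set definitions, stated for a homeomorphism f of a compact metric space (X, d), are recalled below; the talk's contribution is their extension to Borel measures, which is not reproduced here.

% A sequence (x_n) is a \delta-pseudo-orbit if consecutive points nearly follow f:
\[
  d\bigl(f(x_n),\, x_{n+1}\bigr) < \delta \quad \text{for all } n \in \mathbb{Z}.
\]
% f has the pseudo-orbit tracing (shadowing) property if every sufficiently fine
% pseudo-orbit is uniformly traced by a true orbit:
\[
  \forall \varepsilon > 0 \;\; \exists \delta > 0 : \text{ every } \delta\text{-pseudo-orbit } (x_n)
  \text{ admits } x \in X \text{ with } d\bigl(f^n(x),\, x_n\bigr) < \varepsilon \text{ for all } n.
\]
% f is expansive with constant e > 0 if no two distinct orbits stay e-close forever:
\[
  d\bigl(f^n(x),\, f^n(y)\bigr) \le e \ \text{for all } n \in \mathbb{Z} \implies x = y.
\]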
In a constantly changing world, animals must account for fluctuations and changes in their environment when making decisions. They must make use of recent information and appropriately discount older, irrelevant information. But to do so, they need to learn the rate at which the environment changes. Recent experimental studies show that humans and other animals can indeed do so. However, it is unclear what underlying computations they use to achieve this. Developing normative models of evidence accumulation is a first step in quantifying such decision-making processes. While optimal, these algorithms are computationally intensive. To address this problem, we developed an approximation of the normative inference process and show how this approximate computation can be implemented in neural circuits. In the second part of the talk, I will discuss evidence accumulation on networks where private information can be shared between neighboring nodes.
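To make the normative-model idea concrete, here is a minimal sketch of the standard discrete-time update for a two-state environment that switches with hazard rate h: the prior log-odds are passed through a discounting nonlinearity before the new observation's log-likelihood ratio is added. This illustrates the general class of models discussed; the speaker's specific normative model and its neural approximation may differ, and the function names below are hypothetical.

# Hypothetical sketch: evidence accumulation when the correct choice can
# switch with hazard rate h per time step (two-state environment).
import numpy as np

def discount(L, h):
    # Discount the prior log-odds L to account for a possible state switch;
    # equals log(((1-h)*exp(L) + h) / ((1-h) + h*exp(L))).
    return L + np.log((1 - h) + h * np.exp(-L)) - np.log((1 - h) + h * np.exp(L))

def accumulate(llrs, h):
    """llrs: per-observation log-likelihood ratios; h: environment hazard rate."""
    L = 0.0
    trace = []
    for x in llrs:
        L = discount(L, h) + x   # forget old evidence, then add new evidence
        trace.append(L)
    return np.array(trace)

For small h the discounting step behaves like a mild leak on the accumulated evidence, which is one reason leaky accumulators are a natural candidate for approximate neural implementations of this computation.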