About Me

[Figure: visualization of the residual of an iterative projection method for linear inequalities]

I am the Iris & Howard Critchell Assistant Professor in the Mathematics Department at Harvey Mudd College. My research focuses on mathematical data science, optimization, and applied convex geometry. I apply mathematical tools, such as those from probability, combinatorics, and convex geometry, to problems in data science and optimization. Areas in which I have been active recently include randomized numerical linear algebra, combinatorial methods for convex optimization, tensor decomposition for topic modeling, network consensus and ranking problems, and community detection on graphs and hypergraphs. My research is supported by NSF CAREER #2440040 “Randomized Iterative Methods for Corrupted Data, Constrained Problems, and Compressed Updates” and has been supported by NSF DMS #2211318 “Tensor Models, Methods, and Medicine”.

Before starting at HMC, I received my PhD from the Graduate Group in Applied Mathematics at the University of California, Davis, where I was fortunate to be advised by Professor Jesús A. De Loera. I then was a CAM Assistant Professor (postdoc) in the University of California, Los Angeles (UCLA) Mathematics Department, where my exceptional postdoctoral mentor was Professor Deanna Needell.


Recent News

November ‘25: We (with collaborators Toby Anderson, Max Collins, Jackie Lok, and Elizaveta Rebrova) submitted our paper Beyond Expectation: Concentration Inequalities for Randomized Iterative Methods! In this paper, we provide upper bounds for the concentration and variance of the error of a general class of linear stochastic iterative methods, including the randomized Kaczmarz method and the randomized Gauss–Seidel method, and a more general class of nonlinear stochastic iterative methods, including the randomized Kaczmarz method for systems of linear inequalities. Most theoretical convergence results for stochastic iterative methods provide bounds on the expected error of the iterates, and yield a type of average-case analysis. However, understanding the behavior of these methods in the near-worst case is desirable. For stochastic methods, this motivates providing bounds on the variance and concentration of their error, which can be used to generate confidence intervals around the bounds on their expected error.
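To illustrate the flavor of methods the paper studies, here is a minimal NumPy sketch (not the paper's code) of the randomized Kaczmarz method, run over many independent trials so that the empirical spread of the final error — the quantity the concentration bounds control — is visible. The problem sizes and sampling scheme are illustrative assumptions.

```python
import numpy as np

def randomized_kaczmarz(A, b, x0, iters, rng):
    """Randomized Kaczmarz: at each step, project the iterate onto the
    solution hyperplane of one row of Ax = b, with the row chosen with
    probability proportional to its squared norm."""
    x = x0.copy()
    row_norms_sq = np.linalg.norm(A, axis=1) ** 2
    probs = row_norms_sq / row_norms_sq.sum()
    for _ in range(iters):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))   # illustrative consistent system
x_true = rng.standard_normal(10)
b = A @ x_true

# Each trial is one random run of the method; the concentration results
# bound how far individual trials can stray from the expected error.
errors = [np.linalg.norm(randomized_kaczmarz(A, b, np.zeros(10), 500, rng) - x_true)
          for _ in range(50)]
print(f"mean error {np.mean(errors):.2e}, std {np.std(errors):.2e}")
```

A histogram of `errors` over many trials gives an empirical picture of the error distribution that the variance and concentration bounds describe.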

September ‘25: Our paper Block Gauss-Seidel methods for t-product tensor regression (with collaborators Alejandra Castillo, Iryna Hartsock, Paulina Hoyos, Lara Kassab, Alona Kryshchenko, Kamila R. Larripa, Deanna Needell, Shambhavi Suryanarayanan, and Karamatou Yacoubou-Djima) was accepted to the journal Numerical Algorithms! In this paper, we extend two variants of the block-randomized Gauss-Seidel method to solve a t-product tensor regression problem. We additionally develop methods for the special case where the measurement tensor is given in factorized form. We provide theoretical guarantees of the exponential convergence rate of our algorithms, accompanied by illustrative numerical simulations.
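For readers unfamiliar with Gauss–Seidel-type iterations, here is a minimal sketch of the matrix analogue underlying the paper — randomized Gauss–Seidel (coordinate descent) for least squares — not the t-product tensor version developed in the paper. The column-norm sampling and problem sizes are illustrative assumptions.

```python
import numpy as np

def randomized_gauss_seidel(A, b, x0, iters, rng):
    """Randomized Gauss–Seidel for least squares: at each step pick a
    column j (probability proportional to its squared norm) and update
    the j-th coordinate of x to minimize the residual ||b - Ax||."""
    x = x0.copy()
    col_norms_sq = np.linalg.norm(A, axis=0) ** 2
    probs = col_norms_sq / col_norms_sq.sum()
    r = b - A @ x                      # maintain the residual incrementally
    for _ in range(iters):
        j = rng.choice(A.shape[1], p=probs)
        delta = A[:, j] @ r / col_norms_sq[j]
        x[j] += delta
        r -= delta * A[:, j]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 10))    # illustrative consistent system
x_true = rng.standard_normal(10)
b = A @ x_true
x = randomized_gauss_seidel(A, b, np.zeros(10), 500, rng)
print(np.linalg.norm(x - x_true))
```

The block variants studied in the paper update several coordinates at once, and the t-product setting replaces matrix-vector products with tensor t-products.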

July ‘25: Our paper Randomized Kaczmarz methods for t-product tensor linear systems with factorized operators (with collaborators Alejandra Castillo, Iryna Hartsock, Paulina Hoyos, Lara Kassab, Alona Kryshchenko, Kamila R. Larripa, Deanna Needell, Shambhavi Suryanarayanan, and Karamatou Yacoubou-Djima) was accepted to the journal BIT Numerical Mathematics! In this paper, we extend the randomized Kaczmarz method to solve a t-product tensor system where the measurement tensor is given in factorized form. We develop variants of the randomized factorized Kaczmarz method for matrices that approximately solve tensor systems in both the consistent and inconsistent regimes. We provide theoretical guarantees of the exponential convergence rate of our algorithms, accompanied by illustrative numerical simulations, including an application to image deblurring!
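As background for the t-product systems in these two papers, here is a minimal sketch (not the papers' code) of the t-product itself: a circular convolution along the third mode, computed as facewise matrix products in the Fourier domain. The tensor sizes are illustrative assumptions.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (m x p x n) and B (p x q x n): FFT along the third
    mode, multiply matching frontal slices, then inverse FFT."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)   # facewise matrix products
    return np.real(np.fft.ifft(Ch, axis=2))

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3, 5))
X = rng.standard_normal((3, 2, 5))
B = t_product(A, X)   # a consistent t-product system A * X = B
print(B.shape)
```

A t-product linear system asks to recover X from A and B; the Kaczmarz and Gauss–Seidel variants in the papers do so iteratively, with A additionally given in factorized form.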