Max Daniels

I am a PhD student in Philippe Rigollet's group at MIT. I'm interested in the intrinsic structure of large, high-dimensional datasets and the conditions under which this structure can be recovered, compressed, or visualized. Previously, I worked on imaging inverse problems, generative modeling with deep neural networks, and optimal transport, supervised by Dr. Paul Hand at Northeastern University and Dr. Lenka Zdeborová at EPFL.


Contact Me

Please reach out by email: .
Alternatively, on social media: Twitter, LinkedIn, and GitHub.

Publications

* indicates equal contribution.

Multi-layer State Evolution Under Random Convolutional Design.

Max Daniels*, Cédric Gerbelot*, Florent Krzakala, Lenka Zdeborová. Published in NeurIPS 2022.

We study signal recovery in a multi-layer model with convolutional matrices, a simple model of a convolutional neural network. We prove state evolution equations for Approximate Message Passing, an algorithm that can be used to compute posterior statistics in high-dimensional Bayesian models. These equations give a tractable way to predict the signal recovery error in the high-dimensional limit.
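
As a toy illustration of the kind of iteration the paper analyzes, here is a minimal sketch of single-layer AMP for sparse recovery with a soft-thresholding denoiser, using an i.i.d. Gaussian design rather than the paper's multi-layer convolutional one; the sizes and the thresholding rule are illustrative assumptions.

```python
# Minimal single-layer AMP sketch for sparse recovery (toy illustration).
import numpy as np

rng = np.random.default_rng(0)
n, m, sparsity = 1000, 500, 0.1           # signal size, measurements, sparsity

x_true = rng.normal(size=n) * (rng.random(n) < sparsity)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # i.i.d. Gaussian design
y = A @ x_true

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z = np.zeros(n), y.copy()
for _ in range(30):
    # Effective noise level estimated from the current residual.
    tau = np.linalg.norm(z) / np.sqrt(m)
    x_new = soft_threshold(x + A.T @ z, tau)
    # Onsager correction: this term distinguishes AMP from plain
    # iterative soft thresholding.
    z = y - A @ x_new + (np.count_nonzero(x_new) / m) * z
    x = x_new

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The Onsager term keeps the effective noise in each iterate approximately Gaussian, which is precisely the property that state evolution equations track.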

Score-based Generative Neural Networks for Large-Scale Optimal Transport.

Max Daniels, Tyler Maunu, Paul Hand. Published in NeurIPS 2021.

We propose a new method for solving a regularized form of the Optimal Transport problem: learn a transportation plan between given source and target probability distributions that minimizes the cost of executing the plan. We prove global optimization guarantees for a fast, large-scale learning algorithm and demonstrate strong empirical performance.
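
For intuition, here is a minimal discrete-case sketch of entropy-regularized OT solved with Sinkhorn iterations; the paper's method targets the large-scale continuous setting with score-based neural networks, so the sample data, regularization strength, and iteration count below are illustrative assumptions.

```python
# Discrete entropy-regularized OT via Sinkhorn iterations (toy illustration).
import numpy as np

rng = np.random.default_rng(0)
n = 50
source = rng.random((n, 2))               # samples in the unit square
target = rng.random((n, 2)) + 0.5         # shifted target samples

# Squared-Euclidean cost matrix and uniform marginals.
C = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
a = b = np.full(n, 1.0 / n)
eps = 0.05                                # entropic regularization strength

K = np.exp(-C / eps)
u = np.ones(n)
for _ in range(500):
    v = b / (K.T @ u)                     # alternately rescale to match
    u = a / (K @ v)                       # the two marginal constraints

plan = u[:, None] * K * v[None, :]        # approximate transport plan
print("transport cost:", (plan * C).sum())
```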

Generator Surgery for Compressed Sensing.

Jung Yeon Park*, Niklas Smedemark-Margulies*, Max Daniels, Rose Yu, Jan-Willem van de Meent, Paul Hand. Presented at the NeurIPS 2020 Deep Inverse Workshop.

Generative priors for imaging inverse problems model the images in a dataset, so that reconstructions can be sought within the generator's range. We demonstrate a simple method that improves recovery performance by modifying these priors after training, though the modified prior can no longer be used to generate samples.
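
A minimal sketch of the CSGM-style baseline that generator surgery builds on: recover an image by searching the generator's latent space for an output consistent with the measurements. The tiny untrained MLP below is a hypothetical stand-in for a trained generator, and the dimensions and optimizer settings are illustrative assumptions.

```python
# Compressed-sensing recovery with a generative prior (toy illustration).
import torch

torch.manual_seed(0)
k, n, m = 20, 256, 64                     # latent dim, image dim, measurements

G = torch.nn.Sequential(                  # hypothetical stand-in for a trained generator
    torch.nn.Linear(k, 128), torch.nn.ReLU(), torch.nn.Linear(128, n)
)
A = torch.randn(m, n) / m ** 0.5          # random measurement matrix
x_true = G(torch.randn(k)).detach()       # a target image in the generator's range
y = A @ x_true

z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((A @ G(z) - y) ** 2).sum()    # match measurements in the range of G
    loss.backward()
    opt.step()

print("measurement error:", loss.item())
```

Generator surgery modifies the trained G itself before running this kind of search, which is what trades away its ability to generate samples.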

Invertible generative models for inverse problems: mitigating representation error and dataset bias.

Muhammad Asim*, Max Daniels*, Oscar Leong, Paul Hand, and Ali Ahmed. Published in ICML 2020.

In an imaging inverse problem, one must recover missing information about a target image using prior assumptions on the image structure. We show that Invertible Neural Networks can be used to vastly outperform classical approaches when one has access to a dataset of known images.
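
The key property the paper exploits is that an invertible generator has zero representation error: every image x has an exact latent code z = G^{-1}(x). The single affine coupling layer below (the building block of flows such as RealNVP and Glow) is a minimal sketch of that invertibility; a real model stacks many such layers and is trained on data, and all dimensions here are illustrative assumptions.

```python
# One affine coupling layer: exactly invertible by construction (toy sketch).
import torch

torch.manual_seed(0)
d = 8                                     # toy "image" dimension (must be even)
net = torch.nn.Sequential(torch.nn.Linear(d // 2, d), torch.nn.Tanh())

def forward(z):
    z1, z2 = z[: d // 2], z[d // 2:]
    s, t = net(z1).chunk(2)               # scale and shift from the first half
    return torch.cat([z1, z2 * torch.exp(s) + t])

def inverse(x):
    x1, x2 = x[: d // 2], x[d // 2:]
    s, t = net(x1).chunk(2)
    return torch.cat([x1, (x2 - t) * torch.exp(-s)])

x = torch.randn(d)                        # any image, even one far from the data
z = inverse(x)                            # its exact latent code
print("reconstruction error:", (forward(z) - x).abs().max().item())
```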

Statistical Distances and Their Implications to GAN Training.

Max Daniels. Presented at VISxAI workshop at IEEE VIS 2019. Honorable mention for best submission.

This is an interactive article about the role of statistical distances like the Kullback-Leibler Divergence and Earth Mover's Distance in training Generative Adversarial Networks (GANs).
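
A small numerical sketch of the contrast the article explores: KL divergence is infinite for distributions with disjoint supports, while the Earth Mover's distance degrades gracefully. The toy distributions below are illustrative assumptions.

```python
# KL divergence vs. Earth Mover's distance on disjoint supports (toy sketch).
import numpy as np
from scipy.stats import wasserstein_distance

def kl(p, q):
    mask = p > 0
    if np.any(q[mask] == 0):
        return np.inf                     # disjoint support: KL is infinite
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

support = np.arange(4)
p = np.array([0.5, 0.5, 0.0, 0.0])
q = np.array([0.0, 0.0, 0.5, 0.5])        # support disjoint from p

print(kl(p, q))                                       # inf: no useful signal
print(wasserstein_distance(support, support, p, q))   # 2.0: finite and smooth
```

This graceful behavior under disjoint supports is the standard motivation for Wasserstein GANs.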

An Overview of Graph Spectral Clustering and Partial Differential Equations.

Max Daniels*, Catherine Huang*, Chloe Makdad*, Shubham Makharia*. Product of a 2020 summer undergraduate research program run by the Institute for Computational and Experimental Research in Mathematics (ICERM).

Clustering is a useful tool in data analysis. We explain the connection between the graph spectral clustering algorithm and the physical process of heat diffusion.
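
As a minimal sketch of the algorithm the report studies, here is graph spectral clustering on two Gaussian blobs; the data, kernel bandwidth, and use of the unnormalized Laplacian are illustrative assumptions. The same Laplacian L generates heat diffusion on the graph (du/dt = -Lu), which is the connection to partial differential equations.

```python
# Graph spectral clustering via the Fiedler vector (toy illustration).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])

# Gaussian-kernel adjacency and unnormalized graph Laplacian L = D - W.
d2 = ((pts[:, None] - pts[None, :]) ** 2).sum(-1)
W = np.exp(-d2)
L = np.diag(W.sum(1)) - W

# The second-smallest eigenvector of L (the Fiedler vector) changes sign
# between the two clusters; thresholding it at zero recovers the labels.
_, vecs = eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
print(labels)
```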

Resources

More coming soon!
