ELE520: Mathematics of Data Science
The term project can be either a literature review or original research, and you may complete it either individually or in a group of two:
Literature review. We will provide a list of related papers not covered in the lectures; the literature review should offer an in-depth summary and exposition of one of these papers.
Original research. It can be either theoretical or experimental (ideally a mix of the two), with approval from the instructor. You are encouraged to connect your current research with your term project.
There are two milestones/deliverables to help you through the process.
Proposal (due Oct. 13). Submit a short report (no more than one page) stating the papers you plan to survey or the research problems you plan to work on. Describe why they are important or interesting, and provide appropriate references. If you elect to do original research, please do not propose an overly ambitious project that cannot be completed by the end of the semester, and resist the lure of generality. Focus on the simplest scenarios that capture the issues you’d like to address.
A written report (due Dec. 15). You are expected to submit a final project report—up to 4 pages, with an unlimited appendix—summarizing your findings and contributions. You must turn in an electronic copy.
A few suggested (theoretical) papers for literature review
‘‘The landscape of empirical risk for nonconvex losses,’’ S. Mei, Y. Bai, and A. Montanari, The Annals of Statistics, 2018.
‘‘Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion and Blind Deconvolution,’’ C. Ma, K. Wang, Y. Chi, and Y. Chen, Foundations of Computational Mathematics, 2020.
‘‘Gradient Descent Learns Linear Dynamical Systems,’’ M. Hardt, T. Ma, and B. Recht, Journal of Machine Learning Research, 2018.
‘‘Universality laws for randomized dimension reduction, with applications,’’ S. Oymak and J. A. Tropp, Information and Inference, 2017.
‘‘Phase transitions in semidefinite relaxations,’’ A. Javanmard, A. Montanari, and F. Ricci-Tersenghi, Proceedings of the National Academy of Sciences, 2016.
‘‘On the Optimization Landscape of Tensor Decompositions,’’ R. Ge and T. Ma, Advances in Neural Information Processing Systems, 2017.
‘‘SLOPE is adaptive to unknown sparsity and asymptotically minimax,’’ W. Su and E. Candes, The Annals of Statistics, 2016.
‘‘Spectral methods meet EM: A provably optimal algorithm for crowdsourcing,’’ Y. Zhang, X. Chen, D. Zhou, and M. Jordan, Advances in Neural Information Processing Systems, 2014.
‘‘Tensor SVD: Statistical and Computational Limits,’’ A. Zhang and D. Xia, IEEE Transactions on Information Theory, 2018.
‘‘No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis,’’ R. Ge, C. Jin, and Y. Zheng, International Conference on Machine Learning, 2017.
‘‘Is Q-learning Provably Efficient?’’ C. Jin, Z. Allen-Zhu, S. Bubeck, and M. Jordan, Advances in Neural Information Processing Systems, 2018.
‘‘Nonconvex Low-Rank Symmetric Tensor Completion from Noisy Data,’’ C. Cai, G. Li, H. V. Poor, and Y. Chen, Advances in Neural Information Processing Systems, 2019.
‘‘The Landscape of the Spiked Tensor Model,’’ G. Ben Arous, S. Mei, A. Montanari, and M. Nica, Communications on Pure and Applied Mathematics, 2019.
‘‘Learning Mixtures of Low-Rank Models,’’ Y. Chen, C. Ma, H. V. Poor, and Y. Chen, 2020.
‘‘Self-regularizing Property of Nonparametric Maximum Likelihood Estimator in Mixture Models,’’ Y. Polyanskiy and Y. Wu, 2020.
‘‘Algorithmic Regularization in Overparameterized Matrix Sensing and Neural Networks with Quadratic Activations,’’ Y. Li, T. Ma, and H. Zhang, COLT 2018.
‘‘Inference and Uncertainty Quantification for Noisy Matrix Completion,’’ Y. Chen, J. Fan, C. Ma, and Y. Yan, Proceedings of the National Academy of Sciences (PNAS), 2019.
‘‘The Lasso with General Gaussian Designs with Applications to Hypothesis Testing,’’ M. Celentano, A. Montanari, and Y. Wei, 2020.
‘‘Matrix concentration for products,’’ D. Huang, J. Niles-Weed, J. Tropp, and R. Ward, 2020.
‘‘Meta-learning for Mixed Linear Regression,’’ W. Kong, R. Somani, Z. Song, S. Kakade, and S. Oh, International Conference on Machine Learning, 2020.
‘‘FedSplit: An Algorithmic Framework for Fast Federated Optimization,’’ R. Pathak and M. Wainwright, 2020.
‘‘Nonconvex Matrix Completion with Linearly Parameterized Factors,’’ J. Chen, X. Li, and Z. Ma, 2020.
‘‘Low-rank Matrix Recovery with Composite Optimization: Good Conditioning and Rapid Convergence,’’ V. Charisopoulos, Y. Chen, D. Davis, M. Diaz, L. Ding, and D. Drusvyatskiy, 2019.
‘‘Toward the Fundamental Limits of Imitation Learning,’’ N. Rajaraman, L. Yang, J. Jiao, and K. Ramchandran, 2020.
‘‘Robust Estimation via Generalized Quasi-gradients,’’ B. Zhu, J. Jiao, and J. Steinhardt, 2020.
‘‘Decoupling Representation Learning from Reinforcement Learning,’’ A. Stooke, K. Lee, P. Abbeel, and M. Laskin, 2020.
‘‘Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?’’ S. Du, S. Kakade, R. Wang, L. Yang, International Conference on Learning Representations, 2020.
‘‘Deep Networks and the Multiple Manifold Problem,’’ S. Buchanan, D. Gilboa, and J. Wright, 2020.
‘‘Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model,’’ G. Li, Y. Wei, Y. Chi, Y. Gu, and Y. Chen, 2020.
‘‘Two Models of Double Descent for Weak Features,’’ M. Belkin, D. Hsu, and J. Xu, 2019.
‘‘Kernel and Rich Regimes in Overparametrized Models,’’ B. Woodworth, S. Gunasekar, J. Lee, E. Moroshko, P. Savarese, I. Golan, D. Soudry, and N. Srebro, 2020.
‘‘Just Interpolate: Kernel Ridgeless Regression Can Generalize,’’ T. Liang, A. Rakhlin, The Annals of Statistics, 2020.
‘‘Benign Overfitting in Linear Regression,’’ P. Bartlett, P. Long, G. Lugosi, and A. Tsigler, Proceedings of the National Academy of Sciences (PNAS), 2020.
‘‘Consistent Risk Estimation in High-Dimensional Linear Regression,’’ J. Xu, A. Maleki, and K. Rad, 2019.
‘‘Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification,’’ C. Dan, Y. Wei, and P. Ravikumar, International Conference on Machine Learning, 2020.
‘‘Learning Models with Uniform Performance via Distributionally Robust Optimization,’’ J. Duchi and H. Namkoong, The Annals of Statistics, 2020.
‘‘The Importance of Better Models in Stochastic Optimization,’’ H. Asi and J. Duchi, Proceedings of the National Academy of Sciences (PNAS), 2019.
‘‘Gaussian Differential Privacy,’’ J. Dong, A. Roth, and W. Su, Journal of the Royal Statistical Society: Series B, 2020.
‘‘Precise Tradeoffs in Adversarial Training for Linear Regression,’’ A. Javanmard, M. Soltanolkotabi, and H. Hassani, 2020.
‘‘Robust Estimation via Robust Gradient Estimation,’’ A. Prasad, A. Suggala, S. Balakrishnan, and P. Ravikumar, Journal of the Royal Statistical Society: Series B, 2020.
You are also free to select a paper of your own interest (especially a more practical paper), as long as it relates to the topics of this course.
