STAT 991-302: Mathematics of High-Dimensional Data

Yuxin Chen, Wharton Statistics and Data Science, Spring 2022

The term project can be either a literature review or a piece of original research, and you may complete it individually or in a group of two:

  • Literature review. We will provide a list of related papers not covered in the lectures, and the literature review should give an in-depth summary and exposition of one of these papers. If you choose this option, you must work individually.

  • Original research. It can be either theoretical or experimental (ideally a mix of the two). If you choose this option, you may work individually or in a group of two. You are encouraged to connect the term project with your current research.

There are two milestones / deliverables to help you through the process.

  1. Proposal (due Feb. 27). Submit a short report (no more than 1 page) stating the papers you plan to survey or the research problems you plan to work on. Describe why they are important or interesting, and provide appropriate references. If you elect to do original research, please do not propose an overly ambitious project that cannot be completed by the end of the semester, and do not be overly drawn to generality. Focus on the simplest scenarios that capture the issues you’d like to address.

  2. A written report (due May 8). You are expected to submit a final project report (up to 5 pages, with an unlimited appendix) summarizing your findings and contributions. You must turn in an electronic copy.

A few suggested (theoretical) papers for literature review

  1. ‘‘The Landscape of Empirical Risk for Non-Convex Losses,’’ S. Mei, Y. Bai, and A. Montanari, The Annals of Statistics, 2018.

  2. ‘‘Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion and Blind Deconvolution,’’ C. Ma, K. Wang, Y. Chi, and Y. Chen, Foundations of Computational Mathematics, 2020.

  3. ‘‘Gradient Descent Learns Linear Dynamical Systems,’’ M. Hardt, T. Ma, and B. Recht, Journal of Machine Learning Research, 2018.

  4. ‘‘Phase Transitions in Semidefinite Relaxations,’’ A. Javanmard, A. Montanari, and F. Ricci-Tersenghi, Proceedings of the National Academy of Sciences, 2016.

  5. ‘‘Nonconvex Low-Rank Tensor Completion from Noisy Data,’’ C. Cai, G. Li, H. V. Poor, Y. Chen, accepted to Operations Research, 2020.

  6. ‘‘The Landscape of the Spiked Tensor Model,’’ G. Ben Arous, S. Mei, A. Montanari, and M. Nica, Communications on Pure and Applied Mathematics, 2019.

  7. ‘‘Self-regularizing Property of Nonparametric Maximum Likelihood Estimator in Mixture Models,’’ Y. Polyanskiy, Y. Wu, 2020.

  8. ‘‘Inference and Uncertainty Quantification for Noisy Matrix Completion,’’ Y. Chen, J. Fan, C. Ma, and Y. Yan, Proceedings of the National Academy of Sciences (PNAS), 2019.

  9. ‘‘The Lasso with General Gaussian Designs with Applications to Hypothesis Testing,’’ M. Celentano, A. Montanari, Y. Wei, 2020.

  10. ‘‘Matrix Concentration for Products,’’ D. Huang, J. Niles-Weed, J. Tropp, and R. Ward, 2020.

  11. ‘‘Meta-learning for Mixed Linear Regression,’’ W. Kong, R. Somani, Z. Song, S. Kakade, S. Oh, International Conference on Machine Learning, 2020.

  12. ‘‘FedSplit: An Algorithmic Framework for Fast Federated Optimization,’’ R. Pathak, M. Wainwright, 2020.

  13. ‘‘Toward the Fundamental Limits of Imitation Learning,’’ N. Rajaraman, L. Yang, J. Jiao, K. Ramchandran, 2020.

  14. ‘‘Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?’’ S. Du, S. Kakade, R. Wang, L. Yang, International Conference on Learning Representations, 2020.

  15. ‘‘Deep Networks and the Multiple Manifold Problem,’’ S. Buchanan, D. Gilboa, J. Wright, 2020.

  16. ‘‘Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model,’’ G. Li, Y. Wei, Y. Chi, Y. Gu, Y. Chen, Neural Information Processing Systems, 2020.

  17. ‘‘Two Models of Double Descent for Weak Features,’’ M. Belkin, D. Hsu, J. Xu, 2019.

  18. ‘‘Kernel and Rich Regimes in Overparametrized Models,’’ B. Woodworth, S. Gunasekar, J. Lee, E. Moroshko, P. Savarese, I. Golan, D. Soudry, N. Srebro, 2020.

  19. ‘‘Just Interpolate: Kernel Ridgeless Regression Can Generalize,’’ T. Liang, A. Rakhlin, The Annals of Statistics, 2020.

  20. ‘‘Minimum L1 Interpolators: Precise Asymptotics and Multiple Descent,’’ Y. Li, Y. Wei, 2021.

  21. ‘‘Benign Overfitting in Linear Regression,’’ P. Bartlett, P. Long, G. Lugosi, and A. Tsigler, Proceedings of the National Academy of Sciences (PNAS), 2020.

  22. ‘‘Consistent Risk Estimation in High-Dimensional Linear Regression,’’ J. Xu, A. Maleki, K. Rad, 2019.

  23. ‘‘Learning Models with Uniform Performance via Distributionally Robust Optimization,’’ J. Duchi and H. Namkoong, The Annals of Statistics, 2020.

  24. ‘‘The Importance of Better Models in Stochastic Optimization,’’ H. Asi and J. Duchi, Proceedings of the National Academy of Sciences (PNAS), 2019.

  25. ‘‘Gaussian Differential Privacy,’’ J. Dong, A. Roth, W. Su, Journal of the Royal Statistical Society: Series B, 2020.

  26. ‘‘Precise Tradeoffs in Adversarial Training for Linear Regression,’’ A. Javanmard, M. Soltanolkotabi, H. Hassani, 2020.

  27. ‘‘Robust Estimation via Robust Gradient Estimation,’’ A. Prasad, A. Suggala, S. Balakrishnan, P. Ravikumar, Journal of the Royal Statistical Society, Series B, 2020.

  28. ‘‘Prevalence of Neural Collapse During the Terminal Phase of Deep Learning Training,’’ V. Papyan, X. Han, and D. Donoho, Proceedings of the National Academy of Sciences (PNAS), 2020.

  29. ‘‘Exploring Deep Neural Networks via Layer-Peeled Model: Minority Collapse in Imbalanced Training,’’ C. Fang, H. He, Q. Long, W. Su, Proceedings of the National Academy of Sciences (PNAS), 2021.

  30. ‘‘V-Learning – A Simple, Efficient, Decentralized Algorithm for Multiagent RL,’’ C. Jin, Q. Liu, Y. Wang, T. Yu, 2021.

  31. ‘‘The Statistical Complexity of Interactive Decision Making,’’ D. Foster, S. Kakade, J. Qian, A. Rakhlin, 2021.

  32. ‘‘On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points,’’ C. Jin, P. Netrapalli, R. Ge, S. Kakade, M. Jordan, 2019.

  33. ‘‘Universal Approximation Bounds for Superpositions of a Sigmoidal Function,’’ A. Barron, IEEE Transactions on Information Theory, 1993.

  34. ‘‘Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability,’’ J. Cohen, S. Kaur, Y. Li, J. Z. Kolter, A. Talwalkar, 2021.

  35. ‘‘Inference for Heteroskedastic PCA with Missing Data,’’ Y. Yan, Y. Chen, J. Fan, 2021.

  36. ‘‘A Theoretical Analysis of Contrastive Unsupervised Representation Learning,’’ S. Arora, H. Khandeparkar, M. Khodak, O. Plevrakis, N. Saunshi, International Conference on Machine Learning, 2019.

  37. ‘‘Contrastive Learning, Multi-View Redundancy, and Linear Models,’’ C. Tosh, A. Krishnamurthy, D. Hsu, Algorithmic Learning Theory, 2021.

  38. ‘‘Predicting What You Already Know Helps: Provable Self-Supervised Learning,’’ J. Lee, Q. Lei, N. Saunshi, J. Zhuo, Neural Information Processing Systems, 2021.

  39. ‘‘Invariant Risk Minimization,’’ M. Arjovsky, L. Bottou, I. Gulrajani, D. Lopez-Paz, 2019.

  40. ‘‘Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data,’’ C. Wei, K. Shen, Y. Chen, T. Ma, 2020.

  41. ‘‘Implicit Regularization Towards Rank Minimization in ReLU Networks,’’ N. Timor, G. Vardi, O. Shamir, 2022.

  42. ‘‘The Implicit Bias of Benign Overfitting,’’ O. Shamir, 2022.

  43. ‘‘Minimax Regret Optimization for Robust Machine Learning under Distribution Shift,’’ A. Agarwal, T. Zhang, 2022.

You are also free to select a paper of your own interest (including more practice-oriented papers), as long as it is related to the topics of this course.