ELE538: Mathematics of High-Dimensional Data

Yuxin Chen, Princeton University, Fall 2018

The term project can be either a literature review or original research:

  • Literature review. We will provide a list of related papers not covered in the lectures; the literature review should offer an in-depth summary and exposition of one of these papers.

  • Original research. It can be either theoretical or experimental (ideally a mix of the two), with approval from the instructor. If you choose this option, you can do it either individually or in groups of two. You are encouraged to combine your current research with your term project.

There are three milestones / deliverables to help you through the process.

  1. Proposal (due Oct. 19). Submit a short report (no more than 1 page) stating the papers you plan to survey or the research problems you plan to work on. Describe why they are important or interesting, and provide appropriate references. If you elect to do original research, please do not propose an overly ambitious project that cannot be completed by the end of the semester, and do not be lured by excessive generality. Focus on the simplest scenarios that capture the issues you’d like to address.

  2. In-class presentation. Prepare an oral presentation with slides (the time allotted will depend on the number of projects in the class). Focus on high-level ideas, and leave most technical details to your report.

  3. A written report (due Jan. 14). You are expected to submit a final project report—up to 3 pages with an unlimited appendix—summarizing your findings / contributions. Turn in a hard copy of your report to my office (you can slip it under the door), and email an electronic copy to me for our records.

A few suggested (theoretical) papers for literature review

  1. ‘‘The landscape of empirical risk for non-convex losses,’’ S. Mei, Y. Bai, and A. Montanari, 2016.

  2. ‘‘Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima,’’ P. Loh and M. Wainwright, 2013.

  3. ‘‘Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion and Blind Deconvolution,’’ C. Ma, K. Wang, Y. Chi, and Y. Chen, 2017.

  4. ‘‘On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition,’’ M. Mondelli and A. Montanari, 2018.

  5. ‘‘Learning One-hidden-layer Neural Networks with Landscape Design,’’ R. Ge, J. Lee, and T. Ma, 2017.

  6. ‘‘Gradient Descent Learns Linear Dynamical Systems,’’ M. Hardt, T. Ma, and B. Recht, 2016.

  7. ‘‘Memory-efficient Kernel PCA via Partial Matrix Sampling and Nonconvex Optimization: a Model-free Analysis of Local Minima,’’ J. Chen and X. Li, 2017.

  8. ‘‘Characterizing Implicit Bias in Terms of Optimization Geometry,’’ S. Gunasekar, J. Lee, D. Soudry, and N. Srebro, 2018.

  9. ‘‘Universality laws for randomized dimension reduction, with applications,’’ S. Oymak and J. A. Tropp, arXiv preprint arXiv:1511.09433, 2015.

  10. ‘‘Phase transitions in semidefinite relaxations,’’ A. Javanmard, A. Montanari, and F. Ricci-Tersenghi, Proceedings of the National Academy of Sciences, 2016.

  11. ‘‘Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery,’’ C. Mu, B. Huang, J. Wright, and D. Goldfarb, International Conference on Machine Learning, 2014.

  12. ‘‘On the Optimization Landscape of Tensor Decompositions,’’ R. Ge and T. Ma, 2016.

  13. ‘‘SLOPE is adaptive to unknown sparsity and asymptotically minimax,’’ W. Su and E. Candes, The Annals of Statistics, 2016.

  14. ‘‘Spectral methods meet EM: A provably optimal algorithm for crowdsourcing,’’ Y. Zhang, X. Chen, D. Zhou, and M. Jordan, Advances in Neural Information Processing Systems, 2014.

  15. ‘‘Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data,’’ Y. Li and Y. Liang, 2018.

  16. ‘‘Approximability of Discriminators Implies Diversity in GANs,’’ Y. Bai, T. Ma, and A. Risteski, 2018.

  17. ‘‘Spectral algorithms for tensor completion,’’ A. Montanari and N. Sun, 2016.

  18. ‘‘Tensor SVD: Statistical and Computational Limits,’’ A. Zhang, 2017.

  19. ‘‘A Mean Field View of the Landscape of Two-Layers Neural Networks,’’ S. Mei, A. Montanari, and P. Nguyen, 2018.

  20. ‘‘No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis,’’ R. Ge, C. Jin, and Y. Zheng, 2017.

  21. ‘‘Is Q-learning Provably Efficient?’’ C. Jin, Z. Allen-Zhu, S. Bubeck, and M. Jordan, 2018.

You are also free to select a paper of your own interest (in particular, more practical papers), as long as it relates to the topics of this course.