Textbooks. We will draw on material from the following books, although we will not follow any of them closely.
Spectral methods for data science: A statistical perspective, Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma, Foundations and Trends in Machine Learning, vol. 14, no. 5, pp. 566–806, 2021.
An introduction to matrix concentration inequalities, Joel A. Tropp, Foundations and Trends in Machine Learning, vol. 8, no. 1–2, pp. 1–230, 2015.
High-dimensional statistics: A non-asymptotic viewpoint, Martin J. Wainwright, Cambridge University Press, 2019.
References. The following references also cover topics relevant to this course; you may wish to consult them.
Statistical foundations of data science, Jianqing Fan, Runze Li, Cun-Hui Zhang, Hui Zou, Chapman and Hall/CRC, 2020.
High-dimensional probability: An introduction with applications in data science, Roman Vershynin, Cambridge University Press, 2018.
High-dimensional data analysis with low-dimensional models: Principles, computation, and applications, John Wright, Yi Ma, Cambridge University Press, 2022.
Mathematics of sparsity (and a few other things), Emmanuel Candès, Proceedings of the International Congress of Mathematicians, 2014.
Nonconvex optimization meets low-rank matrix factorization: An overview, Yuejie Chi, Yue M. Lu, Yuxin Chen, IEEE Transactions on Signal Processing, vol. 67, no. 20, pp. 5239–5269, Oct. 2019.
Topics in random matrix theory, Terence Tao, American Mathematical Society, 2012.