Textbooks. We recommend the following books, although we will not follow them closely.
High-dimensional statistics: A non-asymptotic viewpoint, Martin J. Wainwright, Cambridge University Press, 2019.
Statistical Foundations of Data Science, Jianqing Fan, Runze Li, Cun-Hui Zhang, Hui Zou, Chapman and Hall/CRC, 2020.
High-dimensional probability: An introduction with applications in data science, Roman Vershynin, Cambridge University Press, 2018.
High-dimensional data analysis with sparse models: Theory, algorithms, and applications, John Wright, Yi Ma, Allen Yang, 2018.
References. The following references cover topics relevant to this course, and you may want to consult them.
Mathematics of sparsity (and a few other things), Emmanuel Candès, Proceedings of the International Congress of Mathematicians, 2014.
Nonconvex optimization meets low-rank matrix factorization: An overview, Yuejie Chi, Yue M. Lu, Yuxin Chen, IEEE Transactions on Signal Processing, vol. 67, no. 20, pp. 5239–5269, Oct. 2019.
An introduction to matrix concentration inequalities, Joel A. Tropp, Foundations and Trends in Machine Learning, 2015.
Topics in random matrix theory, Terence Tao, American Mathematical Society, 2012.