Publications

Monographs and overview articles

  1. Spectral Methods for Data Science: A Statistical Perspective
    Y. Chen, Y. Chi, J. Fan, C. Ma, Foundations and Trends in Machine Learning, vol. 14, no. 5, pp. 566–806, 2021. [Arxiv][FnT link]

  2. Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview
    Y. Chi, Y. Lu, Y. Chen, IEEE Transactions on Signal Processing, vol. 67, no. 20, pp. 5239-5269, October 2019 (invited overview article). [TSP version][Arxiv][slides]



2022

  1. Minimax-Optimal Multi-Agent RL in Zero-Sum Markov Games With a Generative Model
    G. Li, Y. Chi, Y. Wei, Y. Chen, accepted to Neural Information Processing Systems (NeurIPS), December 2022. [paper]

  2. Model-Based Reinforcement Learning Is Minimax-Optimal for Offline Zero-Sum Markov Games
    Y. Yan, G. Li, Y. Chen, J. Fan, 2022. [paper]

  3. Settling the Sample Complexity of Model-Based Offline Reinforcement Learning
    G. Li, L. Shi, Y. Chen, Y. Chi, Y. Wei, 2022. [paper]

  4. The Efficacy of Pessimism in Asynchronous Q-Learning
    Y. Yan, G. Li, Y. Chen, J. Fan, 2022. [paper]

  5. Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity
    L. Shi, G. Li, Y. Wei, Y. Chen, Y. Chi, International Conference on Machine Learning (ICML), July 2022. [paper][ICML version]

  6. MagNet: An Open-Source Database for Data-Driven Magnetic Core Loss Modeling
    H. Li, D. Serrano, T. Guillod, E. Dogariu, A. Nadler, S. Wang, M. Luo, V. Bansal, Y. Chen, C. R. Sullivan, and M. Chen, IEEE Applied Power Electronics Conference (APEC), 2022. [paper][Github repo][website]

2021

  1. Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning
    G. Li, L. Shi, Y. Chen, Y. Chi, 2021. [paper][slides]
           — appeared in part in NeurIPS 2021

  2. Inference for Heteroskedastic PCA with Missing Data
    Y. Yan, Y. Chen, J. Fan, 2021. [paper]

  3. Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting
    G. Li, Y. Chen, Y. Chi, Y. Gu, Y. Wei, Neural Information Processing Systems (NeurIPS), December 2021. [paper][NeurIPS version][slides]

  4. Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence
    W. Zhan*, S. Cen*, B. Huang, Y. Chen, J. D. Lee, Y. Chi, 2021. (*=equal contributions) [paper]

  5. Minimax Estimation of Linear Functions of Eigenvectors in the Face of Small Eigen-Gaps
    G. Li, C. Cai, H. V. Poor, Y. Chen, 2021. [paper]

  6. Softmax Policy Gradient Methods Can Take Exponential Time to Converge
    G. Li, Y. Wei, Y. Chi, Y. Chen, 2021. [paper][slides]
           — appeared in part in COLT 2021

  7. Is Q-Learning Minimax Optimal? A Tight Sample Complexity Analysis
    G. Li, C. Cai, Y. Chen, Y. Wei, Y. Chi, 2021. [paper]
           — appeared in part in ICML 2021

2020

  1. Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization
    S. Cen, C. Cheng, Y. Chen, Y. Wei, Y. Chi, Operations Research, vol. 70, no. 4, pp. 2563–2578, 2022 (INFORMS George Nicholson award finalist, 2021). [paper][OR version][slides]

  2. Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model
    G. Li, Y. Wei, Y. Chi, Y. Chen, 2020. [paper][slides]
           — appeared in part in NeurIPS 2020

  3. Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction
    G. Li, Y. Wei, Y. Chi, Y. Gu, Y. Chen, IEEE Transactions on Information Theory, vol. 68, no. 1, pp. 448-473, Jan. 2022. [paper][slides]
           — appeared in part in NeurIPS 2020

  4. Convex and Nonconvex Optimization Are Both Minimax-Optimal for Noisy Blind Deconvolution under Random Designs
    Y. Chen, J. Fan, B. Wang, Y. Yan, accepted to Journal of the American Statistical Association, 2020. [paper]

  5. Uncertainty Quantification for Nonconvex Tensor Completion: Confidence Intervals, Heteroscedasticity and Optimality
    C. Cai, H. V. Poor, Y. Chen, accepted to IEEE Transactions on Information Theory, 2022. [paper][slides]
           — appeared in part in ICML 2020

  6. Bridging Convex and Nonconvex Optimization in Robust PCA: Noise, Outliers, and Missing Data
    Y. Chen, J. Fan, C. Ma, Y. Yan, Annals of Statistics, vol. 49, no. 5, pp. 2948-2971, Oct. 2021. [paper][AoS version]

  7. Tackling Small Eigen-gaps: Fine-Grained Eigenvector Estimation and Inference under Heteroscedastic Noise
    C. Cheng, Y. Wei, Y. Chen, IEEE Transactions on Information Theory, vol. 67, no. 11, pp. 7380-7419, Nov. 2021. [paper][slides]

  8. Learning Mixtures of Low-Rank Models
    Y. Chen, C. Ma, H. V. Poor, Y. Chen, IEEE Transactions on Information Theory, vol. 67, no. 7, pp. 4613-4636, July 2021. [paper]

  9. MagNet: A Machine Learning Framework for Magnetic Core Loss Modeling
    H. Li, S. R. Lee, M. Luo, C. R. Sullivan, Y. Chen, M. Chen, IEEE Workshop on Control and Modeling of Power Electronics (COMPEL), 2020. [paper]

2019

  1. Nonconvex Low-Rank Tensor Completion from Noisy Data
    C. Cai, G. Li, H. V. Poor, Y. Chen, Operations Research, vol. 70, no. 2, pp. 1219–1237, 2022. [paper][OR version][slides]
           — appeared in part in NeurIPS 2019

  2. Subspace Estimation from Unbalanced and Incomplete Data Matrices: $\ell_{2,\infty}$ Statistical Guarantees
    C. Cai, G. Li, Y. Chi, H. V. Poor, Y. Chen, Annals of Statistics, vol. 49, no. 2, pp. 944-967, 2021. [paper][AoS version]

  3. Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction
    B. Li, S. Cen, Y. Chen, Y. Chi, Journal of Machine Learning Research, vol. 21, no. 180, pp. 1-51, 2020. [paper][code]
           — appeared in part in AISTATS 2020

  4. Inference and Uncertainty Quantification for Noisy Matrix Completion
    Y. Chen, J. Fan, C. Ma, Y. Yan, Proceedings of the National Academy of Sciences (PNAS), vol. 116, no. 46, pp. 22931–22937, Nov. 2019 (direct submission). [PNAS version][full paper][slides]

  5. Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
    Y. Chen, Y. Chi, J. Fan, C. Ma, Y. Yan, SIAM Journal on Optimization, vol. 30, no. 4, pp. 3098–3121, 2020. [paper][SIOPT version][slides]

2018

  1. Gradient Descent with Random Initialization: Fast Global Convergence for Nonconvex Phase Retrieval
    Y. Chen, Y. Chi, J. Fan, C. Ma, Mathematical Programming, vol. 176, no. 1-2, pp. 5-37, July 2019. [paper][MP version][slides]

  2. Nonconvex Matrix Factorization from Rank-One Measurements
    Y. Li, C. Ma, Y. Chen, Y. Chi, IEEE Transactions on Information Theory, vol. 67, no. 3, pp. 1928-1950, March 2021. [paper]
           — appeared in part in AISTATS 2019

  3. Asymmetry Helps: Eigenvalue and Eigenvector Analyses of Asymmetrically Perturbed Low-Rank Matrices
    Y. Chen, C. Cheng, J. Fan, Annals of Statistics, vol. 49, no. 1, pp. 435-458, 2021. [paper][AoS version][slides]

2017

  1. Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution
    C. Ma, K. Wang, Y. Chi, Y. Chen, Foundations of Computational Mathematics, vol. 20, no. 3, pp. 451-632, June 2020. [FoCM version][main text][supplement][Arxiv][slides]
           — appeared in part in ICML 2018

  2. Spectral Method and Regularized MLE Are Both Optimal for Top-K Ranking
    Y. Chen, J. Fan, C. Ma, K. Wang, Annals of Statistics, vol. 47, no. 4, pp. 2204-2235, August 2019. [paper][Arxiv][AoS version][slides]

  3. The Likelihood Ratio Test in High-Dimensional Logistic Regression Is Asymptotically a Rescaled Chi-Square
    P. Sur, Y. Chen, E. J. Candes, Probability Theory and Related Fields, vol. 175, no. 1-2, pp. 487-558, October 2019. [paper][supplement][slides][code]

2016

  1. The Projected Power Method: An Efficient Algorithm for Joint Alignment from Pairwise Differences
    Y. Chen and E. J. Candes, Communications on Pure and Applied Mathematics, vol. 71, no. 8, pp. 1648-1714, August 2018. [paper][slides][code]

  2. Community Recovery in Graphs with Locality
    Y. Chen, G. Kamath, C. Suh, and D. Tse, International Conference on Machine Learning (ICML), June 2016. [paper][ICML version][CGSI slides][slides][Github repo][lecture by D. Tse]

  3. Resolving Phase Ambiguity in Dual-Echo Dixon Imaging Using a Projected Power Method
    T. Zhang, Y. Chen, S. Bao, M. Alley, J. M. Pauly, B. Hargreaves, S. S. Vasanawala, Magnetic Resonance in Medicine, vol. 77, no. 5, pp. 2066-2076, May 2017. [paper][webpage][code]

2015

  1. Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems
    Y. Chen and E. J. Candes, Communications on Pure and Applied Mathematics, vol. 70, no. 5, pp. 822-883, May 2017. [paper][NIPS version][supplement][webpage and code][slides]
           — appeared in part in NIPS 2015 (oral)

  2. Spectral MLE: Top-K Rank Aggregation from Pairwise Comparisons
    Y. Chen and C. Suh, International Conference on Machine Learning (ICML), July 2015. [paper][ICML version]

  3. Information Recovery from Pairwise Measurements
    Y. Chen, C. Suh, and A. J. Goldsmith, IEEE Transactions on Information Theory, vol. 62, no. 10, pp. 5881-5905, Oct. 2016. [paper][slides]
           — appeared in part in ISIT 2014 and ISIT 2015

  4. Robust Self-Navigated Body MRI Using Dense Coil Arrays
    T. Zhang, J. Y. Cheng, Y. Chen, D. G. Nishimura, J. M. Pauly, and S. S. Vasanawala, Magnetic Resonance in Medicine, vol. 76, no. 1, pp. 197-205, 2016. [paper][webpage][code]

2014

  1. Near-Optimal Joint Object Matching via Convex Relaxation
    Y. Chen, L. Guibas, and Q. Huang, International Conference on Machine Learning (ICML), June 2014. [paper][ICML version][slides][code]

  2. Scalable Semidefinite Relaxation for Maximum A Posterior Estimation
    Q. Huang, Y. Chen, and L. Guibas, International Conference on Machine Learning (ICML), June 2014. [paper][slides]

  3. Backing off from Infinity: Performance Bounds via Concentration of Spectral Measure for Random MIMO Channels
    Y. Chen, A. J. Goldsmith, and Y. C. Eldar, IEEE Transactions on Information Theory, vol. 61, no. 1, pp. 366-387, Jan. 2015. [paper]

2013

  1. Exact and Stable Covariance Estimation from Quadratic Sampling via Convex Programming
    Y. Chen, Y. Chi, and A. J. Goldsmith, IEEE Transactions on Information Theory, vol. 61, no. 7, pp. 4034-4059, July 2015. [paper][slides]
           — appeared in part in ISIT 2014 and ICASSP 2014

  2. Robust Spectral Compressed Sensing via Structured Matrix Completion
    Y. Chen and Y. Chi, IEEE Transactions on Information Theory, vol. 60, no. 10, pp. 6576-6601, Oct. 2014. [paper][ICML version][slides]
           — appeared in part in ICML 2013 (full oral)

  3. Compressive Two-Dimensional Harmonic Retrieval via Atomic Norm Minimization
    Y. Chi and Y. Chen, IEEE Transactions on Signal Processing, vol. 63, no. 4, pp. 1030-1042, Feb. 2015. [paper]

  4. Minimax Capacity Loss under Sub-Nyquist Universal Sampling
    Y. Chen, A. J. Goldsmith, and Y. C. Eldar, IEEE Transactions on Information Theory, vol. 63, no. 6, pp. 3348-3367, June 2017. [paper][slides]
           — appeared in part in ISIT 2013

2012

  1. Channel Capacity under Sub-Nyquist Nonuniform Sampling
    Y. Chen, A. J. Goldsmith, and Y. C. Eldar, IEEE Transactions on Information Theory, vol. 60, no. 8, pp. 4739-4756, Aug. 2014. [paper]
           — appeared in part in ISIT 2012

2011

  1. Shannon Meets Nyquist: Capacity of Sampled Gaussian Channels
    Y. Chen, Y. C. Eldar, and A. J. Goldsmith, IEEE Transactions on Information Theory, vol. 59, no. 8, pp. 4889-4914, Aug. 2013. [paper][slides]
           — appeared in part in ICASSP 2011

  2. On the Role of Mobility for Multiple Message Gossip
    Y. Chen, S. Shakkottai, and J. G. Andrews, IEEE Transactions on Information Theory, vol. 59, no. 6, pp. 3953-3970, June 2013. [paper][slides]
           — appeared in part in INFOCOM 2011 (full oral)

2010

  1. An Upper Bound on Multi-hop Transmission Capacity with Dynamic Routing Selection
    Y. Chen and J. G. Andrews, IEEE Transactions on Information Theory, vol. 58, no. 6, pp. 3751-3765, June 2012. [paper]
           — appeared in part in ISIT 2010