Publications

Monographs and overview articles

  1. Spectral Methods for Data Science: A Statistical Perspective
    Y. Chen, Y. Chi, J. Fan, C. Ma, Foundations and Trends in Machine Learning, vol. 14, no. 5, pp. 566–806, 2021. [Arxiv][FnT link][Slides for PKU short course]

  2. Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview
    Y. Chi, Y. Lu, Y. Chen, IEEE Transactions on Signal Processing, vol. 67, no. 20, pp. 5239-5269, October 2019 (invited overview article). [TSP version][Arxiv][slides]



2024

  1. Accelerating Convergence of Score-Based Diffusion Models, Provably
    G. Li*, Y. Huang*, T. Efimov, Y. Wei, Y. Chi, Y. Chen, 2024 (*=equal contributions). [paper]

  2. Horizon-Free Regret for Linear Markov Decision Processes
    Z. Zhang, J. D. Lee, Y. Chen, S. S. Du, International Conference on Learning Representations (ICLR), 2024. [paper]

2023

  1. Optimal Multi-Distribution Learning
    Z. Zhang, W. Zhan, Y. Chen, S. S. Du, J. D. Lee, 2023. [paper][slides]

  2. Heteroskedastic Tensor Clustering
    Y. Zhou, Y. Chen, 2023. [paper]

  3. Settling the Sample Complexity of Online Reinforcement Learning
    Z. Zhang, Y. Chen, J. D. Lee, S. S. Du, 2023. [paper][slides]

  4. Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative Models
    G. Li, Y. Wei, Y. Chen, Y. Chi, 2023. [paper]
           — appeared in part in ICLR 2024

  5. Federated Natural Policy Gradient Methods for Multi-task Reinforcement Learning
    T. Yang, S. Cen, Y. Wei, Y. Chen, Y. Chi, 2023. [paper]

  6. Minimax-Optimal Reward-Agnostic Exploration in Reinforcement Learning
    G. Li, Y. Yan, Y. Chen, J. Fan, 2023. [paper]

  7. Deflated HeteroPCA: Overcoming the Curse of Ill-Conditioning in Heteroskedastic PCA
    Y. Zhou, Y. Chen, 2023. [paper][slides]

  8. Reward-Agnostic Fine-Tuning: Provable Statistical Benefits of Hybrid Reinforcement Learning
    G. Li, W. Zhan, J. D. Lee, Y. Chi, Y. Chen, Neural Information Processing Systems (NeurIPS), December 2023. [paper]

  9. The Curious Price of Distributional Robustness in Reinforcement Learning with a Generative Model
    L. Shi, G. Li, Y. Wei, Y. Chen, M. Geist, Y. Chi, Neural Information Processing Systems (NeurIPS), December 2023. [paper]

  10. Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods
    G. Li, Y. Chen, Y. Chi, H. V. Poor, Y. Chen, 2023. [paper]

  11. Why MagNet: Quantifying the Complexity of Modeling Power Magnetic Material Characteristics
    D. Serrano, H. Li, S. Wang, T. Guillod, M. Luo, V. Bansal, N. Jha, Y. Chen, C. R. Sullivan, and M. Chen, IEEE Transactions on Power Electronics, vol. 38, no. 11, pp. 14292-14316, Nov. 2023. [paper]

  12. How MagNet: Machine Learning Framework for Modeling Power Magnetic Material Characteristics
    H. Li, D. Serrano, T. Guillod, S. Wang, E. Dogariu, A. Nadler, M. Luo, V. Bansal, N. Jha, Y. Chen, C. R. Sullivan, and M. Chen, IEEE Transactions on Power Electronics, vol. 38, no. 12, pp. 15829-15853, Dec. 2023. [paper]

2022

  1. Settling the Sample Complexity of Model-Based Offline Reinforcement Learning
    G. Li, L. Shi, Y. Chen, Y. Chi, Y. Wei, Annals of Statistics, vol. 52, no. 1, pp. 233-260, 2024. [paper][AoS version]

  2. Minimax-Optimal Multi-Agent RL in Markov Games With a Generative Model
    G. Li, Y. Chi, Y. Wei, Y. Chen, Neural Information Processing Systems (NeurIPS), December 2022 (selected as oral). [paper]

  3. Model-Based Reinforcement Learning Is Minimax-Optimal for Offline Zero-Sum Markov Games
    Y. Yan, G. Li, Y. Chen, J. Fan, accepted to Operations Research, 2024+. [paper]

  4. The Efficacy of Pessimism in Asynchronous Q-Learning
    Y. Yan, G. Li, Y. Chen, J. Fan, IEEE Transactions on Information Theory, vol. 69, no. 11, pp. 7185-7219, Nov. 2023. [paper]

  5. Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity
    L. Shi, G. Li, Y. Wei, Y. Chen, Y. Chi, International Conference on Machine Learning (ICML), July 2022. [paper][ICML version]

  6. MagNet: An Open-Source Database for Data-Driven Magnetic Core Loss Modeling
    H. Li, D. Serrano, T. Guillod, E. Dogariu, A. Nadler, S. Wang, M. Luo, V. Bansal, Y. Chen, C. R. Sullivan, and M. Chen, IEEE Applied Power Electronics Conference (APEC), 2022. [paper][Github repo][website]

2021

  1. Softmax Policy Gradient Methods Can Take Exponential Time to Converge
    G. Li, Y. Wei, Y. Chi, Y. Chen, Mathematical Programming, vol. 201, pp. 707-802, 2023. [paper][MP version][slides]
           — appeared in part in COLT 2021

  2. Is Q-Learning Minimax Optimal? A Tight Sample Complexity Analysis
    G. Li, C. Cai, Y. Chen, Y. Wei, Y. Chi, Operations Research, vol. 72, no. 1, pp. 203-221, 2024. [paper][OR version]
           — appeared in part in ICML 2021

  3. Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence
    W. Zhan*, S. Cen*, B. Huang, Y. Chen, J. D. Lee, Y. Chi, SIAM Journal on Optimization, vol. 33, no. 2, pp. 1061-1091, June 2023 (*=equal contributions). [paper]

  4. Inference for Heteroskedastic PCA with Missing Data
    Y. Yan, Y. Chen, J. Fan, accepted to Annals of Statistics, 2024+. [paper][slides]

  5. Breaking the Sample Complexity Barrier to Regret-Optimal Model-free Reinforcement Learning
    G. Li, L. Shi, Y. Chen, Y. Chi, Information and Inference: A Journal of the IMA, vol. 12, no. 2, pp. 969-1043, June 2023. [paper][slides]
           — appeared in part in NeurIPS 2021 (selected as spotlight)

  6. Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting
    G. Li, Y. Chen, Y. Chi, Y. Gu, Y. Wei, Neural Information Processing Systems (NeurIPS), December 2021. [paper][NeurIPS version][slides]

  7. Minimax Estimation of Linear Functions of Eigenvectors in the Face of Small Eigen-Gaps
    G. Li, C. Cai, H. V. Poor, Y. Chen, 2021. [paper]

2020

  1. Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model
    G. Li, Y. Wei, Y. Chi, Y. Chen, Operations Research, vol. 72, no. 1, pp. 222-236, 2024. [paper][OR version][slides]
           — appeared in part in NeurIPS 2020

  2. Bridging Convex and Nonconvex Optimization in Robust PCA: Noise, Outliers, and Missing Data
    Y. Chen, J. Fan, C. Ma, Y. Yan, Annals of Statistics, vol. 49, no. 5, pp. 2948-2971, Oct. 2021. [paper][AoS version]

  3. Convex and Nonconvex Optimization Are Both Minimax-Optimal for Noisy Blind Deconvolution under Random Designs
    Y. Chen, J. Fan, B. Wang, Y. Yan, Journal of the American Statistical Association, vol. 118, no. 542, pp. 858-868, 2023. [paper]

  4. Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization
    S. Cen, C. Cheng, Y. Chen, Y. Wei, Y. Chi, Operations Research, vol. 70, no. 4, pp. 2563–2578, 2022. [paper][OR version][slides]
           — INFORMS George Nicholson award finalist

  5. Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction
    G. Li, Y. Wei, Y. Chi, Y. Gu, Y. Chen, IEEE Transactions on Information Theory, vol. 68, no. 1, pp. 448-473, Jan. 2022. [paper][slides]
           — appeared in part in NeurIPS 2020

  6. Uncertainty Quantification for Nonconvex Tensor Completion: Confidence Intervals, Heteroscedasticity and Optimality
    C. Cai, H. V. Poor, Y. Chen, IEEE Transactions on Information Theory, vol. 69, no. 1, pp. 407-452, Jan. 2023. [paper][slides]
           — appeared in part in ICML 2020

  7. Tackling Small Eigen-gaps: Fine-Grained Eigenvector Estimation and Inference under Heteroscedastic Noise
    C. Cheng, Y. Wei, Y. Chen, IEEE Transactions on Information Theory, vol. 67, no. 11, pp. 7380-7419, Nov. 2021. [paper][slides]

  8. Learning Mixtures of Low-Rank Models
    Y. Chen, C. Ma, H. V. Poor, Y. Chen, IEEE Transactions on Information Theory, vol. 67, no. 7, pp. 4613-4636, July 2021. [paper]

  9. MagNet: A Machine Learning Framework for Magnetic Core Loss Modeling
    H. Li, S. R. Lee, M. Luo, C. R. Sullivan, Y. Chen, M. Chen, IEEE Workshop on Control and Modeling of Power Electronics (COMPEL), 2020. [paper]

2019

  1. Nonconvex Low-Rank Tensor Completion from Noisy Data
    C. Cai, G. Li, H. V. Poor, Y. Chen, Operations Research, vol. 70, no. 2, pp. 1219–1237, 2022. [paper][OR version][slides]
           — appeared in part in NeurIPS 2019

  2. Inference and Uncertainty Quantification for Noisy Matrix Completion
    Y. Chen, J. Fan, C. Ma, Y. Yan, Proceedings of the National Academy of Sciences (PNAS), vol. 116, no. 46, pp. 22931–22937, Nov. 2019 (direct submission). [PNAS version][full paper][slides]

  3. Subspace Estimation from Unbalanced and Incomplete Data Matrices: $\ell_{2,\infty}$ Statistical Guarantees
    C. Cai, G. Li, Y. Chi, H. V. Poor, Y. Chen, Annals of Statistics, vol. 49, no. 2, pp. 944-967, 2021. [paper][AoS version]

  4. Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
    Y. Chen, Y. Chi, J. Fan, C. Ma, Y. Yan, SIAM Journal on Optimization, vol. 30, no. 4, pp. 3098–3121, 2020. [paper][SIOPT version][slides]

  5. Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction
    B. Li, S. Cen, Y. Chen, Y. Chi, Journal of Machine Learning Research, vol. 21, no. 180, pp. 1-51, 2020. [paper][code]
           — appeared in part in AISTATS 2020

2018

  1. Gradient Descent with Random Initialization: Fast Global Convergence for Nonconvex Phase Retrieval
    Y. Chen, Y. Chi, J. Fan, C. Ma, Mathematical Programming, vol. 176, no. 1-2, pp. 5-37, July 2019. [paper][MP version][slides]

  2. Asymmetry Helps: Eigenvalue and Eigenvector Analyses of Asymmetrically Perturbed Low-Rank Matrices
    Y. Chen, C. Cheng, J. Fan, Annals of Statistics, vol. 49, no. 1, pp. 435-458, 2021. [paper][AoS version][slides]

  3. Nonconvex Matrix Factorization from Rank-One Measurements
    Y. Li, C. Ma, Y. Chen, Y. Chi, IEEE Transactions on Information Theory, vol. 67, no. 3, pp. 1928-1950, March 2021. [paper]
           — appeared in part in AISTATS 2019

2017

  1. Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution
    C. Ma, K. Wang, Y. Chi, Y. Chen, Foundations of Computational Mathematics, vol. 20, no. 3, pp. 451-632, June 2020. [FoCM version][main text][supplement][Arxiv][slides]
           — SIAM Activity Group on Imaging Science Best Paper Prize
           — appeared in part in ICML 2018

  2. Spectral Method and Regularized MLE Are Both Optimal for Top-K Ranking
    Y. Chen, J. Fan, C. Ma, K. Wang, Annals of Statistics, vol. 47, no. 4, pp. 2204-2235, August 2019. [paper][Arxiv][AoS version][slides]

  3. The Likelihood Ratio Test in High-Dimensional Logistic Regression Is Asymptotically a Rescaled Chi-Square
    P. Sur, Y. Chen, E. J. Candes, Probability Theory and Related Fields, vol. 175, no. 1-2, pp. 487-558, October 2019. [paper][supplement][slides][code]

2016

  1. The Projected Power Method: An Efficient Algorithm for Joint Alignment from Pairwise Differences
    Y. Chen and E. J. Candes, Communications on Pure and Applied Mathematics, vol. 71, issue 8, pp. 1648-1714, August 2018. [paper][slides][code]

  2. Community Recovery in Graphs with Locality
    Y. Chen, G. Kamath, C. Suh, and D. Tse, International Conference on Machine Learning (ICML), June 2016. [paper][ICML version][CGSI slides][slides][Github repo][lecture by D. Tse]

  3. Resolving Phase Ambiguity in Dual-Echo Dixon Imaging Using a Projected Power Method
    T. Zhang, Y. Chen, S. Bao, M. Alley, J. M. Pauly, B. Hargreaves, S. S. Vasanawala, Magnetic Resonance in Medicine, vol. 77, no. 5, pp. 2066-2076, May 2017. [paper][webpage][code]

2015

  1. Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems
    Y. Chen and E. J. Candes, Communications on Pure and Applied Mathematics, vol. 70, issue 5, pp. 822-883, May 2017. [paper][NIPS version][supplement][webpage and code][slides]
           — ICCM Best Paper Award (Gold Medal)
           — Finalist, Best Paper Prize for Young Researchers in Continuous Optimization
           — appeared in part in NIPS 2015 (oral)

  2. Spectral MLE: Top-K Rank Aggregation from Pairwise Comparisons
    Y. Chen and C. Suh, International Conference on Machine Learning (ICML), July 2015. [paper][ICML version]

  3. Information Recovery from Pairwise Measurements
    Y. Chen, C. Suh, and A. J. Goldsmith, IEEE Trans. on Info. Theory, vol. 62, no. 10, pp. 5881-5905, Oct. 2016. [paper][slides]
           — appeared in part in ISIT 2014 and ISIT 2015

  4. Robust Self-Navigated Body MRI Using Dense Coil Arrays
    T. Zhang, J. Y. Cheng, Y. Chen, D. G. Nishimura, J. M. Pauly, and S. S. Vasanawala, Magnetic Resonance in Medicine, vol. 76, no. 1, pp. 197-205, 2016. [paper][webpage][code]

2014

  1. Near-Optimal Joint Object Matching via Convex Relaxation
    Y. Chen, L. Guibas, and Q. Huang, International Conference on Machine Learning (ICML), June 2014. [paper][ICML version][slides][code]

  2. Scalable Semidefinite Relaxation for Maximum A Posterior Estimation
    Q. Huang, Y. Chen, and L. Guibas, International Conference on Machine Learning (ICML), June 2014. [paper][slides]

  3. Backing off from Infinity: Performance Bounds via Concentration of Spectral Measure for Random MIMO Channels
    Y. Chen, A. J. Goldsmith, and Y. C. Eldar, IEEE Trans. on Info. Theory, vol. 61, no. 1, pp. 366-387, Jan. 2015. [paper]

2013

  1. Exact and Stable Covariance Estimation from Quadratic Sampling via Convex Programming
    Y. Chen, Y. Chi, and A. J. Goldsmith, IEEE Trans. on Info. Theory, vol. 61, no. 7, pp. 4034-4059, July 2015. [paper][slides]
           — appeared in part in ISIT 2014 and ICASSP 2014

  2. Robust Spectral Compressed Sensing via Structured Matrix Completion
    Y. Chen and Y. Chi, IEEE Trans. on Info. Theory, vol. 60, no. 10, pp. 6576-6601, Oct. 2014. [paper][ICML version][slides]
           — appeared in part in ICML 2013 (full oral)

  3. Compressive Two-Dimensional Harmonic Retrieval via Atomic Norm Minimization
    Y. Chi and Y. Chen, IEEE Trans. on Signal Processing, vol. 63, no. 4, pp. 1030-1042, Feb. 2015. [paper]

  4. Minimax Capacity Loss under Sub-Nyquist Universal Sampling
    Y. Chen, A. J. Goldsmith, and Y. C. Eldar, IEEE Trans. on Info. Theory, vol. 63, no. 6, pp. 3348-3367, June 2017. [paper][slides]
           — appeared in part in ISIT 2013

2012

  1. Channel Capacity under Sub-Nyquist Nonuniform Sampling
    Y. Chen, A. J. Goldsmith, and Y. C. Eldar, IEEE Trans. on Info. Theory, vol. 60, no. 8, pp. 4739-4756, Aug. 2014. [paper]
           — appeared in part in ISIT 2012

2011

  1. Shannon Meets Nyquist: Capacity of Sampled Gaussian Channels
    Y. Chen, Y. C. Eldar, and A. J. Goldsmith, IEEE Trans. on Info. Theory, vol. 59, no. 8, pp. 4889-4914, Aug. 2013. [paper][slides]
           — appeared in part in ICASSP 2011

  2. On the Role of Mobility on Multi-message Gossip
    Y. Chen, S. Shakkottai, and J. G. Andrews, IEEE Trans. on Info. Theory, vol. 59, no. 6, pp. 3953-3970, June 2013. [paper][slides]
           — appeared in part in INFOCOM 2011 (full oral)

2010

  1. An Upper Bound on Multi-hop Transmission Capacity with Dynamic Routing Selection
    Y. Chen and J. G. Andrews, IEEE Trans. on Info. Theory, vol. 58, no. 6, pp. 3751-3765, June 2012. [paper]
           — appeared in part in ISIT 2010