Multi-Criteria Reinforcement Learning

Xingtu Liu, Lin F. Yang, Sharan Vaswani. Sample Complexity Bounds for Linear Constrained MDPs with a Generative Model. International Conference on Algorithmic Learning Theory (ALT), 2026. NeurIPS 2025 Workshop on Constrained Optimization for Machine Learning.

Valentin Tiriac*, Xingtu Liu*, Lin F. Yang, Csaba Szepesvari, Sharan Vaswani. Linear Scalarizations are Enough for Risk-Neutral Multi-Objective Reinforcement Learning. In Preparation.

Statistical Foundations of Reinforcement Learning

Xingtu Liu. An Information-Theoretic Analysis of Out-of-Distribution Generalization in Meta-Learning with Applications to Meta-RL. In Submission to L4DC 2026.

Xingtu Liu. Central Limit Theorems for Asynchronous Averaged Q-Learning. In Submission to L4DC 2026. NeurIPS 2025 Workshop on Optimization for Machine Learning.

Xingtu Liu. Information-Theoretic Generalization Bounds for Batch Reinforcement Learning. Entropy, 2024. NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning.

Xingtu Liu. Landscape Analysis of Stochastic Policy Gradient Methods. European Conference on Machine Learning (ECML), 2024.

Other Works in Theoretical Machine Learning

Xingtu Liu. A Note on Arithmetic–Geometric Mean Inequality for Well-Conditioned Matrices. Conference on Information Sciences and Systems (CISS), 2025. (A Short Note Partially Resolving a COLT 2021 Open Problem.)

Xingtu Liu. Neural Networks with Complex-Valued Weights Have No Spurious Local Minima. Conference on Information Sciences and Systems (CISS), 2025. NeurIPS 2024 Workshop on Optimization for Machine Learning.

Gautam Kamath*, Xingtu Liu*, Huanyu Zhang*. (*Alphabetical Order) Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data. International Conference on Machine Learning (ICML), 2022. Oral Presentation (2.1% Acceptance Rate).