Foundations of (Continuous-Time) Reinforcement Learning

Stay Tuned…

Foundations of (Discrete-Time) Reinforcement Learning

Sample Complexity Bounds for Linear Constrained MDPs with a Generative Model.
Xingtu Liu, Lin F. Yang, Sharan Vaswani.
International Conference on Algorithmic Learning Theory (ALT), 2026.
NeurIPS 2025 Workshop on Constrained Optimization for Machine Learning.

Central Limit Theorems for Asynchronous Averaged Q-Learning.
Xingtu Liu.
Learning for Dynamics and Control Conference (L4DC), 2026.
NeurIPS 2025 Workshop on Optimization for Machine Learning.

Landscape Analysis of Stochastic Policy Gradient Methods.
Xingtu Liu.
European Conference on Machine Learning (ECML), 2024.

Information-Theoretic Generalization Bounds for Batch Reinforcement Learning.
Xingtu Liu.
Entropy, 2024.
NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning.

An Information-Theoretic Analysis of OOD Generalization in Meta-Reinforcement Learning.
Xingtu Liu.
Manuscript.

Other Works in Machine Learning Theory

Neural Networks with Complex-Valued Weights Have No Spurious Local Minima.
Xingtu Liu.
Conference on Information Sciences and Systems (CISS), 2025.
NeurIPS 2024 Workshop on Optimization for Machine Learning.

A Note on Arithmetic–Geometric Mean Inequality for Well-Conditioned Matrices.
Xingtu Liu.
Conference on Information Sciences and Systems (CISS), 2025.

Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data.
Gautam Kamath*, Xingtu Liu*, Huanyu Zhang*.
International Conference on Machine Learning (ICML), 2022.
Oral Presentation (2.1% Acceptance Rate).

* indicates alphabetical author order.