Preprints
Linear Scalarizations are Enough for Risk-Neutral Multi-Objective Reinforcement Learning.
Valentin Tiriac*, Xingtu Liu*, Lin F. Yang, Csaba Szepesvári, Sharan Vaswani.
Manuscript.
An Information-Theoretic Analysis of Out-of-Distribution Generalization in Meta-Learning with Applications to Meta-RL.
Xingtu Liu.
Manuscript.
2026
Sample Complexity Bounds for Linear Constrained MDPs with a Generative Model.
Xingtu Liu, Lin F. Yang, Sharan Vaswani.
International Conference on Algorithmic Learning Theory (ALT), 2026.
NeurIPS 2025 Workshop on Constrained Optimization for Machine Learning.
Central Limit Theorems for Asynchronous Averaged Q-Learning.
Xingtu Liu.
Learning for Dynamics and Control Conference (L4DC), 2026.
NeurIPS 2025 Workshop on Optimization for Machine Learning.
2025
Neural Networks with Complex-Valued Weights Have No Spurious Local Minima.
Xingtu Liu.
Conference on Information Sciences and Systems (CISS), 2025.
NeurIPS 2024 Workshop on Optimization for Machine Learning.
A Note on Arithmetic–Geometric Mean Inequality for Well-Conditioned Matrices.
Xingtu Liu.
Conference on Information Sciences and Systems (CISS), 2025.
(A short note partially resolving a COLT 2021 open problem.)
2024
Information-Theoretic Generalization Bounds for Batch Reinforcement Learning.
Xingtu Liu.
Entropy, 2024.
NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning.
Landscape Analysis of Stochastic Policy Gradient Methods.
Xingtu Liu.
European Conference on Machine Learning (ECML), 2024.
2022
Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data.
Gautam Kamath*, Xingtu Liu*, Huanyu Zhang*. (*Alphabetical order.)
International Conference on Machine Learning (ICML), 2022.
Oral Presentation (2.1% Acceptance Rate).