Xingtu Liu
About
I am a thesis-based Master’s student in computing science at Simon Fraser University, working under the supervision of Prof. Sharan Vaswani. I obtained my Bachelor’s degree in mathematics from the University of Waterloo, where I was fortunate to be advised by Prof. Gautam Kamath. My research interests lie in theoretical reinforcement learning and machine learning. In particular, I am interested in: (i) developing statistical theory and methods for RL and ML; (ii) designing provably efficient learning algorithms; and (iii) high-dimensional statistics. Consequently, I am fascinated by the interplay between the dynamic, data-driven, and high-dimensional nature of RL.
My name is pronounced shin-two. My papers by topic.

Contact
rltheory@outlook.com
Preprints
Linear Scalarizations are Enough for Risk-Neutral Multi-Objective Reinforcement Learning.
Valentin Tiriac*, Xingtu Liu*, Lin F. Yang, Csaba Szepesvári, Sharan Vaswani.
In Preparation.
An Information-Theoretic Analysis of Out-of-Distribution Generalization in Meta-Learning with Applications to Meta-RL.
Xingtu Liu.
In Submission to L4DC 2026.
Central Limit Theorems for Asynchronous Averaged Q-Learning.
Xingtu Liu.
In Submission to L4DC 2026.
NeurIPS 2025 Workshop on Optimization for Machine Learning.
Publications
Sample Complexity Bounds for Linear Constrained MDPs with a Generative Model.
Xingtu Liu, Lin F. Yang, Sharan Vaswani.
International Conference on Algorithmic Learning Theory (ALT), 2026.
NeurIPS 2025 Workshop on Constrained Optimization for Machine Learning.
A Note on Arithmetic–Geometric Mean Inequality for Well-Conditioned Matrices.
Xingtu Liu.
Conference on Information Sciences and Systems (CISS), 2025.
(A short note partially resolving a COLT 2021 open problem.)
Neural Networks with Complex-Valued Weights Have No Spurious Local Minima.
Xingtu Liu.
Conference on Information Sciences and Systems (CISS), 2025.
NeurIPS 2024 Workshop on Optimization for Machine Learning.
Information-Theoretic Generalization Bounds for Batch Reinforcement Learning.
Xingtu Liu.
Entropy, 2024.
NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning.
Landscape Analysis of Stochastic Policy Gradient Methods.
Xingtu Liu.
European Conference on Machine Learning (ECML), 2024.
Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data.
Gautam Kamath*, Xingtu Liu*, Huanyu Zhang*. (*Alphabetical Order)
International Conference on Machine Learning (ICML), 2022.
Oral Presentation (2.1% Acceptance Rate).
Links
CV; Google Scholar; X (Personal; DailyPapers).
Teaching
Teaching Assistant (SFU)
CMPT 409/981 – Optimization for Machine Learning (Fall 2025)
CMPT 210 – Probability and Computing (Summer 2025)
CMPT 120 – Introduction to Computing Science and Programming (Spring 2024, Spring 2025)
MACM 101 – Discrete Mathematics (Fall 2023)
Service
Conference Reviewer: ICML 2025, NeurIPS (2024-2025), ICLR (2025-2026), AISTATS (2022-2024), RLC 2025, IJCNN 2025
Journal Reviewer: TMLR
Workshop Reviewer: NeurIPS-OPT 2025
