Effective algorithms for non-convex non-smooth regularized learning problems.
Proposed a group of stochastic proximal gradient methods based on arbitrary sampling to solve a family of non-convex non-smooth regularized empirical risk minimization problems (a minimal sketch of the core update appears below).
Presented a new analytic approach to investigate the convergence and computational complexity of the proposed methods, which enables a direct comparison of different sampling schemes.
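As a concrete illustration, the following is a minimal sketch of a mini-batch proximal stochastic gradient step with a non-uniform (arbitrary) sampling distribution, using the l1 norm as the non-smooth regularizer. The function names, importance-weighting scheme, and hyperparameter defaults are illustrative assumptions, not the published algorithms.

```python
import numpy as np

def prox_l1(x, thresh):
    """Proximal operator of thresh * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def stochastic_prox_grad(grad_i, x0, n, step=0.01, lam=0.1,
                         batch_size=8, probs=None, n_iters=1000, seed=0):
    """Mini-batch proximal SGD for min_x (1/n) * sum_i f_i(x) + lam * ||x||_1.

    grad_i(x, i): gradient of the i-th (possibly non-convex) loss f_i at x.
    probs: sampling distribution over the n components; importance weights
    keep the mini-batch gradient an unbiased estimate of the full gradient.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    if probs is None:
        probs = np.full(n, 1.0 / n)
    for _ in range(n_iters):
        batch = rng.choice(n, size=batch_size, p=probs)
        g = np.mean([grad_i(x, i) / (n * probs[i]) for i in batch], axis=0)
        # Gradient step on the smooth part, prox step on the l1 regularizer.
        x = prox_l1(x - step * g, step * lam)
    return x
```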
Faster algorithm for nonconvex sparse learning problems.
Proposed a hard thresholding method based on stochastically controlled stochastic gradients (SCSG-HT) to solve a family of sparsity-constrained empirical risk minimization problems (sketched below).
Proved that the new method enjoys a strong guarantee of recovering the optimal sparse estimator, with computational complexity independent of the sample size n, which improves scalability.
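Below is a minimal sketch of the hard-thresholding update combined with SCSG-style variance reduction, assuming access to component gradients grad_i(x, i). The helper names, batch sizes, and the geometric inner-loop length are illustrative assumptions and omit the conditions behind the formal recovery guarantee.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x; zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def scsg_ht(grad_i, x0, n, k, step=0.01, anchor_batch=64, inner_batch=4,
            n_outer=50, seed=0):
    """Sketch of SCSG-style variance reduction with hard thresholding for
    min (1/n) * sum_i f_i(x) subject to ||x||_0 <= k.

    Each outer round anchors a reference gradient on a sampled batch
    (never the full data set, so the per-round cost does not scale with n),
    then runs a geometric number of SVRG-style corrected inner steps,
    re-projecting onto the sparsity constraint after every update.
    """
    rng = np.random.default_rng(seed)
    x = hard_threshold(np.asarray(x0, dtype=float).copy(), k)
    for _ in range(n_outer):
        ref = x.copy()
        anchor = rng.choice(n, size=anchor_batch, replace=False)
        mu = np.mean([grad_i(ref, i) for i in anchor], axis=0)
        n_inner = rng.geometric(inner_batch / (inner_batch + anchor_batch))
        for _ in range(n_inner):
            idx = rng.choice(n, size=inner_batch)
            corr = np.mean([grad_i(x, i) - grad_i(ref, i) for i in idx], axis=0)
            x = hard_threshold(x - step * (corr + mu), k)
    return x
```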
Effective ADAM-type optimizers to speed up the (federated) deep learning training process.
Designed a new (Fed)ADAM-type method that calibrates the adaptive learning rate (A-LR) with a softplus function (sketched below).
Conducted experiments showing that the proposed methods outperform existing (Fed)ADAM-type methods and generalize even better than S-Momentum on multiple deep learning tasks.
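The softplus calibration can be illustrated with a single Adam-style parameter update, as in the sketch below; the function name softplus_adam_step, the softplus temperature sp_beta, and the hyperparameter defaults are assumptions rather than the released implementation.

```python
import numpy as np

def softplus(x, beta=50.0):
    """Numerically stable softplus: (1/beta) * log(1 + exp(beta * x))."""
    return np.logaddexp(0.0, beta * x) / beta

def softplus_adam_step(x, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                       sp_beta=50.0):
    """One Adam-style update in which the denominator sqrt(v_hat) is passed
    through a softplus instead of adding a small epsilon, which smoothly
    caps how large the adaptive learning rate can grow.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    x = x - lr * m_hat / softplus(np.sqrt(v_hat), sp_beta)
    return x, m, v
```

In a training loop, the step would be called once per mini-batch with t starting at 1, carrying m and v forward between calls.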
Matrix completion with applications in recommender systems.
Proposed a new algorithm that utilizes side information to improve existing matrix completion methods (an illustrative sketch appears below).
Designed experiments showing that the proposed approach outperforms three state-of-the-art methods in both simulations and on real-world datasets.
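One generic way to use side information is a bilinear model that maps row and column features through a low-rank matrix; the sketch below fits such a model by gradient descent over the observed entries. It is only an illustrative baseline under that assumed model, with hypothetical names and defaults, not the proposed algorithm.

```python
import numpy as np

def complete_with_side_info(M_obs, mask, X, Y, rank=10, lam=0.1,
                            step=1e-3, n_iters=2000, seed=0):
    """Matrix completion with side information via a bilinear model.

    M_obs : observed ratings (n_rows x n_cols), zeros where unobserved
    mask  : boolean matrix marking observed entries
    X, Y  : row / column side-feature matrices (n_rows x d1, n_cols x d2)
    Model: M is approximated by X @ A @ B.T @ Y.T with a rank-`rank`
    bilinear map A @ B.T, fit by gradient descent on the squared error
    over observed entries plus l2 regularization.
    """
    rng = np.random.default_rng(seed)
    A = 0.01 * rng.standard_normal((X.shape[1], rank))
    B = 0.01 * rng.standard_normal((Y.shape[1], rank))
    for _ in range(n_iters):
        pred = X @ A @ B.T @ Y.T
        resid = np.where(mask, pred - M_obs, 0.0)   # error on observed entries
        grad_A = X.T @ resid @ Y @ B + lam * A
        grad_B = Y.T @ resid.T @ X @ A + lam * B
        A -= step * grad_A
        B -= step * grad_B
    return X @ A @ B.T @ Y.T
```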
Multi-party differentially private machine learning algorithms with privacy guarantees.
Developed differentially private decentralized ADMM algorithms.
Designed stochastic differentially private hard thresholding algorithms for nonconvex sparse learning problems.
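The privacy mechanism behind such updates can be sketched as clip-average-and-add-noise (the Gaussian mechanism), followed by hard thresholding onto the sparsity constraint. The sketch below shows a single simplified step; the noise scaling and parameter names are assumptions, and the formal (epsilon, delta) accounting is omitted.

```python
import numpy as np

def dp_hard_threshold_step(x, per_example_grads, step=0.01, clip=1.0,
                           noise_multiplier=1.0, k=None, rng=None):
    """One differentially private gradient step via the Gaussian mechanism:
    clip each per-example gradient, average, add calibrated Gaussian noise,
    and (optionally) hard-threshold the iterate to its top-k entries.
    """
    if rng is None:
        rng = np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip / (norm + 1e-12)))
    batch = len(clipped)
    # Sensitivity of the averaged, clipped gradient is clip / batch.
    noise = rng.normal(0.0, noise_multiplier * clip / batch, size=x.shape)
    g_priv = np.mean(clipped, axis=0) + noise
    x = x - step * g_priv
    if k is not None:
        out = np.zeros_like(x)
        idx = np.argpartition(np.abs(x), -k)[-k:]
        out[idx] = x[idx]
        x = out
    return x
```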