Yeming Wen

I am a computer science graduate student at UT Austin, advised by Prof. Swarat Chaudhuri. My research focuses on building a machine learning framework that generates code with human-like efficiency. Before joining UT Austin, I was a master's student in computer science at the University of Toronto, advised by Prof. Jimmy Ba, where I worked on efficient learning algorithms for deep neural networks.

Email  /  CV  /  Scholar  /  LinkedIn

Publications
Neural Program Generation Modulo Static Analysis
Rohan Mukherjee, Yeming Wen, Dipak Chaudhari, Thomas Reps, Swarat Chaudhuri & Chris Jermaine
Advances in Neural Information Processing Systems (NeurIPS), 2021 (Spotlight)

By conditioning on semantic attributes computed on AST nodes by the compiler, our model generates better code snippets, such as Java method bodies, given their surrounding context.

Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning
Zachary Nado, Neil Band, Mark Collier, Josip Djolonga, Michael W. Dusenberry, Sebastian Farquhar, Angelos Filos, Marton Havasi, Rodolphe Jenatton, Ghassen Jerfel, Jeremiah Liu, Zelda Mariet, Jeremy Nixon, Shreyas Padhy, Jie Ren, Tim G. J. Rudner, Yeming Wen, Florian Wenzel, Kevin Murphy, D. Sculley, Balaji Lakshminarayanan, Jasper Snoek, Yarin Gal, Dustin Tran
Preprint, 2021

High-quality implementations of standard and state-of-the-art methods for uncertainty and robustness on a variety of tasks.

Combining Ensembles and Data Augmentation can Harm your Calibration
Yeming Wen*, Ghassen Jerfel*, Rafael Muller, Michael W. Dusenberry, Jasper Snoek, Balaji Lakshminarayanan & Dustin Tran
International Conference on Learning Representations (ICLR), 2021

By adjusting data augmentation according to calibration, we can exploit both marginalization and invariances.
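
A minimal NumPy sketch of the calibration-aware rule as I understand it (function and variable names are mine, not the paper's code): mixup is applied only to classes where mean confidence exceeds accuracy, since softening labels for classes that are already under-confident worsens calibration.

    import numpy as np

    def camixup_classes(per_class_confidence, per_class_accuracy):
        """Return a boolean mask over classes: True where the model is
        overconfident (mean confidence exceeds accuracy), i.e. the
        classes where applying mixup should help calibration."""
        return per_class_confidence > per_class_accuracy

    # Toy usage: mixup would be enabled for class 0 only.
    conf = np.array([0.9, 0.6])
    acc = np.array([0.8, 0.7])
    print(camixup_classes(conf, acc))  # [ True False]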

Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors
Michael W. Dusenberry*, Ghassen Jerfel*, Yeming Wen, Yi-An Ma, Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan & Dustin Tran
International Conference on Machine Learning (ICML), 2020

Improved BatchEnsemble with mixture posteriors, Cauchy priors and rank-1 parameterization.
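
A minimal NumPy sketch of the rank-1 idea (illustrative names; a Gaussian posterior is shown for simplicity, whereas the paper also uses mixture posteriors and Cauchy priors): only the rank-1 vectors are sampled, so the weight posterior adds just two vectors of parameters per layer.

    import numpy as np

    def rank1_bnn_dense(x, W, r_mean, r_std, s_mean, s_std, rng):
        """Forward pass with sampled rank-1 factors. Only the vectors
        r (d_out,) and s (d_in,) are stochastic; the full weight matrix
        W (d_in, d_out) stays deterministic. The effective sampled
        weight is W * outer(s, r), applied implicitly below."""
        r = r_mean + r_std * rng.standard_normal(r_mean.shape)
        s = s_mean + s_std * rng.standard_normal(s_mean.shape)
        return ((x * s) @ W) * r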

BatchEnsemble: an Alternative Approach to Efficient Ensemble and Lifelong Learning
Yeming Wen, Dustin Tran & Jimmy Ba
International Conference on Learning Representations (ICLR), 2020
Bayesian Deep Learning Workshop at NeurIPS, 2019

How to ensemble deep neural networks efficiently in both computation and memory.
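
A minimal NumPy sketch of the BatchEnsemble trick (illustrative names, not the released code): every member shares one slow weight matrix and owns only a rank-1 pair of fast weights, so all members run in a single batched forward pass.

    import numpy as np

    def batch_ensemble_dense(x, W, r, s):
        """Forward pass for all M ensemble members at once.
        W: shared (d_in, d_out) slow weight.
        r: (M, d_out) and s: (M, d_in) per-member fast weights, so
        member i's effective weight is W * outer(s[i], r[i]), realized
        with two cheap element-wise products instead of M weight
        copies."""
        # x: (M, n, d_in); the same batch can be tiled across members.
        return ((x * s[:, None, :]) @ W) * r[:, None, :]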

Benchmarking Model-Based Reinforcement Learning
Tingwu Wang, Xuchan Bao, Ignasi Clavera, Jerrick Hoang, Yeming Wen, Eric Langlois, Shunshi Zhang, Guodong Zhang, Pieter Abbeel & Jimmy Ba
arXiv, 2019

A benchmark of several commonly used model-based reinforcement learning algorithms.

An Empirical Study of Large-Batch Stochastic Gradient Descent with Structured Covariance Noise
Yeming Wen*, Kevin Luk*, Maxime Gazeau*, Guodong Zhang, Harris Chan & Jimmy Ba
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020

How to add noise with the right covariance structure to gradients so that large-batch training generalizes better without longer training.
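
A minimal NumPy sketch of the general recipe (illustrative names; a diagonal Fisher approximation is shown for brevity, while the paper studies richer covariance structures): the large-batch gradient is perturbed with Gaussian noise whose covariance tracks the curvature, mimicking the anisotropic noise of small-batch SGD.

    import numpy as np

    def noisy_gradient(grad, fisher_diag, scale, rng):
        """Add zero-mean Gaussian noise whose per-coordinate variance
        follows a diagonal Fisher estimate of the same shape as grad."""
        std = scale * np.sqrt(fisher_diag)
        return grad + std * rng.standard_normal(grad.shape)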

Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches
Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran & Roger Grosse
International Conference on Learning Representations (ICLR), 2018

How to efficiently apply pseudo-independent weight perturbations to each example in a mini-batch, reducing gradient variance in evolution strategies and variational BNNs at a cost comparable to activation perturbations such as dropout.
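
A minimal NumPy sketch of the Flipout identity for a dense layer (illustrative names, not the released code): one sampled perturbation is shared across the batch, and random sign vectors decorrelate it per example at the cost of a single extra matmul.

    import numpy as np

    def flipout_dense(x, W_mean, W_perturb, rng):
        """Flipout forward pass. Each example n effectively sees its own
        perturbation W_perturb * outer(s[n], r[n]), where s and r are
        random +/-1 sign vectors, without sampling n weight matrices."""
        n, d_in = x.shape
        d_out = W_mean.shape[1]
        s = rng.choice([-1.0, 1.0], size=(n, d_in))   # input-side signs
        r = rng.choice([-1.0, 1.0], size=(n, d_out))  # output-side signs
        return x @ W_mean + ((x * s) @ W_perturb) * r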


Last Update: June 16, 2018
Template: this/that, ce/cette, das/der, kono/sono and zhege/nage.