John Miller
Machine learning researcher
Quantitative trader
miller_john@berkeley.edu
GitHub / Google Scholar
About Me
I obtained my PhD in Electrical Engineering and Computer
Sciences from UC Berkeley in August 2022.
I was advised by Moritz Hardt and
Ben Recht.
Throughout my PhD, I was generously supported by the
Berkeley Fellowship and the
NSF Graduate Research Fellowship.
From 2016 to 2017, I was a research scientist in
Baidu's Silicon Valley AI
Lab. Before that, I received a BS in Computer Science and an MS in Electrical
Engineering from Stanford University, where I had the privilege of working with
Percy Liang and
Tim Roughgarden.
Publications
(* indicates equal contribution or alphabetical author ordering)
Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization.
John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, Ludwig Schmidt.
International Conference on Machine Learning (ICML), 2021.
(Interactive plotting)
Retiring Adult: New Datasets for Fair Machine Learning.
Frances Ding*, Moritz Hardt*, John Miller*, Ludwig Schmidt*.
Advances in Neural Information Processing Systems (NeurIPS), 2021.
(New datasets)
Outside the Echo Chamber: Optimizing the Performative Risk.
John Miller*, Juan C. Perdomo*, Tijana Zrnic*.
International Conference on Machine Learning (ICML), 2021.
The Effect of Natural Distribution Shift on Question Answering Models.
John Miller, Karl Krauth, Benjamin Recht, Ludwig Schmidt.
International Conference on Machine Learning (ICML), 2020.
(website)
Strategic Classification is Causal Modeling in Disguise.
John Miller, Smitha Milli, Moritz Hardt.
International Conference on Machine Learning (ICML), 2020.
Test-Time Training for Out-of-Distribution Generalization.
Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt.
International Conference on Machine Learning (ICML), 2020.
Model Similarity Mitigates Test Set Overuse.
Horia Mania, John Miller, Ludwig Schmidt, Moritz Hardt, Benjamin Recht.
Advances in Neural Information Processing Systems (NeurIPS), 2019.
A Meta-Analysis of Overfitting in Machine Learning.
Rebecca Roelofs, Sara Fridovich-Keil, John Miller, Vaishaal Shankar, Moritz Hardt, Benjamin Recht, Ludwig Schmidt.
Advances in Neural Information Processing Systems (NeurIPS), 2019.
The Social Cost of Strategic Classification.
Smitha Milli, John Miller, Anca Dragan, Moritz Hardt.
ACM Conference on Fairness, Accountability, and Transparency (FAT*), 2019.
Stable Recurrent Models.
John Miller and Moritz Hardt.
International Conference on Learning Representations (ICLR), 2019. (blog)
Deep Voice 3: 2000-Speaker Neural Text-to-Speech.
Wei Ping, Kainan Peng, Andrew Gibiansky, Sercan Arik, Ajay Kannan, Sharan Narang, Jonathan Raiman, John Miller.
International Conference on Learning Representations (ICLR), 2018.
Deep Voice 2: Multi-Speaker Neural Text-to-Speech.
Sercan Arik*, Gregory Diamos*, Andrew Gibiansky*, John Miller*, Kainan Peng*, Wei Ping*, Jonathan Raiman*, and Yanqi Zhou*.
Advances in Neural Information Processing Systems (NeurIPS), 2017.
Globally Normalized Reader.
Jonathan Raiman and John Miller.
Empirical Methods in Natural Language Processing (EMNLP), 2017.
Deep Voice: Real-time Neural Text-to-Speech.
Sercan Arik*, Mike Chrzanowski*, Adam Coates*, Gregory Diamos*, Andrew
Gibiansky*, Yongguo Kang*, Xian Li*, John Miller*, Andrew Ng*, Jonathan Raiman*, Shubho Sengupta*, and Mohammad Shoeybi*.
International Conference on Machine Learning (ICML), 2017.
Traversing Knowledge Graphs in Vector Space.
Kelvin Guu, John Miller, and Percy Liang.
Empirical Methods in Natural Language Processing (EMNLP), 2015. Best paper honorable mention. (code)
Software
Before graduate school, I wrote CVXCanon, a package for canonicalizing convex programs that is used in CVXPY and CVXR.
I maintain Folktables, a Python package that provides access to datasets derived from US Census data.
I was also the main contributor to WhyNot, a Python package that provides an experimental sandbox for causal inference and decision making in dynamic environments.