James McInerney
Researcher in Machine Learning
james_mcinerney
 

About Me

Senior Research Scientist at Netflix (New York).

My research is on statistical approaches to machine learning, in particular epistemic uncertainty quantification, temporal point process models, and causal reasoning. These find applications in recommender systems, personalization, and contextual bandits, among other areas.

Previously: 

  • Senior Research Scientist, Spotify, New York

  • Adjunct Professor, Columbia University

  • Postdoctoral researcher, Columbia University and Princeton University

  • PhD in machine learning for spatiotemporal modeling, University of Southampton

  • MSc (Artificial Intelligence), Imperial College London

  • BA (Computer Science), Oxford University


Research Interests

My research interests are statistical machine learning, Bayesian inference, uncertainty quantification, and causal modeling. I have developed a number of machine learning techniques, presented at top machine learning conferences, for discovering hidden structure in text, recommendation, and mobility data. Below is a summary of projects I have led and collaborated on.

 
 

Deep Uncertainty Quantification

Uncertainty is an inherent aspect of machine learning models, and quantifying it is crucial for downstream insights and decision-making. As we scale up deep learning in both the number of parameters and the size of the data, classical statistical methods become intractable or less relevant. I am particularly interested in rigorous methods for estimating epistemic uncertainty in deep learning.
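One simple way to surface epistemic uncertainty is disagreement across an ensemble. The sketch below is purely illustrative (a bootstrap ensemble of polynomial regressors standing in for neural networks; all data and settings are invented for the example): disagreement among members grows away from the training data, which is exactly the epistemic signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data observed only on [-1, 1].
x_train = rng.uniform(-1, 1, size=100)
y_train = np.sin(3 * x_train) + 0.1 * rng.normal(size=100)

def fit_poly(x, y, degree=5):
    # Least-squares polynomial fit; stands in for a neural network.
    return np.polyfit(x, y, degree)

# Ensemble-style estimate: train several models on bootstrap
# resamples and read epistemic uncertainty off their disagreement.
members = []
for _ in range(20):
    idx = rng.integers(0, len(x_train), size=len(x_train))
    members.append(fit_poly(x_train[idx], y_train[idx]))

x_test = np.array([0.0, 2.0])  # in-distribution vs. out-of-distribution
preds = np.stack([np.polyval(w, x_test) for w in members])
epistemic_std = preds.std(axis=0)

# Disagreement (epistemic uncertainty) is larger away from the data.
assert epistemic_std[1] > epistemic_std[0]
```

The same recipe scales to deep networks as deep ensembles, where each member is a network trained from a different initialization.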

 

Causal Recommendation

In recent years, the causal challenges present in recommender systems have been increasingly recognized. My work looks at how to train models with data that are confounded by the recommender and other biases.
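A standard tool for correcting recommender exposure bias is inverse propensity scoring (IPS). The toy simulation below (hypothetical numbers, not from any specific system) shows the idea: logged feedback is reweighted by the inverse probability of exposure under the logging policy to get an unbiased estimate of a different target policy's value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two items with true expected rewards; the logging recommender
# shows item 0 far more often (a confounded exposure policy).
true_reward = np.array([0.3, 0.7])
propensity = np.array([0.9, 0.1])   # P(item shown) under the logger

n = 200_000
shown = rng.choice(2, size=n, p=propensity)
reward = rng.binomial(1, true_reward[shown])

# Inverse propensity scoring: reweight each logged interaction by
# target(item) / propensity(item) to estimate the value of a
# uniform target policy from the biased logs.
target = np.full(2, 0.5)            # uniform exposure policy
ips_value = np.mean(reward * target[shown] / propensity[shown])
true_value = np.dot(target, true_reward)  # 0.5 * 0.3 + 0.5 * 0.7

assert abs(ips_value - true_value) < 0.02
```

A naive average of the logged rewards would be pulled toward item 0's reward; the IPS estimate recovers the target policy's value despite the confounded exposure.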

 

Variational Inference

Variational inference and variational autoencoders are widely used methods for performing approximate Bayesian inference on large data sets. My research is about how to perform inference on streaming data and how to deal with the non-convexity of the variational objective.
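A minimal sketch of variational inference on a conjugate model (chosen so the answer can be checked exactly; the learning rates and data are invented for the example): fit a Gaussian variational family q(theta) = N(mu, s^2) to the posterior of a Gaussian mean by gradient ascent on the ELBO, whose gradients are available in closed form here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Model: x_i ~ N(theta, 1) with prior theta ~ N(0, 1).
x = rng.normal(2.0, 1.0, size=50)
n = len(x)

# Exact conjugate posterior, for comparison.
post_mean = x.sum() / (n + 1)
post_var = 1.0 / (n + 1)

# Variational family q(theta) = N(mu, s^2); maximize the ELBO by
# gradient ascent using its closed-form gradients for this model.
mu, s = 0.0, 1.0
lr = 1e-3
for _ in range(5000):
    grad_mu = (x - mu).sum() - mu       # likelihood + prior terms
    grad_s = -(n + 1) * s + 1.0 / s     # likelihood/prior + entropy
    mu += lr * grad_mu
    s += lr * grad_s

# Because the model is conjugate Gaussian, VI recovers the exact posterior.
assert abs(mu - post_mean) < 1e-3
assert abs(s ** 2 - post_var) < 1e-3
```

In non-conjugate models the same objective is optimized with stochastic (e.g. reparameterization) gradients, which is where the non-convexity issues mentioned above arise.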

 

Spatio-Temporal Probabilistic Modeling

Spatio-temporal data require new models and decision-making techniques to deal with non-exchangeability. My research proposes methods in the areas of anomaly detection, reinforcement learning, and overcoming data sparsity in temporal patterns.
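A workhorse model for such event data is the temporal point process; as a small illustration (parameters invented for the example), the conditional intensity of a univariate Hawkes process rises after recent events, capturing self-excitation in temporal patterns.

```python
import numpy as np

# Conditional intensity of a univariate Hawkes process, a standard
# temporal point process for self-exciting event data:
#   lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))
def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.0):
    events = np.asarray(events)
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = [1.0, 1.2, 1.3]                 # a burst of events
quiet = hawkes_intensity(0.5, events)    # before any events: base rate
excited = hawkes_intensity(1.4, events)  # just after the burst

assert np.isclose(quiet, 0.5)
assert excited > quiet  # recent events raise the intensity
```

Deviations between observed event counts and the model's intensity are one natural basis for anomaly detection in this setting.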

 
 

About

 


My background is in mathematics and computer science. I became interested in artificial intelligence between my undergraduate and master's degrees, drawn in particular to the possibility of machine learning agents taking on the cognitive load of processing ever-growing amounts of data.

Studying artificial intelligence at Imperial College London opened my eyes to neural networks and Bayesian inference. I followed this passion with a PhD in machine learning for spatio-temporal data, studying probabilistic models of time series, discrete data, and variational inference. This took me to David Blei's lab at Princeton, where I was introduced to causal analysis and latent variable models for recommendation.

I now combine all of these elements in my role as a research scientist at Netflix, where I continue to publish and to collaborate with other scientists on new machine learning methods.

 


I maintain a (sparsely updated) blog called D-Speculation about ML, statistics, and research.