What is machine learning?

Machine learning is a branch of artificial intelligence concerned with the design of algorithms that can automatically identify patterns or regularities in data. An example application is the problem of telling whether a person in an image is a “man” or a “woman”. To solve this task, we could manually write a computer program based on rules such as “if the subject has long hair, then the subject is likely to be a woman”. However, the number of rules needed to solve the problem this way would be too large to design and code by hand. Instead, machine learning techniques can use a dataset of images with associated labels “man” or “woman” to automatically identify patterns or regularities shared by images with the same label. These patterns can then be used to make predictions about new images with no associated labels.
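To make this concrete, here is a minimal sketch of learning from labeled examples: a 1-nearest-neighbour classifier that predicts the label of a new point from the closest labeled point. The feature vectors and labels below are invented toy data, not a real dataset.

```python
# A toy example of learning from labeled data: 1-nearest-neighbour
# classification. All feature vectors and labels here are made up.
import math

# Toy training set: (feature vector, label) pairs.
training_data = [
    ((1.0, 1.2), "man"),
    ((0.9, 1.0), "man"),
    ((3.1, 2.8), "woman"),
    ((3.0, 3.2), "woman"),
]

def predict(features):
    """Return the label of the closest training example."""
    nearest = min(training_data,
                  key=lambda pair: math.dist(pair[0], features))
    return nearest[1]

print(predict((0.8, 1.1)))  # close to the "man" cluster -> man
print(predict((3.2, 3.0)))  # close to the "woman" cluster -> woman
```

Real systems replace the hand-picked features and distance rule with learned representations, but the principle is the same: patterns extracted from labeled data drive predictions on unlabeled data.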

What is model-based machine learning and the Bayesian framework?

Model-based machine learning is a simple and general recipe for designing machine learning algorithms that are specifically tailored to each new application. The central idea is to propose, for each application, a custom-made model that captures, in a computer-readable form, all our knowledge about the data generating process. Different algorithms are then used to combine the proposed model with the observed data in order to make predictions. The Bayesian framework is a powerful implementation of this model-based approach. Within this framework, we assume that the observed data is produced by drawing samples from a probabilistic model, and any uncertainty about unknown variables in that model is encoded using probability distributions. Bayes’ theorem is then used to combine these probability distributions with the observed data in a consistent way. The updated distributions can finally be used to make reliable predictions. Bayesian machine learning was included by MIT Technology Review as one of 10 emerging technologies that will change our world.
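The recipe can be illustrated with the simplest possible probabilistic model, a Beta-Bernoulli coin-flip model; the prior parameters and observations below are illustrative assumptions, not data from any real application. We place a prior distribution over the unknown bias theta, assume the data are samples from the model, and apply Bayes' theorem, which here yields a closed-form posterior.

```python
# A minimal sketch of the Bayesian recipe on a toy Beta-Bernoulli model.
# Prior: theta ~ Beta(alpha, beta); likelihood: each x_i ~ Bernoulli(theta).
# Conjugacy makes Bayes' theorem closed form:
#   posterior = Beta(alpha + #ones, beta + #zeros).

alpha, beta = 1.0, 1.0           # uniform prior over the unknown bias theta
data = [1, 1, 0, 1, 1, 0, 1, 1]  # observations assumed drawn from the model

ones = sum(data)
zeros = len(data) - ones

# Posterior parameters after conditioning on the observed data.
post_alpha = alpha + ones
post_beta = beta + zeros

# Posterior mean: the model's prediction for the next observation being 1.
posterior_mean = post_alpha / (post_alpha + post_beta)
print(post_alpha, post_beta, posterior_mean)  # 7.0 3.0 0.7
```

Most interesting models do not admit such closed-form updates, which is why approximate inference methods, a central theme of the research described below, are needed.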

What is my research about?

My work in Bayesian machine learning includes the design and implementation of scalable methods for approximate inference, and the construction, evaluation, and refinement of probabilistic models that successfully describe the statistical patterns present in data. In recent years I have designed new Bayesian machine learning methods with applications to the prediction of customer purchases in online stores, the modeling of price changes in financial markets, the analysis of the connectivity of genes in biological systems, the discovery of new materials with optimal properties, and the design of more efficient hardware. I have focused on approaches based on probabilistic models, relying on methods for approximate inference that scale to large datasets.

Specific areas of research

  1. Bayesian deep learning
  2. Deep generative models
  3. Data efficient machine learning
  4. Molecule generation and optimization
  5. Neural network compression
  6. Image compression
  7. Reinforcement learning and causal inference
  8. Bayesian optimization
  9. Graph neural networks
  10. Hardware design and optimization
  11. Interpretable machine learning
  12. Meta-learning