
Yoshua Bengio & Why He Is Bullish About Causal Learning

Recently, Yoshua Bengio and researchers from the University of Montreal, the Max Planck Institute for Intelligent Systems and Google Research demonstrated how causal representation learning contributes to the robustness and generalisation of machine learning models. The team reviewed the fundamental concepts of causal inference and related them to crucial open problems of machine learning, including transfer and generalisation.

Attaining general intelligence is one of the key goals of machine learning and deep learning. As things stand, machine learning techniques fall short in several areas where natural intelligence excels, including transfer to new problems and many forms of generalisation.

According to the researchers, this shortcoming is not too surprising, given that machine learning often disregards information that animals use heavily, such as interventions in the world, domain shifts and temporal structure. By and large, the field has treated these factors as a nuisance and tried to engineer them away in pursuit of performance and robustness; the researchers argue they should instead be modelled explicitly.

The majority of machine learning successes boil down to large-scale pattern recognition on independent and identically distributed (i.i.d.) data. However, key open challenges for machine learning researchers remain: robustness, learning reusable mechanisms, and incorporating the perspective of causal inference.



Importance of Causal Learning

A causal model contains the mechanisms that give rise to the observed statistical dependencies and allows modelling of distribution shifts through the notion of interventions. Moreover, causal relations can be viewed as components of reasoning chains that provide predictions for situations far outside the observed distribution.
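To make the notion of an intervention concrete, here is a minimal sketch of a structural causal model with hypothetical variables (the altitude-and-temperature pairing is a standard textbook illustration, not one taken from the paper). Each variable is produced by a mechanism, and the do-operator replaces one mechanism with a fixed value while leaving the others intact:

```python
import random

def sample(n, do_altitude=None):
    """Draw n (altitude, temperature) pairs; optionally intervene do(altitude=a)."""
    data = []
    for _ in range(n):
        # Mechanism for altitude, unless overridden by an intervention.
        altitude = do_altitude if do_altitude is not None else random.uniform(0, 3000)
        # Invariant mechanism: temperature drops roughly 6.5 C per 1000 m, plus noise.
        temperature = 15.0 - 6.5 * altitude / 1000.0 + random.gauss(0, 1)
        data.append((altitude, temperature))
    return data

observational = sample(1000)                     # samples from P(altitude, temperature)
interventional = sample(1000, do_altitude=2000)  # samples from P(temperature | do(altitude=2000))
```

Because the temperature mechanism is unchanged by the intervention, the model predicts how temperature behaves at an altitude we set ourselves, which a purely statistical fit of the observed joint distribution cannot guarantee.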

The researchers stated: “Discovering causal relations means acquiring robust knowledge that holds beyond the support of an observed data distribution and a set of training tasks, and it extends to situations involving forms of reasoning.

“Causality, with its focus on representing structural knowledge about the data-generating process that allows interventions and changes, can contribute towards understanding and resolving some limitations of current machine learning methods. This would take the field a step closer to a form of artificial intelligence.”

Statistical Learning vs Causal Learning

Statistical learning deals with developing algorithms and techniques that learn from observed data by constructing stochastic models to make predictions and decisions. It rests on the assumption that data are independent and identically distributed, and it captures associations rather than underlying mechanisms. Despite its success, statistical learning provides a rather superficial description of reality that only holds when the experimental conditions are fixed.

On the other hand, the field of causal learning seeks to model the effect of interventions and distribution changes with a combination of data-driven learning and assumptions that are not already included in the statistical description of a system.
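The contrast can be illustrated with a toy confounding setup (an assumed example, not from the paper): a hidden common cause C drives both X and Y, so X and Y are strongly correlated in observational data, yet intervening on X has no effect on Y. A statistical model fit to the observational correlation would mispredict the interventional regime:

```python
import random

def correlation(xs, ys):
    """Pearson correlation, computed from scratch for self-containment."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

random.seed(0)
n = 5000
c = [random.gauss(0, 1) for _ in range(n)]          # hidden confounder C
x_obs = [ci + random.gauss(0, 0.1) for ci in c]     # X caused by C
y = [ci + random.gauss(0, 0.1) for ci in c]         # Y caused by C, not by X

x_do = [random.gauss(0, 1) for _ in range(n)]       # do(X): X set independently of C

print(correlation(x_obs, y))  # strong observational correlation
print(correlation(x_do, y))   # near zero once X is intervened on
```

Only a model of the data-generating structure, not of the joint distribution alone, distinguishes these two regimes.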

Contributions Of This Research

  • The researchers described different levels of modelling in physical systems and presented the differences between causal and statistical models, contrasting their modelling abilities and discussing the assumptions and challenges involved.
  • They expanded on the Independent Causal Mechanisms (ICM) principle as a key component that enables the estimation of causal relations from data. Particularly, they stated the Sparse Mechanism Shift hypothesis as a consequence of the ICM principle and discussed its implications for learning causal models.
  • The researchers reviewed existing approaches to learning causal relations from appropriate descriptors (or features), covering both classical approaches and modern re-interpretations based on deep neural networks, with a focus on the underlying principles that enable causal discovery.
  • They discussed how useful models of reality may be learned from data in the form of causal representations, and several current machine learning problems from a causal point of view.
  • Lastly, the researchers explored the implications of causality for practical machine learning. Through this causal lens, they revisited robustness and generalisation, as well as common practices such as semi-supervised learning, self-supervised learning, data augmentation, and pre-training.
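The Sparse Mechanism Shift hypothesis mentioned above can be sketched with a hypothetical two-variable example: in a causal factorisation P(A) · P(B | A), a domain shift changes only one mechanism (here the marginal of A), while the conditional mechanism for B remains invariant and can be reused across domains:

```python
import random

def mechanism_b(a):
    """Invariant mechanism: B depends only on its cause A (plus small noise)."""
    return 2.0 * a + random.gauss(0, 0.1)

def sample_domain(n, a_mean):
    """Domains differ only in the marginal of A; P(B | A) is shared."""
    return [(a, mechanism_b(a)) for a in (random.gauss(a_mean, 1) for _ in range(n))]

domain1 = sample_domain(1000, a_mean=0.0)
domain2 = sample_domain(1000, a_mean=5.0)  # shifted P(A), same P(B | A)
```

Fitting the A-to-B relationship in either domain recovers roughly the same slope, which is the sense in which a correctly factorised causal model transfers where a monolithic statistical model of the joint distribution would need retraining.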

Read the paper here.



Ambika Choudhury

A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box. Contact: ambika.choudhury@analyticsindiamag.com


