Highlights from AAAI 2020

Earlier in February I was at the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) to get a better feel for the general trends of the field and to learn as much as I could (I did not present anything, though). It was time well spent. In this post I summarize some of the points that were particularly interesting to me. Since there was so much great content, it would be very difficult to summarize each presentation or talk. Instead, I will group everything into common themes and address each theme as a whole, commenting on the key points that showed up in related presentations. The complete schedule and papers can be found through the official guide. Furthermore, many of the keynotes are now available online for viewing; I strongly recommend watching most of them: truly thought-provoking material.

Healthcare

Healthcare was a pervasive theme. There was an excellent tutorial on Precision Medicine, many regular research papers addressing health issues, a workshop on Health Intelligence (which, unfortunately, I could not attend because it was sold out), and a special edition of AI in Practice dedicated to the theme, with excellent keynotes. I find all of this revealing: it suggests that healthcare may very well be one of the next booming sectors for information technology, as financial services were before it. Key points:

  • Beware of causal confounding. This example appeared often.
  • A labeling tool can help physicians create the necessary ground truth for retina image analysis.
  • Heartbeats follow patterns, which allows the creation of synthetic datasets for data augmentation (see the sketch after this list).
  • Quality of life is computational to a large extent, hence amenable to automated optimization.
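To make the augmentation idea concrete, here is a minimal sketch of my own (not from any presentation) that exploits the quasi-periodic structure of heartbeats to synthesize ECG-like signals; the waveform model and all parameters are purely illustrative.

```python
# Illustrative only: synthesize ECG-like signals by repeating a noisy,
# rate-jittered "beat" template. Real work would use physiologically
# grounded models; this just shows the data-augmentation mechanics.
import numpy as np

def synthetic_heartbeat(n_beats=8, fs=250, bpm=60.0, noise=0.02, rng=None):
    """One sharp Gaussian 'R peak' per beat, repeated, plus measurement noise."""
    rng = rng or np.random.default_rng()
    bpm = bpm * rng.uniform(0.9, 1.1)             # jitter the heart rate per sample
    beat_len = int(fs * 60.0 / bpm)               # samples in one beat
    t = np.arange(beat_len) / fs
    beat = np.exp(-((t - t.mean()) ** 2) / 2e-4)  # R-peak-like bump
    signal = np.tile(beat, n_beats)               # repeat the pattern
    return signal + noise * rng.normal(size=signal.size)

# Augment a training set with many plausible variants of the same pattern.
rng = np.random.default_rng(0)
dataset = [synthetic_heartbeat(bpm=72, rng=rng) for _ in range(100)]
print(len(dataset), dataset[0].shape)
```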

Intelligibility

Intelligibility is a more complex subject than I had realized.

  • Models might appear to be good, but in reality might be optimizing completely the wrong things.
  • Mixed human/machine teams obviously need to cooperate.
  • Model output is not the only valuable result. Human insight, which might lead to other valuable results, is also important, and is promoted by transparent models.
  • Models that are explainable promote trust. However, one must know when to trust! Trust itself might be just a form of deception.
  • “King Midas problem”: the difficulty of precisely defining a goal.
  • Machines could instead learn preferences from observing human behavior. However, this might lead to non-democratic results (because, essentially, some people make more mistakes than others). See the paper: Negotiable Reinforcement Learning for Pareto Optimal Sequential Decision-Making
  • Microsoft provided some great material on designing interactive AI systems.
  • A notable application in healthcare: Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission (a minimal model sketch follows this list).
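To give a flavor of what an intelligible model looks like in practice, here is a minimal sketch of my own in the spirit of the GA2M-style models behind that paper. It assumes the open-source interpret package (pip install interpret) and made-up stand-in data; the feature names are hypothetical.

```python
# A glass-box additive model: every feature gets its own shape function,
# so each term's contribution to the risk score can be read off directly.
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # stand-in clinical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in outcome (e.g., readmission)

ebm = ExplainableBoostingClassifier(feature_names=["age", "blood_pressure", "temperature"])
ebm.fit(X, y)

# Inspect the learned per-feature contribution curves; this transparency is
# what lets a physician spot wrong-but-plausible patterns in the model.
global_expl = ebm.explain_global()
print(global_expl.data(0))                     # contribution curve of feature 0
```

The famous "asthma lowers pneumonia risk" artifact reported in that paper is exactly the kind of pattern this term-by-term inspection can surface and correct.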


Not as easy to understand as we would like!
Example of optimizing the wrong thing: a fine snow detector, instead of a wolf detector.
Interpretability from compositionality, as seen in “Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission.”

Causal Inference

Causality was also a very popular theme.

  • Deep representation learning can be applied to causal inference by properly positioning treated and control subjects in a new representation space that makes matching easier (a minimal matching sketch follows this list). Moreover, it also allows the study of individual treatment effects (ITE), not just average treatment effects (ATE). ITE would be useful, for instance, in Precision Medicine, allowing clinicians to prescribe causally relevant treatments tailored to each patient's specific needs. These topics were explored in a tutorial; the slides and a related survey on causal inference by the presenters are available.
  • Causal inference libraries were also showcased.
  • Many people recognized that notions of causality should be incorporated into Machine Learning methods, notably Deep Learning. That is to say, causality is an important prior that learning algorithms can take as given.
  • Abstract structures for causal knowledge can be combined with Reinforcement Learning to promote efficient transfer learning. In the reported experiments, this approach speeds up learning in new environments by at least an order of magnitude. See the paper: Theory-Based Causal Transfer: Integrating Instance-Level Induction and Abstract-Level Structure Learning
  • Counterfactuals are important for understanding consumer preferences, particularly pricing. Being able to predict events from past data is not enough; one must be able to formulate questions concerning new scenarios. For example, coffee sales predict diaper sales, but that does not mean that by actively trying to sell more coffee (e.g., through discounts) one can sell more diapers (a toy simulation of this appears after the figure captions below). Past data may allow one to model consumer preference and price sensitivity, thus allowing counterfactual inferences. Several relevant papers addressed these questions.
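As a toy illustration of the matching idea (my own sketch, with a fixed identity map standing in for a learned representation), one can estimate per-unit treatment effects by pairing each treated subject with its nearest control in representation space:

```python
# Matching-based ITE estimation in a (here trivial) representation space.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))                      # covariates
treated = rng.random(n) < 0.5                    # treatment assignment
true_ite = 1.0 + X[:, 0]                         # effect varies per individual
y = X.sum(axis=1) + treated * true_ite + 0.1 * rng.normal(size=n)

def represent(x):
    # Placeholder: a real method would learn this embedding, e.g. a deep net
    # trained so treated and control distributions align in the new space.
    return x

R = represent(X)
Rt, Rc = R[treated], R[~treated]
yt, yc = y[treated], y[~treated]

# Pair each treated unit with its nearest control in representation space.
dists = ((Rt[:, None, :] - Rc[None, :, :]) ** 2).sum(-1)
ite_hat = yt - yc[dists.argmin(axis=1)]          # matched-pair ITE estimates

print(f"estimated ATE (treated units): {ite_hat.mean():.2f}")
print(f"true ATE (treated units):      {true_ite[treated].mean():.2f}")
```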
We must consider what the effect of a new restaurant is, not simply look at existing restaurants and extrapolate!
Causality involving price and demand can be tricky.
From abstract causal structures to concrete learning from observation.
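The coffee-and-diapers point can also be seen in a toy simulation (my own illustration): a hidden confounder induces a strong observational association that vanishes under intervention.

```python
# Household size drives both coffee and diaper purchases, so they correlate;
# yet setting coffee sales by fiat (a "do" operation) leaves diapers flat.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
household = rng.normal(size=n)                   # hidden confounder
coffee = 2.0 * household + rng.normal(size=n)
diapers = 1.5 * household + rng.normal(size=n)   # note: no arrow from coffee

# Observational: regressing diapers on coffee suggests a strong "effect".
print(np.cov(coffee, diapers)[0, 1] / np.var(coffee))        # ~0.6, spurious

# Interventional: coffee is set independently of household size
# (e.g., via discounts), and the apparent effect disappears.
coffee_do = rng.normal(size=n)
print(np.cov(coffee_do, diapers)[0, 1] / np.var(coffee_do))  # ~0.0
```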

Scientific Discovery

Scientific discovery is a form of learning and therefore should be subject to computational reasoning.

Hypotheses are powerful, complex constructs embedded in a scientific process. Accordingly, they should have proper computational support.
Useful references. I like the one with the solar system on the cover, “Scientific Discovery: Computational Explorations of the Creative Processes”.

Games and AI

Games are an interesting way to explore intelligence, and recently we have seen impressive progress. However, things are not so simple.

  • There was a debate on the subject.
  • Because games provide objective performance criteria, they help to direct research and evaluate results.
  • Furthermore, humans seem to be hardwired to enjoy playing, so games are intimately related to human behavior.
  • However, it seems that so far learning in one game has not been transferable to another. Hence, present techniques, amazing as they are, fail at this crucial aspect of intelligence. See also: On the Measure of Intelligence
  • Garry Kasparov also pointed out (authoritatively, I guess) that “aptitude for playing chess is nothing besides aptitude for playing chess.”
  • Humans are open-ended and intentional, unlike games, which are typically much more restricted.
  • AI and humans might actually work better together in games, which suggests that present technology fails to capture some important part of human intellect, even in these restricted applications.
  • “Augmented Intelligence” might be a better goal than “Artificial Intelligence.” I sympathize.

Relational Information Extraction and NLP

There was a lot of NLP content. What caught my attention, though, was mostly related to information extraction. With such techniques it would be possible to effectively transform human knowledge into machine knowledge (e.g., turn scientific or business texts into logical sentences). For example:

Example of a knowledge graph.
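To ground the idea, here is a deliberately naive sketch of my own (real systems are far more sophisticated) that extracts (subject, relation, object) triples from simple sentences, which are the raw material of a knowledge graph:

```python
# Toy pattern-based relation extraction; the patterns are purely illustrative.
import re

PATTERN = re.compile(r"^(?P<subj>\w+) (?P<rel>is a|works at|causes) (?P<obj>[\w ]+)\.$")

def extract_triples(sentences):
    """Return (subject, relation, object) triples for sentences matching PATTERN."""
    triples = []
    for s in sentences:
        m = PATTERN.match(s)
        if m:
            triples.append((m["subj"], m["rel"], m["obj"]))
    return triples

corpus = ["Aspirin causes stomach irritation.", "Alice works at AcmeCorp."]
print(extract_triples(corpus))
# [('Aspirin', 'causes', 'stomach irritation'), ('Alice', 'works at', 'AcmeCorp')]
```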

Philosophical Aspects: attention, priors, compositionality and more

The special edition with the Turing Award laureates, as well as the talk with Daniel Kahneman, brought some interesting reflections about where the field should go.

  • The Turing Award laureates special event was filmed.
  • The chat with Daniel Kahneman is also available for viewing.
  • Self-supervised learning is one way to use available data more effectively, since it eliminates the need for manual labeling.
  • Intelligence is about prediction; therefore, forward models of the world are useful.
  • Why would evolution lead to a conscious bottleneck? Attention mechanisms (e.g., in Transformers) in neural networks might provide a clue (see the minimal sketch at the end of this section).
  • Symbols work as abbreviations of more complex descriptions (“big vectors of stuff happening”).
  • System 1 (unconscious) processes things in parallel and gives meaning to concepts. System 2 (conscious) works sequentially by “calling operations of System 1”. This description of the human mind suggests artificial neurosymbolic systems.
  • Priors are important for proper learning. One must assume things in order to learn. The difficulty might be figuring out the proper priors. Causality seems to be one of them.
  • Compositionality of learning would allow much better generalization, including the invention of new concepts, a key characteristic of intelligence.
  • Language models ultimately cannot be “just about words.” They must be grounded in reality somehow.
  • Environment is essential for intelligence. Things have meaning with respect to contexts. Such environments, however, can be artificial and different from our own human reality.
  • When I asked the Turing laureates how to find ideas and prioritize them, we were told that the important thing is to have good intuition, to find the crux of the problem at hand, and then just work stubbornly on it.
Forward models of the world. “Prediction is the essence of intelligence,” according to Yann LeCun.
Compositionality is the key to generality, which in turn apparently is the key to imagination.
Capsules allow great unsupervised learning at least in some situations.
Examples of neurosymbolic systems presented by David Cox in his own talk.
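Since attention came up repeatedly, here is a minimal numpy sketch of my own of the scaled dot-product attention at the heart of Transformers; the shapes and data are arbitrary.

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V. Each query softly
# "focuses" on a few keys, a bottleneck-like selection over many inputs.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # attention distribution per query
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, dimension 8
K = rng.normal(size=(6, 8))   # 6 key/value positions
V = rng.normal(size=(6, 8))

out, w = attention(Q, K, V)
print(w.round(2))             # each row sums to 1: a soft, selective focus
```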

Other Themes and Results

People are using real brains to improve ML models! Of course, this came from Japan.
Multiple design options available for human inspection. From Sidewalk Labs.

The only thing that disappointed me was the apparent immaturity of “the community” when given the chance to ask questions of the three Turing laureates; the top-voted questions reflected this. Thankfully, they actually answered others, including one of mine.
