
Ontolog Forum

Session: Commonsense
Duration: 1 hour
Date/Time: Mar 6 2019 17:00 GMT (9:00am PST / 12:00pm EST / 5:00pm GMT / 6:00pm CET)
Convener: GaryBergCross and TorstenHahmann

Ontology Summit 2019 Commonsense Session 2

An early goal of AI was to teach/program computers with enough factual knowledge about the world that they could reason about it the way people do. The starting observation is that every ordinary person has "commonsense": basic knowledge about the real world that is common to all humans. Spatial and physical reasoning are good examples. This is the kind of knowledge we want to endow our machines with, for several reasons, including as part of conversation and understanding. System understanding of human perceptual and memory limitations might, for example, be an important thing for a dialog system to know about. Early on this was described as giving machines a capacity for "commonsense". However, early AI demonstrated that the nature and scale of the problem was difficult. People seemed to need a vast store of everyday knowledge for common tasks. A variety of knowledge was needed to understand even the simplest children's story, a feat that children master through what seems a natural process. One resulting approach was an effort like Cyc to encode a broad range of human commonsense knowledge as a step toward understanding text, which would bootstrap further learning. Some believe that today this problem of scale can be addressed in new ways, including via modern machine learning. But it is not obvious how these methods can provide machine-generated explanations of what they "know." Since fruitful explanations appeal to people's understanding of the world, commonsense reasoning would be a significant part of any computer-generated explanation. How hard is this to build into smart systems? One difficult aspect of common sense is making sure that explanations are presented at multiple levels of abstraction, i.e., from not too detailed to tracing exact justifications for each inference step. This track will explore these and other issues in light of current ML efforts and best practices for AI explanations.

Commonsense Session 2 Agenda

Speaker: Niket Tandon (research scientist at the Allen Institute for AI, Seattle, where he develops models to make Aristo commonsense aware) Link to Bio: https://allenai.org/team/nikett/

  • Talk Title: Commonsense for Deep Learning
    • Slides
    • Video Recording
    • Hypothesis & Observation: "commonsense-aware models are more robust, and amenable to explanation."

Working Summary of Points from Niket's talk

This session addressed commonsense and explanations for Deep Learning (DL) models. The session explored how commonsense could assist in addressing some challenges in DL.

    • The challenges include the brittleness of DL models against various adversarial changes to the input, poor generalization to unseen situations when training data are limited, and a tendency to pick up on subtle patterns while ignoring the overall context.
    • Commonsense can compensate for limited training data and make it easier to generate explanations, given that the commonsense is available in an easily consumable representation.
    • To explore how commonsense can compensate for limited training data, a specific use-case of state-change prediction in procedural text was discussed.

A recent commonsense-aware DL model makes more sensible predictions despite limited training data. Here, commonsense is prior knowledge of state changes, e.g., it is unlikely that a ball gets destroyed in a basketball game scenario. The model injects commonsense at the decoding phase by re-scoring the search space so that probability mass is driven away from unlikely situations. Adding commonsense in this way results in much better performance.
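Below is a minimal, illustrative sketch of this kind of decode-time re-scoring. The penalty values, the entity/state-change pairs, and the rescore function are invented for the example; they are not the actual model or data from the talk.

```python
import math

# Hypothetical commonsense prior: log-penalties for state changes that are
# implausible in the given scenario (e.g., a ball being destroyed during a
# basketball game). The values are illustrative only.
COMMONSENSE_LOG_PENALTY = {
    ("ball", "destroyed"): -5.0,   # strongly penalized: violates commonsense
    ("ball", "moved"): 0.0,
    ("player", "moved"): 0.0,
}

def rescore(candidates):
    """Re-score decoder candidates so probability mass is shifted away from
    commonsense-violating state changes.

    candidates: list of (entity, state_change, model_log_prob) tuples.
    Returns the candidates sorted by the combined score, best first.
    """
    rescored = []
    for entity, change, log_p in candidates:
        penalty = COMMONSENSE_LOG_PENALTY.get((entity, change), 0.0)
        rescored.append((entity, change, log_p + penalty))
    return sorted(rescored, key=lambda c: c[2], reverse=True)

# The raw model slightly prefers the implausible "destroyed" reading, but the
# commonsense prior pushes it below the plausible alternative.
beam = [("ball", "destroyed", math.log(0.40)),
        ("ball", "moved", math.log(0.35)),
        ("player", "moved", math.log(0.25))]
print(rescore(beam)[0])   # -> ('ball', 'moved', ...)
```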

  • Explanations were discussed along three dimensions.

In recent years, datasets have fueled research in DL. In both the NLP and computer vision communities, datasets containing explanations have gained interest. This has led to models that provide explanations in the form of attention, natural language sentences, and structures such as scene graphs and state-change matrices.

    • Evaluating explanations has remained a challenge; evaluation approaches include string matching (exact match and METEOR), human-based evaluation, automated metrics, and, finally, learning to evaluate.
  • While commonsense is an important asset for DL models, its logical representations, such as microtheories, have not been successfully employed. Instead, tuples or graphs comprising natural language nodes have shown some promise, but these face the problem of string matching, i.e., linguistic variation. More recent work on supplying commonsense in the form of adversarial examples, or in the form of unstructured paragraphs or sentences, has been gaining attention.
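The string-matching problem mentioned above can be seen in a toy example. The sentences and the token-overlap score below are invented for illustration (the score is a crude stand-in for softer metrics such as METEOR, not METEOR itself): two phrasings of the same fact fail an exact match and get only partial credit at the token level.

```python
def exact_match(a: str, b: str) -> bool:
    """Exact string match after trivial normalization."""
    return a.strip().lower() == b.strip().lower()

def token_f1(a: str, b: str) -> float:
    """Crude token-overlap F1; a rough stand-in for softer string metrics."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    overlap = len(ta & tb)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(tb), overlap / len(ta)
    return 2 * precision * recall / (precision + recall)

knowledge_node = "a ball is thrown into the basket"
model_output = "the ball gets tossed into the hoop"   # same fact, different words

print(exact_match(knowledge_node, model_output))          # False: exact match misses it
print(round(token_f1(knowledge_node, model_output), 2))   # ~0.46: partial credit only
```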

Conference Call Information

  • Date: Wednesday, 6-March-2019
  • Start Time: 9:00am PST / 12:00pm EST / 6:00pm CET / 5:00pm GMT / 1700 UTC
  • Expected Call Duration: 1 hour
  • The Video Conference URL is https://zoom.us/j/689971575
    • iPhone one-tap:
      • US: +16699006833,,689971575# or +16465588665,,689971575#
    • Telephone:
      • Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 558 8665
      • Meeting ID: 689 971 575
      • International numbers available: https://zoom.us/u/Iuuiouo
  • Chat Room

Attendees

Proceedings

[11:51] Gary: Niket Tandon will be our sole speaker today, but he has a lot to say on these issues. He can only stay till 1 EST but this should be enough for presentation and Q and A.

[11:57] Gary: We had planned to have Pascal Hitzler present at one of our commonsense sessions, but the timing has not worked out despite our joint efforts. His university, Wright State, was on strike for the 1st session, and today he is on spring break and visiting his new university, Kansas State. We hope to have some input from him on the topic of symbolic and sub-symbolic knowledge for our Communique.

[12:21] TerryLongstreth: It seems to me that 'common sense' as represented in human, mental, knowledge (or assumptions) must mean something different from the same term as used in this presentation.

[12:22] David Whitten: Niket Tandon (at 12:20 PM EST) seems to be talking about a basic model for a basketball game.

[12:22] TerryLongstreth: For example, every basketball player knows that "charging" is a foul.

[12:24] David Whitten: The model appears to use some form of First Order Logic as notations: Speed(player) , location(ball), and existence(ball) using some sort of probability model, and maybe some activity maps. I'm pretty sure this is not a 4D model. I don't know if it is a 3D model or a 3D+1 model, as he doesn't seem to reference time at all.

[12:24] RaviSharma: Niket - I noticed distribution functions such as logit, these are used in GARCH financial analysis and risk modeling, what is the meaning of this analogy?

[12:25] David Whitten: He seems to recognize model constraint violations, not discussion (that I have heard yet) re structures for explanations.

[12:27] David Whitten: seems to use a function phi-prime which is a summation over the |E| which I assume is the size of E ? or maybe the cardinality of E ? is E the set of events ? I guess he covered this before I got on.

[12:28] David Whitten: Is discussing a commonsense biased search over State of the Art (SOTA) and says common sense is used to improve the value.

[12:28] David Whitten: He admits that TerryLongstreth is right that he is loose with the definition of 'common sense'

[12:30] David Whitten: Niket says this is a time model based on moves or activities (is this a step-by-step time model?)

[12:31] David Whitten: @Ravi, is "logit" a common function in Machine Learning ? base e log of "it" ? of items?

[12:31] David Whitten: Niket uses CSK to mean Common Sense Knowledge.

[12:33] anonymous1 morphed into BruceBray

[12:35] David Whitten: @Niket, I'm sorry to confuse. I usually just type notes for those who lose audio and my learning.

[12:37] RaviSharma: what slide number is Niket on

[12:38] ToddSchneider: David, I love your notes.

[12:38] RaviSharma: David - i think it is a stat distribution

[12:39] RaviSharma: Logit and Probit are two types

[12:40] David Whitten: in response to Terry, he says he is NOT working from a complete logical model for basketball, and is combining a First Order Logic model with a Machine Learning model.

[12:43] David Whitten: The explanation model uses state tracking and helps in debugging. No slide number visible, but is a table with headings ATTENTION, NATURAL LANGUAGE, and Structure. Now on a slide "Giving a voice to structures"

[12:44] ToddSchneider: The slide being shown currently doesn't seem to be in those posted.

[12:46] David Whitten: Says language generation software may use synonyms rather than consistent words for concepts.

[12:47] RaviSharma: slide 38

[12:49] Gary: Slides at http://bit.ly/2XEOcWS

[12:49] RaviSharma: if you make the data set for training very wide such as multiple subjects such as human and dog etc, it is hard for ML to get it.

[12:50] RaviSharma: NL has bias due to the ability of human expression from vague to accurate etc.

[12:55] RaviSharma: Niket - some of these problems arise from expanding the applicability from training data to related objects and multiple related domains.

[12:57] janet singer: @David: LOGIT first appeared in Joseph Berkson's "Application of the Logistic Function to Bio-Assay," Journal of the American Statistical Association, 39, (1944), p. 361: "Instead of the observations qi we deal with their logits li = ln(pi / qi). [Note] I use this term for ln p/q following Bliss, who called the analogous function which is linear on x for the normal curve probit." (OED) From Earliest Known Uses of Some of the Words of Mathematics http://jeff560.tripod.com/l.html

[12:58] TerryLongstreth: Please post the references here; apparently they aren't connected thru the PDF.

[12:58] RaviSharma: can we reverse engineer by improving complexity

[12:59] Mark Underwood: @Janet: a 1944 cite, love it

[13:05] RaviSharma: Niket - many thanks

[13:06] janet singer: @Mark, Jeff's website is great, quite the rabbit hole to fall down

[13:15] ToddSchneider: 'Common Sense', common to what or when?

[13:22] Gary: Here is a link to the US Semantic tech conf I mentioned taking place next week.

It includes a session on XAI 
http://us2ts.org/2019/posts/program.html

Resources

Video Recording

The following are the references provided by Niket, which are now in the updated version of the slides linked to on this site (Gary Berg-Cross):

  • [1.1] Dalvi, Tandon, Clark: ProPara task: data.allenai.org/propara
  • [1.2] Tandon et al. 2018, Reasoning about Actions and State Changes by Injecting Commonsense Knowledge
  • [D1] Antol et al. 2015, VQA: Visual Question Answering
  • [D2] Johnson et al. 2016, CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning
  • [D3] Zellers et al. 2019, From Recognition to Cognition: Visual Commonsense Reasoning (VCR)
  • [D4] Camburu et al. 2018, e-SNLI: Natural Language Inference with Natural Language Explanations
  • [D5] Yang et al. 2018, HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering
  • [D6] Jansen et al. 2018, WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions Supporting Multi-Hop Inference
  • [A2] Kim et al. 2018, Textual Explanations for Self-Driving Vehicles
  • [A9] Ghosh et al. 2018, Generating Natural Language Explanations for VQA Using Scene Graphs and Visual Attention
  • [A4] Zeiler 2013, Visualizing and Understanding Convolutional Networks
  • [A8] Park et al. 2017, Multimodal Explanations: Justifying Decisions and Pointing to the Evidence
  • [A3] Hu et al. 2017, Learning to Reason: End-to-End Module Networks for Visual Question Answering
  • [A1] Ramprasaath et al. 2016, Why Did You Say That? Visual Explanations from Deep Networks via Gradient-Based Localization
  • [A6] Tandon et al. 2018, Reasoning about Actions and State Changes by Injecting Commonsense Knowledge

References (Part II)

  • [2.1] Cui et al. 2018, Learning to Evaluate Image Captioning
  • [2.2] Petsiuk et al. 2018, RISE: Randomized Input Sampling for Explanation of Black-box Models
