
Session: Explainable AI
Duration: 1 hour
Date/Time: 3 April 2019, 16:00 GMT (9:00am PDT / 12:00pm EDT / 5:00pm BST / 6:00pm CEST)
Convener: Ken Baclawski

Ontology Summit 2019 Explainable AI Session 2

Agenda

  • Giedrius Buračas
    • Deep Attentional Representations for Explanations – DARE
    • BIO: Giedrius T. Buračas, PhD, is a Sr. Computer Scientist at SRI. During the last 15 years he has led research and R&D teams in academia and industry, focusing on neuroscience, ML, and AI. He is PI of DARPA XAI and Algorithm Lead of AFRL MEADE.
    • Video Recording

Conference Call Information

  • Date: Wednesday, 3-April-2019
  • Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
  • Expected Call Duration: 1 hour
  • The Video Conference URL is https://zoom.us/j/689971575
    • iPhone one-tap:
      • US: +16699006833,,689971575# or +16465588665,,689971575#
    • Telephone:
      • Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 558 8665
      • Meeting ID: 689 971 575
      • International numbers available: https://zoom.us/u/Iuuiouo
  • Chat Room

Attendees

Proceedings

[12:03] ToddSchneider: The soap hub chat is not working. We'll need to use this Zoom chat.

[12:08] Terry Longstreth: Don't degrade performance: both accuracy and computing resources?

[12:11] ToddSchneider: Terry, I don't understand your question.

[12:11] Alessandro Oltramari: I don't think computing resources here are a problem

[12:15] ToddSchneider: Did I miss the definition of "attention" in this context?

[12:17] Alessandro Oltramari: it could be based on a spatio-temporal window

[12:17] David Whitten: Was 'attention' defined by DARPA, since they funded it?

[12:19] Alessandro Oltramari: in this case it's pretty much spatial; for natural language, attention in deep learning models focuses the model on specific sequences of words, for instance.
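
As a rough illustration of the attention mechanism described above, the following minimal sketch computes scaled dot-product attention over a short word sequence; the resulting weight vector is the kind of pattern that can be rendered as a heatmap. The words, embeddings, and dimensions are made-up assumptions for illustration, not part of the DARE architecture.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Return attention-weighted values and the weight matrix (the 'heatmap')."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                    # similarity of each query to each key
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over keys
        return weights @ V, weights

    # Toy example: one query attending over a four-word sequence.
    rng = np.random.default_rng(0)
    words = ["the", "bear", "eats", "fish"]
    K = V = rng.normal(size=(4, 8))                        # made-up word embeddings
    Q = K[1:2] + 0.1 * rng.normal(size=(1, 8))             # a query similar to "bear"

    _, weights = scaled_dot_product_attention(Q, K, V)
    for word, w in zip(words, weights[0]):
        print(f"{word:>5}: {w:.2f}")                       # most weight falls on "bear"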

[12:20] David Whitten: So the architecture doesn't look for particular objects unless it is asked about them?

[12:21] Alessandro Oltramari: this particular architecture does seem to actually look for objects - which may be the innovative aspect

[12:21] David Whitten: so there is "heatmap" attention and "object attention"?

[12:22] Alessandro Oltramari: the heat map being the corresponding sub-symbolic pattern

[12:22] David Whitten: So there is a 'scene graph'. Is this standardized? Is there an ontology of scene graph components?

[12:23] Alessandro Oltramari: this is a great question - and something I've been working on in the last couple of months. I believe a scene ontology is much needed.
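
There is no single standard for scene graphs or a scene ontology; as a rough illustration of what such a structure typically records (Visual Genome-style objects, attributes, and relationships), here is a hypothetical Python sketch. The class and field names are assumptions for illustration only, not an existing standard or the presenter's representation.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class SceneObject:
        name: str                                          # e.g. "surfer"
        attributes: List[str] = field(default_factory=list)
        bbox: Tuple[int, int, int, int] = (0, 0, 0, 0)     # (x, y, width, height) in the image

    @dataclass
    class Relationship:
        subject: SceneObject
        predicate: str                                     # e.g. "riding"
        obj: SceneObject

    @dataclass
    class SceneGraph:
        objects: List[SceneObject]
        relationships: List[Relationship]

    # Toy scene: "a surfer riding a wave"
    surfer = SceneObject("surfer", attributes=["wet"], bbox=(120, 40, 60, 110))
    wave = SceneObject("wave", attributes=["breaking"], bbox=(0, 80, 400, 160))
    graph = SceneGraph(objects=[surfer, wave],
                       relationships=[Relationship(surfer, "riding", wave)])
    print(graph.relationships[0].predicate)                # "riding"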

[12:23] Mark Underwood: Attention as practiced here begs issues reviewed in the Forum series on context

[12:26] David Whitten: People were wrong because they said "bears" instead of "bear"? Seems a bit off.

[12:32] David Whitten: So if the machine is unpredictable, then it doesn't do a good job explaining? Why does the accuracy of prediction matter? Why isn't the accuracy of the explanation what is being measured?

[12:32] BobbinTeegarden: Is a 'scene ontology' another way of saying context or contextual ontology? Is it another instance of context?

[12:32] Mark Underwood: David: Yeah, some psychometrics issues here (prob not worth fussing over for us)

[12:33] David Whitten: If scene graph handles things in the 'background' I would expect a high overlap with context.

[12:33] Mike: @Bobbin everything in an ontology is the context for something. The question is what kinds of somethings are defined in a scene ontology. Well worth exploring IMV.

[12:34] Terry Longstreth: @Todd: in the introduction, Giedrius explained (I paraphrase - don't know how accurate) that one goal of their work was to create useful explanations without impacting performance of the system; in systems engineering we usually have to trade off function (accuracy) against system resources. I would assume that a challenge will be moving these techniques out of the laboratory by, for example, training the system with data collected by a Web crawler, or ranging over concept trees and correlated intentions in 'the' Linked Open Data graphs.

[12:38] Mark Underwood: AMT = Amazon Mechanical Turk

[12:49] David Whitten: When he introduced the grey blobs, the picture of the surfer looked (at a quick glance) like a penguin to me.

[12:51] Mike: Re the surfing answer - Is that an incorrect answer though? I would think that the picture does show surfing, but without a surfer. Only the surfer was removed. Like a grin without a cat.

[12:52] David Whitten: @Mike, I would argue that if you can get that wave pattern in a picture without a surfer to disrupt the water, then there is a problem saying it is a surfer.

[12:53] Mike: Yes my point was it is exactly the waves as disturbed by an activity of surfing, with the agent removed but not the activity
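
The surfer exchange is essentially about occlusion-style counterfactuals: grey out a region and check whether the model's answer changes. The sketch below shows the generic idea; predict() is a hypothetical placeholder for a trained classifier or VQA model, not the system presented in the talk.

    import numpy as np

    def predict(image: np.ndarray) -> dict:
        """Hypothetical stand-in for a trained model returning class scores."""
        # A real system would run a neural network forward pass here.
        score = float(image.mean())
        return {"surfing": score, "empty ocean": 1.0 - score}

    def occlusion_effect(image: np.ndarray, box, label: str) -> float:
        """How much the score for `label` drops when `box` is greyed out."""
        x, y, w, h = box
        occluded = image.copy()
        occluded[y:y + h, x:x + w] = 0.5                   # grey blob over the region
        return predict(image)[label] - predict(occluded)[label]

    # Toy 'image': bright pixels where the surfer would be.
    img = np.zeros((200, 200))
    img[40:150, 120:180] = 1.0
    drop = occlusion_effect(img, (120, 40, 60, 110), "surfing")
    print(f"Score drop for 'surfing' after occluding the surfer region: {drop:.2f}")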

[12:56] David Whitten: can he talk about the scene graphs? I apparently am muted. Especially as they are interconnected with context and explanations.

[12:56] ToddSchneider: David, Zoom does not show your phone connection as muted.

[12:57] David Whitten: And yet, no one can hear me when I ask.

[12:59] Mike: @David seconded - I would love to hear more about this idea of an ontology for scenes - I know people working in that area who could do with one. To John's point re testing, and the surfer example - it seems that an issue is what ontology is implied in the right v wrong indications we give to an AI when training. Needs to be a disciplined ontology not a folksonomy or we will indeed see the safety issues JFS mentions.

[01:03] ToddSchneider: WordNet is not an ontology.

[01:03] David Whitten: Okomoto?

[01:03] ToddSchneider: WordNet is based on Synsets.

[01:04] David Whitten: Provisional Genome and seven W's?

[01:05] ToddSchneider: WordNet can be a good starting point.

[01:05] John Sowa: I would trust WordNet more than BFO

[01:05] David Whitten: Visional Genome.

[01:05] gburachas: https://visualgenome.org/

[01:06] David Whitten: Concept map is very noisy

[01:06] John Sowa: I don't trust driverless cars, but BFO would be the worst possible choice.

[01:07] Mike: @John I did but I don't now. Agree re BFO. We need the right ontology for the right job.

[01:07] Mark Underwood: Thanks for the shout-out to our context summit :)

Resources

Previous Meetings


Next Meetings
