From OntologPSMW

Session: Explainable AI
Duration: 1 hour
Date/Time: Apr 3 2019 16:00 GMT (9:00am PDT / 12:00pm EDT / 5:00pm BST / 6:00pm CEST)
Convener: Ken Baclawski


Ontology Summit 2019 Explainable AI Session 2     (2)

Agenda     (2A)

  • Giedrius Buračas     (2A1)
    • Deep Attentional Representations for Explanations – DARE     (2A1A)
    • BIO: Giedrius T. Buračas, PhD, is a Sr. Computer Scientist at SRI. During the last 15 years he has led research and R&D teams in academia and industry, focusing on neuroscience, ML, and AI. He is PI of DARPA XAI and Algorithm Lead of AFRL MEADE.     (2A1B)

Conference Call Information     (2B)

Attendees     (2C)

Proceedings     (2D)

[12:03] ToddSchneider: The soap hub chat is not working. We'll need to use this Zoom chat.     (2D1)

[12:08] Terry Longstreth: Don't degrade performance: both accuracy and computing resources?     (2D2)

[12:11] ToddSchneider: Terry, I don't understand your question.     (2D3)

[12:11] Alessandro Oltramari: I don't think computing resources here are a problem     (2D4)

[12:15] ToddSchneider: Did I miss the definition of "attention" in this context?     (2D5)

[12:17] Alessandro Oltramari: it could be based on a spatio-temporal window     (2D6)

[12:17] David Whitten: Was 'attention' defined by DARPA, since they funded it?     (2D7)

[12:19] Alessandro Oltramari: in this case it's pretty much spatial; for natural language, attention in deep learning models focuses the model on specific sequences of words, for instance.     (2D8)
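(Editor's note: a minimal sketch of the dot-product attention Alessandro describes, reduced to plain Python with toy 2-d "embeddings"; the word list and vectors are invented for illustration and are not from the talk.)

```python
import math

def softmax(xs):
    """Numerically stable softmax: turn raw scores into a distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query, keys):
    """Scaled dot-product attention: score each key (word embedding)
    against the query, then normalize into weights over positions."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    return softmax(scores)

# Toy question with hand-made 2-d embeddings; the query vector is
# most similar to "color", so attention concentrates there.
words = ["what", "color", "is", "the", "bear"]
keys = [[0.1, 0.0], [0.9, 0.1], [0.0, 0.2], [0.0, 0.1], [0.8, 0.3]]
query = [1.0, 0.0]

weights = attention_weights(query, keys)
for word, w in zip(words, weights):
    print(f"{word:>6}: {w:.2f}")
```

The weights sum to 1 and peak on the word most relevant to the query, which is the "focus on specific sequences of words" behavior mentioned above.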

[12:20] David Whitten: So the architecture doesn't look for particular objects unless it is asked about them?     (2D9)

[12:21] Alessandro Oltramari: this particular architecture does seem to actually look for objects - which may be the innovative aspect     (2D10)

[12:21] David Whitten: so there is "heatmap" attention and "object attention" ?     (2D11)

[12:22] Alessandro Oltramari: the heat map being the corresponding sub-symbolic pattern     (2D12)

[12:22] David Whitten: So there is a 'scene graph'. Is this standardized? Is there an ontology of scene graph components?     (2D13)

[12:23] Alessandro Oltramari: this is a great question - and something I've been working on in the last couple of months. I believe a scene ontology is much needed     (2D14)
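(Editor's note: a toy sketch of the scene-graph structure under discussion — objects with attributes linked by (subject, predicate, object) relation triples, in the style of Visual Genome. The class names and the surfer example are illustrative, not a standardized schema.)

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SceneObject:
    """A node in the scene graph: a detected object plus its attributes."""
    name: str
    attributes: List[str] = field(default_factory=list)

@dataclass
class SceneGraph:
    """Objects connected by (subject, predicate, object) relation triples."""
    objects: Dict[str, SceneObject] = field(default_factory=dict)
    relations: List[Tuple[str, str, str]] = field(default_factory=list)

    def add_object(self, oid: str, name: str, attributes=None) -> None:
        self.objects[oid] = SceneObject(name, list(attributes or []))

    def add_relation(self, subj: str, pred: str, obj: str) -> None:
        self.relations.append((subj, pred, obj))

# The surfer scene discussed later, hand-encoded for illustration.
g = SceneGraph()
g.add_object("o1", "surfer", ["standing"])
g.add_object("o2", "surfboard", ["white"])
g.add_object("o3", "wave", ["large"])
g.add_relation("o1", "riding", "o2")
g.add_relation("o2", "on", "o3")

for s, p, o in g.relations:
    print(g.objects[s].name, p, g.objects[o].name)
```

A scene ontology would constrain which object classes, attributes, and predicates may appear in such triples, which is exactly the standardization question David raises.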

[12:23] Mark Underwood: Attention as practiced here begs issues reviewed in the Forum series on context     (2D15)

[12:26] David Whitten: People were wrong because they said "bears" instead of "bear"? Seems a bit off     (2D16)

[12:32] David Whitten: So if the machine is unpredictable, then it doesn't do a good job explaining? Why does the accuracy of prediction matter? Why isn't the accuracy of the explanation what is being measured?     (2D17)

[12:32] BobbinTeegarden: Is a 'scene ontology' another way of saying context or contextual ontology? Is it another instance of context?     (2D18)

[12:32] Mark Underwood: David: Yeah, some psychometrics issues here (prob not worth fussing over for us)     (2D19)

[12:33] David Whitten: If scene graph handles things in the 'background' I would expect a high overlap with context.     (2D20)

[12:33] Mike: @Bobbin everything in an ontology is the context for something. The question is what kinds of somethings are defined in a scene ontology. Well worth exploring IMV.     (2D21)

[12:34] Terry Longstreth: @Todd: in the introduction, Giedrius explained (I paraphrase - don't know how accurate) that one goal of their work was to create useful explanations without impacting performance of the system; in systems engineering we usually have to trade off function (accuracy) and system resources. I would assume that a challenge would arise in moving these techniques out of the laboratory by, for example, training the system with data collected by a Web crawler, or ranging over concept trees and correlated intentions in 'the' Linked Open Data graphs.     (2D22)

[12:38] Mark Underwood: AMT = Amazon Mechanical Turk     (2D23)

[12:49] David Whitten: When he introduced the grey blobs, the picture of the surfer looked (at a quick glance) like a penguin to me.     (2D24)

[12:51] Mike: Re the surfing answer - Is that an incorrect answer though? I would think that the picture does show surfing, but without a surfer. Only the surfer was removed. Like a grin without a cat.     (2D25)

[12:52] David Whitten: @Mike, I would argue that if you can get that wave pattern on a picture without a surfer to disrupt the water, then there is a problem saying it is a surfer.     (2D26)

[12:53] Mike: Yes my point was it is exactly the waves as disturbed by an activity of surfing, with the agent removed but not the activity     (2D27)

[12:56] David Whitten: can he talk about the scene graphs? I apparently am muted. Especially as they are interconnected with context and explanations.     (2D28)

[12:56] ToddSchneider: David, Zoom does not show your phone connection as muted.     (2D29)

[12:57] David Whitten: And yet, no one can hear me when I ask.     (2D30)

[12:59] Mike: @David seconded - I would love to hear more about this idea of an ontology for scenes - I know people working in that area who could do with one. To John's point re testing, and the surfer example - it seems that an issue is what ontology is implied in the right v wrong indications we give to an AI when training. Needs to be a disciplined ontology not a folksonomy or we will indeed see the safety issues JFS mentions.     (2D31)

[01:03] ToddSchneider: WordNet is not an ontology.     (2D32)

[01:03] David Whitten: Okomoto?     (2D33)

[01:03] ToddSchneider: WordNet is based on Synsets.     (2D34)

[01:04] David Whitten: Visual Genome and seven W's?     (2D35)

[01:05] ToddSchneider: WordNet can be a good starting point.     (2D36)
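(Editor's note: a toy illustration of why WordNet "can be a good starting point" — its hypernym (is-a) links give a ready-made taxonomy skeleton. The tiny hierarchy below is hand-coded for illustration; the real WordNet is organized around synsets, sets of synonymous lemmas, not single words, as Todd notes above.)

```python
# A hand-coded fragment of a WordNet-style hypernym (is-a) chain.
HYPERNYM = {
    "bear": "carnivore",
    "carnivore": "mammal",
    "mammal": "animal",
    "animal": "entity",
}

def hypernym_path(word):
    """Walk the is-a links from a word up to the hierarchy's root."""
    path = [word]
    while path[-1] in HYPERNYM:
        path.append(HYPERNYM[path[-1]])
    return path

print(" -> ".join(hypernym_path("bear")))
# bear -> carnivore -> mammal -> animal -> entity
```

Such chains can seed a class hierarchy, but, as the discussion notes, lexical links would still need ontological discipline before being trusted in a formal ontology.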

[01:05] John Sowa: I would trust WordNet more than BFO     (2D37)

[01:05] David Whitten: Visual Genome.     (2D38)

[01:06] David Whitten: Concept map is very noisy     (2D40)

[01:06] John Sowa: I don't trust driverless cars, but BFO would be the worst possible choice.     (2D41)

[01:07] Mike: @John I did but I don't now. Agree re BFO. We need the right ontology for the right job.     (2D42)

[01:07] Mark Underwood: Thanks for the shout-out to our context summit :)     (2D43)

Resources     (2E)

Previous Meetings     (2F)

Next Meetings     (2G)