From OntologPSMW

Session Overview of Explainable AI
Duration 1 hour
Date/Time Nov 28 2018 17:00 GMT
9:00am PST/12:00pm EST
5:00pm GMT/6:00pm CET
Convener Ram D. Sriram and Ravi Sharma

Contents

Ontology Summit 2019 Overview of Explainable AI     (2)

Agenda     (2A)

  • Derek Doran "Okay but Really... What is Explainable AI? Notions and Conceptualizations of the Field"     (2A2)
    • Derek Doran is an Associate Professor of Computer Science and Engineering at Wright State University. His research interests are in developing statistical, deep learning, and topological data analysis (TDA) methods for the study of complex web and cyber-systems. He is currently interested in endowing and augmenting deep learning algorithms and TDA models to make them inherently explainable and comprehensible to users. His research is supported by NSF, AFRL, ORISE, and the Ohio Federal Research Network. Derek is the author of over 75 publications, most of which are in AI and ML venues. He is on the editorial board of Social Network Analysis and Mining and the International Journal of Web Engineering and Technologies. Derek is also a founding chair of the Neural-Symbolic Learning and Reasoning special interest group on Explainable AI (http://daselab.cs.wright.edu/nesy/sig-xai/). More information at: http://derk--.github.io     (2A2B)

Conference Call Information     (2B)

Attendees     (2C)

Proceedings     (2D)

[12:37] John Sowa: General principle: note that *every* explanation is an answer to a how or why question.     (2D2)

[12:39] BruceBray: Great presentation! Doctor Analogy resonates with me as a healthcare provider and informaticist     (2D3)

[12:39] John Sowa: That's a much simpler definition of XAI: (1) Can it answer how and why questions? and (2) Can it answer a follow-up question?     (2D4)

[12:43] Gary Berg-Cross: Much of what Derek covered relates to what Torsten and I will also talk about next week on the connection to commonsense (knowledge and reasoning).     (2D5)

[12:44] RaviSharma: Does the explanation in this slide's example follow decision rules?     (2D6)

[12:52] John Sowa: Comment about question-answering systems: They only answer questions that begin with who, what, when, or where. An XAI system must answer how and why questions.     (2D7)

[12:53] John Sowa: Follow-up questions are *always* in the context of the current topic.     (2D8)

[12:56] Gary Berg-Cross: @John, yes and your point involves the idea that the AI will understand and keep a context for the conversation.     (2D9)

[12:57] Ram D. Sriram: @Gary: Without context it will be hard to explain     (2D10)

[12:58] TerryLongstreth: @Gary B-C: background knowledge in human situations is negotiable between agents; essentially establishing correspondence points between or among incumbent individual contexts     (2D11)

[13:00] ToddSchneider: So potential explanations are additional competency questions?     (2D12)

[13:01] Gary Berg-Cross: The system will need to understand linguistic terms but reasoning might be more successful with formal terms.     (2D13)

[13:05] BruceBray: @Gary - agree, a great example of why ontology can collaboratively enhance AI applications     (2D14)

[13:09] Gary Berg-Cross: We will need to consider how an AI will learn from interactions with users.     (2D15)

[13:11] RaviSharma: thanks Derek for a wonderful presentation     (2D16)

Resources     (2E)

Previous Meetings     (2F)


Next Meetings     (2G)