Ontology Summit 2019 Synthesis Session 1

* Session: Synthesis Session 1
* Duration: 1 hour
* Date/Time: 27 February 2019 17:00 GMT (9:00am PST / 12:00pm EST / 5:00pm GMT / 6:00pm CET)
* Convener: Ken Baclawski

== Abstract ==

The aim of this week's session is to synthesize the lessons learned so far on the tracks that are under way. Each track has met once, and so we will have gained insights from a combination of invited speakers, chat log comments, and blog page discussions.

A second synthesis session will take place after the tracks have met again; that, along with today's session outcome, will form the basis of this year's Ontology Summit Communiqué.
== Agenda ==

* Introduction: Ken Baclawski (See summary below)
* The track co-champions will give summaries of their respective tracks:
** Commonsense (See [http://ontologforum.org/index.php/ConferenceCall_2019_02_27#Resources Resources Section] below)
** Narrative
** Financial Explanations
** Medical Explanations
** Explainable AI Ram D. Sriram and Ravi Sharma [http://bit.ly/2XsRY5z Synthesis]
* Discussion
* [http://bit.ly/2SvgoIf Video Recording]
== Summary of Ontology Summit 2019 Sessions ==

There have been 9 sessions so far. Each session has proceedings (from the chat room) and a recording (one audio recording; the rest are video recordings). The following lists the speakers, with links to their presentation slides (when provided) and to the recordings.

<table cellpadding="5" border="1">
<tr><th>Date</th><th>Speaker</th><th>Topic</th><th>Presentation</th><th>Recording</th></tr>
<tr><td>11/14</td><td>John Sowa</td><td>Explanations and help facilities designed for people</td><td>[http://bit.ly/2Tigkx4 Slides]</td><td>[http://bit.ly/2Tciyhn Video]</td></tr>
<tr><td rowspan="2">11/28</td><td>Ram D. Sriram and Ravi Sharma</td><td>Introductory Remarks on XAI</td><td>[http://bit.ly/2TWYzUr Slides]</td><td rowspan="2">[http://bit.ly/2AxosRB Video]</td></tr>
<tr><td>Derek Doran</td><td>Okay but Really... What is Explainable AI? Notions and Conceptualizations of the Field</td><td>[http://bit.ly/2TM2141 Slides]</td></tr>
<tr><td>12/05</td><td>Gary Berg-Cross and Torsten Hahmann</td><td>Introduction to Commonsense Knowledge and Reasoning</td><td>[http://bit.ly/2Uhbxwh Slides]</td><td>[http://bit.ly/2ATAWTM Video]</td></tr>
<tr><td rowspan="6">1/16</td><td>Ken Baclawski</td><td>Introductory Remarks</td><td>[http://bit.ly/2VRt6DV Slides]</td><td rowspan="6">[http://bit.ly/2VZzA3v Video]</td></tr>
<tr><td>Gary Berg-Cross and Torsten Hahmann</td><td>Commonsense</td><td>[http://bit.ly/2VZa5zg Slides]</td></tr>
<tr><td>Donna Fritzsche and Mark Underwood</td><td>Narrative</td><td>[http://bit.ly/2QTRd19 Slides]</td></tr>
<tr><td>Mark Underwood and Mike Bennett</td><td>Financial Explanation</td><td></td></tr>
<tr><td>Ram D. Sriram and David Whitten</td><td>Medical Explanation</td><td></td></tr>
<tr><td>Ram D. Sriram and Ravi Sharma</td><td>Explainable AI</td><td>[http://bit.ly/2VWplgA Slides]</td></tr>
<tr><td rowspan="2">1/23</td><td>Michael Gr&uuml;ninger</td><td>Ontologies for the Physical Turing Test</td><td>[http://bit.ly/2RiQSFz Slides]</td><td rowspan="2">[http://bit.ly/2WenX9a Video]</td></tr>
<tr><td>Benjamin Grosof</td><td>An Overview of Explanation: Concepts, Uses, and Issues</td><td>[http://bit.ly/2R9iD30 Slides]</td></tr>
<tr><td rowspan="3">1/30</td><td>Donna Fritzsche</td><td>Introduction to Narrative</td><td></td><td rowspan="3">[http://bit.ly/2WJJt64 Audio only]</td></tr>
<tr><td>Ken Baclawski</td><td>Proof as Explanation and Narrative</td><td>[http://bit.ly/2RqQJQ5 Slides]</td></tr>
<tr><td>Mark Underwood</td><td>Bag of Verses: Frameworks for Narration from Cognitive Psychology</td><td>[http://bit.ly/2Wv0eBN Slides]</td></tr>
<tr><td rowspan="3">2/6</td><td>Mike Bennett</td><td>Financial Explanations Introduction</td><td>[http://bit.ly/2WLic31 Slides]</td><td rowspan="3">[http://bit.ly/2RY64rQ Video]</td></tr>
<tr><td>Mark Underwood</td><td>Explanation Use Cases from Regulatory and Service Quality Drivers in Retail Credit Card Finance</td><td>[http://bit.ly/2WLidE7 Slides]</td></tr>
<tr><td>Mike Bennett</td><td>Financial Industry Explanations</td><td>[http://bit.ly/2RIViFQ Slides]</td></tr>
<tr><td rowspan="3">2/13</td><td>David Whitten</td><td>Introduction to Medical Explanation Systems</td><td></td><td rowspan="3">[http://bit.ly/2S1nRyq Video]</td></tr>
<tr><td>Augie Turano</td><td>Review and Recommendations from past Experience with Medical Explanation Systems</td><td>[http://bit.ly/2WZfM0P Slides]</td></tr>
<tr><td>Ram D. Sriram</td><td>XAI for Biomedicine</td><td>[http://bit.ly/2RYhM5N Slides]</td></tr>
<tr><td>2/20</td><td>William Clancey</td><td>Explainable AI Past, Present, and Future&ndash;A Scientific Modeling Approach</td><td>[http://bit.ly/2Scjvo6 Slides]</td><td>[http://bit.ly/2SjazO0 Video]</td></tr>
</table>
  
 
== Conference Call Information ==
  
 
== Attendees ==

* [[AlessandroOltramari|Alessandro Oltramari]]
* [[AlexShkotin|Alex Shkotin]]
* [[AndreaWesterinen|Andrea Westerinen]]
* [[BobbinTeegarden|Bobbin Teegarden]]
* [[DaveWhitten|Dave Whitten]]
* [[DouglasRMiles|Douglas R Miles]]
* [[GaryBergCross|Gary Berg-Cross]]
* [[JanetSinger|Janet Singer]]
* [[JohnSowa|John Sowa]]
* [[KenBaclawski|Ken Baclawski]]
* [[MarkFox|Mark Fox]]
* [[MarkUnderwood|Mark Underwood]]
* [[MikeBennett|Mike Bennett]]
* [[RamSriram|Ram D. Sriram]]
* [[RaviSharma|Ravi Sharma]]
* [[RussellReinsch|Russell Reinsch]]
* [[SpencerBreiner|Spencer Breiner]]
* [[SteveRay|Steve Ray]]
* [[TerryLongstreth|Terry Longstreth]]
* [[ToddSchneider|Todd Schneider]]
* [[TorstenHahmann|Torsten Hahmann]]
* [[WilliamClancey|William Clancey]]
  
 
== Proceedings ==

TBD
  
 
== Resources ==

Here are some ideas for a working synthesis outline for Explanations (Gary Berg-Cross)

1. Meaning of Explanation [An explanation is the answer to the question "Why?" as well as the answers to follow-up questions such as "Where do I go from here?"] – there is a range of these
* A deductive proof with a formal knowledge representation (KR) is the gold standard (Grosof), but there are many types with different representations (illustrated in the sketch below)
** E.g., natural deduction, as in high-school geometry; there are also probabilistic explanations
* Causal model explanations

There is a range of concepts related to explanation:
* Source or provenance, say of a rule
* Transparency in origin
* Ability to explore and drill down
* Focus on the subject at hand
* Understand-ability and presentability

Additional aspects/modifiers of explanation:
* Summarization, grain (coarse vs. fine), drill-down, elaboration
* Partial vs. complete
* Approximate vs. precise
* Structuring of inference in presentation
* Assumptions and presumptions
* Targeting to user knowledge and goals (i.e., user model)
* Natural language (NL) generation
* Graphical presentation
* Use of terminology (source and validity)

Trending-up concepts of explanation:
* Influentiality – the effect of heavily weighted hidden nodes and edges
* Reconstruction – a simpler, easier-to-comprehend model
* Lateral relevance – interactivity for exploration
* Affordance of conversational human-computer interaction (HCI)
* Good explanations quickly get into issues of understanding and meaning, since much meaning involves background knowledge and commonsense and lies in the implicit and unspoken.
** What does it mean to understand, follow, and explain a set of instructions?
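To make the "proof as explanation" idea and several of the aspects above (provenance, drill-down, grain) concrete, here is a minimal, illustrative Python sketch. It is not taken from any session material: the facts, rule names, and the Explanation structure are hypothetical, chosen only to show how a backward-chaining derivation can double as an explanation that records the provenance of each rule and can be rendered at coarse or fine grain.

<pre>
# Minimal sketch: a derivation that doubles as an explanation.
# Facts, rules, and provenance strings are hypothetical examples.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Explanation:
    fact: str                 # the conclusion being explained (the "Why?")
    rule: str = "given"       # name of the rule that produced it
    source: str = "input"     # provenance of that rule or fact
    because: List["Explanation"] = field(default_factory=list)  # supporting sub-explanations

    def render(self, depth: int = 0, max_depth: int = 10) -> str:
        """Render the explanation; max_depth controls the grain (coarse vs. fine drill-down)."""
        line = "  " * depth + f"{self.fact}   [{self.rule}; source: {self.source}]"
        if depth >= max_depth:
            return line
        return "\n".join([line] + [b.render(depth + 1, max_depth) for b in self.because])

# Hypothetical knowledge base: (rule name, premises, conclusion, provenance).
RULES = [
    ("r1: mammals are animals", ["x is a mammal"], "x is an animal", "upper ontology"),
    ("r2: dogs are mammals",    ["x is a dog"],    "x is a mammal",  "domain ontology"),
]

def explain(goal: str, facts: Dict[str, str]) -> Explanation:
    """Backward-chain to the goal and return the proof tree as the explanation."""
    if goal in facts:
        return Explanation(goal, source=facts[goal])
    for name, premises, conclusion, source in RULES:
        if conclusion == goal:
            return Explanation(goal, rule=name, source=source,
                               because=[explain(p, facts) for p in premises])
    raise ValueError(f"no explanation found for {goal!r}")

if __name__ == "__main__":
    facts = {"x is a dog": "user assertion"}
    e = explain("x is an animal", facts)
    print(e.render(max_depth=0))   # coarse-grained: just the conclusion and its rule
    print(e.render())              # fine-grained drill-down with per-step provenance
</pre>

Here max_depth plays the role of the grain and drill-down controls listed above, and the source field carries rule provenance; natural-language or graphical presentation would simply be alternative renderings of the same tree.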
4.  Structuring of inference in presentation
+
2. Problems and issues
5.  Assumptions and presumptions
+
* From GOFAI
6. Targeting to user knowledge and goals (i.e., user model)
+
** An early goal of AI was to teach/program computers with enough factual (often commonsense) knowledge about the world so that they could reason about it in the way people do which is not strictly logical is some circumstances
7.  Natural language (NL) generation • Graphical presentation
+
* early AI demonstrated that the nature and scale of the problem was difficult.  
8. Use of Terminology (source and validity)
+
** one reason is that simple, direct approaches like rule based systems were brittle.
5. it remains challenging to design and evaluate a software system that represents commonsense knowledge and that can support reasoning (such as deduction and explanation) in everyday tasks. (evidence from modified Physical Turing Tests)  
+
* People seemed to need a vast store of everyday, background knowledge for common tasks. A variety of background knowledge was needed to understand & explain decisions
1. PRAxIS work (Perception, Reasoning, and Action across Intelligent Systems)  
+
*  Do we have a small, common ontology that we mostly all share for representing and reasoning about the physical world?  
2. From XAI
+
*  
1. Concept of Humagic Knowledge  
+
# it remains challenging to design and evaluate a software system that represents commonsense knowledge and that can support reasoning (such as deduction and explanation) in everyday tasks. (evidence from modified Physical Turing Tests)  
2.
+
** PRAxIS work (Perception, Reasoning, and Action across Intelligent Systems)  
3. From XAI
* Performance vs. Explainability: DARPA XAI Program (a small illustration follows below)
** While deep learning (DL) performs well, it is very low on explainability; decision trees are the reverse.
** One context of recent work is that of deep machine-learning systems. Explanations for their decisions can be problematic, since one can say that what they learn is a dimensional space of numbers with no words of any kind, so explanation in natural language is not immediately available.
** One approach to handling this problem is called Deep Explanation, which uses modified deep learning techniques to learn explainable features as part of what it learns.
** More traditional GOFAI approaches may develop Interpretable Models, using techniques that learn more structured, interpretable, causal models.
* Concept of Humagic Knowledge
* TBD
* Bridging from sub-symbolic to symbolic – ontologies help constrain options
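As a small, self-contained illustration of the explainability side of that trade-off (not from the DARPA program or the session slides), the sketch below assumes scikit-learn is available, fits a shallow decision tree, and prints the learned rules; a deep network trained on the same data would expose no comparable human-readable structure.

<pre>
# Illustrative only: a decision tree yields human-readable decision rules,
# while a deep network's learned weights do not translate directly into words.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# export_text renders the tree as nested if/then rules over the input features,
# which can serve as a (coarse) explanation of any individual prediction.
print(export_text(tree, feature_names=list(iris.feature_names)))
</pre>

The "Interpretable Models" line above typically aims at exactly this kind of structured, inspectable model, trained either directly or as a surrogate for a higher-performing black box.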

4. Application areas
* Medicine
* Finance
** Automated Decision Support for Financial Regulatory/Policy Compliance (a toy sketch follows below)
*** Has requirements, like competency questions, that it needs to explain
*** Examples of successes? Rulelog's Core includes Restraint bounded rationality
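For a concrete flavor of decision support that must explain itself, here is a deliberately toy Python sketch; the policy names, thresholds, and the credit scenario are invented for illustration only and are not drawn from any real regulation or from the session materials. Each decision carries the rule that produced it, which is the kind of answer a competency question such as "Why was this application declined?" requires.

<pre>
# Hypothetical compliance check that returns its own explanation.
# Policy rules and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    rule: str      # which (toy) policy rule determined the outcome
    reason: str    # human-readable justification (the "Why?")

def check_credit_limit_increase(income: float, utilization: float) -> Decision:
    """Toy decision procedure: each branch records the rule it applied."""
    if utilization > 0.80:
        return Decision(False, "policy-A: high utilization",
                        f"utilization {utilization:.0%} exceeds the 80% ceiling")
    if income < 20000:
        return Decision(False, "policy-B: minimum income",
                        f"income {income:,.0f} is below the 20,000 floor")
    return Decision(True, "default approval", "no blocking policy rule applied")

if __name__ == "__main__":
    d = check_credit_limit_increase(income=35000, utilization=0.90)
    # Competency-style question: "Why was this application declined?"
    print(d.approved, "|", d.rule, "|", d.reason)
</pre>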

5. Relevance and relation to context
* TBD

6. Synergies with commonsense reasoning
* Spatial and physical reasoning are good areas.

7. Success stories/systems
* ErgoAI Architecture?
* Issues Today in the Field of Explanation / Questions:
** How do we evaluate these ontologies supporting explanations and commonsense understanding?
** How are these explanation ontologies related to existing upper ontologies?

8. Conclusions
* In the future, we'll share meanings with computers, AIs, and robots. And that makes meanings matter even more. But it remains a hard problem.
** Smart systems may have to be embodied and have sentience – the capacity to feel, perceive, or experience subjectively.
* Benefits of Explanation (Grosof):
** Semi-automatic decision support
** Might lead to fully-automatic decision making – e.g., in deep deduction about policies and legal questions – especially the business and medicine topics
** Useful for education and training, i.e., e-learning – e.g., the Digital Socrates concept by Janine Bloomfield of Coherent Knowledge
** Accountability; knowledge debugging in KB development
** Trust in systems – competence and correctness; ethicality, fairness, and legality
** Supports human-machine interaction and user engagement (see Sowa also)
** Supports reuse and guides the choice for transfer of knowledge

9. Contemporary Issues
* Confusion about concepts – especially among non-research industry and media – but it needs to be addressed first in the research community
* Mission creep, i.e., expansivity of task/aspect – especially among researchers, e.g., the IJCAI-18 workshop on explainable AI
* Ignorance of what is already practical – e.g., in deep policy/legal deduction for decisions: full explanation of extended logic programs, with NL generation and interactive drill-down navigation; e.g., in cognitive search: provenance, focus, and lateral relevance in extended knowledge graphs
* Disconnect between users and investors
* (Ignorance of past relevant work)
* Some envision a fruitful marriage between classic logical approaches (ontologies) and statistical approaches, which may lead to context-adaptive systems (stochastic ontologies) that might work similarly to the human brain.

[http://ontologforum.org/index.php/OntologySummit2019/CommonSenseTrackSynthesis A preliminary summary of the track on CommonSense Knowledge and Reasoning, along with some discussion of its relation to Explanation]
  
 
== Previous Meetings ==

== Next Meetings ==