
Session: Synthesis Session 1     (1)
Duration: 1 hour
Date/Time: 27 February 2019 17:00 GMT (9:00am PST / 12:00pm EST / 5:00pm GMT / 6:00pm CET)
Convener: Ken Baclawski


Ontology Summit 2019 Synthesis Session 1     (2)

Abstract     (2A)

The aim of this week's session is to synthesize the lessons learned so far on the tracks that are under way. Each track has met once, and so we will have gained insights from a combination of invited speakers, chat log comments and blog page discussions.     (2A1)

A second synthesis session will take place after the tracks have met again, and that, along with today's session outcome, will form the basis of this year's Ontology Summit Communiqué.     (2A2)

Agenda     (2B)

Conference Call Information     (2C)

Attendees     (2D)

Proceedings     (2E)

Resources     (2F)

Here are some ideas for a working synthesis outline for Explanations (Gary Berg-Cross)     (2F1)

1. Meaning of Explanation – there is a range of these     (2F2)

  • Grosof: deductive proof, with a formal knowledge representation (KR), is the gold standard, but there are many types of explanation with different representations     (2F3)
    • E.g., natural deduction, as in high-school (HS) geometry; there are also probabilistic explanations (see the proof sketch below)     (2F3A)
  • Causal model Explanations     (2F4)
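
A minimal proof-as-explanation sketch (a textbook example, not from the session): each natural deduction step names the rule and the premises it uses, which is exactly the structure a deductive explanation exposes.

  % One natural deduction step: from P and P -> Q, infer Q,
  % citing the rule used (implication elimination, i.e., modus ponens).
  \[
  \frac{P \qquad P \rightarrow Q}{Q}\;(\rightarrow\!E)
  \]
  % HS-geometry instance: P = "a and b are vertical angles",
  % P -> Q = "vertical angles are congruent", so Q = "a and b are congruent".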

There is a range of concepts related to explanation     (2F5)

  • Source or provenance, say of a rule
  • Transparency in origin

3. Trending-Up concepts of explanation     (2F11)

  • Influentiality – the effect of heavily weighted hidden nodes and edges     (2F12)
  • Reconstruction – a simpler, easier-to-comprehend model (see the sketch after this list)     (2F13)
  • Lateral relevance – interactivity for exploration     (2F14)
  • Affordance of Conversational human-computer interaction (HCI)     (2F15)
  • Good explanations quickly get into the issue of understanding     (2F16)
    • What does it mean to understand, follow, and explain a set of instructions?     (2F16A)
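
A minimal sketch of the "Reconstruction" idea above, assuming scikit-learn as tooling (the data and models are placeholders, not anything presented in the session): a shallow decision tree is fit to a black-box model's predictions, yielding a simpler model whose rules can be read as an explanation.

  # Sketch: explain a black-box model by reconstructing it with a simpler one.
  # Assumes scikit-learn; dataset and models are illustrative placeholders.
  from sklearn.datasets import make_classification
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.tree import DecisionTreeClassifier, export_text

  X, y = make_classification(n_samples=500, n_features=6, random_state=0)
  black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

  # Fit a shallow tree to the black box's *predictions*, not the true labels:
  # the tree approximates the black box with human-readable rules.
  surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
  surrogate.fit(X, black_box.predict(X))

  # Fidelity: how often the reconstruction agrees with the black box.
  fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
  print(f"surrogate fidelity to black box: {fidelity:.2f}")
  print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
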
  1. Problems and issues     (2F17)
  • From GOFAI     (2F18)
    • An early goal of AI was to teach/program computers with enough factual knowledge about the world so that they could reason about it in the way people do.     (2F18A)
  • Early AI work demonstrated that the nature and scale of the problem made it difficult.     (2F19)
  • People seemed to need a vast store of everyday knowledge for common tasks. A variety of background knowledge was needed to understand and explain.     (2F20)
  • How small is the common ontology that we mostly all share for representing and reasoning about the physical world?     (2F21)
  • Additional Aspects/Modifiers of explanation:     (2F22)
    • Assumptions and presumptions
    • Targeting to user knowledge and goals (i.e., user model)
    • Natural language (NL) generation
    • Graphical presentation
    • Use of Terminology (source and validity)
  1. It remains challenging to design and evaluate a software system that represents commonsense knowledge and that can support reasoning (such as deduction and explanation) in everyday tasks (evidence from modified Physical Turing Tests).     (2F24)
    • PRAxIS work (Perception, Reasoning, and Action across Intelligent Systems)
  1. From XAI     (2F26)
  1. Bridging from sub-symbolic to symbolic – ontologies help constrain options (see the sketch below)     (2F29)
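
A toy sketch of that bridging idea (my illustration; the taxonomy, labels, and scores are invented, not a system discussed in the session): an ontology's subsumption constraints prune the label set a sub-symbolic classifier may choose from.

  # Sketch: use ontology constraints to restrict a classifier's options.
  # The taxonomy, labels, and scores below are invented for illustration.

  # A tiny is-a taxonomy (child -> parent).
  taxonomy = {
      "cat": "animal", "dog": "animal",
      "car": "vehicle", "bus": "vehicle",
  }

  def is_a(label, ancestor):
      """Walk child -> parent links to test subsumption."""
      while label is not None:
          if label == ancestor:
              return True
          label = taxonomy.get(label)
      return False

  # Pretend sub-symbolic output: label -> confidence score.
  scores = {"cat": 0.40, "dog": 0.35, "car": 0.20, "bus": 0.05}

  # Symbolic context: scene constraints say the object must be an animal,
  # so vehicle labels are pruned before picking the best-scoring candidate.
  candidates = {lbl: s for lbl, s in scores.items() if is_a(lbl, "animal")}
  print(max(candidates, key=candidates.get))  # -> "cat"
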
  2. Application areas     (2F30)
    • Medicine
    • Finance
      • Automated Decision Support for Financial Regulatory/Policy Compliance
      • Has requirements, like competency questions, that it needs to explain (see the sketch below)
  1. Examples of successes? Rulelog’s Core includes Restraint bounded rationality     (2F33)
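
A hedged illustration of checking a competency question against an ontology (rdflib is assumed; the tiny compliance vocabulary and the question are invented, not from the session):

  # Sketch: run a competency question against a toy compliance ontology.
  # Assumes rdflib; the vocabulary is invented for illustration.
  from rdflib import Graph, Namespace, RDF

  EX = Namespace("http://example.org/compliance#")
  g = Graph()

  # Tiny fact base: two transactions, one flagged as violating a rule.
  g.add((EX.tx1, RDF.type, EX.Transaction))
  g.add((EX.tx2, RDF.type, EX.Transaction))
  g.add((EX.tx2, EX.violates, EX.LargeCashRule))

  # Competency question: "Which transactions violate which rules?"
  cq = """
      SELECT ?tx ?rule WHERE {
          ?tx a <http://example.org/compliance#Transaction> ;
              <http://example.org/compliance#violates> ?rule .
      }
  """
  for tx, rule in g.query(cq):
      print(tx, "violates", rule)  # bindings double as a minimal explanation
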
  2. Relevance and relation to context     (2F34)
  3. Synergies with commonsense reasoning     (2F35)
    • Spatial and physical reasoning are good areas.
  1. Success stories/systems     (2F37)
    • ErgoAI Architecture?
  1. Issues Today in the Field of Explanation / Questions     (2F39)
  • How do we evaluate these ontologies supporting explanations and commonsense understanding?     (2F40)
  • How are these explanations ontologies related to existing upper ontologies?     (2F41)
  1. Conclusions     (2F42)
  • Benefits of Explanation (Grosof)     (2F43)
    • Semi-automatic decision support     (2F43A)
    • Might lead to fully-automatic decision making – E.g., in deep deduction about policies and legal matters – especially in business and medicine.     (2F43B)
    • Useful for Education and training, i.e., e-learning – E.g., Digital Socrates concept by Janine Bloomfield of Coherent Knowledge     (2F43C)
    • Accountability     (2F43D)
    • Knowledge debugging in KB development
    • Trust in systems – Competence and correctness – Ethicality, fairness, and legality     (2F43E)
    • Supports Human-machine interaction and User engagement (see Sowa also)     (2F43F)
    • Supports reuse and guides choice for transfer of knowledge     (2F43G)
  1. Contemporary Issues     (2F44)
  • Confusion about concepts – Esp. among non-research industry and media – But needs to be addressed first in the research community     (2F45)
  • Mission creep, i.e., expansivity of task/aspect – Esp. among researchers. E.g., IJCAI-18 workshop on explainable AI.     (2F46)
  • Ignorance of what’s already practical – E.g., in deep policy/legal deduction for decisions: full explanation of extended logic programs, with NL generation and interactive drill-down navigation – E.g., in cognitive search: provenance and focus and lateral relevance, in extended knowledge graphs     (2F47)
  • Disconnect between users and investors     (2F48)
  • (Ignorance of past relevant work)     (2F49)

Previous Meetings     (2G)


Next Meetings     (2H)