Session: Communique
Duration: 1 hour
Date/Time: June 12 2019 14:00 GMT
           9:00am PST / 12:00pm EST
           5:00pm GMT / 6:00pm CET
Convener: Ken Baclawski

Agenda     (2A)

In this session, we continue development of the Communiqué, especially the findings and challenges.     (2A1)

  • Current Communiqué: http://bit.ly/2ZXIrou

Conference Call Information     (2B)

Participants     (2C)

Proceedings     (2D)

Resources     (2E)

Sowa on 11/14     (2E1)

Basic problems:     (2E1A)

  • People talk and think in some version of natural language.     (2E1B)
  • Precise definitions of system terms can't help if people don't know what to ask for.     (2E1C)
  • Machine-learning systems are even worse: they learn a maze of numbers with no connection to words of any kind.     (2E1D)

We need systems that can explain how, what, and why.     (2E1E)

Ram on 11/28     (2E2)

For the future, we envision a fruitful marriage of classic logical approaches (ontologies) with statistical approaches, which may lead to context-adaptive systems (stochastic ontologies) that might work similarly to the human brain.     (2E2A)

One step further might be to link probabilistic learning methods with large knowledge representations (ontologies) and logical approaches, thus making results retraceable, explainable, and comprehensible on demand.     (2E2B)

Derek Doran on 11/28     (2E3)

Symbolic systems can prove a fact is true given a knowledge base of facts, and show a user an inference sequence to prove the fact holds.     (2E3A)

Sub-symbolic systems (statistical ML):     (2E3B)

  • White-box: The model's mechanisms are simple enough that you can trace how inputs map to outputs     (2E3C)
  • Grey-box: You have some visibility into the mechanisms, but parameters are numerous, decisions are probabilistic, or inputs get “lost” (transformed)     (2E3D)
  • Black-box: The model is so complex, with so large a parameter space, that it is not easily decipherable     (2E3E)

Types of sub-symbolic systems:     (2E3F)

  • White Box: regression, decision trees, rule mining, linear SVMs     (2E3G)
  • Grey Box: clustering, Bayesian nets, genetic algorithms, logic programming     (2E3H)
  • Black Box: deep neural networks (DNNs), matrix factorizations, non-linear dimensionality reduction     (2E3I)

The problem: Most white-box and some grey-box models provide quantitative explanations (“variance explained”) that are, generally, not useful to a layperson responsible for an action recommended by the model.     (2E3J)
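
To make the white-box/black-box distinction concrete, here is a minimal illustrative sketch (not from the presentation): a hand-rolled decision tree whose every prediction can be traced as an explicit path from inputs to the output. The loan-approval features and thresholds are hypothetical.

  # Sketch: a decision tree is "white box" because each prediction
  # comes with an explicit decision path. Features/thresholds are made up.
  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class Node:
      feature: Optional[str] = None      # feature tested at this node
      threshold: Optional[float] = None  # split threshold
      left: Optional["Node"] = None      # taken when value <= threshold
      right: Optional["Node"] = None     # taken when value > threshold
      label: Optional[str] = None        # set only on leaf nodes

  # Toy loan-approval tree (hypothetical).
  tree = Node(feature="income", threshold=50_000,
              left=Node(feature="debt_ratio", threshold=0.4,
                        left=Node(label="approve"),
                        right=Node(label="deny")),
              right=Node(label="approve"))

  def predict_with_trace(node, x):
      """Return the prediction and the decision path that produced it."""
      trace = []
      while node.label is None:
          value = x[node.feature]
          branch = "<=" if value <= node.threshold else ">"
          trace.append(f"{node.feature} = {value} {branch} {node.threshold}")
          node = node.left if value <= node.threshold else node.right
      return node.label, trace

  decision, trace = predict_with_trace(tree, {"income": 42_000, "debt_ratio": 0.55})
  print("decision:", decision)       # decision: deny
  for step in trace:
      print("  because", step)       # human-readable reasons for the decision

A deep neural network offers no comparable path from inputs to output, which is why the grey-box and black-box categories matter for explanation.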

Gary and Torsten on 12/5     (2E4)

The conclusion is that commonsense and natural-language understanding together are key to a robust AI explanation system.     (2E4A)

Proofs found by Automated Theorem Provers provide a map from inputs to outputs.     (2E4B)

  • But do these make something clear?     (2E4C)
  • They may know the “how” but not the “why.”     (2E4D)
  • And scripts and rule-based systems became so complex, with their conditional paths, that the trace was hard to follow (a minimal trace sketch follows this list).     (2E4E)
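
As an illustration of the point above, here is a minimal sketch (not the speakers' system) of a tiny forward-chaining rule engine that records its inference trace: the trace answers "how" a conclusion was derived, but not "why" the rules themselves are appropriate. The facts and rule names are hypothetical.

  # Sketch: forward chaining with a recorded trace. Facts/rules are made up.
  facts = {"bird(tweety)", "small(tweety)"}
  rules = [  # (name, premises, conclusion)
      ("R1", {"bird(tweety)"}, "has_wings(tweety)"),
      ("R2", {"has_wings(tweety)", "small(tweety)"}, "can_fly(tweety)"),
  ]

  trace = []
  changed = True
  while changed:
      changed = False
      for name, premises, conclusion in rules:
          if premises <= facts and conclusion not in facts:
              facts.add(conclusion)
              trace.append(f"{name}: {sorted(premises)} => {conclusion}")
              changed = True

  print("derived:", sorted(facts))
  for step in trace:   # the "how": which rule fired on which facts
      print("  ", step)

Even at this scale the trace is purely mechanical; with many interacting conditional rules it becomes hard to follow, and it never states the purpose behind a rule.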

Commonsense reasoning is needed to deal with incomplete and uncertain information in dynamic environments responsively and appropriately.     (2E4F)

So we support the thesis that explanation systems need to understand the context of the user, need to be able to communicate with a person, and need to be trusted.     (2E4G)

Ontologies are needed to make structural, strategic, and support knowledge explicit, which enhances the ability to understand and modify the system as well as to support suitable explanations.     (2E4H)

Common sense is "perhaps the most significant barrier" between the focus of AI applications today, and the human-like systems we dream of.     (2E4I)

Ken on 1/16     (2E5)

Challenges:     (2E5A)

  • Automated generation of explanations based on an ontology     (2E5B)
  • Objective evaluation of the quality of explanations     (2E5C)

Gary and Torsten on 1/16     (2E6)

Some Frequently Recurring Questions (but not specifically about explanation):     (2E6A)

1. How can we leverage the best of various approaches to achieving commonsense?     (2E6B)

2. How can we best inject commonsense knowledge into machine learning approaches?     (2E6C)

3. How can we bridge formal knowledge representations (concepts and relations as axiomatized in logic) with NLP techniques and language disambiguation?     (2E6D)

Michael Gruninger on 1/23     (2E7)

Challenge: How can we design and evaluate a software system that represents commonsense knowledge and that can support reasoning (such as deduction and explanation) in everyday tasks?     (2E7A)

Discussion Questions     (2E7B)

  • What does it mean to understand a set of instructions?     (2E7C)
  • Is there a common ontology that we all share for representing and reasoning about the physical world?     (2E7D)

  • The PRAxIS Ontologies are intended to support the representation of commonsense knowledge about the physical world – perception, objects, and processes.     (2E7F)

Challenges:     (2E7G)

Ben Grosof on 1/23     (2E8)

Issues in the field of explanation today     (2E8A)

Confusion about concepts     (2E8B)

Mission creep, i.e., expansivity of task/aspect     (2E8E)

Ignorance of what’s already practical     (2E8G)

Disconnect between users and investors     (2E8H)

  • Users often perceive critical benefits/requirements for explanation     (2E8I)
  • Investors (both venture and enterprise internal) often fail to perceive value of explanation     (2E8J)

Mark Underwood on 2/6     (2E9)

Financial Explanation is harder than it seems     (2E9A)

Call Center employees are knowledge workers but may not have the best tools available to them     (2E9B)

Tools do not operate in real time or systems do not interoperate     (2E9C)

Explanation suitability is often context- and scenario-dependent     (2E9D)

Compliance, retention and profit must be reconciled in decision support software     (2E9E)

And explanations must follow suit, or at least help CSRs (customer service representatives) do so     (2E9F)

Mike Bennett on 2/6     (2E10)

The ontology itself needs to be explained     (2E10A)

Challenges in understanding regulations     (2E10F)

Challenges in explaining ontologies     (2E10L)

  • Ontologies help in explanation in many financial scenarios (lending, credit, risk and exposures)     (2E10P)
  • Reasoning with ontologies adds a further aspect that itself needs to be explained     (2E10Q)
  • Ontology concepts still need to be presented to end users in an explainable way     (2E10R)
  • Not all things to be explained are first-order (rules, mathematical constructs, etc.)     (2E10S)

Augie Turano on 2/13     (2E11)

Conclusion (about Medical AI, not specifically explanations):     (2E11A)

AI in medical diagnostics is still a novelty; many clinicians have yet to be convinced of its reliability and sensitivity, and of its integration into clinical practice without undermining clinical expertise.     (2E11B)

The possibilities are endless: chatbots; interpretation of cell scans, slides, images, etc.; diagnosis from EHR data (genotypes, phenotypes); and there is a huge push from venture capital investment companies.     (2E11C)

A combination of ML and AI will probably yield the most useful outcomes, not accuracy alone.     (2E11D)

Ram on 2/13     (2E12)

Clancey on 2/20     (2E13)

Common Interactive Systems Today Cannot Explain Behavior or Advice     (2E13A)

Operations are dynamic and interactive in a system of people, technology, and the environment – not linear and placeless.     (2E13C)

Explanation involves discourse, follow-up, and mutual learning; it requires a "user model" of interests and knowledge.     (2E13D)

"Explanation" is not a module – rather it drives the design process; needs are empirically discovered in prototype experiments.     (2E13E)

Explanations might address shortcomings of symbolic AI     (2E13F)

Identifying domain representations as "knowledge" obscured system-modeling methods and hence the domain-general scientific accomplishment     (2E13G)

Ongoing tuning and extension required "Knowledge Engineers"     (2E13H)

Brittle – boundaries not tested; system not reflective     (2E13I)

Not integrated with legacy systems and work practice     (2E13J)

Niket Tandon on 3/6     (2E14)

Challenges:     (2E14A)

Underlying assumption: commonsense representation is DL-friendly     (2E14H)

For these reasons, commonsense-aware models may:     (2E14I)

Challenges     (2E14N)

Conclusion     (2E14Q)

  • Commonsense for deep learning can help overcome challenges, making models more robust and more amenable to explanation     (2E14R)

Tiddi on 3/13     (2E15)

Why do we need (systems generating) explanations?     (2E15A)

  • to learn new knowledge     (2E15B)
  • to find meaning (reconciling contradictions in our knowledge)     (2E15C)
  • to socially interact (creating a shared meaning with the others)     (2E15D)
  • ...and because GDPR says so: users have a "right to explanation" for any decision made about them     (2E15E)

Sargur Srihari on 4/10     (2E16)

Summary and Conclusion     (2E16A)

  • Next wave of AI is probabilistic AI (Intel)     (2E16B)
  • Most Probable Explanation (MPE) in probabilistic graphical models (PGMs) is useful for XAI (a sketch follows this list)     (2E16C)
  • Forensics demands explainability     (2E16D)
  • Forensic Impression evidence lends itself to a combination of deep learning and probabilistic explanation     (2E16E)
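
To illustrate what an MPE is, here is a minimal sketch (not from the talk) that computes the Most Probable Explanation in a tiny Bayesian network by brute-force enumeration. Given evidence, the MPE is the single most probable joint assignment of the remaining variables, i.e. a complete "story" explaining the observation. The burglary/earthquake/alarm probabilities are hypothetical.

  # Sketch: MPE by enumeration in the classic burglary-alarm network.
  from itertools import product

  p_b = {True: 0.01, False: 0.99}      # P(burglary)
  p_e = {True: 0.02, False: 0.98}      # P(earthquake)
  p_a = {                              # P(alarm=True | burglary, earthquake)
      (True, True): 0.95, (True, False): 0.94,
      (False, True): 0.29, (False, False): 0.001,
  }

  def joint(b, e, a):
      """P(B=b, E=e, A=a) under the network B -> A <- E."""
      pa = p_a[(b, e)]
      return p_b[b] * p_e[e] * (pa if a else 1 - pa)

  # Evidence: the alarm went off. Enumerate assignments of the other variables.
  best = max(product([True, False], repeat=2),
             key=lambda be: joint(be[0], be[1], True))
  print("MPE given alarm=True:",
        {"burglary": best[0], "earthquake": best[1]},
        "p =", round(joint(best[0], best[1], True), 5))
  # -> burglary=True, earthquake=False is the most probable explanation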

Arash Shaban-Nejad on 4/17     (2E17)

Previous Meetings     (2F)


Next Meetings     (2G)