= Ontology Summit 2019 Medical Explanation Session 2 =

{|
! scope="row" | Session
| Medical
|-
! scope="row" | Duration
| 1.5 hour
|-
! scope="row" | Date/Time
| Mar 27 2019 16:00 GMT <br/> 9:00am PDT/12:00pm EDT <br/> 4:00pm GMT/5:00pm CET
|-
! scope="row" | Co-Champions:
| [[RamDSriram|Ram D. Sriram]] and [[DavidWhitten|David Whitten]]
|}

== Agenda ==

The speaker today is:

* Ugur Kursuncu and Manas Gaur
** Explainability of Medical AI through Domain Knowledge
** [http://bit.ly/2YuPUe3 Video Recording]
** Wright State University

Upcoming speaker (April 17):

* Arash Shaban-Nejad, Ph.D., MPH
** Assistant Professor, UTHSC-ORNL Center for Biomedical Informatics
** UTHSC Department of Pediatrics
** Adjunct Faculty, The Bredesen Center for Interdisciplinary Research, The University of Tennessee, Knoxville
 
== Conference Call Information ==
  
 
== Attendees ==

* [[BruceBray|Bruce Bray]]
* [[KenBaclawski|Ken Baclawski]]
* [[RaviSharma|Ravi Sharma]]
* [[TerryLongstreth|Terry Longstreth]]
* [[ToddSchneider|Todd Schneider]]
  
 
== Proceedings ==

[12:09] Ken Baclawski: The recording will be posted after the session, and an outline of the slide content is posted below.

[12:16] RaviSharma: with help of live chat I can now see his slides.

[12:18] ToddSchneider: How should we understand the notion 'concept based information'?

[12:22] RaviSharma: Ugur - what is either the improvement in diagnosis with use of All AI data, multimodal medical data vs only social media data?

[12:23] RaviSharma: or improvement in probability of social media based only data?

[12:26] ToddSchneider: Is the mapping of the 'personal data' into/with the medical knowledge based on natural language terms or phrases?

[12:26] RaviSharma: how does openness of patient in soc media affect the result?

[12:28] RaviSharma: medical entity data?

[12:28] TerryLongstreth: At what point do the subjects (patients..?) know that their social media accounts are being scanned/extracted? Did the research control for intentional misdirection on the part of subjects after they learned? Or was the use of social media data covert/hidden from the subjects?

[12:32] ToddSchneider: Is there an assumption of the existence of social media data for a person?

[12:32] RaviSharma: if you limit the social interaction among the similar patients what do you expect the result to be compared to social media data?

[12:36] ToddSchneider: Perhaps a better question is 'How much personal data is needed' (for the system to be 'useful')?

[12:50] Ken Baclawski: Arash Shaban-Nejad, "Semantic Analytics for Global Health Surveillance", will be speaking on April 17. Slides are available at http://bit.ly/2YvlHLK

[12:53] TerryLongstreth: PSQ9 - questionnaire

[12:54] RaviSharma: thanks ken

[12:59] ToddSchneider: I have to get to another meeting. Thank you.

[13:02] RaviSharma: please upload speaker slides, thanks

[13:06] RaviSharma: thanks to speakers
  
 
== Resources ==

The following is an outline of the slide content, not including the images.

<span style="color:blue">1. Explainability of Medical AI through Domain Knowledge</span>

* Ugur Kursuncu and Manas Gaur, with Krishnaprasad Thirunarayan and Amit Sheth
* Kno.e.sis Research Center
** Department of Computer Science and Engineering
** Wright State University, Dayton, Ohio USA

<span style="color:blue">2. Why AI Systems in Medical Systems</span>

* Growing need for clinical expertise
* Need for rapid and accurate analysis of growing healthcare big data, including Patient Generated Health Data and Precision Medicine data
** Improve productivity, efficiency, workflow, accuracy, and speed, both for doctors and for patients
** Patient empowerment through smart (actionable) health data

<span style="color:blue">3. Why Explainability in Medical AI Systems</span>

* Trust in AI systems by clinicians and other stakeholders
* Major healthcare consequences
* Legal requirements; need to adhere to guidelines/protocols
* More significant for some specific medical fields, such as mental health

<span style="color:blue">4. Patient-Doctor Relationship</span>

* Cultural and political reasons for ownership of personal data
* Privacy concerns for personal data:
** Two stages for permission to use: model creation, personal health decision-making
** Incomplete data due to privacy concerns
* How would AI systems treat patients?
* For personalized healthcare: researchers, analysts, or doctors need such personal data to provide explainable decisions supported by AI systems

<span style="color:blue">5. How will AI assist humans in the medical domain?</span>

* Intelligent assistants through conversational AI (chatbots)
* Multimodal personal data
** Text, voice, image, sensors, demographics
* Help reduce physician burnout
* Legal implications

<span style="color:blue">6. Challenges</span>

* Common ground and understanding between machines and humans
** Forming cognitive associations
* Big multimodal data
* Ultimate goal: recommending or acting?

<span style="color:blue">7. Problem: Reasoning over the outcome</span>

* How were the conclusions arrived at?
* If some unintuitive/erroneous conclusions were obtained, how can we trace back and reason about them?

<span style="color:blue">8. A Mental Health Use Case</span>

* Clinician
** Pre-Visit
** In-Visit
** Post-Visit

* Patient
** Recommendations on:
*** Cost
*** Location
*** Relevance to disease

* Big multimodal data for humans!
** Capacity
** Performance
** Efficiency
** Explainability

* Explainability is required as to how data is relevant and significant with respect to the patient's situation

<span style="color:blue">9. Explainability vs Interpretability</span>

* Explainability is the combination of interpretability and traceability via a knowledge graph

<span style="color:blue">10. A Mental Health Use Case</span>

* From Patient to Social Media to Clinician to Healthcare

<span style="color:blue">11. A Mental Health Use Case</span>

* From Patient to Clinician via a black-box AI system

<span style="color:blue">12. A Mental Health Use Case</span>

* What is the severity level of suicide risk of a patient?
* ML can be applied to a variety of input data: text, image, network, sensor, knowledge

<span style="color:blue">13. Explainability with Knowledge</span>

* Explainability through incorporation of knowledge graphs in machine learning processes (a short illustrative sketch follows this slide)
** Knowledge enhancement before the model is trained
** Knowledge harnessing after the model is trained
** Knowledge infusion while the model is trained
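
The slides do not give implementation details for these three integration points, so the following is only a minimal, hypothetical Python sketch contrasting them; it is not the speakers' system, and the <code>kg_concepts</code> helper with its toy lexicon is an invented placeholder for a lookup against a medical knowledge graph such as the Mental Health Ontology described on slide 27.

<syntaxhighlight lang="python">
# Illustrative sketch only -- not the CIKM/WWW systems described in the talk.
# kg_concepts() is a hypothetical stand-in for a medical knowledge-graph lookup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def kg_concepts(text):
    """Map a post to domain-ontology concepts via a toy phrase lexicon."""
    lexicon = {"can't sleep": "Insomnia", "hopeless": "Hopelessness"}
    return [concept for phrase, concept in lexicon.items() if phrase in text.lower()]

# 1) Knowledge enhancement (before training): augment the raw text with the
#    matched concepts so the model is trained on knowledge-enriched features.
def enhance(posts):
    return [p + " " + " ".join(kg_concepts(p)) for p in posts]

posts = ["I feel hopeless and can't sleep", "Great day at the clinic"]
labels = [1, 0]
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(enhance(posts), labels)

# 2) Knowledge harnessing (after training): trace a prediction back to the
#    ontology concepts found in the input and report them as the explanation.
def explain(post):
    prediction = model.predict(enhance([post]))[0]
    return prediction, kg_concepts(post)

print(explain("Lately everything feels hopeless"))

# 3) Knowledge infusion (while training) would instead change the model itself,
#    e.g. through a knowledge-aware loss or modulation function (see slide 24).
</syntaxhighlight>
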
<span style="color:blue">14. Explanation through knowledge enhancement</span>

<span style="color:blue">15. Relevant Research: Explaining the prediction of mental health disorders (CIKM 2018)</span>

<span style="color:blue">16. Relevant Research: Explaining the prediction of mental health disorders (CIKM 2018)</span>

* Explanation through word features that are created through the Semantic Encoding and Decoding Optimization technique (a simplified sketch of the encode/decode idea follows)
** Semantic encoding of personal data into knowledge space
** Semantic decoding of knowledge into personal data space
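
The slides do not define the Semantic Encoding and Decoding Optimization technique itself, so the sketch below is only a simplified, hypothetical illustration of the encode/decode idea: text is mapped into concept space, and concepts are mapped back to the word features that can be shown as an explanation. The <code>concept_lexicon</code> is invented for the example.

<syntaxhighlight lang="python">
# Hypothetical illustration of the encode/decode idea -- not the actual SEDO
# optimization from the CIKM 2018 paper. concept_lexicon stands in for the
# medical knowledge base.
concept_lexicon = {
    "Insomnia": ["can't sleep", "awake all night"],
    "Hopelessness": ["hopeless", "no way out"],
}

def semantic_encode(post):
    """Encode personal-data text into knowledge space (matched concepts)."""
    text = post.lower()
    return [concept for concept, phrases in concept_lexicon.items()
            if any(phrase in text for phrase in phrases)]

def semantic_decode(concepts):
    """Decode knowledge back into the personal-data space: the word features
    that can be presented to a clinician as the explanation."""
    return [phrase for concept in concepts for phrase in concept_lexicon[concept]]

concepts = semantic_encode("I feel hopeless and can't sleep")
print(concepts)                   # ['Insomnia', 'Hopelessness']
print(semantic_decode(concepts))  # word features backing the prediction
</syntaxhighlight>
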
<span style="color:blue">17. Relevant Research: Explaining the prediction of mental health disorders (CIKM 2018)</span>

<span style="color:blue">18. Relevant Research: Explaining the prediction of mental health disorders (CIKM 2018)</span>

<span style="color:blue">19. Relevant Research: Explaining the prediction of severity of suicide risk (WWW 2019)</span>

<span style="color:blue">20. Relevant Research: Explaining the prediction of severity of suicide risk (WWW 2019)</span>

* Progression of users through severity levels of suicide risk

<span style="color:blue">21. Explanation through Knowledge Harvesting</span>

<span style="color:blue">22. Relevant Research: Explaining the prediction with wisdom of crowd (WebInt 2018)</span>

<span style="color:blue">23. Explanation through Knowledge Infusion</span>

<span style="color:blue">24. Explanation through Knowledge Infusion</span>

* Learning what specific medical knowledge is more important as the information is processed by the model
* Measuring the importance of such infused knowledge
* Specific functions and how they can be operationalized for explainability (an illustrative sketch follows)
** Knowledge-Aware Loss Function (K-LF)
** Knowledge-Modulation Function (K-MF)
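
The slides name these two functions without defining them, so the forms below are assumptions made only for illustration: a loss term that penalizes predictions conflicting with knowledge-graph support, and a modulation step that gates hidden features by concept relevance.

<syntaxhighlight lang="python">
# Hedged sketch of how a Knowledge-Aware Loss Function (K-LF) and a
# Knowledge-Modulation Function (K-MF) might be wired into training.
# The concrete definitions are not given in the slides; these forms are
# assumptions for illustration only.
import numpy as np

def knowledge_aware_loss(y_true, y_prob, kg_support, alpha=0.5):
    """Binary cross-entropy plus a penalty when the predicted probability
    disagrees with the degree of support found in the knowledge graph
    (kg_support is assumed to lie in [0, 1])."""
    eps = 1e-9
    ce = -(y_true * np.log(y_prob + eps) + (1 - y_true) * np.log(1 - y_prob + eps))
    return ce + alpha * (y_prob - kg_support) ** 2

def knowledge_modulation(hidden, concept_relevance):
    """Scale each hidden feature by the relevance of its associated
    knowledge-graph concept, so the knowledge gates what is passed forward."""
    return hidden * concept_relevance

print(knowledge_aware_loss(1.0, 0.8, kg_support=0.9))
print(knowledge_modulation(np.array([0.2, 1.5]), np.array([1.0, 0.1])))
</syntaxhighlight>
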
<span style="color:blue">25. Evaluation</span>

* ROC & AUC
** Assessment of true positive and false positive rates, to properly measure feature importance
* Inverse Probability Estimates
** Estimate the counterfactual or potential outcome if all patients in the dataset were assigned either label or have close estimated probabilities
* PRM: Perceived Risk Measure
** The ratio of disagreement between the predicted and actual outcomes, summed over disagreements between the annotators, multiplied by a reduction factor that reduces the penalty if the prediction matches any other annotator (an illustrative sketch follows)
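
ROC/AUC can be computed with standard tooling; the <code>perceived_risk_measure</code> function below is only one possible reading of the prose description above (full penalty for a disagreement with the gold label, reduced penalty when the prediction matches some other annotator, normalized by annotator disagreement), not the exact measure from the WWW 2019 paper.

<syntaxhighlight lang="python">
# Evaluation sketch. ROC AUC uses scikit-learn's standard routine; the
# perceived_risk_measure() below is a hypothetical reading of the slide's
# prose description, not the exact PRM definition from the WWW 2019 paper.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]
print("AUC:", roc_auc_score(y_true, y_score))

def perceived_risk_measure(predicted, gold, other_annotators, reduction=0.5):
    """Sum per-case penalties for predictions that disagree with the gold label,
    shrinking the penalty whenever the prediction matches at least one other
    annotator, and normalize by the disagreement among annotators."""
    penalty = 0.0
    annotator_disagreements = 0
    for pred, true, others in zip(predicted, gold, other_annotators):
        if pred != true:
            penalty += reduction if pred in others else 1.0
        annotator_disagreements += sum(1 for o in others if o != true)
    return penalty / annotator_disagreements if annotator_disagreements else 0.0

# Severity labels on an ordinal scale (e.g. 0 = no risk ... 3 = severe).
print("PRM:", perceived_risk_measure([2, 1], [1, 1], [[2, 1], [1, 1]]))
</syntaxhighlight>
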
<span style="color:blue">26. Evaluation</span>

<span style="color:blue">27. Mental Health Ontology</span>

* Extensively used in this research
* Built based on DSM-5, the main guideline document for psychiatrists
* Includes: SNOMED CT, Drug Abuse Ontology, and slang terms

<span style="color:blue">28. Key Takeaways</span>

* Medical explainability is a necessity to build trust with the medical community
* Three ways of achieving explainability with knowledge
* Interpretability and traceability are necessary and sufficient conditions for explainability
* Infusing knowledge would further enhance the reasoning capabilities

<span style="color:blue">29. Questions?</span>
  
 
== Previous Meetings ==

== Next Meetings ==