From OntologPSMW

Session: Medical
Duration: 1.5 hours
Date/Time: 27 Mar 2019 16:00 GMT
    9:00am PDT / 12:00pm EDT
    4:00pm GMT / 5:00pm CET
Co-Champions: Ram D. Sriram and David Whitten


Ontology Summit 2019 Medical Explanation Session 2     (2)

Agenda     (2A)

The speaker today is:     (2A1)

Conference Call Information     (2B)

Attendees     (2C)

Proceedings     (2D)

[12:09] Ken Baclawski: The recording will be posted after the session, and an outline of the slide content is posted below.     (2D1)

[12:16] RaviSharma: With the help of the live chat I can now see his slides.     (2D2)

[12:18] ToddSchneider: How should we understand the notion 'concept based information'?     (2D3)

[12:22] RaviSharma: Ugur - what is the improvement in diagnosis when using all AI data (multimodal medical data) versus only social media data?     (2D4)

[12:23] RaviSharma: or the improvement in probability when using social media data only?     (2D5)

[12:26] ToddSchneider: Is the mapping of the 'personal data' into/with the medical knowledge based on natural language terms or phrases?     (2D6)

[12:26] RaviSharma: how does openness of patient in soc media affect the result?     (2D7)

[12:28] RaviSharma: medical entity data?     (2D8)

[12:28] TerryLongstreth: At what point do the subjects (patients..?) know that their social media accounts are being scanned/extracted? Did the research control for intentional misdirection on the part of subjects after they learned? Or was the use of social media data covert/hidden from the subjects?     (2D9)

[12:32] ToddSchneider: Is there an assumption of the existence of social media data for a person?     (2D10)

[12:32] RaviSharma: if you limit the social interaction among the similar patients what do you expect the result to be compared to social media data?     (2D11)

[12:36] ToddSchneider: Perhaps a better question is 'How much personal data is needed' (for the system to be 'useful')?     (2D12)

[12:50] Ken Baclawski: Arash Shaban-Nejad, "Semantic Analytics for Global Health Surveillance", will be speaking on April 17. Slides are available at     (2D13)

[12:53] TerryLongstreth: PHQ-9 - questionnaire     (2D14)

[12:54] RaviSharma: thanks ken     (2D15)

[12:59] ToddSchneider: I have to get to another meeting. Thank you.     (2D16)

[13:02] RaviSharma: please upload speaker slides, thanks     (2D17)

[13:06] RaviSharma: thanks to speakers     (2D18)

Resources     (2E)

The following is an outline of the slide content, not including the images.     (2E1)

1. Explainability of Medical AI through Domain Knowledge     (2E2)

with Krishnaprasad Thirunarayan and Amit Sheth     (2E4)

2. Why AI systems in Medical Systems     (2E6)

  • Growing need for clinical expertise     (2E7)
  • Need for rapid and accurate analysis of the growing volume of healthcare big data, including Patient-Generated Health Data and Precision Medicine data     (2E8)
    • Improve productivity, efficiency, workflow, accuracy and speed, both for doctors and for patients     (2E8A)
    • Patient empowerment through smart (actionable) health data     (2E8B)

3. Why Explainability in Medical AI Systems     (2E9)

4. Patient-Doctor Relationship     (2E14)

  • Cultural and political reasons for ownership of personal data     (2E15)
  • Privacy concerns for personal data:     (2E16)
    • Two stages for permission to use: model creation, personal health decision-making     (2E16A)
    • Incomplete data due to privacy concerns     (2E16B)
  • How would AI systems treat patients?     (2E17)
  • For personalized healthcare: researchers, analysts, or doctors need such personal data to provide explainable decisions supported by AI systems     (2E18)

5. How will AI assist humans in medical domain?     (2E19)

6. Challenges     (2E24)

7. Problem: Reasoning over the outcome     (2E28)

  • How were the conclusions arrived at?     (2E29)
  • If some unintuitive/erroneous conclusions were obtained, how can we trace back and reason about them?     (2E30)

8. A Mental Health Use Case     (2E31)

  • Explainability is required to show how the data is relevant and significant to the patient's situation     (2E35)

9. Explainability vs Interpretability     (2E36)

  • Explainability is the combination of interpretability and traceability via a knowledge graph     (2E37)

10. A Mental Health Use Case     (2E38)

  • From Patient to Social Media to Clinician to Healthcare     (2E39)

11. A Mental Health Use Case     (2E40)

12. A Mental Health Use Case     (2E42)

  • What is the severity level of suicide risk of a patient?     (2E43)
  • ML can be applied to a variety of input data: Text, image, network, sensor, knowledge     (2E44)

13. Explainability with Knowledge     (2E45)

14. Explanation through knowledge enhancement     (2E47)

15. Relevant Research: Explaining the prediction of mental health disorders (CIKM 2018)     (2E48)

16. Relevant Research: Explaining the prediction of mental health disorders (CIKM 2018)     (2E49)

  • Explanation through word features that are created through the Semantic Encoding and Decoding Optimization technique     (2E50)

17. Relevant Research: Explaining the prediction of mental health disorders (CIKM 2018)     (2E51)

18. Relevant Research: Explaining the prediction of mental health disorders (CIKM 2018)     (2E52)

19. Relevant Research: Explaining the prediction of severity of suicide risk (WWW 2019)     (2E53)

20. Relevant Research: Explaining the prediction of severity of suicide risk (WWW 2019)     (2E54)

  • Progression of users through severity levels of suicide risk     (2E55)

21. Explanation through Knowledge Harvesting     (2E56)

22. Relevant Research: Explaining the prediction with the wisdom of the crowd (WebInt 2018)     (2E57)

23. Explanation through Knowledge Infusion     (2E58)

24. Explanation through Knowledge Infusion     (2E59)

  • Learning what specific medical knowledge is more important as the information is processed by the model     (2E60)
  • Measuring the importance of such infused knowledge     (2E61)
  • Specific functions and how they can be operationalized for explainability     (2E62)

25. Evaluation     (2E63)

  • ROC & AUC     (2E64)
    • Assessment of true positive and false positive rates, to properly measure feature importance     (2E64A)
  • Inverse Probability Estimates     (2E65)
    • Estimate the counterfactual or potential outcome if all patients in the dataset were assigned either label or had close estimated probabilities     (2E65A)
  • PRM: Perceived Risk Measure     (2E66)
    • The ratio of disagreement between the predicted and actual outcomes, summed over disagreements between the annotators, multiplied by a reduction factor that lowers the penalty when the prediction matches any other annotator     (2E66A)
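The PRM definition above can be sketched in a few lines of Python. This is a minimal illustration of the prose definition only: the function name, the annotation format (one gold label plus a list of annotator labels per example), and the reduction-factor value of 0.5 are assumptions, not details from the talk.

```python
def perceived_risk_measure(predictions, gold_labels, annotator_labels,
                           reduction=0.5):
    """Hypothetical sketch of a Perceived Risk Measure (PRM).

    A penalty of 1 is charged when the prediction disagrees with the
    gold (adjudicated) label; the penalty is scaled down by `reduction`
    when the prediction matches at least one annotator. The total is
    normalized by the number of annotator-vs-gold disagreements.
    """
    penalty = 0.0
    annotator_disagreements = 0
    for pred, gold, annos in zip(predictions, gold_labels, annotator_labels):
        # Count how often the annotators themselves disagreed with gold.
        annotator_disagreements += sum(1 for a in annos if a != gold)
        if pred != gold:
            # Reduced penalty if any annotator also chose the predicted label.
            penalty += reduction if pred in annos else 1.0
    return penalty / max(annotator_disagreements, 1)
```

With three examples where one prediction disagrees with gold but matches an annotator, and two annotator-gold disagreements overall, the measure is 0.5 / 2 = 0.25 — a lower perceived risk than a raw error count would suggest.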

26. Evaluation     (2E67)

27. Mental Health Ontology     (2E68)

  • Extensively used in this research     (2E69)
  • Built based on DSM-5, which is the main guideline documentation for psychiatrists     (2E70)
  • Includes: SNOMED-CT, Drug Abuse Ontology and Slang terms     (2E71)
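A toy sketch of how such an ontology can ground informal social-media language in clinical concepts. The terms, concept labels, and mapping below are invented for illustration — they are not actual SNOMED-CT, Drug Abuse Ontology, or DSM-5 content.

```python
# Hypothetical mini-lexicon in the spirit of the Mental Health Ontology:
# each informal/slang phrase points at a clinical concept label.
# All entries are illustrative placeholders, not real ontology terms.
SLANG_TO_CONCEPT = {
    "can't get out of bed": "fatigue / loss of energy",
    "done with everything": "hopelessness",
    "xanny": "alprazolam (benzodiazepine)",
}

def ground_post(post: str) -> list:
    """Return clinical concepts whose trigger phrases appear in a post."""
    text = post.lower()
    return [concept for slang, concept in SLANG_TO_CONCEPT.items()
            if slang in text]
```

Grounding informal text this way is what lets a downstream classifier's output be traced back to named clinical concepts rather than opaque word features.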

28. Key Takeaways     (2E72)

  • Medical explainability is a necessity for building trust in the medical community     (2E73)
  • Three ways of explainability with knowledge     (2E74)
  • Interpretability and traceability are necessary and sufficient conditions for explainability     (2E75)
  • Infusing knowledge would further enhance the reasoning capabilities     (2E76)

29. Questions?     (2E77)

Previous Meetings     (2F)

Next Meetings     (2G)