From OntologPSMW

Revision as of 20:33, 28 February 2019 by DavidWhitten (Talk | contribs)

Session: Medical
Duration: 1.5 hours (90 minutes)
Date/Time: Feb 13 2019 17:00 GMT
9:00am PST / 12:00pm EST
5:00pm GMT / 6:00pm CET
Convener & Co-Champions: Ram D. Sriram and David Whitten

Contents

Ontology Summit 2019 Medical Explanation Session 1     (2)

Explainable AI in Medicine has several facets.     (2A)

  • One facet is explaining, to the medical provider who is diagnosing a malady, a decision suggested by a computer system and the justification that supports it.     (2B)
  • Another is explaining a medical process to the patient, who may be a participant, and to their support team.     (2C)
  • Similarly, there is the explanation and justification to the billed party of such a process and why it is coded the way it is, whether that billed party is the patient, an insurance company or some other financier.     (2D)
  • A final facet is explaining the medical record to a future reader tracing the process of care.     (2E)

Agenda     (2F)

Conference Call Information     (2G)

Attendees     (2H)

Proceedings     (2I)

[12:03] DavidWhitten: Augie Turano was involved in the Department of Veterans Affairs effort to teach IBM Watson about medical records.     (2I1)

[12:05] DavidWhitten: Many of the current AI efforts are using neural networks to train recognition of data and patterns in data.     (2I2)

[12:07] DavidWhitten: The hope is that an explainable system would help doctors reconcile a diagnosis and match it against the data in the medical record.     (2I3)

[12:09] DavidWhitten: There are great advantages in recognizing a diagnosis as early as possible, perhaps using automated pathology to examine cells for irregularities that might indicate cancer, etc.     (2I4)

[12:10] DavidWhitten: Processing a medical record involves recognizing pre-defined fields and data as well as using natural language processing on notes about patients from providers.     (2I5)
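
The two-track processing described above can be sketched in a few lines of Python; the record layout, field names, and keyword lexicon here are hypothetical illustrations, not the VA's actual schema:

```python
import re

# Toy lexicon standing in for real clinical NLP (which would use far richer methods).
SYMPTOM_TERMS = {"fatigue", "fever", "cough"}

def process_record(record):
    """Pull pre-defined coded fields directly; scan the free-text note for terms."""
    structured = {
        "patient_id": record["patient_id"],
        "icd10": record.get("icd10_codes", []),
    }
    words = set(re.findall(r"[a-z]+", record.get("note", "").lower()))
    mentions = sorted(words & SYMPTOM_TERMS)
    return structured, mentions

record = {
    "patient_id": "P001",
    "icd10_codes": ["J06.9"],
    "note": "Patient reports persistent cough and mild fever.",
}
structured, mentions = process_record(record)
print(structured["icd10"], mentions)
```

The structured fields are recovered by simple lookup, while the free-text note needs text analysis — the asymmetry the session is pointing at.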

[12:11] DavidWhitten: Augie works with Data Warehousing in the VA, which receives data almost in real time (within 5 minutes or so) from providers' interactions with patients, quickly building a large sample size.     (2I6)

[12:13] DavidWhitten: Explanations from neural nets require recognizing features and tying them to local levels of the network.     (2I7)

[12:13] Gary: Shadows are good examples of phenomena that humans, embedded in the world, experience and have to interpret, while artificial systems typically do not get this "experience".     (2I8)

[12:17] DavidWhitten: It is easy to build biases into a neural net's classifications: if the training data is heavily weighted along some dimension (for example, mostly data about males), the resulting net can be gender-biased.     (2I9)
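
One standard mitigation for such skew is to weight each class inversely to its frequency during training; a minimal sketch, assuming a made-up male-heavy label set (the formula matches the common "balanced" class-weight heuristic, as used in scikit-learn):

```python
from collections import Counter

# Hypothetical skewed training labels: 90% male, 10% female.
labels = ["M"] * 90 + ["F"] * 10

counts = Counter(labels)
n, k = len(labels), len(counts)

# weight(c) = n / (k * count(c)): rare classes get proportionally larger weights.
class_weight = {c: n / (k * counts[c]) for c in counts}
print(class_weight)
```

The minority class here receives 9x the weight of the majority class, so its examples are not drowned out by sheer volume.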

[12:19] RaviSharma: Augie - do understandable things mostly happen within neural layers,     (2I10)

[12:20] RaviSharma: or is there cross-layer understanding, and how does that class of cross-layer learning differ from learning within a layer?     (2I11)

[12:21] DavidWhitten: Ravi, I think Augie is not on chat, so we'll have to ask questions here and then bring them up after the two speakers during their Q & A time.     (2I12)

[12:21] DavidWhitten: Analysis for Knowledge Representation is tied to the learning problems     (2I13)

[12:24] Gary: Estevam Hruschka (Associate Professor at Federal University of Sao Carlos DC-UFSCar & adjunct Professor at Carnegie Mellon University) spoke on work growing out of Never-Ending Language Learning (NELL) at our 2017 Summit. Slides at https://s3.amazonaws.com/ontologforum/OntologySummit2017/TrackA/OverviewOfNELL--EstevamHruschka_20170308.pdf     (2I14)

[12:28] RaviSharma: David OK i am just documenting what to ask     (2I15)

[12:29] DavidWhitten: You can activate an environment to do AI recognition using TensorFlow, and many Docker images exist too.     (2I16)

[12:30] RaviSharma: Augie - what tools are there besides PET, EEG, and fMRI for monitoring the dynamics of neural activity?     (2I17)

[12:31] DavidWhitten: Between 1 million and 1.5 million notes come in daily, and at least 4.5 billion (14 terabytes) already exist; this requires fast natural language processing.     (2I18)
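
A back-of-the-envelope check on those figures (the totals are from the chat; the per-note size and daily ingest rate are derived here, not stated in the session):

```python
# Figures quoted in the session.
total_notes = 4.5e9    # at least 4.5 billion notes
total_bytes = 14e12    # 14 terabytes
daily_notes = 1.25e6   # midpoint of 1-1.5 million notes/day

# Derived: average note size (~3.1 KB) and daily ingest volume.
avg_note_bytes = total_bytes / total_notes
daily_ingest_gb = daily_notes * avg_note_bytes / 1e9

print(round(avg_note_bytes), round(daily_ingest_gb, 1))
```

So each note averages a few kilobytes, and the daily inflow is on the order of a few gigabytes of free text — small per document, but the cumulative corpus is what makes fast NLP necessary.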

[12:32] RaviSharma: Augie - is there an ML or AI success story, or a correlation between the above measurements and the level or extent of learning?     (2I19)

[12:36] RaviSharma: Augie - on slide 13, where there are multiple diagnoses, does the AI diagnose based on likelihood, and does it rely on patient databases to determine likelihood?     (2I20)

[12:37] RaviSharma: Augie - are multiple diagnoses based on matches with SNOMED, MEDLINE, or other authoritative sources?     (2I21)

[12:41] DavidWhitten: The VA record system has ties between an encounter with a patient and standard coding systems such as ICD-9, ICD-10, CPT, SNOMED CT, etc.     (2I22)
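
A minimal sketch of what such ties might look like as a data structure; the field names are hypothetical and the codes merely illustrative (here, a routine hypertension visit expressed in three coding systems):

```python
# Hypothetical encounter record linking one visit to multiple standard coding systems.
encounter = {
    "encounter_id": "E-1001",
    "codes": {
        "ICD10": ["I10"],           # diagnosis: essential hypertension
        "CPT": ["99213"],           # procedure: established-patient office visit
        "SNOMED_CT": ["38341003"],  # clinical concept: hypertensive disorder
    },
}

def codes_for(enc, system):
    """Return the codes an encounter carries in a given coding system."""
    return enc["codes"].get(system, [])

print(codes_for(encounter, "CPT"))
```

The same clinical event thus appears under several vocabularies at once, which is what lets billing, decision support, and research each query the record in their own terms.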

[12:42] Gary: Have the difficulties with this effort been documented and published with lessons learned?     (2I23)

[12:44] DavidWhitten: I expect that most information is anecdotal. When a system is going well, it is easy to find resources to document and publish. When a system doesn't meet expectations, it is much harder to find the resources.     (2I24)

[12:44] DavidWhitten: Generating labeled data is a lot of work.     (2I25)

[12:45] RaviSharma: Augie - Does a set of genes correlate with behavior type? If so, what is the indicator of changed behavior or a mood swing: dynamics within a gene, or jumping genes?     (2I26)

[12:55] Gary: Have to go will check the chat and recording later.     (2I27)

[12:58] TerryLongstreth: Me too     (2I28)

[12:59] RaviSharma: Ram - while image processing deals with pixels, or changes in pixels, for likely diagnoses, what are the successes in dynamic visualizations and dynamic patterns?     (2I29)

[13:04] Mark Underwood: Unfortunate overloading of "SSID"     (2I30)

[13:14] RaviSharma: Ram - on microspectroscopic imaging, I understand that they identify an anomaly and then, by doing Raman spectroscopy in situ, analyze the content of the anomaly?     (2I31)

[13:19] RaviSharma: Ram - medical diagnosis requires multi-modal reasoning, yes, but where we find success, ought we to reuse the same sequence or pattern?     (2I32)

[13:20] DavidWhitten: Isn't a rule a way to classify a sequence or pattern so it can be reused?     (2I33)

[13:42] Mark Underwood: "tedious" - point taken. And it uses up the budget.     (2I34)

[13:43] Mark Underwood: Sorry to hear HL7 FHIR isn't helping in some way. A lot of interop work is going into making it work for specific workflows.     (2I35)

Resources     (2J)

Previous Meetings     (2K)


Next Meetings     (2L)