From OntologPSMW

Session Neuro-Symbolic Learning Ontologies
Duration 1.5 hours
Date/Time 05 May 2021 16:00 GMT
9:00am PDT/12:00pm EDT
5:00pm BST/6:00pm CEST
Convener Ram D. Sriram
Track C


Ontology Summit 2021 Neuro-Symbolic Learning Ontologies     (2)

Ontologies are a rich and versatile construct. They can be extracted, learned, modularized, interrelated, transformed, analyzed, and harmonized as well as developed in a formal process. This summit will explore the many kinds of ontologies and how they can be manipulated. The goal is to acquaint both current and potential users of ontologies with the possibilities for how ontologies could be used for solving problems.     (2A)

Agenda     (2B)

  • 12:00 - 12:30 EDT Pavan Kapanipathi, IBM, Getting AI to Reason: Using NeuroSymbolic AI for Knowledge-based Question Answering Slides Video Recording     (2B1)
    • Abstract: Knowledge Base Question Answering (KBQA) is one of the prominent tasks in Natural Language Processing with real-world applications. In this talk, I will discuss our neuro-symbolic question answering system (NSQA) that translates natural language questions into logic, hence facilitating the use of neuro-symbolic reasoners such as Logical Neural Networks. NSQA is presently the state of the art on two prominent KBQA datasets. Furthermore, knowledge bases are naturally incomplete, and a KBQA system has to address this challenge. I will introduce extensions of our NSQA system on two secondary tasks: knowledge base completion and complex query answering over incomplete knowledge graphs.     (2B1A)
  • 12:30 - 13:00 EDT Spencer Breiner, NIST, Ontology and the Bayesian Brain Slides Video Recording     (2B2)
    • Abstract: The Bayesian brain program aims to identify biologically plausible formal models for neural processes using the language of information geometry and statistical mechanics. The resulting theory of inference and behavior has implications for engineered systems that aim to maintain "homeostasis" far from a dynamical equilibrium and in the face of environmental fluctuations. I will briefly review the core mathematical components, and how these are interpreted to give theories of perception (predictive coding) and action (active inference). I will close with some thoughts on the complementarity between ontological and Bayesian methods.     (2B2A)
    • Bio: Dr. Spencer Breiner is a mathematician at the US National Institute of Standards and Technology, working in the Software and Systems Division of the Information Technology Laboratory. His research focuses on applications of category theory to problems in systems modeling and interoperability. Dr. Breiner received his Ph.D. from Carnegie Mellon University in 2013 before joining NIST in 2015.     (2B2B)
  • 13:00 - 13:30 EDT Discussion     (2B3)
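The NSQA abstract above describes translating natural-language questions into logic for a symbolic reasoner. As a toy illustration only (this is not IBM's NSQA, and the question template, predicate names, and variable conventions are all assumptions), a single question pattern might be mapped to a logical triple pattern like this:

```python
import re

def question_to_logic(question: str):
    """Toy sketch: map one question template to a (entity, predicate, variable)
    triple pattern that a symbolic reasoner could evaluate against a KB.
    Returns None when the question does not match the template."""
    m = re.match(r"What is the (\w+) of ([\w .]+)\?", question)
    if not m:
        return None
    predicate, entity = m.groups()
    # "?answer" stands for the unbound variable the reasoner would solve for.
    return (entity, predicate, "?answer")
```

A real system would replace the single regular expression with parsing into a semantic representation such as AMR (as discussed in the chat below), but the output shape, a logical query with an unbound answer variable, is the same idea.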

Conference Call Information     (2C)

Attendees     (2D)

Discussion     (2E)

[12:18] RaviSharma: how do you find a match between KG and AMR? Through SPARQL or other deeper algorithms?     (2E1)

[12:19] RaviSharma: Is it bidirectional between AMR and KG?     (2E2)

[12:23] RaviSharma: relation linking is hard, but what is a neural relation as compared to a relation between two entities (triples?)     (2E3)

[12:26] RaviSharma: what corresponds to stimulus like a current or voltage for a gate, what are the stimuli for neural gates?     (2E4)

[12:31] ToddSchneider: Nationality is a legal construct and need not be determined by a person's nation of birth.     (2E5)

[12:32] Douglas R. Miles: if the inference engine would have needed to add 899 plus 677 together to answer an inference, does it need the neural network to have an answer? Or will the system call a secondary system to perform the math?     (2E6)

[12:32] RaviSharma: 1000 training examples and then only a few hundred queries - is that small for QA?     (2E7)

[12:35] MikeBennett: Todd's point means that the underlying ontology is not right. Since it's just an example, no big deal, but it does raise the interesting question: how do you curate the ontology to use for these arrangements?     (2E8)

[12:36] RaviSharma: logic embedding is like solvers in tools such as flow modeling, where math or equations are embedded and parameter values are supplied by the tool?     (2E9)

[12:38] RaviSharma: how do you take care of uncertainties - Bayesian?     (2E10)

[12:39] Spencer Breiner: Great talk! Many questions...     (2E11)

[12:40] Spencer Breiner: How does the system address ambiguity b/w different valid answers? For example, "What is the birthplace of Michael Jordan?" can reasonably be answered by either "US" or "Brooklyn". How do you choose between them?     (2E12)

[12:40] RaviSharma: replied - activation function is like stimulus.     (2E13)

[12:41] ToddSchneider: Are the neural nets simulated or realized in hardware (e.g. ASICs)?     (2E14)

[12:41] Spencer Breiner: How does one-hop analysis address conflicting evidence from the knowledge base? For example, nationality for someone who was born one place but whose spouse is from another?     (2E15)

[12:44] TerryLongstreth: How to control for the connection trap in your binary relations?     (2E16)

[13:06] RaviSharma: Spencer - Helmholtz was an ophthalmologist, and here he is worrying about the interaction of photons with the eye, I think!     (2E17)

[13:08] RaviSharma: could the sensor be multilevel so that there are many links between eye and hands?     (2E18)

[13:08] ToddSchneider: What is 'Energy' in 'this context'?     (2E19)

[13:09] ToddSchneider: FEM = Free Energy Minimization(?)     (2E20)

[13:11] ToddSchneider: 'Generative' of what?     (2E21)

[13:11] RaviSharma: temperature rises so sweat glands open up     (2E22)

[13:12] TerryLongstreth: Generative state can be overloaded and impact actions?     (2E23)

[13:13] RaviSharma: are autonomous systems aware of experiences     (2E24)

[13:18] RaviSharma: are autonomous systems using historic (DNA ?) experiences to determine environmental changes influencing towards homeostasis     (2E25)

[13:24] RaviSharma: Spencer - the probabilistic ontology methods and tools you prescribe address a new and realistic need for ontologies to be real.     (2E26)

[13:24] Andrea Westerinen: Probabilities can be added directly to entities via attributes and to predicates via RDF* properties.     (2E27)

[13:24] RaviSharma: Can you also build strengths in relationships to mimic or model Nature?     (2E28)

[13:25] TerryLongstreth: @Spencer - p32 - probabilities in ontologies include dynamic adjustments to probabilities?     (2E29)

[13:25] Andrea Westerinen: @Spencer - probabilities are dynamic but specific in context (updated with data).     (2E30)

[13:26] RaviSharma: rarer events require more probabilistic and realistic models ontologies?     (2E31)

[13:26] Douglas R. Miles: if an agent were to create a textual log of its stochastic decisions, would the statistical methods also be sufficient for guessing what should be, or should have been, in the log?     (2E32)

[13:27] ToddSchneider: How should 'probabilistic ontology' be understood?     (2E33)

[13:27] RaviSharma: Hurray and everything at end is praiseworthy?     (2E34)

[13:28] MikeBennett: Can you replace the RDF triple with an analog value (a weighting) - would resemble an artificial neuron and represent a probability?     (2E35)

[13:28] Andrea Westerinen: @Todd - if you are inferring the existence of an entity, then probability makes sense.     (2E36)

[13:29] Andrea Westerinen: Probability can be 1     (2E37)

[13:30] MikeBennett: @Andrea - Right; = 'true' in a binary data set-up.     (2E38)

[13:30] ToddSchneider: Sorry. late for another meeting. Thank you.     (2E39)

[13:30] Andrea Westerinen: @Mike Why not attach probability to a predicate via RDF* - that is its weighting.     (2E40)

[13:30] MikeBennett: @Andrea That would work. Or property graphs.     (2E41)

[13:31] TerryLongstreth: Probability can't be 1, but can approach it as .9999999999999999999999999....     (2E42)

[13:31] MikeBennett: Or a quad store     (2E43)

[13:31] Andrea Westerinen: @Mike Since a property graph is an IRI, it can have a probability as well.     (2E44)

[13:31] BobbinTeegarden: Please have him send the slides. More from him would be great!     (2E45)

[13:31] Andrea Westerinen: @Terry Agree. You never want 0/1 to allow for uncertainty.     (2E46)

[13:33] RaviSharma: Andrea - Yes I agree but that is what logicians always want     (2E47)

[13:33] Douglas R. Miles: The big question is whether probability is about making a decision or merely about hiding mistakes a decision procedure     (2E48)

[13:34] Janet Singer: Axiomatically, probability is well defined. But to relate probability to logic and ontology one needs to commit to an interpretation, and that is tricky https://en.m.wikipedia.org/wiki/Probability_interpretations     (2E49)

[13:34] Douglas R. Miles: hiding the mistakes implicit in the decision procedure     (2E50)

[13:36] Andrea Westerinen: @Ravi The concept of defaults vs less likely but possible entities/relations also comes into play.     (2E51)
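Several chat comments above ([13:24], [13:28]-[13:31]) concern attaching probabilities to entities and predicates, and keeping them strictly between 0 and 1 to preserve uncertainty. A minimal sketch of that idea, using a plain Python structure rather than RDF* (the entity names, the predicate names, and the probability values are all illustrative assumptions, not data from the session):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WeightedTriple:
    """A triple carrying a probability weight, as discussed in the chat."""
    subject: str
    predicate: str
    obj: str
    probability: float

    def __post_init__(self):
        # Per the chat ([13:31]): keep probability strictly inside (0, 1)
        # so the representation always allows for uncertainty.
        if not 0.0 < self.probability < 1.0:
            raise ValueError("probability must stay strictly in (0, 1)")

# Illustrative weighted graph (hypothetical values).
graph = [
    WeightedTriple("ex:MichaelJordan", "ex:birthplace", "ex:Brooklyn", 0.95),
    WeightedTriple("ex:MichaelJordan", "ex:nationality", "ex:US", 0.99),
]

def most_likely(graph, subject, predicate):
    """Return the highest-probability triple for (subject, predicate), or None."""
    candidates = [t for t in graph
                  if t.subject == subject and t.predicate == predicate]
    return max(candidates, key=lambda t: t.probability) if candidates else None
```

In an RDF* setting the weight would instead annotate the quoted triple itself (e.g. `<< :s :p :o >> :probability 0.95`), which is the approach suggested at [13:30]; the sketch above only mimics that with an in-memory structure.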

Resources     (2F)

Previous Meetings     (2G)


Next Meetings     (2H)