Ontolog Forum

  • Session: Neuro-Symbolic Learning Ontologies
  • Duration: 1.5 hours
  • Date/Time: 05 May 2021 16:00 GMT (9:00am PDT / 12:00pm EDT / 5:00pm BST / 6:00pm CEST)
  • Convener: Ram D. Sriram
  • Track: C

Ontology Summit 2021 Neuro-Symbolic Learning Ontologies

Ontologies are rich and versatile constructs. They can be extracted, learned, modularized, interrelated, transformed, analyzed, and harmonized, as well as developed in a formal process. This summit explores the many kinds of ontologies and how they can be manipulated. The goal is to acquaint both current and potential users of ontologies with the possibilities for how ontologies can be used to solve problems.

Agenda

  • 12:00 - 12:30 EDT Pavan Kapanipathi, IBM, Getting AI to Reason: Using NeuroSymbolic AI for Knowledge-based Question Answering Slides Video Recording
    • Abstract: Knowledge Base Question Answering (KBQA) is one of the prominent tasks in Natural Language Processing, with real-world applications. In this talk, I will discuss our neuro-symbolic question answering system (NSQA), which translates natural language questions into logic, thereby facilitating the use of neuro-symbolic reasoners such as Logical Neural Networks. NSQA is presently the state of the art on two prominent KBQA datasets. Furthermore, knowledge bases are naturally incomplete, and a KBQA system has to address this challenge. I will introduce extensions of our NSQA system on two secondary tasks: knowledge base completion and complex query answering over incomplete knowledge graphs. (A toy sketch of the question-to-logic pipeline follows the agenda.)
  • 12:30 - 13:00 EDT Spencer Breiner, NIST, Ontology and the Bayesian Brain Slides Video Recording
    • Abstract: The Bayesian brain program aims to identify biologically plausible formal models for neural processes using the language of information geometry and statistical mechanics. The resulting theory of inference and behavior has implications for engineered systems that aim to maintain "homeostasis" far from a dynamical equilibrium and in the face of environmental fluctuations. I will briefly review the core mathematical components, and how these are interpreted to give theories of perception (predictive coding) and action (active inference). I will close with some thoughts on the complementarity between ontological and Bayesian methods. (A minimal predictive-coding sketch follows the agenda.)
    • Bio: Dr. Spencer Breiner is a mathematician at the US National Institute of Standards and Technology (NIST), working in the Software & Systems Division of the Information Technology Laboratory. His research focuses on applications of category theory to problems in systems modeling and interoperability. Dr. Breiner received his Ph.D. from Carnegie Mellon University in 2013 before joining NIST in 2015.
  • 13:00 - 13:30 EDT Discussion
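
A toy sketch to accompany Kapanipathi's abstract above. This is not IBM's NSQA: a regular expression stands in where a real system would use a semantic parse (e.g., AMR, as discussed in the chat below), and the knowledge base, names, and question pattern are all hypothetical. It only illustrates the shape of the pipeline the talk describes: natural-language question -> logical form -> evaluation against a knowledge base.

```python
# Toy QA pipeline: question -> logical form -> KB lookup.
# NOT IBM's NSQA; all data and patterns are hypothetical.

import re

# Tiny knowledge base of (subject, predicate, object) triples.
KB = {
    ("Marie_Curie", "birthplace", "Warsaw"),
    ("Warsaw", "country", "Poland"),
}

def parse(question: str) -> tuple[str, str]:
    """Map a question to a logical form: a (predicate, subject) query.
    A regex stands in for the semantic parsing a real system performs."""
    m = re.match(r"Where was (\w+) born\?", question)
    if m:
        return ("birthplace", m.group(1))
    raise ValueError(f"cannot parse: {question!r}")

def answer(query: tuple[str, str]) -> list[str]:
    """Evaluate the logical form over the KB, like a symbolic reasoner."""
    predicate, subject = query
    return [o for (s, p, o) in KB if s == subject and p == predicate]

print(answer(parse("Where was Marie_Curie born?")))  # -> ['Warsaw']
```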

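A minimal predictive-coding sketch to accompany Breiner's abstract above, assuming a single Gaussian latent variable and unit precisions; all numbers are invented for illustration. The internal estimate descends the gradient of a free-energy-like quantity (here, the sum of squared prediction errors).

```python
# One-variable predictive coding: the internal estimate mu is pulled
# between bottom-up sensory evidence and a top-down prior, settling at
# a compromise that minimizes total squared prediction error.

observations = [2.1, 1.9, 2.2, 2.0, 2.05]  # noisy sensory samples
prior_mean = 0.0                            # the generative model's prior
mu = prior_mean                             # current internal estimate
lr = 0.1                                    # integration (learning) rate

for y in observations * 20:                 # repeated sensory exposure
    sensory_error = y - mu                  # bottom-up prediction error
    prior_error = prior_mean - mu           # top-down (prior) error
    mu += lr * (sensory_error + prior_error)  # gradient step on the errors

# With equal (unit) precisions, mu settles near the compromise between
# the prior (0.0) and the data mean (~2.05), i.e. about 1.0.
print(round(mu, 2))
```
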
Conference Call Information

  • Date: Wednesday, 05 May 2021
  • Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
  • Expected Call Duration: 1.5 hours
  • The Video Conference URL is https://bit.ly/3i1uPRl
    • iPhone one-tap:
      • +16465588656,,83077436914#,,,,,,0#,,822275# US (New York)
      • +13017158592,,83077436914#,,,,,,0#,,822275# US (Germantown)
    • Telephone:
  • Chat Room: https://bit.ly/39PzQJW
    • If the chat room is not available, then use the Zoom chat room.

Attendees

Discussion

[12:18] RaviSharma: how do you find the match between KG and AMR? Through SPARQL or other deeper algorithms?

[12:19] RaviSharma: Is it bidirectional between AMR and KG?

[12:23] RaviSharma: relation linking is hard, but what is a neural relation as compared to a relation between 2 entities (triples)?

[12:26] RaviSharma: what corresponds to a stimulus, like a current or voltage for a gate? What are the stimuli for neural gates?

[12:31] ToddSchneider: Nationality is a legal construct and need not be determined by a person's nation of birth.

[12:32] Douglas R. Miles: if the inference engine needed to add 899 and 677 together to answer an inference, does it need the neural network to have the answer? Or will the system call a secondary system to perform the math?

[12:32] RaviSharma: 1000 training examples and then only a few hundred queries - is that a small QA set?

[12:35] MikeBennett: Todd's point means that the underlying ontology is not right. Since it's just an example, no big deal, but it does raise the interesting question: how do you curate the ontology to use for these arrangements?

[12:36] RaviSharma: logic embedding is like solvers in tools such as flow modeling, where math or equations are embedded and parameter values are supplied by the tool?

[12:38] RaviSharma: how do you take care of uncertainties - Bayesian?

[12:39] Spencer Breiner: Great talk! Many questions...

[12:40] Spencer Breiner: How does the system address ambiguity b/w different valid answers? For example, "What is the birthplace of Michael Jordan?" can reasonably be answered by either "US" or "Brooklyn". How do you choose between them?

[12:40] RaviSharma: replied - the activation function is like a stimulus.

[12:41] ToddSchneider: Are the neural nets simulated or realized in hardware (e.g. ASICs)?

[12:41] Spencer Breiner: How does one-hop analysis address conflicting evidence from the knowledge base? For example, nationality for someone who was born in one place but whose spouse is from another?

[12:44] TerryLongstreth: How to control for the connection trap in your binary relations?

[13:06] RaviSharma: Spencer - Helmholtz was an ophthalmologist, and here he is worrying about the interaction of photons with the eye, I think!

[13:08] RaviSharma: could the sensor be multilevel so that there are many links between eye and hands?

[13:08] ToddSchneider: What is 'Energy' in 'this context'?

[13:09] ToddSchneider: FEM = Free Energy Minimization(?)

[13:11] ToddSchneider: 'Generative' of what?

[13:11] RaviSharma: temperature rises so sweat glands open up

[13:12] TerryLongstreth: Generative state can be overloaded and impact actions?

[13:13] RaviSharma: are autonomous systems aware of experiences?

[13:18] RaviSharma: are autonomous systems using historic (DNA?) experiences to determine environmental changes, influencing them towards homeostasis?

[13:24] RaviSharma: Spencer - probabilistic ontologies, and the methods and tools you prescribe, really address a new and realistic need for ontologies to be real.

[13:24] Andrea Westerinen: Probabilities can be added directly to entities via attributes and to predicates via RDF* properties.

[13:24] RaviSharma: Can you also build strengths in relationships to mimic or model Nature?

[13:25] TerryLongstreth: @Spencer - p32 - probabilities in ontologies include dynamic adjustments to probabilities?

[13:25] Andrea Westerinen: @Spencer - probabilities are dynamic but specific in context (updated with data).

[13:26] RaviSharma: rarer events require more probabilistic and realistic models/ontologies?

[13:26] Douglas R. Miles: if an agent were to create a textual log of its stochastic decisions, would the statistical methods also be sufficient for guessing what should be, or should have been, in the log?

[13:27] ToddSchneider: How should 'probabilistic ontology' be understood?

[13:27] RaviSharma: Hurray and everything at end is praiseworthy?

[13:28] MikeBennett: Can you replace the RDF triple with an analog value (a weighting)? It would resemble an artificial neuron and represent a probability.

[13:28] Andrea Westerinen: @Todd - if you are inferring the existence of an entity, then probability makes sense.

[13:29] Andrea Westerinen: Probability can be 1

[13:30] MikeBennett: @Andrea - Right; = 'true' in a binary data set-up.

[13:30] ToddSchneider: Sorry, late for another meeting. Thank you.

[13:30] Andrea Westerinen: @Mike Why not attach a probability to a predicate via RDF*? That is its weighting.

[13:30] MikeBennett: @Andrea That would work. Or property graphs.

[13:31] TerryLongstreth: Probability can't be 1, but can approach it as .9999999999999999999999999....

[13:31] MikeBennett: Or a quad store

[13:31] Andrea Westerinen: @Mike Since a property graph is an IRI, it can have a probability as well.

[13:31] BobbinTeegarden: Please have him send the slides. More from him would be great!

[13:31] Andrea Westerinen: @Terry Agree. You never want 0/1 to allow for uncertainty.

[13:33] RaviSharma: Andrea - Yes I agree but that is what logicians always want

[13:33] Douglas R. Miles: The big question is whether probability is about making a decision or merely about hiding mistakes a decision procedure

[13:34] Janet Singer: Axiomatically, probability is well defined. But to relate probability to logic and ontology one needs to commit to an interpretation, and that is tricky https://en.m.wikipedia.org/wiki/Probability_interpretations

[13:34] Douglas R. Miles: hiding the mistakes implicit in the decision procedure

[13:36] Andrea Westerinen: @Ravi The concept of defaults vs less likely but possible entities/relations also comes into play.
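
To make the probability thread above concrete, here is a minimal, library-free Python sketch of attaching a probability (a weighting) to an individual statement, in the spirit of the RDF* annotation Andrea describes, where a quoted triple can itself be the subject of further assertions. All identifiers and probability values are hypothetical.

```python
# A statement annotated with a probability, and a tiny resolver that
# picks between conflicting statements (cf. the nationality examples).

from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

# Probability annotations keyed by the statement itself, mimicking an
# RDF*-style assertion about a quoted triple.
annotations: dict[Triple, float] = {}

t_us = Triple("ex:PersonA", "ex:nationality", "ex:USA")
t_fr = Triple("ex:PersonA", "ex:nationality", "ex:France")
annotations[t_us] = 0.9   # never exactly 0 or 1, per the chat,
annotations[t_fr] = 0.1   # to leave room for uncertainty

def most_probable(candidates: list[Triple]) -> Triple:
    """Resolve conflicting statements by their annotated probability."""
    return max(candidates, key=lambda c: annotations.get(c, 0.0))

print(most_probable([t_us, t_fr]))  # -> the ex:USA statement
```

In a real store, the same annotation could be carried by an RDF* quoted triple or, as Mike notes, by a property-graph edge attribute or a quad store.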

Resources

Previous Meetings


Next Meetings
