ConferenceCall 2026 03 25

Ontolog Forum
* '''Luis Lamb''' ''Neurosymbolic AI: From research to industry''
** Abstract: Neurosymbolic AI integrates neural machine learning with symbolic reasoning, combining data-driven learning with knowledge representation and logical inference. This integration addresses a key limitation of neural methods, the lack of interpretability and formal semantics, which are essential in safety-critical domains where logic and formal methods have excelled. This talk briefly surveys the foundations, representative frameworks, and tools of neurosymbolic AI.<br/>We examine how neurosymbolic methods are adopted in industrial settings. Finally, we review recent deployments and product developments from industry, and assess open problems and current research directions.
** [https://ontologforum.s3.us-east-1.amazonaws.com/OntologySummit2026/AI/Neurosymbolic-AI-From-research-to-industry--LuisLamb_20260325.mp4 Video Recording]
** [https://youtu.be/ubC9K7p2hew YouTube Video]
* '''Manas Gaur''' ''Neurosymbolic AI for High Stakes Applications in the LLM Era''
** [https://kai2.umbc.edu/ Website]
** [https://ontologforum.s3.us-east-1.amazonaws.com/OntologySummit2026/AI/Neurosymbolic-AI-for-High-Stakes-Applications-in-the-LLM-Era--ManasGaur_20260325.mp4 Video Recording]
** [https://youtu.be/zg7Z3teiuNE YouTube Video]
* Discussion
** [https://ontologforum.s3.us-east-1.amazonaws.com/OntologySummit2026/AI/Discussion_20260325.mp4 Video Recording]
** [https://youtu.be/dFLWuLK2ayw YouTube Video]


== Conference Call Information ==


== Discussion ==
12:20:01 Harshit : Could maintaining a traceable lineage from input to output for every step act as a robust way to verify correctness and prevent drift or hallucination?
12:23:57 Ravi Sharma : how do you bring cognition?
12:37:16 Ravi Sharma : How do we understand interplay among reasoning, probability, and cognition especially through verification?
12:47:23 Ravi Sharma : Manas: Can we combine training-data-based feedback with clustering to clarify neurosymbolic reasoning?
12:49:41 Harshit : Do we have a universally accepted way to represent knowledge across domains, or are all current schemas inherently domain-specific? If no universal form exists, can using raw surface input with compression–reconstruction act as a more general and reliable validation mechanism?
12:50:23 John Sowa : 99% accuracy is inadequate for banking and for guiding critical high-performance applications. We must have 100% accuracy, or a message that admits a failure and requires a retry. For driving a car or flying an airplane, nobody would accept 1 accident in every 100 trips.
* Luís Lamb : 👍
12:56:50 Bobbin Teegarden : Is 'local source contribution' = context?
* Manas Gaur : Not really context, but some tokens that LLMs think are relevant because they are in the proximity.  Adding context would create a drift from local to global, maybe connecting one layer to another, potentially many layers forward.
12:57:25 Ram D. Sriram (Section M) : @Manas: I think "do humans use 10% of the brain?" is an old question that used to be asked because people thought 90% of the brain consists of glial cells. Now it is about 50%, I believe.
* Manas Gaur : 👍
** Yeah. That example was pretty old; I took it from the TruthfulQA dataset.
13:00:14 John Sowa : Glial cells are important for supporting the adjacent neural cells.  In effect, they are performing their function when they support the neurons.
* Luís Lamb :  👍
13:01:53 Phil Jackson : How do Luis Lamb's methods for using reasoning to correct NNs, and using NNs to support reasoning and learning, address the issues identified by Manas Gaur?
* Phil Jackson : maybe this question is addressed by Gaur's discussion of KG-path RAG
13:08:34 Ravi Sharma : Manas: How is the attention factor modeled as an extent of cognition?
13:11:44 Harshit : So, current approaches seem to rely on distributed representations and weight adjustments to encode knowledge. Do you think this paradigm is fundamentally scalable, or are we missing a more structural, constraint-based representation of knowledge, closer to how reasoning operates, where correctness can be verified rather than inferred?
13:20:53 Harshit : where could we reach out to the speaker?
* Ram D. Sriram (Section M) : Speaker information should be on the Ontology Website
** Harshit : 👍
* Manas Gaur : Manas@umbc.edu
** Harshit : 👍
13:22:59 John Sowa : Philosophers have been proposing and developing methods of full first-order logic for over 2000 years.
* Manas Gaur : 👍
* Ram D. Sriram (Section M) : Maybe around 3000 years ago
13:24:48 Manas Gaur : Socratic Questioning — a big challenge for LLMs
13:25:26 Manas Gaur : As of the year 2026, this means the Socratic era was roughly 2,425 to 2,496 years ago
13:25:43 Yves KERARON (ISADEUS) : I feel ergo sum
* janet singer : 👍
13:25:52 John Sowa : The early methods of reasoning supported much of FOL, but the notations and methodologies weren't developed with sufficient precision.


== Resources ==
* [https://ontologforum.s3.us-east-1.amazonaws.com/OntologySummit2026/AI/Neurosymbolic-AI-From-research-to-industry--LuisLamb_20260325.mp4 Luis Lamb Video Recording]
* [https://youtu.be/ubC9K7p2hew Luis Lamb YouTube Video]
* [https://ontologforum.s3.us-east-1.amazonaws.com/OntologySummit2026/AI/Neurosymbolic-AI-for-High-Stakes-Applications-in-the-LLM-Era--ManasGaur_20260325.mp4 Manas Gaur Video Recording]
* [https://youtu.be/zg7Z3teiuNE Manas Gaur YouTube Video]
* [https://ontologforum.s3.us-east-1.amazonaws.com/OntologySummit2026/AI/Discussion_20260325.mp4 Discussion Video Recording]
* [https://youtu.be/dFLWuLK2ayw Discussion YouTube Video]


== Previous Meetings ==

* Session: Ontologies and AI
* Duration: 1 hour
* Date/Time: 25 Mar 2026, 16:00 GMT (9:00am PDT / 12:00pm EDT / 4:00pm GMT / 5:00pm CET)
* Convener: Gary Berg-Cross

Ontology Summit 2026: Ontologies and AI


* ConferenceCall 2026 03 18: Ontologies and AI
* ConferenceCall 2026 03 11: Ontologies and AI
* ConferenceCall 2026 03 04: Ontologies and AI
* ... further results

== Next Meetings ==

* ConferenceCall 2026 04 01: Foundations and Tools
* ConferenceCall 2026 04 08: Foundations and Tools
* ConferenceCall 2026 06 03: Cognition