
ConferenceCall 2024 03 27

Ontolog Forum

 



Revision as of 20:54, 25 March 2024

{| class="wikitable"
|-
! scope="row" | Session
| Foundations and Architectures
|-
! scope="row" | Duration
| 1 hour
|-
! scope="row" | Date/Time
| 27 Mar 2024 16:00 GMT<br/>9:00am PDT/12:00pm EDT<br/>4:00pm GMT/5:00pm CET
|-
! scope="row" | Convener
| Ravi Sharma
|}

= Ontology Summit 2024 Foundations and Architectures =

== Agenda ==
* '''[[AmitSheth|Amit Sheth]]''' ''Forging Trust in Tomorrow’s AI: A Roadmap for Reliable, Explainable, and Safe NeuroSymbolic Systems''
** In Pedro Domingos's influential 2012 paper, the phrase "data alone is not enough" emphasized a crucial point. I have long shared this belief, as is evident in our Semantic Search engine, which was commercialized in 2000 and detailed in a patent. We enhanced machine learning classifiers with a comprehensive WorldModel™, known today as a knowledge graph, to improve named entity recognition, relationship extraction, and semantic search. This early project highlighted the synergy between data-driven statistical learning and knowledge-supported symbolic AI methods, an idea I will explore further in this talk. <br/> Despite the remarkable success of transformer-based models in numerous NLP tasks, purely data-driven approaches fall short in tasks requiring Natural Language Understanding (NLU). Reasoning over language, generating user-friendly explanations, constraining outputs to prevent unsafe interactions, and enabling decision-centric outcomes all necessitate neurosymbolic pipelines that utilize both knowledge and data.
** Problem: Inadequacy of LLMs for Reasoning<br/>LLMs like GPT-4, while impressive in their ability to understand and generate human-like text, have limitations in reasoning. They excel at pattern recognition, language processing, and generating coherent text from input. However, their reasoning capabilities are limited by their lack of true understanding or awareness of concepts, contexts, or causal relationships beyond the statistical patterns in the data they were trained on. While they can perform certain types of reasoning tasks, such as simple logical deductions or basic arithmetic, they often struggle with more complex forms of reasoning that require deeper understanding, context awareness, or commonsense knowledge. They may produce responses that appear rational on the surface but lack genuine comprehension or logical consistency. Furthermore, their reasoning does not adapt well to the dynamicity of the environment, i.e., the changing environment in which the AI model operates (e.g., changing data and knowledge).
** Solution: Neurosymbolic AI Combined with Custom and Compact Models<br/>Compact custom language models can be augmented with neurosymbolic methods and external knowledge sources while maintaining a small size. The intent is to support efficient adaptation to changing data and knowledge. By integrating neurosymbolic approaches, these models acquire a structured understanding of data, enhancing interpretability and reliability (e.g., through verifiability audits using reasoning traces). This structured understanding fosters safer and more consistent behavior and facilitates efficient adaptation to evolving information, ensuring agility in handling dynamic environments. Furthermore, incorporating external knowledge sources enriches the model's understanding and adaptability across diverse domains, bolstering its efficiency in tackling varied tasks. The small size of these models enables rapid deployment and contributes to computational efficiency, better management of constraints, and faster re-training, fine-tuning, and inference.
** About the Speaker: Professor Amit Sheth (Web, LinkedIn) is an educator, researcher, and entrepreneur. As the founding director of the university-wide AI Institute at the University of South Carolina, he grew it to nearly 50 AI researchers. He is a fellow of IEEE, AAAI, AAAS, ACM, and AIAA. He has co-founded four companies, including Taalee/Semagix, which pioneered Semantic Search (founded 1999); ezDI, which supported knowledge-infused clinical NLP/NLU; and Cognovi Labs, an emotion AI company. He is proud of the success of the over 45 Ph.D. advisees and postdocs he has advised and mentored.
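The verifiability audits via reasoning traces mentioned in the abstract can be illustrated with a minimal sketch, not taken from the talk: candidate statements from a statistical model are checked against a small knowledge graph before being emitted, and each decision is logged as a trace. All triples and names here are hypothetical examples.

```python
# Toy "neurosymbolic" audit: filter a model's candidate statements
# against a knowledge graph and keep a reasoning trace.

KG = {  # tiny knowledge graph of (subject, relation, object) triples
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("ibuprofen", "treats", "inflammation"),
}

def audit(candidates):
    """Keep only candidates entailed by the KG; record why each was kept or dropped."""
    accepted, trace = [], []
    for triple in candidates:
        if triple in KG:
            accepted.append(triple)
            trace.append(f"ACCEPT {triple}: supported by knowledge graph")
        else:
            trace.append(f"REJECT {triple}: no supporting fact")
    return accepted, trace

# Pretend these came from a compact language model's generations:
candidates = [
    ("aspirin", "treats", "headache"),      # supported by the KG
    ("aspirin", "treats", "inflammation"),  # unsupported, filtered out
]
accepted, trace = audit(candidates)
print(accepted)  # only the KG-supported statement survives
for line in trace:
    print(line)
```

Real pipelines replace the set-membership test with ontology-backed reasoning, but the shape is the same: the symbolic layer constrains the statistical layer's output, and the trace makes the decision auditable.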

== Conference Call Information ==

* Date: Wednesday, 27 March 2024
* Start Time: 9:00am PDT / 12:00pm EDT / 5:00pm CET / 4:00pm GMT / 1600 UTC
** ref: World Clock
** Note: The US and Canada are on Daylight Saving Time, while Europe has not yet changed.
* Expected Call Duration: 1 hour

The unabbreviated URL is: https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09

== Participants ==

== Discussion ==

== Resources ==

== Previous Meetings ==

{| class="wikitable"
|-
! !! Session
|-
| [[ConferenceCall 2024 03 20]] || Foundations and Architectures
|-
| [[ConferenceCall 2024 03 13]] || LLMs, Ontologies and KGs
|-
| [[ConferenceCall 2024 03 06]] || LLMs, Ontologies and KGs
|}
... further results

== Next Meetings ==

{| class="wikitable"
|-
! !! Session
|-
| [[ConferenceCall 2024 04 03]] || Synthesis
|-
| [[ConferenceCall 2024 04 10]] || Synthesis
|-
| [[ConferenceCall 2024 04 17]] || Applications
|}
... further results