= Ontology Summit 2024 =
== Hybrid Neuro-Symbolic Techniques for and with Ontologies and Knowledge Graphs ==
{| class="wikitable" style="float:right; margin-left: 10px;" border="1" cellpadding="10"
|-
! scope="row" | Session
| [[session::Applications]]
|-
! scope="row" | Duration
| [[duration::1 hour]]
|-
! scope="row" rowspan="3" | Date/Time
| [[has date::17 Apr 2024 16:00 GMT]]
|-
| 9:00am PDT/12:00pm EDT
|-
| 4:00pm GMT/6:00pm CEST
|-
! scope="row" | Convener
| [[convener::RamSriram|Ram D. Sriram]]
|}


The [[OntologySummit|Ontology Summit]] is an annual series of events that involves the ontology community and communities related to each year's theme chosen for the summit. The Ontology Summit was started by Ontolog and NIST, and the program has been co-organized by Ontolog and NIST along with the co-sponsorship of other organizations that are supportive of the Summit goals and objectives.


As part of Ontolog’s general advocacy to bring ontology science and related engineering into the mainstream, we endeavor to facilitate discussion and knowledge sharing amongst stakeholders and interested parties relevant to the use of ontologies. The results will be synthesized and summarized in the form of the Ontology Summit 2024 Communiqué, with expanded supporting material provided on the web and in journal articles.
== Agenda: 17 April 2024 Session (Track C: Applications) ==
* '''[[AmitSheth|Amit Sheth]]''' ''Forging Trust in Tomorrow’s AI: A Roadmap for Reliable, Explainable, and Safe NeuroSymbolic Systems'' [https://bit.ly/4aLDy5V Video Recording]
** In Pedro Domingos's influential 2012 paper, the phrase "Data alone is not enough" emphasized a crucial point. I've long shared this belief, which is evident in our Semantic Search engine, which was commercialized in 2000 and detailed in a patent. We enhanced machine learning classifiers with a comprehensive WorldModel™, known today as knowledge graphs, to improve named entity recognition, relationship extraction, and semantic search. This early project highlighted the synergy between data-driven statistical learning and knowledge-supported symbolic AI methods, an idea I'll explore further in this talk. <br/> Despite the remarkable success of transformer-based models in numerous NLP tasks, purely data-driven approaches fall short in tasks requiring Natural Language Understanding (NLU). Understanding language, that is, reasoning over it, generating user-friendly explanations, constraining outputs to prevent unsafe interactions, and enabling decision-centric outcomes, necessitates neurosymbolic pipelines that utilize both knowledge and data.
** Problem: Inadequacy of LLMs for Reasoning<br/>LLMs like GPT-4, while impressive in their abilities to understand and generate human-like text, have limitations in reasoning. They excel at pattern recognition, language processing, and generating coherent text based on input. However, their reasoning capabilities are limited by their lack of true understanding or awareness of concepts, contexts, or causal relationships beyond the statistical patterns in the data they were trained on. While they can perform certain types of reasoning tasks, such as simple logical deductions or basic arithmetic, they often struggle with more complex forms of reasoning that require deeper understanding, context awareness, or commonsense knowledge. They may produce responses that appear rational on the surface but lack genuine comprehension or logical consistency. Furthermore, their reasoning does not adapt well to the dynamicity of the environment, i.e., the changing environment in which the AI model is operating (e.g., changing data and knowledge).
** Solution: Neurosymbolic AI combined with Custom and Compact Models:<br/>Compact custom language models can be augmented with neurosymbolic methods and external knowledge sources while maintaining a small size. The intent is to support efficient adaptation to changing data and knowledge. By integrating neurosymbolic approaches, these models acquire a structured understanding of data, enhancing interpretability and reliability (e.g., through verifiability audits using reasoning traces). This structured understanding fosters safer and more consistent behavior and facilitates efficient adaptation to evolving information, ensuring agility in handling dynamic environments. Furthermore, incorporating external knowledge sources enriches the model's understanding and adaptability across diverse domains, bolstering its efficiency in tackling varied tasks. The small size of these models enables rapid deployment and contributes to computational efficiency, better management of constraints, and faster re-training/fine-tuning/inference.
** About the Speaker: Professor Amit Sheth (Web, LinkedIn) is an Educator, Researcher, and Entrepreneur. As the founding director of the university-wide AI Institute at the University of South Carolina, he grew it to nearly 50 AI researchers. He is a fellow of IEEE, AAAI, AAAS, ACM, and AIAA. He has co-founded four companies, including Taalee/Semagix, which pioneered Semantic Search (founded 1999), ezDI, which supported knowledge-infused clinical NLP/NLU, and Cognovi Labs, an emotion AI company. He is proud of the success of over 45 Ph.D. advisees and postdocs he has advised/mentored.


= Process and Deliverables =
Similar to our last seventeen summits, this [[OntologySummit2024|Ontology Summit 2024]] will consist of virtual discourse (over our archived mailing lists) and virtual presentations and panel sessions as part of recorded video conference calls.

As in prior years, the intent is to provide some synthesis of ideas and draft a communiqué summarizing major points.

This year began with a '''[[OntologySummit2024_FallSeries|Fall Series]]''' in October and November; the main summit began in February.

Meetings are at Noon US/Canada Eastern Time on Wednesdays and last about an hour.

== Conference Call Information ==
* Date: '''Wednesday, 17 April 2024'''
* Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
** ref: [http://www.timeanddate.com/worldclock/fixedtime.html?month=4&day=17&year=2024&hour=12&min=00&sec=0&p1=179 World Clock]
* Expected Call Duration: 1 hour
* Video Conference: the unabbreviated URL is https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09
{{:OntologySummit2024/ConferenceCallInformation}}


== Description ==


The summit will survey current techniques that combine neural network machine learning with symbolic methods, especially methods based on ontologies and knowledge graphs.


Ontologies are representations of a knowledge domain. They define the concepts, relationships, properties, axioms and rules within that domain, providing a framework that enables a deep understanding of that subject area. Knowledge graphs are structured representations of semantic knowledge that are stored in a graph. Ontologies and knowledge graphs are used to enable machine reasoning and semantic understanding, allowing a system to draw inferences and to derive new information and relationships between entities.
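To make this concrete, here is a minimal, illustrative sketch (not taken from any summit material) of a tiny ontology and knowledge graph expressed with the Python rdflib library. The namespace, classes, property, and individual are invented for illustration; the single subclass axiom lets the SPARQL query derive that the individual is a Person even though that fact is never asserted, which is the kind of inference described above.

<syntaxhighlight lang="python">
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/summit#")  # hypothetical namespace, for illustration only

g = Graph()
g.bind("ex", EX)

# Ontology: concepts (classes), a relationship (property), and a subclass axiom
g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.Researcher, RDF.type, RDFS.Class))
g.add((EX.Researcher, RDFS.subClassOf, EX.Person))   # every Researcher is a Person
g.add((EX.memberOf, RDF.type, RDF.Property))

# Knowledge graph: facts about an individual entity
g.add((EX.Alice, RDF.type, EX.Researcher))
g.add((EX.Alice, EX.memberOf, EX.OntologGroup))
g.add((EX.Alice, RDFS.label, Literal("Alice")))

# Query for all Persons: the subClassOf* property path performs the inference,
# so Alice is returned even though the graph never states she is a Person.
results = g.query(
    """
    SELECT ?who WHERE {
        ?who rdf:type/rdfs:subClassOf* ex:Person .
    }
    """,
    initNs={"rdf": RDF, "rdfs": RDFS, "ex": EX},
)

for row in results:
    print(row.who)   # -> http://example.org/summit#Alice
</syntaxhighlight>

Running the script prints the URI for ex:Alice, a fact derived from the subclass axiom rather than asserted directly.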
Neural networks and other machine learning models, such as LLMs, are trained on large corpora, learning the patterns and connections between words and images. Hence, although their "knowledge base" is broad, it is also sometimes incorrect and/or biased, and the models do not explicitly understand the semantics or relationships in that content.
Consequently, neural network and traditional AI techniques are complementary. The '''[[OntologySummit2024_FallSeries|Fall Series]]''' of the summit explored the similarities and distinctions between ontologies and LLMs, as well as how they can be used together.  The Main Summit Series will examine the more general topic of neuro-symbolic techniques, especially how one can leverage the complementary benefits of neural networks and of ontologies and knowledge graphs.
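As a simplified illustration of that complementarity, the sketch below (again, not from the summit material and not any speaker's specific system) shows one common neurosymbolic pattern: retrieve relevant triples from a knowledge graph, hand them to a language model as grounding context, and verify the model's claim against the graph before accepting it. The triples, the predicate names, and the generate_answer stub are hypothetical placeholders; in a real pipeline the stub would call an actual (ideally compact) language model.

<syntaxhighlight lang="python">
from typing import Iterable, List, Tuple

Triple = Tuple[str, str, str]

# Hypothetical toy knowledge graph about the summit sessions (illustrative only).
KG: List[Triple] = [
    ("ConferenceCall_2024_04_17", "hasTrack", "Applications"),
    ("ConferenceCall_2024_04_17", "hasSpeaker", "Amit Sheth"),
    ("ConferenceCall_2024_04_24", "hasTrack", "Applications"),
]

def retrieve_facts(entity: str) -> List[Triple]:
    """Symbolic step: collect triples that mention the entity, to ground the model."""
    return [t for t in KG if entity in (t[0], t[2])]

def generate_answer(question: str, facts: Iterable[Triple]) -> Triple:
    """Neural step (stand-in): a compact language model would be called here.
    For illustration it simply restates the first 'hasSpeaker' fact as its claim."""
    return next(t for t in facts if t[1] == "hasSpeaker")

def verify(claim: Triple) -> bool:
    """Symbolic check: accept the claim only if the knowledge graph supports it,
    which yields a simple, auditable reasoning trace."""
    return claim in KG

facts = retrieve_facts("ConferenceCall_2024_04_17")
claim = generate_answer("Who speaks at the 17 April 2024 session?", facts)
print(claim, "verified:", verify(claim))
# -> ('ConferenceCall_2024_04_17', 'hasSpeaker', 'Amit Sheth') verified: True
</syntaxhighlight>

The point of the pattern is the division of labour rather than the toy code: the graph supplies explicit, inspectable facts, the model handles flexible language, and the verification step keeps the output consistent with the symbolic knowledge.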


Main Series Chair [[KenBaclawski|Ken Baclawski]]
The Main Summit Series is organized into four tracks:
* Track A. Foundations and Architectures
* Track B. Large Language Models, Ontologies and Knowledge Graphs
* Track C. Applications
* Track D. Risks and Ethics

== Previous Meetings ==
{{#ask: [[Category:OntologySummit2024]] [[Category:Icom_conf_Conference]] [[<<ConferenceCall_2024_04_17]]
        |?|?Session|mainlabel=-|order=desc|limit=3}}

== Next Meetings ==
{{#ask: [[Category:OntologySummit2024]] [[Category:Icom_conf_Conference]] [[>>ConferenceCall_2024_04_17]]
        |?|?Session|mainlabel=-|order=asc|limit=3}}
== Schedule ==
* [[ConferenceCall_2024_02_21|21 February 2024]] ''Kickoff/Overview''
* [[ConferenceCall_2024_02_28|28 February 2024]] Track A Session 1
** '''Gary Marcus''' ''No AGI (and no Trustworthy AI) without Neurosymbolic AI''
** '''[[JohnSowa|John Sowa]]''' ''Without Ontology, LLMs are clueless''
* [[ConferenceCall_2024_03_06|6 March 2024]] Track B Session 1 '''Hamed Babaei Giglou''' ''LLMs4OL: Large Language Models for Ontology Learning''
* [[ConferenceCall_2024_03_13|13 March 2024]] Track B Session 2 '''[[FabianNeuhaus|Fabian Neuhaus]]''' ''Ontologies in the era of large language models – a perspective''
* [[ConferenceCall_2024_03_20|20 March 2024]] Track A Session 2 '''[[TillMossakowski|Till Mossakowski]]''' ''Modular design patterns for neural-symbolic integration: refinement and combination''
* [[ConferenceCall_2024_03_27|27 March 2024]] Track A Session 3 '''Markus J. Buehler''' ''Accelerating Scientific Discovery with Generative Knowledge Extraction, Graph-Based Representation, and Multimodal Intelligent Graph Reasoning''
* [[ConferenceCall_2024_04_03|3 April 2024]] ''First Synthesis''
* [[ConferenceCall_2024_04_10|10 April 2024]] ''Second Synthesis''
* [[ConferenceCall_2024_04_17|17 April 2024]] Track C Applications '''[[AmitSheth|Amit Sheth]]''' ''Forging Trust in Tomorrow’s AI: A Roadmap for Reliable, Explainable, and Safe NeuroSymbolic Systems''
* [[ConferenceCall_2024_04_24|24 April 2024]] Track C Applications to Healthcare
** '''Venkat Venkatasubramanian'''
** '''Kaushik Roy'''
* [[ConferenceCall_2024_05_01|1 May 2024]] Track D Session 1 ''Risk Panel''
* [[ConferenceCall_2024_05_08|8 May 2024]] Track D Session 2 ''Ethics Panel''
* [[ConferenceCall_2024_05_15|15 May 2024]] ''Third Synthesis''
* [[ConferenceCall_2024_05_22|22 May 2024]] ''Communiqué''
 
== Resources ==
* [[OntologySummit2024/ConferenceCallInformation|Conference Call Information]]
* [http://bit.ly/34DOmRV Ontology Summit YouTube Channel]
* [https://bit.ly/4aLDy5V Video Recording] (17 April 2024 session)
* [https://youtu.be/YbWyNT7O3Jk YouTube Video] (17 April 2024 session)


[[Category:OntologySummit]]
[[Category:OntologySummit2024]]
[[Category:Icom_conf_Conference]]
[[Category:Occurrence| ]]
