
ConferenceCall 2024 02 28

Ontolog Forum

Revision as of 16:29, 28 February 2024

Session: Foundations and Architectures
Duration: 1.5 hours
Date/Time: 28 Feb 2024 17:00 GMT (9:00am PST / 12:00pm EST / 5:00pm GMT / 6:00pm CET)
Convener: Ravi Sharma

Ontology Summit 2024 Foundations and Architectures

Agenda

  • 12:00 - 12:05 Introduction Ravi Sharma
  • 12:05 - 12:35 Gary Marcus
  • 12:35 - 13:05 John Sowa Without Ontology, LLMs are clueless
    • Abstract: Large Language Models (LLMs) are a powerful technology for processing natural languages. But the results are sometimes good and sometimes disastrous. The methods are excellent for translation, useful for search, but unreliable in generating new combinations. Any results found or generated by LLMs are abductions (hypotheses) that must be tested by deduction. An ontology of the subject matter is necessary for the test. With a good ontology, errors, hallucinations, and deliberate lies can be detected and avoided.
    • Slides: https://bit.ly/3uHQv10
  • 13:05 - 13:30 Discussion

Conference Call Information

  • Date: Wednesday, 28 February 2024
  • Start Time: 9:00am PST / 12:00pm EST / 6:00pm CET / 5:00pm GMT / 1700 UTC
  • Expected Call Duration: 1.5 hours
  • Video Conference URL: https://bit.ly/48lM0Ik
    • Conference ID: 876 3045 3240
    • Passcode: 464312

The unabbreviated URL is: https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09

Participants

Discussion

Resources

Previous Meetings

 Session
 ConferenceCall 2024 02 21: Overview
 ConferenceCall 2023 11 15: Synthesis
 ConferenceCall 2023 11 08: Broader thoughts
 ... further results

Next Meetings

 Session
 ConferenceCall 2024 03 06: LLMs, Ontologies and KGs
 ConferenceCall 2024 03 13: LLMs, Ontologies and KGs
 ConferenceCall 2024 03 20: Foundations and Architectures
 ... further results