
ConferenceCall 2024 02 28

Ontolog Forum



== Agenda ==
* 12:00-12:05 Introduction '''[[RaviSharma|Ravi Sharma]]'''
** [https://bit.ly/49BPE11 Video Recording]
* 12:05-12:40 '''Gary Marcus''' ''No AGI (and no Trustworthy AI) without Neurosymbolic AI''
** [https://bit.ly/42Y5T5X Video Recording]
* 12:40-12:50 Discussion
** [https://bit.ly/3SXW3MR Discussion Recording]
* 12:50-13:30 '''[[JohnSowa|John Sowa]]''' ''Without Ontology, LLMs are clueless''
** Abstract: Large Language Models (LLMs) are a powerful technology for processing natural languages. But the results are sometimes good and sometimes disastrous. The methods are excellent for translation, useful for search, but unreliable in generating new combinations. Any results found or generated by LLMs are abductions (hypotheses) that must be tested by deduction. An ontology of the subject matter is necessary for the test. With a good ontology, errors, hallucinations, and deliberate lies can be detected and avoided.
** [https://bit.ly/3uHQv10 Slides]
** [https://bit.ly/434EMGy Video Recording]


== Conference Call Information ==
* Session: Ontology Summit 2024, Foundations and Architectures
* Convener: [[RaviSharma|Ravi Sharma]]
* Date: Wednesday, 28 February 2024
* Start Time: 9:00am PST / 12:00pm EST / 6:00pm CET / 5:00pm GMT / 1700 UTC
* Expected Call Duration: 1.5 hours
* Video Conference URL: https://bit.ly/48lM0Ik
** Conference ID: 876 3045 3240
** Passcode: 464312
* The unabbreviated URL is: https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09

== Participants ==
* [[GaryBergCross|Gary Berg-Cross]]
* Gary Marcus
* [[JanetSinger|Janet Singer]]
* [[JohnSowa|John Sowa]]
* [[KenBaclawski|Ken Baclawski]]
* Marco Neumann
* Phil Jackson
* [[RaviSharma|Ravi Sharma]]
* [[ToddSchneider|Todd Schneider]]

== Discussion ==

'''Questions for Gary Marcus'''

12:46:10 From bhaugh: Can you recommend any specific approaches to combining symbolic and neural AI?

12:49:48 From ToddSchneider: Where does context fit into the ‘landscape’?

12:50:48 From ToddSchneider: Where does 2nd (or higher) order cognition or reasoning come into play?

12:52:47 From Gary Berg-Cross: Are you familiar with the Scientific American article "How AI Knows Things No One Told It" (May 11, 2023, by George Musser), and if so, what did you think of its approach to its claims back then? https://www.scientificamerican.com/article/how-ai-knows-things-no-one-told-it/

'''Questions and Comments for John Sowa'''

13:14:40 From Janet Singer: John addresses language pragmatics as well as semantics. How much of a mistake was it to assume a ‘semantic web’ could be successfully developed without the context of the pragmatics?

13:23:13 From ToddSchneider: Should images be considered ‘symbols’?

13:32:16 From Janet Singer: @ToddSchneider Don’t want to answer for John, but diagrams could be icons, as opposed to symbols, which need the rules of a language for interpretation/use (after Peirce).

13:26:29 From Phil Jackson: What about the mental models of people who are blind from birth? Are these models based on 2-D images, or instead based on 3-D models of spatial volumes, or 4-D models of spatial volumes moving in time? Or some other kind of model, one that also includes sounds?

13:33:23 From Gary Marcus: Sorry, I have to go! Thanks for having me! And I am very sympathetic to most of what John said!

13:34:42 From Gary Berg-Cross: A friendly "central executive" needed to align with some of "our" values gets us into the vague area of social concepts.

13:39:27 From Marco Neumann: Verisimilitude seems to be the predominant goal for the current generative AI effort. And in our media-mediated world this may be a sufficient threshold after all. But if AGI is a future goal, how would you test it? Isn’t the Turing test just a test of believability as well?

== Resources ==
* [https://bit.ly/49BPE11 Introduction Video Recording]
* [https://bit.ly/42Y5T5X Gary Marcus Video Recording]
* [https://bit.ly/3uHQv10 John Sowa Slides]
* [https://bit.ly/434EMGy John Sowa Video Recording]
* [https://bit.ly/3SXW3MR Discussion Recording]
* [https://youtu.be/GeN5XVA2e4w Gary Marcus YouTube Video]
* [https://youtu.be/t7wZbbISdyA John Sowa YouTube Video]

== Previous Meetings ==
* [[ConferenceCall 2024 02 21]] (Overview)
* [[ConferenceCall 2023 11 15]] (Synthesis)
* [[ConferenceCall 2023 11 08]] (Broader thoughts)
* ... further results

== Next Meetings ==
* [[ConferenceCall 2024 03 06]] (LLMs, Ontologies and KGs)
* [[ConferenceCall 2024 03 13]] (LLMs, Ontologies and KGs)
* [[ConferenceCall 2024 03 20]] (Foundations and Architectures)
* ... further results