Ontolog Forum

Session: Neuro-Symbolic Learning Ontologies
Duration: 1.5 hours
Date/Time: 24 Feb 2021 17:00 GMT (9:00am PST / 12:00pm EST / 5:00pm GMT / 6:00pm CET)
Convener: Ram D. Sriram
Track: C

Ontology Summit 2021 Neuro-Symbolic Learning Ontologies

Ontologies are rich and versatile constructs. They can be extracted, learned, modularized, interrelated, transformed, analyzed, and harmonized, as well as developed in a formal process. This summit will explore the many kinds of ontologies and how they can be manipulated. The goal is to acquaint both current and potential users of ontologies with the possibilities for how ontologies can be used to solve problems.

Agenda

  • 12:00 - 12:10 EST Ram D. Sriram Neuro-Symbolic Learning Technologies Slides
  • 12:10 - 12:40 EST Luis Lamb Neural-symbolic AI: From Turing to Deep Learning Slides
    • Abstract: The integration of learning and reasoning has been the subject of growing research interest in AI. However, the two areas have been developed on clearly distinct technical foundations and by separate research communities. Neural-symbolic computing aims at integrating neural learning with the symbolic approaches typically used in computational logic and knowledge representation in AI. In this talk, we present an overview of the evolution of neural-symbolic methods, with attention to developments towards integrating machine learning and reasoning into a unified foundation that contributes to explainable AI. We conclude by showing that advances in neural-symbolic computing can lead to the construction of richer AI systems. (A toy sketch of a logical rule used as a differentiable constraint appears after the agenda below.)
    • Prof Luis Lamb, Universidade Federal do Rio Grande do Sul Porto Alegre RS, Brazil http://www.inf.ufrgs.br/~lamb/BioShortLamb.html
  • 12:40 - 13:10 EST Pascal Hitzler Neural-Symbolic Integration and Ontologies Slides
    • Abstract: Deep Learning and logic-based knowledge representation and reasoning are two very complementary approaches to Artificial Intelligence. The former (the "subsymbolic" approaches) can be trained from examples and are generally robust against noise in training or input data; however, they are also black boxes in the sense that the systems cannot be inspected in ways that provide insight into their decisions. The latter (the "symbolic" approaches) are brittle in the sense that they are susceptible to data noise or minor flaws in the logical encoding of a problem; however, they are transparent as to their inner workings and underlying assumptions. Finding ways to bridge the gap between symbolic and subsymbolic approaches to Artificial Intelligence is a long-standing unresolved challenge in Computer Science and Cognitive Science. In this talk, we report on some recent advances in bridging this neural-symbolic gap. In particular, we will present recent results on using deep learning systems to perform ontology reasoning, and on the use of class hierarchies to explain deep learning systems. (A toy illustration of hierarchy-based explanation appears after the agenda below.)
    • Pascal Hitzler is the Lloyd T. Smith Creativity in Engineering Chair, Kansas State University http://www.pascal-hitzler.de/
  • 13:10 - 13:30 EST Discussion
  • Video Recording
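
As a hedged illustration of the kind of integration Luis Lamb's talk surveys, the toy Python sketch below treats a logical rule as a differentiable ("fuzzy") constraint over neural predicate scores. The predicates, the scores, and the choice of product t-norm and Reichenbach implication are illustrative assumptions, not material from the talk.

  # Toy neural-symbolic constraint: score a rule under fuzzy semantics and
  # turn its violation into a loss a learner could minimize. Illustrative only.
  import numpy as np

  def t_and(a, b):               # fuzzy conjunction (product t-norm)
      return a * b

  def t_implies(a, b):           # Reichenbach implication: 1 - a + a*b
      return 1.0 - a + a * b

  # Pretend a neural network scored these predicates for three entities x.
  bird    = np.array([0.9, 0.8, 0.1])   # degree of bird(x)
  penguin = np.array([0.1, 0.9, 0.0])   # degree of penguin(x)
  flies   = np.array([0.8, 0.1, 0.2])   # degree of flies(x)

  # Rule: bird(x) AND NOT penguin(x) -> flies(x)
  body = t_and(bird, 1.0 - penguin)
  rule_truth = t_implies(body, flies)

  # Rule-violation loss, to be added to the usual data-fitting loss.
  logic_loss = float(np.mean(1.0 - rule_truth))
  print(rule_truth, logic_loss)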
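
Likewise, as a toy illustration of the hierarchy-based explanation idea in Pascal Hitzler's abstract, the sketch below aggregates a classifier's leaf-class scores up a small hand-made taxonomy and reports the most specific class that is confidently supported. The taxonomy, the scores, and the 0.8 threshold are invented for illustration.

  # Explain a classifier's output via a class hierarchy: a prediction of
  # "dog" can be confidently supported even when no single leaf class is.
  hierarchy = {                      # child -> parent (tiny invented taxonomy)
      "beagle": "dog", "poodle": "dog",
      "dog": "mammal", "cat": "mammal",
      "mammal": "animal",
  }

  def ancestors(label):
      chain = []
      while label in hierarchy:
          label = hierarchy[label]
          chain.append(label)
      return chain

  # Made-up class scores a CNN might produce for one image.
  scores = {"beagle": 0.48, "poodle": 0.40, "cat": 0.12}

  # Push each leaf score up to all of its ancestors.
  agg = dict(scores)
  for leaf, s in scores.items():
      for anc in ancestors(leaf):
          agg[anc] = agg.get(anc, 0.0) + s

  # Most specific class (deepest, i.e. most ancestors) above the threshold.
  confident = {c: v for c, v in agg.items() if v >= 0.8}
  print(max(confident, key=lambda c: len(ancestors(c))), confident)  # -> dog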

Conference Call Information

  • Date: Wednesday, 24 Feb 2021
  • Start Time: 9:00am PST / 12:00pm EST / 6:00pm CET / 5:00pm GMT / 1700 UTC
  • Expected Call Duration: 1.5 hours
  • The Video Conference URL is https://bit.ly/3i1uPRl
    • iPhone one-tap:
      • +16465588656,,83077436914#,,,,,,0#,,822275# US (New York)
      • +13017158592,,83077436914#,,,,,,0#,,822275# US (Germantown)
  • Chat Room: https://bit.ly/39PzQJW
    • If the chat room is not available, then use the Zoom chat room.

Attendees

Discussion

[12:10] RaviSharma: @Luis - the tension between reasoning and understanding (learning) is older than Aristotle, e.g., attributes and properties relating to the universe

[12:15] RaviSharma: @Luis - learning theories are great, certainly, but how is the child-to-adult learning process simulated in ML today?

[12:15] Pascal Hitzler: @Ravi - we so often (incorrectly) think that all intellectual progress started with the Greeks. It's a persistent narrative/myth in world history.

[12:19] RaviSharma: @Pascal - thanks, people like me owe it to slowly translate those language ideas and present them as we do the great Greek discoveries.

[12:20] RaviSharma: Thanks to Luis and the track organizers, Ram et al., that we have more than 58 people attending.

[12:23] RaviSharma: @Luis - yes, I see value in capturing workshops such as NeSy

[12:24] Pascal Hitzler: NeSy workshop series (and other things): http://neural-symbolic.org/

[12:24] RaviSharma: I also know that there are short treatises in other languages on predicate logic and related grammar rules, but I am personally as yet unable to translate those concepts; some day soon.

[12:28] RaviSharma: @Luis - is modal logic embedding learning able to extend knowledge, and if so, how do we verify that the use of the extended patterns makes common sense?

[12:31] RaviSharma: @Luis - it is heartening to know that predicate reasoning has uncertainties!

[12:34] Md Kamruzzaman Sarker: In my experience, the noise tolerance of logical reasoning can be improved by deep learning, but it then loses explainability. Is there any known work that achieves both noise tolerance and explainability?

[12:35] RaviSharma: @Luis excellent excellent I am only able to grasp a small part but great work and great review.

[12:40] RaviSharma: @Pascal - welcome, I went to those links and you have long been considering this topic and workshops. Great.

[12:43] ToddSchneider: @Pascal, could you expand on the assertion "... symbolic reasoners do very poorly" (on 'noisy' data)?

[12:47] ToddSchneider: @Pascal, could you expand on the notion 'Normalized Vocabulary'?

[12:49] MikeBennett: I'm concerned about how you would normalize between ontologies that have different commitments to TLO constructs. In the example Barack Obama rdf:type President - clearly he's not the current president nor is this an intrinsic feature of the man. A good ontology would have President as a role. Another (like this example) might not. How do you normalize between these?
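
A hedged sketch of the mismatch Mike describes, using rdflib; the namespace and the property and class names are invented for illustration:

  # Two ontologies commit differently: A types the person directly, while
  # B reifies the presidency as a time-bounded role. Names are invented.
  from rdflib import Graph, Literal, Namespace, RDF

  EX = Namespace("http://example.org/")

  a = Graph()                        # Ontology A: direct typing
  a.add((EX.BarackObama, RDF.type, EX.President))

  b = Graph()                        # Ontology B: President as a role
  b.add((EX.BarackObama, EX.holdsRole, EX.ObamaPresidency))
  b.add((EX.ObamaPresidency, RDF.type, EX.PresidentRole))
  b.add((EX.ObamaPresidency, EX.startYear, Literal(2009)))
  b.add((EX.ObamaPresidency, EX.endYear, Literal(2017)))

  # Normalizing A against B needs a mapping rule (one triple in A versus a
  # multi-triple role pattern in B), not just a vocabulary lookup, which is
  # one reason normalization across differing commitments is hard.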

[12:51] RaviSharma: @Pascal - how universal is it for variations in training sets?

[12:54] RaviSharma: @Pascal - can you randomly go to the web and create a training set for a set of keywords?

[13:15] RaviSharma: @Pascal - for concept categories of images, how will a thematic background compare with random image annotation alone?

[13:18] TimFinin: By Wikipedia categories do you mean the Wikidata types?

[13:20] TimFinin: Might Wikidata types be another possibility?

[13:20] M.Drance: @Pascal - you talked about explainability linked to a CNN; could the same be done to extract an explanation from a GNN doing graph embedding?

[13:20] TerryLongstreth: @Pascal - using open Web sources (like Wikipedia) leaves you vulnerable to random (background) changes. Do you have to capture a copy to create a stable baseline?

[13:21] Mehwish Alam: Is error analysis not enough for correcting the errors?

[13:22] Mehwish Alam: Yes, I see.

[13:23] RaviSharma: @Pascal or @Luis: what is the significance, in symbolic reasoning, of using scalars, vectors, and tensors (sets of matrices)?

[13:24] Rajat Shinde: In deep deductive reasoners, what does the deep network try to optimize (in terms of the loss function)? Is it the same as what we use for conventional deep learning networks? Thanks in advance.
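
A hedged sketch of one conventional choice such a reasoner might make, offered for illustration rather than as what the speakers actually use: score candidate entailments and minimize binary cross-entropy, exactly as in ordinary supervised deep learning. The labels and scores below are invented.

  # Binary cross-entropy over candidate entailed triples. Labels come from
  # a symbolic reasoner (1 = entailed, 0 = not); scores from the network.
  import numpy as np

  def bce(y_true, y_pred, eps=1e-7):
      y_pred = np.clip(y_pred, eps, 1.0 - eps)
      return -float(np.mean(y_true * np.log(y_pred)
                            + (1 - y_true) * np.log(1.0 - y_pred)))

  labels = np.array([1, 1, 0, 0])          # entailment ground truth
  scores = np.array([0.9, 0.7, 0.2, 0.4])  # predicted entailment probability
  print(bce(labels, scores))               # the quantity gradient descent lowers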

[13:27] ToddSchneider: From the perspective of implementation on/in a (non-quantum) computer, what is the difference between discrete and continuous?

[13:28] Rajat Shinde: Thank you very much!

[13:28] Frank Grimm: Great talks, thank you.

[13:28] Pascal Hitzler: thanks everybody

[13:28] Pascal Hitzler: feel free to inquire by email

[13:28] Pascal Hitzler: pascal.hitzler@gmail.com

Resources

Previous Meetings


Next Meetings
