From OntologPSMW

Revision as of 21:26, 11 April 2021 by KenBaclawski (Talk | contribs)

Session: Neuro-Symbolic Learning Ontologies
Duration: 1.5 hours
Date/Time: 07 Apr 2021 16:00 GMT
  (9:00am PDT / 12:00pm EDT / 5:00pm BST / 6:00pm CEST)
Convener: Ram D. Sriram
Track: C

Ontology Summit 2021 Neuro-Symbolic Learning Ontologies     (2)

Ontologies are a rich and versatile construct. They can be extracted, learned, modularized, interrelated, transformed, analyzed, and harmonized as well as developed in a formal process. This summit will explore the many kinds of ontologies and how they can be manipulated. The goal is to acquaint both current and potential users of ontologies with the possibilities for how ontologies could be used for solving problems.     (2A)

Agenda     (2B)

  • 12:00 - 12:30 EDT Henry Kautz, National Science Foundation, Toward a Taxonomy of Neuro-Symbolic Systems     (2B1)
    • Abstract: Deep learning gains much of its power to handle ambiguity and to reason about similarity through its use of vector representations. In order to perform input and output, however, deep learning systems must convert between vector and more traditional symbolic representations. Might symbols be employed not just at the periphery but within a deep learning system itself? We provide an overview of a number of different architectures for neural-symbolic systems that have appeared in the literature, and discuss their potential advantages and limitations.     (2B1A)
  • 12:30 - 13:00 EDT Amit Sheth, University of South Carolina, Semantics of the Black-Box: Can knowledge graphs help make deep learning systems more interpretable and explainable?     (2B2)
    • Abstract: The recent series of innovations in deep learning have shown enormous potential to impact individuals and society, both positively and negatively. The deep learning models utilizing massive computing power and enormous datasets have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interactions. However, the Black-Box nature of deep learning models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the system’s interpretability and explainability. Furthermore, deep learning methods have not yet been proven in their ability to effectively utilize relevant domain knowledge and experience critical to human understanding. This aspect is missing in early data-focused approaches and has necessitated knowledge-infused learning and other strategies for incorporating computational knowledge. Rapid advances in our ability to create and reuse structured knowledge as knowledge graphs make this task viable. In this talk, we will outline how knowledge, provided as a knowledge graph, is incorporated into the deep learning methods using knowledge-infused learning. We then discuss how this makes a fundamental difference in the interpretability and explainability of current approaches and illustrate it with examples relevant to a few domains.     (2B2A)
    • Bio: Prof. Amit Sheth is an Educator, Researcher, and Entrepreneur. He is the founding director of the university-wide AI Institute at the University of South Carolina. He is a Fellow of IEEE, AAAI, AAAS, and ACM. He has (co-)founded four companies, three of them by licensing his university research outcomes, including the first Semantic Search company in 1999 that pioneered technology similar to what is found today in Google Semantic Search and Knowledge Graph. He is particularly proud of the success of his 45 Ph.D. advisees and postdocs in academia, industry research, and entrepreneurship.     (2B2B)
  • 13:00 - 13:30 EDT Discussion     (2B3)
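Kautz's abstract above turns on the conversion between symbolic and vector representations at the periphery of a deep learning system. As a minimal illustrative sketch (hypothetical, not from either talk — the symbols and embedding values are made up, and a real system would learn the table by gradient descent), a lookup table maps discrete symbols to vectors, and a nearest-neighbor search by cosine similarity maps vectors back to symbols:

```python
import math

# Hypothetical vocabulary and fixed embedding table
# (in a real system these vectors would be learned).
symbols = ["cat", "dog", "car"]
embeddings = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.3, 0.1],
    "car": [0.0, 0.2, 0.9],
}

def symbol_to_vector(sym):
    """Symbolic -> vector: look up the symbol's embedding."""
    return embeddings[sym]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def vector_to_symbol(vec):
    """Vector -> symbolic: return the nearest symbol by cosine similarity."""
    return max(symbols, key=lambda s: cosine(embeddings[s], vec))

# Round trip: a symbol maps to a vector and back to itself.
v = symbol_to_vector("dog")
assert vector_to_symbol(v) == "dog"
```

Similar symbols (here, "cat" and "dog") end up with similar vectors, which is where the abstract's point about handling ambiguity and reasoning about similarity comes from; the open question the talk raises is whether such symbols can live inside the network rather than only at its input and output.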

Conference Call Information     (2C)

Attendees     (2D)

Discussion     (2E)

[12:08] RaviSharma: Welcome Dr Kautz     (2E1)

[12:10] RaviSharma: Dr Kautz - hope you will share your slides with us     (2E2)

[12:10] RaviSharma: What is the criterion for shallow or deep learning?     (2E3)

[12:16] RaviSharma: Dr Kautz - Concepts are often not well classified, or not complex enough to express the situation; so why are deep learning systems tied so heavily to concepts?     (2E4)

[12:17] RaviSharma: you are beginning to answer my question.     (2E5)

[12:26] RaviSharma: Is backpropagation simply iterative, or does learning improve with each pass?     (2E6)

[12:28] RaviSharma: What is n? an integer?     (2E7)

[12:31] RaviSharma: Tensors also apply to multidimensional spaces; are neural structures amenable to multidimensional tensor analysis, as with quantum states or general relativity?     (2E8)

[12:31] RaviSharma: Welcome Amit Sheth     (2E9)

[12:33] RaviSharma: There is a huge difference between being aware and knowing, or between cognitive and conscious states, for humans.     (2E10)

[12:35] RaviSharma: What is the difference between mindfulness and conscious thinking?     (2E11)

[12:38] RaviSharma: Is parallel processing an advantage in such a schema (attention)?     (2E12)

[12:47] RaviSharma: Dr Kautz - maybe the question could be "intellect examining thoughts from mind" and "cognitive thinking"?     (2E13)

[12:53] RaviSharma: Is there a way to capture in NL itself aggregates of NL that represent NL Understanding?     (2E14)

[12:54] RaviSharma: Please describe the transition between knowledge and understanding?     (2E15)

[12:56] Gary Berg-Cross: If we view the various alternatives as modular architectures with components, then we can imagine a neuro-cognitive system capable of using the various components - Neuro [symbolic] Neuro; symbolic Neuro (sub-symbolic); etc. They get evoked by cognitive situations.     (2E16)

[12:58] RaviSharma: In ML, "learning" is limited to the repeatability of learned entities; in what you are presenting, what is the difference between learning and understanding?     (2E17)

[13:01] Gary Berg-Cross: @Ravi, there is no guarantee that labeled concepts like "understanding" are coherent enough concepts to easily settle the question of difference. Additional concepts are needed to explain each phenomenon. It is easy, however, to posit that not all learning involves conscious understanding.     (2E18)

[13:02] Gary Berg-Cross: Have to leave for another meeting.     (2E19)

[13:04] RaviSharma: Amit - am I correct that you are addressing text (NL) and data-based systems to address both knowledge and understanding, and how these end objectives are related?     (2E20)

[13:07] RaviSharma: In K-IL you depend on some aspects of knowledge being a node in the KG, but in reality KGs are usually networks of entities and relationships, not the aggregates that knowledge nodes require. So what else do you need to combine with the KG?     (2E21)

[13:11] RaviSharma: Amit - kindly share your slides with Ram / Ken for putting on forum.     (2E22)

[13:14] RaviSharma: In relation to education-oriented (exam-oriented) learning, is there an easier use case compared to general learning?     (2E23)

[13:15] RaviSharma: feature cluster vs training set     (2E24)

[13:24] Sudha Ram: How can we incorporate the concept of analogies into the neuro-symbolic approach?     (2E25)

[13:40] RaviSharma: Thanks to Drs Henry Kautz and Amit Sheth for interesting talks.     (2E26)

Resources     (2F)

Previous Meetings     (2G)


Next Meetings     (2H)