Session: Neuro-Symbolic Learning Ontologies     (1)
Duration: 1.5 hours
Date/Time: 07 Apr 2021 16:00 GMT (9:00am PDT / 12:00pm EDT / 5:00pm BST / 6:00pm CEST)
Convener: Ram D. Sriram
Track: C

Ontology Summit 2021 Neuro-Symbolic Learning Ontologies     (2)

Ontologies are rich and versatile constructs. They can be extracted, learned, modularized, interrelated, transformed, analyzed, and harmonized, as well as developed in a formal process. This summit will explore the many kinds of ontologies and how they can be manipulated. The goal is to acquaint both current and potential users of ontologies with the possibilities for how ontologies can be used to solve problems.     (2A)

Agenda     (2B)

  • 12:00 - 12:30 EDT Henry Kautz, National Science Foundation, Toward a Taxonomy of Neuro-Symbolic Systems     (2B1)
    • Abstract: Deep learning gains much of its power to handle ambiguity and to reason about similarity through its use of vector representations. In order to perform input and output, however, deep learning systems must convert between vector and more traditional symbolic representations. Might symbols be employed not just at the periphery but within a deep learning system itself? We provide an overview of a number of different architectures for neural-symbolic systems that have appeared in the literature and discuss their potential advantages and limitations. (A minimal sketch of this symbol/vector conversion appears below the agenda.)     (2B1A)
  • 12:30 - 13:00 EDT Amit Sheth, Semantics of the Black-Box: Can knowledge graphs help make deep learning systems more interpretable and explainable?     (2B2)
    • Abstract: The recent series of innovations in deep learning have shown enormous potential to impact individuals and society, both positively and negatively. Deep learning models, utilizing massive computing power and enormous datasets, have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the black-box nature of deep learning models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the system's interpretability and explainability. Furthermore, deep learning methods have not yet proven able to effectively utilize the relevant domain knowledge and experience critical to human understanding. This aspect was missing in early data-focused approaches and has necessitated knowledge-infused learning and other strategies to incorporate computational knowledge. Rapid advances in our ability to create and reuse structured knowledge as knowledge graphs make this task viable. In this talk, we will outline how knowledge, provided as a knowledge graph, is incorporated into deep learning methods using knowledge-infused learning. We then discuss how this makes a fundamental difference in the interpretability and explainability of current approaches and illustrate it with examples relevant to a few domains. (A minimal illustrative sketch of knowledge infusion follows the agenda.)     (2B2A)
    • Bio: Prof. Amit Sheth is an educator, researcher, and entrepreneur. He is the founding director of the university-wide AI Institute at the University of South Carolina. He is a Fellow of the IEEE, AAAI, AAAS, and ACM. He has (co-)founded four companies, three of them by licensing his university research outcomes, including the first semantic search company in 1999, which pioneered technology similar to what is found today in Google Semantic Search and the Knowledge Graph. He is particularly proud of the success of his 45 Ph.D. advisees and postdocs in academia, in industry research, and as entrepreneurs.     (2B2B)
  • 13:00 - 13:30 EDT Discussion     (2B3)
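
A minimal, hypothetical Python sketch of the symbol/vector conversion that Kautz's abstract places at the periphery of a deep learning system: symbols are embedded into vectors, similarity-based processing happens in vector space, and a vector output is decoded back to the nearest symbol. The symbol set, embedding table, and dimensions are illustrative assumptions, not material from the talk.

    # Hypothetical illustration: symbol <-> vector conversion at the periphery
    # of a neural system, with similarity-based reasoning in vector space.
    import numpy as np

    rng = np.random.default_rng(0)
    symbols = ["cat", "dog", "car", "tree"]
    # Toy embedding table standing in for a learned input embedding layer.
    embeddings = {s: rng.normal(size=8) for s in symbols}

    def encode(symbol: str) -> np.ndarray:
        """Symbol -> vector: the input-side conversion."""
        return embeddings[symbol]

    def decode(vector: np.ndarray) -> str:
        """Vector -> symbol: output-side conversion via nearest neighbor (cosine)."""
        def cos(a, b):
            return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return max(symbols, key=lambda s: cos(vector, embeddings[s]))

    # Stand-in for the neural computation between the two conversions:
    # a perturbed vector still decodes to the intended symbol.
    noisy = encode("cat") + 0.1 * rng.normal(size=8)
    print(decode(noisy))  # recovers "cat"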
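
A minimal, hypothetical Python sketch of one simple form of knowledge infusion suggested by Sheth's abstract: entity vectors derived from a knowledge graph are combined with a learned input representation before a downstream classifier. The triples, entity embeddings, and dimensions are illustrative assumptions, not Sheth's implementation.

    # Hypothetical illustration of knowledge-infused learning: concatenate
    # knowledge-graph entity vectors with a neural text representation.
    import numpy as np

    rng = np.random.default_rng(1)

    # Tiny knowledge graph as (subject, relation, object) triples.
    triples = [("aspirin", "treats", "headache"),
               ("aspirin", "isA", "drug"),
               ("headache", "isA", "symptom")]
    entities = sorted({e for s, _, o in triples for e in (s, o)})
    # Stand-in KG embeddings (a real system might train TransE, RDF2Vec, etc.).
    kg_vec = {e: rng.normal(size=4) for e in entities}

    def infuse(text_repr, mentioned_entities):
        """Concatenate the neural representation with averaged KG entity vectors."""
        kg_part = np.mean([kg_vec[e] for e in mentioned_entities], axis=0)
        return np.concatenate([text_repr, kg_part])

    text_repr = rng.normal(size=8)      # pretend output of a text encoder
    features = infuse(text_repr, ["aspirin", "headache"])
    weights = rng.normal(size=features.shape[0])
    score = float(weights @ features)   # downstream classifier over infused features
    print(features.shape, round(score, 3))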

Conference Call Information     (2C)

Attendees     (2D)

Discussion     (2E)

Resources     (2F)

Previous Meetings     (2G)


Next Meetings     (2H)