From OntologPSMW

Session: Sargur Srihari
Duration: 1 hour
Date/Time: 11 March 2020 16:00 GMT
9:00am PDT / 12:00pm EDT
4:00pm GMT / 5:00pm CET
Convener: KenBaclawski
Track: How


Knowledge graphs (KGs), closely related to ontologies and semantic networks, have emerged in the last few years as an important semantic technology and research area. As structured representations of semantic knowledge stored in a graph, KGs are lightweight versions of semantic networks that scale to massive datasets such as the entire World Wide Web. Industry has devoted a great deal of effort to the development of knowledge graphs, and they are now critical to the functioning of intelligent virtual assistants such as Siri and Alexa. Among the research communities where KGs are relevant are Ontologies, Big Data, Linked Data, Open Knowledge Network, Artificial Intelligence, Deep Learning, and many others.     (2A)

Agenda     (2B)

A knowledge graph is based on Subject-Predicate-Object (SPO) triples. The SPO triples are combined to form a graph whose nodes represent entities (E), drawn from the subjects and objects, and whose directed edges represent relationships (R).     (2B2)
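The triple-to-graph construction above can be sketched in a few lines of Python (the example triples are invented for illustration):

```python
from collections import defaultdict

# Minimal sketch: a knowledge graph as a set of SPO triples, viewed as a
# directed, edge-labeled graph.
triples = [
    ("Siri", "developedBy", "Apple"),
    ("Alexa", "developedBy", "Amazon"),
    ("Apple", "headquarteredIn", "Cupertino"),
]

graph = defaultdict(list)   # subject -> list of (predicate, object) edges
entities, relations = set(), set()
for s, p, o in triples:
    graph[s].append((p, o))
    entities.update([s, o])   # nodes: subjects and objects
    relations.add(p)          # edge labels: predicates

# graph["Siri"] is [("developedBy", "Apple")]
```

Note that subjects and objects share one entity set, so the same node (here "Apple") can appear on either side of an edge.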

Knowledge Graphs typically adhere to some deterministic rules, such as type constraints and transitivity. They also include “softer” statistical patterns or regularities, which are not universally true but nevertheless have useful predictive power.     (2B3)
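A deterministic rule such as transitivity can be applied mechanically to the triples; a minimal sketch (the relation name and example triples are invented for illustration):

```python
# Compute the transitive closure of one relation over a set of SPO triples,
# repeatedly deriving (a, rel, c) from (a, rel, b) and (b, rel, c).
def transitive_closure(triples, rel):
    closed = set(triples)
    while True:
        derived = {(a, rel, c)
                   for (a, r1, b) in closed if r1 == rel
                   for (b2, r2, c) in closed if r2 == rel and b2 == b}
        if derived <= closed:          # no new facts: fixed point reached
            return closed
        closed |= derived

facts = {("Cupertino", "locatedIn", "California"),
         ("California", "locatedIn", "USA")}
closed = transitive_closure(facts, "locatedIn")
# The entailed triple ("Cupertino", "locatedIn", "USA") is now present.
```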

Probabilistic knowledge graphs incorporate statistical models for relational data. Triples are assumed to be incomplete and noisy. The joint distribution is modeled from a subset D ⊆ E × R × E × {0,1} of observed triples.     (2B4)
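The probabilistic view can be sketched as follows (an illustrative assumption, not the speaker's exact model: each candidate triple is a Bernoulli variable whose probability comes from a DistMult-style trilinear score through a logistic link; embeddings and seed are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 4, 2, 3
ent = rng.normal(size=(n_entities, dim))   # latent entity vectors
rel = rng.normal(size=(n_relations, dim))  # latent relation vectors

def prob_true(s, r, o):
    # Trilinear score sum(ent_s * rel_r * ent_o), squashed to (0, 1).
    return 1.0 / (1.0 + np.exp(-np.sum(ent[s] * rel[r] * ent[o])))

# Observed subset D: (subject, relation, object, label); label 0 records a
# triple observed to be false, reflecting noise and incompleteness.
D = [(0, 0, 1, 1), (1, 1, 2, 0), (2, 0, 3, 1)]

# Log-likelihood of the observed labels under the model.
log_lik = sum(np.log(prob_true(s, r, o) if y else 1.0 - prob_true(s, r, o))
              for s, r, o, y in D)
```

Training would adjust the latent vectors to raise this log-likelihood; unobserved triples then inherit predicted probabilities from the same score.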

There are two main types of models: latent feature models and Markov random fields (MRFs). Latent feature models can be trained using deep learning. MRFs can be derived from Markov Logic Representations of facts in a database. The talk will describe learning and inference using probabilistic knowledge graphs.     (2B5)
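A toy Markov random field in the spirit of Markov Logic can illustrate the second model type (an assumed example, not from the talk: two binary triple variables coupled by one weighted soft rule):

```python
import itertools
import math

# Variables: x1 = "capitalOf(A, B)", x2 = "locatedIn(A, B)".
# Soft rule: "capitalOf implies locatedIn", with weight w.
w = 2.0  # larger w pushes the soft rule toward a hard constraint

def potential(x1, x2):
    # Factor exp(w) when the implication x1 -> x2 is satisfied, else 1.
    return math.exp(w) if (not x1) or x2 else 1.0

# Partition function: sum of potentials over all joint states.
Z = sum(potential(x1, x2) for x1, x2 in itertools.product([0, 1], repeat=2))

# Conditional probability of locatedIn given that capitalOf holds.
p_x2_given_x1 = potential(1, 1) / (potential(1, 0) + potential(1, 1))
```

Because the rule is soft, the conditional probability is high but not 1; inference in a realistic MRF replaces this brute-force enumeration with approximate methods.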

Conference Call Information     (2C)

Attendees     (2D)

Discussion     (2E)

[12:09] RaviSharma: Sargur please share your slides on meeting page for downloads     (2E1)

[12:10] David Eddy: So... we still 100% ignore UN-NATURAL LANGUAGE? Un-Natural language simply doesn't exist?     (2E2)

[12:12] RaviSharma: Srihari - how many contexts do you generally need to parse to get the correct embedding in NLP?     (2E3)

[12:12] KenBaclawski: @RaviSharma: Hari will be sending me his slides after the talk and I will post them at that time.     (2E4)

[12:18] RaviSharma: Srihari - what is the Google size now?     (2E5)

[12:22] RaviSharma: Intuitively, how are tensorial elements reduced to vectors in query operation?     (2E6)

[12:27] RaviSharma: Srihari - how do you link probability in the tensor representation, similar to error bars? What is a normal distribution in tensor- or vector-based data: is it like a circle of confusion in multidimensional space?     (2E7)

[12:31] RaviSharma: Srihari - adjacency does it imply common nodes, but in tensor 3     (2E8)

[12:32] David Eddy: this elegant math reminds me of conversation with Paul Samuelson in context of the 2008 "kerfuffle"... "Well... we did train these fellows." Missing context here was since there are no measures / datasets about systems, that "variable" is "simply" left out of the equations.     (2E9)

[12:32] RaviSharma: Srihari - if the tensor elements are sparse, can diagonalization or an assumption of orthogonal symmetry help?     (2E10)

[12:35] RaviSharma: Srihari - what is the meaning of the theta parameter, and how do you identify it (perturbations)?     (2E11)

[12:42] RaviSharma: The RESCAL-ALS algorithm computes the RESCAL tensor factorization. The solution is a factor matrix and a core tensor: RESCAL factors a (usually sparse) three-way tensor X such that each frontal slice X_k is factored as X_k = A * R_k * A^T     (2E12)
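The factorization quoted above can be checked in a few lines of NumPy (toy sizes and random factors for illustration; this shows the slice structure and one least-squares update for the core given A, not the full RESCAL-ALS solver):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, K = 5, 2, 3                      # entities, latent rank, relations

A = rng.normal(size=(n, d))            # shared entity factor matrix
R = rng.normal(size=(K, d, d))         # core tensor: one d x d slice per relation
X = np.stack([A @ R[k] @ A.T for k in range(K)])   # three-way tensor, slice-wise

# ALS-style update for the core with A held fixed: the least-squares solution
# is R_k = pinv(A) @ X_k @ pinv(A).T, which recovers each slice exactly here.
A_pinv = np.linalg.pinv(A)
R_hat = np.stack([A_pinv @ X[k] @ A_pinv.T for k in range(K)])
```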

[12:43] RaviSharma: I searched the term     (2E13)

[12:43] TerryLongstreth: Can we have the URL of your website posted here?     (2E14)

[12:47] KenBaclawski: Hari Srihari's website: https://cedar.buffalo.edu/~srihari/     (2E15)

[12:51] David Eddy: As we would pester our high school calculus teacher, after he'd put up a particularly elegant proof... "Very nice... what's the practical value...?"     (2E16)

[12:52] janet singer: You said semantics and syntax are left behind. Are they implicitly present in the data being sufficiently clean, structured and consistent to begin with?     (2E17)

[12:53] RaviSharma: Thanks for how uncertainty is introduced, it seems similar to Bayesian decision in multivariate distribution?     (2E18)

[12:53] Mike Bennett: @David the potential is huge. Consider a triple store but instead of inert presence or absence of a relation you have the capacity to learn. Result is neural learning with explicit (discoverable) semantics.     (2E19)

[12:54] David Eddy: @Janet... if the semantics & syntax are stripped out... how useful can the "data" be?     (2E20)

[12:54] RaviSharma: mike - thanks     (2E21)

[12:55] David Eddy: @MikeB... but what's the value/relevance if (for instance) FIBO ignores the operational systems?     (2E22)

[12:57] Mike Bennett: @David completely different use cases. If I understand the math, this looks a lot like a brain. I see no overlap between what you can do with this and what you would do with a semantics standard like FIBO, which explicitly aims to lock down concepts for standardization purposes.     (2E23)

[12:59] RaviSharma: David - what is missing is the new data set for xray detectors     (2E24)

[13:01] TerryLongstreth: Is anyone exploring the Open World issues?     (2E25)

[13:04] ToddSchneider: Our speaker is back online.     (2E26)

[13:06] David Eddy: @Mike... one thing I'm trying to get closer to is how to "bridge the gap" between the elegance of clean ontologies & the messiness of decades of accumulated operational systems.     (2E27)

[13:12] RaviSharma1: thanks Srihari about a priori vs your description of not knowing the dataset till ML or NN tensor 3     (2E28)

[13:13] Mike Bennett: Is classification a matter of applying learning to the Generalization relation?     (2E29)

[13:13] janet singer: Judea Pearl shifted from focus on correlational Bayesian networks to causal reasoning     (2E30)

[13:14] RaviSharma: janet asked correlational vs causal and Bayesian criteria     (2E31)

[13:16] RaviSharma: Srihari said Markov could point us to causality?     (2E32)

[13:17] RaviSharma: cause is gravity for apple falling     (2E33)

[13:18] RaviSharma: Janet - can we discover cause by something missing in datasets or KGs     (2E34)

[13:19] RaviSharma: thanks Ken and others     (2E35)

[13:20] ToddSchneider: Meeting ends @13:20 EDT     (2E36)

Resources     (2F)

Previous Meetings     (2G)


Next Meetings     (2H)