Session Synthesis II
Duration: 1 hour
Date/Time: 01 April 2020 16:00 GMT
9:00am PDT / 12:00pm EDT
5:00pm BST / 6:00pm CEST
Convener: KenBaclawski


Knowledge graphs (KGs), closely related to ontologies and semantic networks, have emerged in the last few years as an important semantic technology and research area. As structured representations of semantic knowledge stored in a graph, KGs are lightweight versions of semantic networks that scale to massive datasets such as the entire World Wide Web. Industry has devoted a great deal of effort to the development of knowledge graphs, and they are now critical to the functioning of intelligent virtual assistants such as Siri and Alexa. Some of the research communities where KGs are relevant are Ontologies, Big Data, Linked Data, Open Knowledge Network, Artificial Intelligence, Deep Learning, and many others.     (2A)

Agenda     (2B)

Conference Call Information     (2C)

Attendees     (2D)

Proceedings     (2E)

[11:53] David Eddy: Ah, ha! Learned something new. Chat URL stays the same.     (2E1)

[12:03] David Eddy: walking on water & solving world hunger?     (2E2)

[12:06] BobbinTeegarden: @johns doesn't that mean that all ontologies are contextual (time being an aspect of context)?     (2E3)

[12:15] ToddSchneider: Are there any commonalities among the perspectives Kenneth listed?     (2E6)

[12:15] Gary: I take the implementation view as a bit of a "How" look. For example, how did you populate the KG? What sources? Did you use ML to extract info from documents?     (2E7)

[12:16] David Eddy: @Gary... what documents are NOT used to populate KG? Why not used?     (2E8)

[12:16] ToddSchneider: What's modeled usually reflects the intended purpose and/or requirements. It is also influenced by the implementation constraints.     (2E9)

[12:17] John Sowa: Bobbin, yes. Context is essential for interpreting anything in any version of language or logic.     (2E10)

[12:17] David Eddy: @Todd >>the intended purpose<< Why constrained to "models." Why not the AS IS?     (2E11)

[12:18] ToddSchneider: David, I don't understand your question.     (2E12)

[12:19] John Sowa: A true top level would be so general that it would provide very little guidance for finding any specific context.     (2E13)

[12:19] David Eddy: @Todd... models are just that: models. In the context of information systems, models are the stop prior to actually implementing the operational systems.     (2E14)

[12:24] RaviSharma: operational systems are built on assumptions or models     (2E15)

[12:25] RaviSharma: these are implementation view of underlying assumptions (model)     (2E16)

[12:26] RaviSharma: agree with David Eddy     (2E17)

[12:26] David Eddy: @JFS... agree with multiple ambiguous "meanings"     (2E18)

[12:28] John Sowa: David, to determine exactly what you mean, just put an adjective in front.     (2E19)

[12:28] RaviSharma: Gary, please show one page at a time; can't read     (2E20)

[12:29] John Sowa: For example, mental model, data model, engineering model, Tarski-style model...     (2E21)

[12:30] John Sowa: Carl Adam Petri (inventor of Petri nets) made the point that all those versions of models can be mapped to one another.     (2E22)

[12:31] John Sowa: you can start with a mental model, draw a sketchy diagram as a tentative design model.     (2E23)

[12:31] John Sowa: then you can make it as precise as you like (or need) by refining it.     (2E24)

[12:32] janet singer: Using ontological as an adjective is less pretentious: ontological theory?     (2E25)

[12:32] David Eddy: @JFS... in context of IT systems... conceptual, logical, physical & (hopefully eventually) production systems... all are different as they move across boundaries     (2E26)

[12:33] Gary: Sorry had to mute due to a phone.     (2E27)

[12:34] Gary: For the General Qs     (2E28)

Why: KGs can be used for a variety of information processing and management tasks, such as:     (2E29)

1) continuing the semantic enhancement of applications such as search, browsing, personalization, findability and access, advice & recommendation, explanation, and summarization.     (2E30)

2) improving integration/interoperation of data, including heterogeneous data found in diverse forms and from many sources.     (2E31)

3) empowering ML and NLP techniques through such things as commonsense enrichment. AI techniques, including ML algorithms that learn from pre-labeled examples, are acknowledging that data alone is not enough (ref. P. Domingos, "A Few Useful Things to Know about Machine Learning," Communications of the ACM 55(10): 78-87, 2012). As discussed in last year's summit (Niket Tandon, "Commonsense for Deep Learning"), there is a growing body of work seeking to demonstrate how, and how much, the use of domain knowledge improves the results or effectiveness of state-of-the-art ML and NLP techniques.     (2E32)

4) enabling the construction and use of Open Knowledge Networks, and     (2E33)

5) improving automation and supporting intelligent, more human-like behavior and activities that may involve conversations or question answering and interaction with agents such as robots.     (2E34)

[12:34] John Sowa: David, the Tarski-style model is the most precise and it is independent of any notation.     (2E35)

[12:35] RaviSharma: I am In India having trouble with phone but sent material to you     (2E36)

[12:35] Gary: How: KG construction is facilitated and supported by extraction from semi-structured and structured data available on the WWW (these include DBpedia & YAGO), along with unstructured data extracted by NLP systems like NELL, HTML web pages, books, and Microdata annotations on the Web (Google's Knowledge Vault), public collaborative data like Wikipedia and Freebase (Yahoo's KG), collaborative manual editors (Wikidata), etc.     (2E37)

A known issue is that in many of these sources the knowledge itself is not of high quality and is inconsistent with information from other sources. Thus, for many if not most situations, data curation is crucial to ensure the quality of the KGs and, ultimately, their end-use usability.     (2E38)
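The curation step described above can be sketched in code. This is a minimal, hypothetical illustration (none of these names come from the projects discussed): a KG is held as subject-predicate-object triples, and curation deduplicates triples merged from overlapping sources and flags conflicts where sources disagree on a single-valued property.

```python
# Hypothetical sketch of KG curation over merged sources: deduplicate
# (subject, predicate, object) triples and flag conflicting values
# for predicates that should be single-valued.
from collections import defaultdict

def curate(triples, single_valued=frozenset()):
    """Return (curated_triples, conflicts). A conflict is recorded when
    a single-valued predicate ends up with more than one object."""
    seen = set()
    values = defaultdict(set)
    curated, conflicts = [], []
    for s, p, o in triples:
        if (s, p, o) in seen:
            continue  # drop exact duplicates from overlapping sources
        seen.add((s, p, o))
        curated.append((s, p, o))
        values[(s, p)].add(o)
        if p in single_valued and len(values[(s, p)]) > 1:
            conflicts.append((s, p, sorted(values[(s, p)])))
    return curated, conflicts

raw = [
    ("Paris", "capitalOf", "France"),
    ("Paris", "capitalOf", "France"),   # duplicate from a second source
    ("Paris", "population", "2.1M"),
    ("Paris", "population", "2.2M"),    # inconsistent across sources
]
triples, conflicts = curate(raw, single_valued={"population"})
```

Real pipelines go much further (source trust scores, probabilistic fusion, as in Knowledge Vault), but the shape of the problem is the same: merge, deduplicate, detect disagreement.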

[12:35] John Sowa: All the other models must have a mapping to and from the Tarski-style model.     (2E39)

[12:35] ToddSchneider: Ravi, can you use your computer's audio?     (2E40)

[12:36] Gary: Use case specifics for individual enterprises and industries     (2E41)

The one use case we heard was on Rich Contexts by Paco Nathan. (Go through the Wh-Qs) So we can ask the W Qs about that project:     (2E42)

What: This project provides secure access to a variety of confidential, online data, for evidence-based policy making. The project is funded by Schmidt Futures, Alfred P Sloan Foundation, and the Overdeck Family Foundation, in partnership with Bundesbank, USDA, and others. Collaboration includes SAGE Pub, RePEc, ResearchGate, Digital Science, and others.     (2E43)

[12:36] John Sowa: Definition of the Tarski-style model: A set of entities and a set of relations among those entities.     (2E44)
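Sowa's definition can be made concrete in a few lines. This is a hypothetical sketch (the names `entities`, `relations`, and `holds` are ours, not any standard library): a finite Tarski-style model is a set of entities plus a set of relations over them, and an atomic sentence holds exactly when its tuple is in the named relation.

```python
# Hypothetical sketch of a finite Tarski-style model: a set of
# entities and a set of relations, each relation a set of tuples
# over those entities.
entities = {"alice", "bob", "report"}
relations = {
    "authored": {("alice", "report")},
    "reviewed": {("bob", "report")},
}

def holds(relation, *args):
    """True iff the tuple of arguments is in the named relation."""
    return tuple(args) in relations.get(relation, set())
```

Viewing each relation's tuple set as rows gives the "tables" reading; viewing the binary tuples as edges gives the "graph" reading Sowa mentions below, which is why the two presentations are interchangeable.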

[12:36] Gary: Why: Empirical research and evidence-based policy decisions increasingly rely on a variety of data, including structured and unstructured data and microdata. Research publications are well referenced and findable; however, structured information on data usage is less available. The Rich Contexts project aspires to change this by building a data-centric ecosystem of linked data with rich context and a community around microdata. One goal is to provide better means of search and discovery for social science researchers and agency analysts.     (2E45)

[12:37] Gary: Why a KG for Rich Contexts?     (2E46)

Allow flexibility for metadata representation     (2E47)

Measure metadata quality     (2E48)

Prepare features for ML models     (2E49)

Build recommenders for experts, topics, tools, etc.     (2E50)

Engage the public with automated data inventories     (2E51)

Recommend configurations to new analysts     (2E52)

Identify which datasets get used with others     (2E53)

Quantify impact of datasets on policy     (2E54)

[12:37] John Sowa: You can represent a T-S model in a graph (possibly infinite) or in a possibly infinite set of tables.     (2E55)

[12:37] RaviSharma: feeble phone voice hope you can hear me     (2E56)

[12:38] ToddSchneider: Ravi, can you use your computer's audio?     (2E57)

[12:38] David Eddy: My interest is in supporting poorly documented operational systems... where the people who understand them are headed to retirement, taking their tacit knowledge with them.     (2E58)

[12:38] ToddSchneider: What is it about a 'graph' that's important?     (2E59)

[12:39] Gary: I will clean up my working notes, maybe flesh these out a bit and create a Google doc with that material....or if there is a place on the Ontolog and Summit site I can place them there if people can edit there.     (2E60)

[12:40] John Sowa6: A global T-S model can include smaller T-S models for every context that anybody might consider for any purpose.     (2E61)

[12:40] John Sowa6: Todd, graphs and tables are equally important.     (2E62)

[12:40] David Eddy: @Todd... **** IF **** well populated & maintained, they can be useful in compressing overwhelming details into small "space." Then one (at least in theory) can choose to dive deeper into supporting provenance & details.     (2E63)

[12:40] John Sowa6: The only difference between graphs and tables is the way people look at them.     (2E64)

[12:40] ToddSchneider: John, 'T-S' is to be interpreted as 'Tarski-Style'?     (2E65)

[12:41] John Sowa6: Todd, yes. I got tired of typing "Tarski-style".     (2E66)

[12:42] David Eddy: Tarski-style ==> TS... human nature to compress/abbreviate.     (2E67)

[12:43] David Eddy: I find 205 meanings for TS...     (2E68)

[12:44] David Eddy: In the context of this group & discussion, TS makes sense. Remove the context & not so much sense.     (2E69)

[12:44] ToddSchneider: I'm acronym challenged. In the context of effective communications, though tedious, acronyms should be avoided.     (2E70)

[12:44] David Eddy: @Todd >> acronyms should be avoided. << Agreed, but good luck with avoiding TLAs     (2E71)

[12:44] John Sowa6: David, every word in every language has a different meaning in every context in which it is used.     (2E72)

[12:45] David Eddy: My teeny tiny largely acronyms "dictionary" is 2,000 terms ==> 68,000 meanings.     (2E73)

[12:46] David Eddy: @JFS... context is often one of the first casualties to disappear     (2E74)

[12:47] David Eddy: @anyone... how does NLP address / deal with acronyms?     (2E75)

[12:48] David Eddy: << silence >>     (2E76)

[12:50] Gary: One distinction we might make is the scope of a KG. Ravi showed Enterprise KGs. NSF is interested in more open ones, maybe domain KGs as part of OKNs.     (2E77)

[12:51] janet singer: @Todd: Graphs at 3 different levels have come together, each with own kind of benefits 1) database level 2) conceptual theory level 3) presentation level for learning and discovery     (2E78)

[12:52] George Hurlburt: In the real world, many systems are complex and dynamic; graphs represent this well. Properly annotated graphs can more effectively portray state changes implicit in dynamic systems. They need not be limited to static representations, but rather can convey near real-time behaviors. From an operational viewpoint, controlled vocabularies may be necessary to enhance utility and contextual understanding.     (2E79)

[12:53] BobbinTeegarden: What's great about a graph is you can see the shape of knowledge, not just single concepts. Shapes (right brain stuff).     (2E80)

[12:53] David Eddy: KGs are "new" & sexy. What else do you want? There will be something sexier after KGs     (2E81)

[12:54] PeteRivett: graphs are different in that they are more amenable to federation - not everything need be in one database     (2E82)

[12:55] Gary: Let's ask Ernest Davis how he defines and understands "knowledge" when he presents.     (2E83)

[12:55] PeteRivett: visualization is nothing to do with it IMHO     (2E84)

[12:56] BobbinTeegarden: Shapes: clusters, unpresupposed relationships, perspectives, contexts, density and complexity...     (2E85)

[12:56] Gary: Visualization has to do with human understanding. Like, "what did you understand looking at this graph?"     (2E86)

[12:57] AlexShkotin: @Todd, the world itself is a dynamic graph.     (2E87)

[12:58] ToddSchneider: Alex, are we getting into epistemology?     (2E88)

[12:58] PeteRivett: Likewise, sparseness is orthogonal to graphs; you can do sparse with columnar databases, for example.     (2E89)

[12:58] AlexShkotin: Todd, it's just observation.     (2E90)

[13:02] janet singer: From Leia: Graph orientation provides a bridge between the human and computer     (2E91)

[13:02] Gary: Have to leave.     (2E92)

[13:02] Leia Dickerson: @Janet--No-- It was Jessica Kent.     (2E93)

[13:03] AlexShkotin: It should be mentioned that a graph is a geometrical figure, and what we are usually talking about is graph topology.     (2E94)

[13:05] janet singer: @Leia: thanks, I was only looking at chat rather than zoom window so assumed it was you     (2E95)

[13:07] BobbinTeegarden: Ram, are you suggesting each of us do an ontology of our view of a knowledge graph?     (2E96)

[13:08] Ram D. Sriram: Knowledge graphs should be viewed at two levels -- a visual graphical perspective and a computational perspective. The visual perspective will provide us conceptual views of knowledge, whereas the computational views will aid us in inferencing.     (2E97)

[13:17] janet singer: We need a commonsense general perspective on an actor's use of data (as in the Peirce cycle, OODA loop, semantic technologies value proposition diagrams, etc.) that provides the shared context for grounding any other POV we want to highlight     (2E98)

[13:20] janet singer: Our general version might show how various technologies, other people, etc., can also play roles in that cycle     (2E99)

Resources     (2F)

Previous Meetings     (2G)

Next Meetings     (2H)