
Revision as of 17:56, 4 March 2019 by KennethBaclawski (Talk | contribs)

    (1)
Session Commonsense
Duration 1.5 hours (90 minutes)
Date/Time Jan 23 2019 17:00 GMT
9:00am PST/12:00pm EST
5:00pm GMT/6:00pm CET
Convener GaryBergCross and TorstenHahmann

Contents

Ontology Summit 2019 Commonsense Session 1     (2)

An early goal of AI was to teach or program computers with enough factual knowledge about the world that they could reason about it the way people do. The starting observation is that every ordinary person has "commonsense": basic knowledge about the real world that is common to all humans. Spatial and physical reasoning are good examples. This is the kind of knowledge we want to endow our machines with, for several reasons, including as part of conversation and understanding. Awareness of human perceptual and memory limitations, for example, might be an important thing for a dialog system to have. Early on, this was described as giving machines a capacity for "commonsense". However, early AI demonstrated that the nature and scale of the problem was difficult: people seem to need a vast store of everyday knowledge for common tasks, and a variety of knowledge is needed to understand even the simplest children's story, a feat that children master by what seems a natural process. One resulting approach was an effort like Cyc to encode a broad range of human commonsense knowledge as a step toward understanding text, which would bootstrap further learning. Some believe that today this problem of scale can be addressed in new ways, including via modern machine learning. But these methods do not build in an obvious way to provide machine-generated explanations of what they "know." Since fruitful explanations appeal to people's understanding of the world, commonsense reasoning would be a significant portion of any computer-generated explanation. How hard is this to build into smart systems? One difficult aspect is making sure the explanations can be presented at multiple levels of abstraction, i.e., from not too detailed to tracing exact justifications for each inference step. This track will explore these and other issues in light of current ML efforts and best practices for AI explanations.     (2A)

Agenda     (2B)

Conference Call Information     (2C)

  • Date: Wednesday, 23-January-2019     (2C1)
  • Start Time: 9:00am PST / 12:00pm EST / 6:00pm CET / 5:00pm GMT / 1700 UTC     (2C2)
  • Expected Call Duration: ~1.5 hours     (2C3)
    • Instructions: once you have access to the page, click on the "settings" button, and identify yourself (by modifying the Name field from "anonymous" to your real name, like "JaneDoe").     (2C4A)
    • You can indicate that you want to ask a question verbally by clicking on the "hand" button, and wait for the moderator to call on you; or, type and send your question into the chat window at the bottom of the screen.     (2C4B)
  • This session, like all other Ontolog events, is open to the public. Information relating to this session is shared on this wiki page.     (2C5)
  • Please note that this session may be recorded, and if so, the audio archive is expected to be made available as open content, along with the proceedings of the call to our community membership and the public at-large under our prevailing open IPR policy.     (2C6)

Attendees     (2D)

Proceedings     (2E)

[12:00] Douglas R. Miles: https://zoom.us/j/689971575     (2E1)

[12:00] Gary: Do we have Michael's slides??     (2E2)

[12:03] AlexShkotin: Thank u Douglas     (2E3)

[12:13] Gary: Physical Turing Test involves the idea that an agent being "situated" in the physical world and learning via interaction is important.     (2E4)

[12:22] Gary: As I think of the scenarios, I get a sense of how many extant formal ontologies that might include these "objects" lack the semantics to model what is implied in the scenario.     (2E6)

[12:25] Gary: @Michael will this work lead to some changes in Process Specification Language or will you use domain process modules below PSL?     (2E7)

[12:31] DavidEddy: Is "PSL" as used here related to UMichigan PSL/PSA early CASE tooling?     (2E8)

[12:31] ToddSchneider: David, no. PSL == Process Specification Language (an ISO standard).     (2E9)

[12:31] MichaelGruninger: @DavidEddy: PSL is the Process Specification Language (ISO 18629)     (2E10)

[12:36] Gary: I hope that we will take up your evaluation Q along with the relations to upper ontologies in our synthesis session. I think there is also a Q as to how we might enhance knowledge engineering methods to incorporate commonsense K.     (2E11)

[12:40] Mark Underwood: Greetings Michael - Sympathies for the heating #fail. Had to replace ours last year after an oil leak. Alas, both supporting nonrenewables     (2E12)

[12:43] Gary: Q from Benjamin Grosof in the Zoom chat: what formal KR/logic for specifying these ontologies in PRAxIS? Continued: and does it handle the need for defeasibility (for representing state fluent change)?     (2E14)

[12:47] ToddSchneider: Is there an assumption that an explanation provides sufficient evidence for accepting it as correct?     (2E15)

[12:48] Mark Underwood: I think even microtheory mappings are nontrivial. There's an ISO onto standard for some elements of auto engineering (thanks, EU) but the commonsense notion / natural language expressions of "is the car still running?" require some bridging. Lay notions and expressions tend to be left out of microtheories (especially standardized a la ISO) that are designed to support domain specialists     (2E16)

[12:49] Gary: Drill down may also be called progressive deepening of explanation.     (2E17)

[12:50] Gary: Focus means staying on topic in a sense that Grice might use.     (2E18)

[12:54] MichaelGruninger: @BenjaminGrosof: All of the PRAxIS ontologies are specified in Common Logic. Defeasible reasoning about the effects of processes is one of the big reasoning problems that we will need to tackle     (2E19)

[12:56] BruceBray: I didn't catch the name of the explainable AI conference he mentioned this past summer     (2E20)

[12:57] John Sowa: Re defeasible: There are two approaches: Nonmonotonic logics and belief (or theory) revision.     (2E21)

[12:58] Mark Underwood: this is a great list ("Apps and Benefits of Explanation"), and also a reminder of how hard this gets     (2E22)

[12:59] John Sowa: All (or nearly all with a few special case exceptions) methods of nonmonotonic logic have a corresponding method of belief revision.     (2E23)

[12:59] Gary: I hadn't read of these explanations in KB building. Are there applications you can reference?     (2E24)

[13:01] John Sowa: Advantage of belief revision: You can view it in terms of a lattice of all possible theories specified in some conventional (monotonic) logic.     (2E25)

[13:02] John Sowa: The revision method corresponds to a walk through the lattice to find a revised theory.     (2E26)
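John Sowa's picture of belief revision as a walk through a lattice of theories can be sketched in code. The following Python toy is an illustration added here, not from the session: theories are sets of axiom strings ordered by subset inclusion, the consistency test is a deliberately crude stand-in for a real reasoner (it only detects an atom together with its literal negation), and `revise` walks down the lattice by dropping as few old axioms as needed to accommodate a new one.

```python
# Toy sketch of belief revision as a walk through the lattice of theories
# ordered by subset inclusion. Assumptions: axioms are plain strings, and a
# theory is "inconsistent" only if it contains both 'x' and 'not x'.
from itertools import combinations

def consistent(theory):
    """Crude consistency test: fail iff some axiom's literal negation
    ('not ...') is also present in the theory."""
    return not any(f"not {a}" in theory for a in theory)

def revise(theory, new_axiom):
    """Add new_axiom, then descend the lattice one level at a time,
    returning the largest consistent subtheories that keep new_axiom."""
    expanded = theory | {new_axiom}
    if consistent(expanded):
        return [expanded]
    old = sorted(theory)
    for k in range(1, len(old) + 1):          # drop k old axioms at a time
        survivors = [expanded - set(drop)
                     for drop in combinations(old, k)
                     if consistent(expanded - set(drop))]
        if survivors:
            return survivors                  # maximal consistent revisions
    return [{new_axiom}]

# Classic defeasibility example: learning that Tweety does not fly forces
# the retraction of 'flies(tweety)' but preserves 'bird(tweety)'.
revised = revise({"bird(tweety)", "flies(tweety)"}, "not flies(tweety)")
```

The walk here is breadth-first by number of retracted axioms, which mirrors the usual minimal-change intuition: prefer revisions closest in the lattice to the original theory.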

[13:03] Mark Underwood: Tableau? Not sure about that. Speaking as a former Qlik developer, I think the presentation layer of those tools relies upon a set of implicit (less often, explicit) models which the tools neither expose, facilitate, nor allow the sort of elaboration and interaction you'd want.     (2E27)

[13:05] BruceBray: Thanks @Benjamin - answered my question about XAI workshop: http://ijcai-18.org/     (2E28)

[13:06] Mark Underwood: Aside: In the US, explanation sits somewhere below privacy in priority. In other words, not high.     (2E29)

[13:07] John Sowa: Re focus: Linguists talk about topic/focus. The topic corresponds to a particular theory or microtheory.     (2E30)

[13:07] MichaelGruninger: I need to leave now -- thank you for the opportunity to present!     (2E31)

[13:08] John Sowa: The focus corresponds to a particular concept or instance in the current theory.     (2E32)

[13:08] Mark Underwood: Mike Bennett - He's stealing from our track     (2E33)

[13:08] Gary: EDM work: We have a session on explanation in Finance, so this is a good thing to know.     (2E34)

[13:09] Mark Underwood: Gary: We're on it. Hope to get a speaker from the FIBO crew     (2E35)

[13:10] Gary: From Benjamin's example one can see the degree of commonsense involved in here as you would expect also from anything involving NL Understanding.     (2E36)

[13:14] MikeBennett: @Mark indeed. I hadn't thought of the Reg W PoC but there are a couple of others I want to try and get a presentation on. Or I can talk to FIBO itself (having originated it) but not sure what the explanation angle is. Perhaps that concept ontologies (concept models) have explanatory power.     (2E37)

[13:18] Gary: I take it that the system knows something about affiliate, subsidiary and the relations "controlled by".     (2E38)

[13:18] Gary: Word of the day: Humagic.     (2E39)

[13:20] MikeBennett: @Gary not sure. In the early incarnations of FIBO we looked to have semantic grounding of concepts in semantic primitives under an upper ontology, but this doesn't translate into OWL/Reasoner friendly stuff which is what FIBO now is. Can consider whether non-UO grounded semantics (loosely, correspondence theory as distinct from semantic grounding) still has the effect you are looking for. Either way, to say a system 'knows' anything is begging a whole string of questions anyway.     (2E40)

[13:22] MikeBennett: @Gary so in this case, the relations like Affiliate are grounded in and extend ownership and control primitive relations. These could be further grounded in Searle-based social constructs, but we did not do that in the earlier FIBO, that's a direction I wanted to take things later.     (2E41)

[13:22] Gary: Slide 25 answered by previous Q on defined concepts.     (2E42)

[13:24] Gary: @MikeB and MarkU You might want to invite Benjamin back for your session for some more detail.     (2E43)

[13:24] MikeBennett: Good idea     (2E44)

[13:28] DavidEddy: The implicit assumption that a word/term has a single universal meaning to all parties is a bit thin...     (2E45)

[13:31] MikeBennett: I wonder if the matter of grounding of concepts (as above) is relevant to the Explanation topic?     (2E46)

[13:31] Gary: Narrative is the topic of the next session but the Financial explanation topic is after that.     (2E47)

[13:32] DavidEddy: Easy access to "glossary" is right direction.     (2E48)

[13:32] John Sowa: @David, an example of an ambiguous headline from the 1980s: "British left waffles on Falkland Islands."     (2E49)

[13:32] Gary: Commonsense Session 2 (March 6th): Pascal Hitzler (Wright State University), "Using micro theories for reasoning and explanations", and Niket Tandon (Allen Institute for AI), "Automatic acquisition of commonsense knowledge"     (2E50)

[13:33] DavidEddy: Example of ambiguity... my "glossary" with an average of 34 meanings for a collection of 2,000 terms.     (2E51)

[13:33] DavidEddy: Noted in context of Professor Gruninger's "I love acronyms..." Don't we all.     (2E52)

[13:34] John Sowa: @David, 34 is an inadequate approximation to infinity.     (2E53)

[13:34] DavidEddy: As always, seen through George Miller's Ambiguous Words - http://www.kurzweilai.net/ambiguous-words     (2E54)

[13:35] DavidEddy: @John... I'm trying to not scare people.     (2E55)

[13:35] Mark Underwood: Thanks Michael, Benjamin, Gary - Good as always     (2E56)

[13:35] TorstenHahmann: Thanks to Michael and Benjamin for the great presentations!     (2E57)

[13:37] ToddSchneider: Meeting ends @13:37 EST     (2E58)

Resources     (2F)

Previous Meetings     (2G)


Next Meetings     (2H)