From OntologPSMW

Session Planning
Duration 1.5 hours (90 minutes)
Date/Time Oct 25 2017 16:00 GMT
9:00am PDT/12:00pm EDT
6:00pm CEST/5:00pm BST
Convener JohnSowa

Contents

Ontology Summit 2018 Linguistic Contexts     (2)

Agenda     (2A)

This session will examine the use of ontologies for specifying linguistic contexts. The session convener will be John Sowa. Slides in PDF format. Audio Recording.     (2A1)

Related reading:     (2A2)

Conference Call Information     (2B)

    • Instructions: once you have access to the page, click on the "settings" button and identify yourself (by modifying the Name field from "anonymous" to your real name, like "JaneDoe").     (2B5A)
    • You can indicate that you want to ask a question verbally by clicking on the "hand" button, and wait for the moderator to call on you; or, type and send your question into the chat window at the bottom of the screen.     (2B5B)
  • This session, like all other Ontolog events, is open to the public. Information relating to this session is shared on this wiki page.     (2B6)
  • Please note that this session may be recorded, and if so, the audio archive is expected to be made available as open content, along with the proceedings of the call to our community membership and the public at-large under our prevailing open IPR policy.     (2B7)

Attendees     (2C)

Proceedings     (2D)

[12:06] Jeremy B3nszy: Where are the slides?     (2D2)

[12:07] KenBaclawski: Link to the slides: http://bit.ly/2xnHLdy     (2D3)

[12:07] Jeremy B3nszy: Thank you     (2D4)

[12:07] Gary Berg-Cross: Outline of slides     (2D5)

  • 1. Contexts in natural languages: Literally, a context is text that accompanies a text. More generally, the context may be any background knowledge that helps explain a text.     (2D6)
  • 2. Situation semantics: Situation semantics (Barwise and Perry 1983) is a version of context theory that was developed at Stanford (CSLI).     (2D7)
  • 3. Representing contexts in logic: FOL and other logics can represent contexts. The very general IKL extensions to Common Logic can be adapted to other versions of logic.     (2D8)

[12:08] BobbinTeegarden: Could someone please also follow along in BlueJeans?     (2D10)

[12:13] David Whitten: Slide 4: John is explaining ambiguous "it" used in the cartoon.     (2D11)

[12:14] David Whitten: Since "it" is ambiguous, the girl is appealing to a higher power "MOM" to evaluate whether her brother's resolution of "it" as the box is appropriate.     (2D12)

[12:15] janet singer: On slide 4, how do four kinds of contexts relate to linguistics and pragmatics?     (2D13)

[12:16] janet singer: And slide 3 says 3 kinds of contexts     (2D14)

[12:16] David Whitten: slide 5: similar ambiguity for "moving" in cartoon. Explaining issues and context in this cartoon.     (2D15)

[12:17] David Whitten: three levels of context for a text: syntax, semantics, pragmatics.     (2D16)

[12:18] David Whitten: slide 6: context for toddler is an imaginary context where toys are really eating, even though in reality they cannot eat.     (2D17)

[12:19] David Whitten: Some robots (perhaps those that plan) need to have a non-reality context to reason about the future and understand language.     (2D18)

[12:20] David Whitten: That intent is hard to handle is shown by the cartoon with the boy, the cookie, and the box. The boy's intent is not the same as the girl's intent, so faced with the same sentence they will resolve "it" differently.     (2D19)

[12:21] David Whitten: Slide 7 & 8: modality is a different kind of logic that shapes how interpretations are resolved.     (2D20)

[12:22] JackRing: #8 is misleading. the image of Pierre must not be identical to the image of Pierre in Marie's mind.     (2D21)

[12:23] David Whitten: Slide 9: logics can be interpreted in terms of a real model or a possible world. The main mathematical theory for the models and interpretations starts from Tarski's math model.     (2D22)

[12:24] David Whitten: in Slide 9, there is not a single possible world, but a family (possibly a lattice?) of possible worlds.     (2D23)

[12:25] Gary Berg-Cross: Slide 10     (2D24)

[12:25] David Whitten: computers can create virtual possible worlds by including or excluding sets of propositions, which include facts or laws.     (2D25)

[12:26] David Whitten: a law would be necessarily true (the modal operator of necessity), whereas a fact would be true because the system used classical logic to determine that it is true.     (2D26)

[12:28] Jeremy B3nszy: How would images be the main currency of the mind?     (2D27)

[12:29] David Whitten: modal logic (Leibniz, Carnap) says a law is necessarily true when it is true in every possible world in the family of possible worlds.     (2D28)

[12:30] David Whitten: Kripke had rules of accessibility to connect the possible worlds. This may need to handle families of possible worlds which are infinite in size.     (2D29)

[12:32] David Whitten: The possible worlds are connected into a general graph, not into a DAG or Lattice.     (2D30)
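The last few notes (worlds as sets of propositions, a Kripke accessibility relation forming a general graph, necessity as truth in every accessible world) can be sketched in a few lines of Python. The worlds, propositions, and accessibility edges below are invented purely for illustration, not taken from the talk:

```python
# Minimal Kripke-semantics sketch: each world is a set of true propositions,
# and accessibility is a general directed graph (invented toy data).
worlds = {
    "w1": {"p", "q"},
    "w2": {"p"},
    "w3": {"p", "r"},
}

# Accessibility relation: which worlds each world can "see".
# The w1 <-> w2 cycle makes this a general graph, not a DAG or lattice.
access = {
    "w1": {"w2", "w3"},
    "w2": {"w1"},
    "w3": {"w3"},
}

def necessarily(prop, world):
    """Box: prop is necessary at `world` iff it holds in every accessible world."""
    return all(prop in worlds[w] for w in access[world])

def possibly(prop, world):
    """Diamond: prop is possible at `world` iff it holds in some accessible world."""
    return any(prop in worlds[w] for w in access[world])

print(necessarily("p", "w1"))  # True: p holds in w2 and in w3
print(necessarily("q", "w1"))  # False: q fails in w2
```

Under this reading, a "law" is a proposition that `necessarily` holds from a given world, while a "fact" need only hold in the world at hand.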

[12:33] Gary Berg-Cross: slide 12     (2D31)

[12:34] David Whitten: Situation semantics is described in many ways. The infinities are problematic for computers. John McCarthy came up with a situation calculus in the early 1960s. Barwise and Perry also argued for a particular situation method of computing. Barwise & Seligman in 1987 moved to other work. Basic Situation Theory is not an active topic of research, but it still has many possibilities.     (2D32)

[12:36] David Whitten: People can create new situations (such as the hill in the second cartoon) which allow things that aren't true in the current context but which are true in the new situation.     (2D33)

[12:37] Cory Casanave: Why must situation be anchored to agents? A solar system no one is observing would still be a situation.     (2D34)

[12:37] David Whitten: the entailment symbol (a vertical bar with an equals sign joined on its right side) is sometimes called the "double turnstile"     (2D35)

[12:39] janet singer: situations grow?     (2D36)

[12:42] David Whitten: double bars around the Greek "phi" stand for the phrase which expresses a relation between the speaker's discourse and the described situations. This is from 1983, and only used by aficionados.     (2D37)

[12:42] Jeremy B3nszy: What is a discourse situation vs a described situation?     (2D38)

[12:43] David Whitten: The failure to define what a "situation" represented by "phi" is kept this mathematical notation from spreading.     (2D39)

[12:45] David Whitten: I think the discourse situation must be the context in which the speaker made a statement about the "phi" relations and the described situation is one that is "seen as" a result of the relation. Since it is a relation, there seems to not be an explicit temporal context, but personally, I would find this more understandable.     (2D40)
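The notation under discussion can be written out. In Barwise and Perry's situation semantics, as summarized in these notes, the discourse situation d (roughly: speaker, hearer, time, and place of the utterance) is related by the uttered expression φ to the described situation e; the exact typography varies by source, but the "double bars" form is:

```latex
% Discourse situation d, utterance \varphi, described situation e
d \,\|\varphi\|\, e
```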

[12:46] David Whitten: Slide 16: shows a complex situation which is used in evaluation of a patient who describes that situation to determine if the patient notices all the activities present.     (2D41)

[12:46] janet singer: How can the woman maintain discipline if she can't wash the dishes competently? : )     (2D42)

[12:47] David Whitten: Slide 17: an attempt to describe in text the complex situation in slide #16     (2D43)

[12:47] David Whitten: Slide 18 tries to reduce the picture to a controlled English expression of the situation.     (2D44)

[12:48] David Whitten: The picture in Slide #16 is best interpreted when knowledge of intentions is available, but no intentions are explicitly detailed in the picture.     (2D45)

[12:50] David Whitten: A "fixed", unchanging view of the cartoons as a group of selected space-time regions that are connected together (presumably by cause-effect and time) is still too simplistic.     (2D46)

[12:51] David Whitten: The situation with the child and the toys depends on the attitude and intentionality of the child and the mother.     (2D47)

[12:53] David Whitten: Slide #20 describes the amount of background knowledge needed to understand the conversation, and resolving ambiguities in interpretation of the separate physical statements which make up the conversation.     (2D48)

[12:54] David Whitten: Situation semantics hasn't met the needs of formalizing the background knowledge and inferences made by participants in the conversation.     (2D49)

[12:55] David Whitten: Keith Devlin has done research in these issues.     (2D50)

[12:57] David Whitten: Every time background knowledge was stated, it brought in new knowledge that needed to be considered when interpreting the conversational phrases.     (2D51)

[12:57] David Whitten: Context in reading research papers involves understanding jargon and background information to interpret the jargon.     (2D52)

[13:00] David Whitten: Conceptual Graphs have condensed some connectives in background information to simple pre-defined links (IMAG=image, SCR=description, STMT=statement) which connect encodings of the specialized link targets.     (2D53)

[13:00] JackRing: #23 A mouse is fleeing a cat. A mouse is winning a race.     (2D54)

[13:00] David Whitten: Charles S. Peirce used logic to talk about meta-language (language about language) from natural language.     (2D55)

[13:01] JackRing: #24 Is the girl doing the wishing?     (2D56)

[13:01] David Whitten: Complexity is inevitable when doing this work. This has been true since C.S. Peirce in 1898.     (2D57)

[13:02] David Whitten: Tarski added concepts of "meta language" to allow logic about logic.     (2D58)

[13:02] David Whitten: Tarski differentiated formal languages from simple language statements.     (2D59)

[13:04] David Whitten: simple language statements were the universe of discourse, but the meta-language added truth value functions on the original U of D, and then a meta-meta-language to talk about even more meta-statements.     (2D60)

[13:04] David Whitten: (that was slide 25)     (2D61)
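The Tarskian hierarchy in the notes above is usually summarized by the T-schema: the truth predicate for sentences of an object language must be defined in a metalanguage, the metalanguage's own truth predicate requires a meta-metalanguage, and so on.

```latex
% Tarski's T-schema: the metalanguage predicate True applies to a
% name (quotation) of the object-language sentence \varphi.
\mathrm{True}(\ulcorner \varphi \urcorner) \;\leftrightarrow\; \varphi
```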

[13:06] David Whitten: Slide 26 elaborates various languages and their connection to Common Logic and other computer languages to the system implementing the CL. This seems to conflate Common Logic with the system implementing it.     (2D62)

[13:07] David Whitten: CLCE (Common Logic Controlled English) is a limited English to use statements and variables to express a common logic sentence.     (2D63)

[13:08] Jeremy B3nszy: David, how is change taken into account in these models of situations?     (2D64)

[13:09] David Whitten: Slide #27 & Slide #28 - show various ways of expressing the same situation re Bob's trip to St.Louis via transport by a Chevy.     (2D65)

[13:12] David Whitten: Each system requires a connection to every other system.     (2D66)

[13:12] David Whitten: CLCE assumed a vocabulary and a set of specifications related to the words. This includes a sample graph and pattern for converting into CLIF.     (2D67)

[13:13] David Whitten: For many words there is a more complex schema/pattern, such as mapping a noun used as a verb, like "drive", to a noun specifying that a "drive" has occurred.     (2D68)
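The verb-schema idea in the notes above can be illustrated with a toy sketch. This is not the actual CLCE implementation: the schema table and the role names (Agnt, Dest, Inst) are illustrative assumptions modeled on thematic-role notation, and `to_clif` is a hypothetical helper, not a real CLCE function:

```python
# Toy sketch (NOT the real CLCE system) of a verb schema expanding a
# controlled-English sentence into a CLIF-style formula. Role names
# Agnt/Dest/Inst and the schema table are illustrative assumptions.
VERB_SCHEMAS = {
    "drove": ("Drive", ["Agnt", "Dest", "Inst"]),
}

def to_clif(verb, *args):
    """Build a CLIF-style existential formula from a verb's schema."""
    typ, roles = VERB_SCHEMAS[verb]
    conjuncts = [f"({typ} x)"] + [
        f"({role} x {arg})" for role, arg in zip(roles, args)
    ]
    return f"(exists (x) (and {' '.join(conjuncts)}))"

print(to_clif("drove", "Bob", "StLouis", "Chevy1"))
# (exists (x) (and (Drive x) (Agnt x Bob) (Dest x StLouis) (Inst x Chevy1)))
```

A real CLCE lexicon would carry far richer specifications per word, which is why (as noted below) an expert linguistic user is needed to write the schemas.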

[13:14] David Whitten: CLCE uses a base vocabulary of nouns and verbs that are enhanced using a dictionary.     (2D69)

[13:15] David Whitten: The range of expression using adjectives may bring in attributes such as "old", and the expression may be limited by the use of "his".     (2D70)

[13:16] David Whitten: A system of analogy takes the pattern of a known verb and copies it over to a new verb's definition.     (2D71)

[13:17] David Whitten: CLCE requires an expert linguistic user that is able to generate the schemas for nouns, verbs, etc.     (2D72)

[13:17] David Whitten: this requires an enhanced definition of a vocabulary that is used in translation between formal notations.     (2D73)

[13:17] ToddSchneider: How might 'competency questions' help constrain aspects of (the intended) context that may need to be 'accounted' for?     (2D74)

[13:18] David Whitten: Going from widely accepted natural language to the more formal CLCE involves a LOT of assumptions.     (2D75)

[13:21] janet singer: Todd: is your question on competency usefully related to Jack's on error?     (2D76)

[13:21] RaviSharma: Ken sorry I joined late     (2D77)

[13:22] David Whitten: Representatives who are fluent in various logical expression languages mapped their preferred logical language to IKRIS protocols and translation technologies and thence to any other logical language.     (2D78)

[13:23] David Whitten: there were translation programs written but no IKL-specific reasoning programs.     (2D79)

[13:24] RaviSharma: David - thanks for the notes, these will help in the summary along with slides and recording     (2D80)

[13:24] David Whitten: John Sowa: "writing a program is trivial" -- I laughed.     (2D81)

[13:24] David Whitten: JS: the issue of processing syntax is trivial. The complexity is in the semantics.     (2D82)

[13:25] David Whitten: VivoMind has been renamed.     (2D83)

[13:26] David Whitten: Producing a product is much harder. More demos/announcements might come in January 2018.     (2D84)

[13:26] David Whitten: Expert friendly (where the experts know the limits) systems are much easier than products.     (2D85)

[13:27] janet singer: John: Better to say processing syntax is trivial, complexity is in the semantics *and the pragmatics*?     (2D86)

[13:28] David Whitten: McCarthy's "Demo Effect" depends on different people's knowledge of the limits of inputs     (2D87)

[13:31] David Whitten: The power of such a system can be tested using Google's translation methods. Typing in what one thinks was said requires a human to transmute the sound into a text string, which is then put into a computer.     (2D88)

[13:32] Jim Disbrow: Can you use Latin in a reflexive mode and discuss how this doesn't translate?     (2D89)

[13:32] David Whitten: Google's translation depends on frequency of translation pairs.     (2D90)

[13:32] David Whitten: Statistics is Google's friend.     (2D91)
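The remark about frequency of translation pairs can be illustrated with a toy phrase table. All phrases and counts below are invented; a real statistical translation system weighs many more features, but the core idea sketched is "pick the target phrase seen most often aligned with the source phrase":

```python
from collections import Counter

# Toy phrase table with invented counts: how often each source phrase
# was seen aligned with each target phrase in some parallel corpus.
pair_counts = {
    "chat": Counter({"cat": 40, "chat room": 3}),
    "chien": Counter({"dog": 25}),
}

def translate(phrase):
    """Pick the most frequently observed translation; fall back to the input."""
    candidates = pair_counts.get(phrase)
    if not candidates:
        return phrase
    return candidates.most_common(1)[0][0]

print(translate("chat"))   # cat
print(translate("chien"))  # dog
```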

[13:33] David Whitten: VivoMind is renamed as "Kidney"     (2D92)

[13:33] Jeremy B3nszy: David what is a good introduction to ontology? I'm new to all of this :)     (2D93)

[13:33] RaviSharma: kyndi     (2D94)

[13:34] David Whitten: Kyndi uses Prolog, Java, and C. C is used rather than C++ for high-efficiency coding.     (2D95)

[13:34] RaviSharma: David it in Kyndi?     (2D96)

[13:34] RaviSharma: is it     (2D97)

[13:35] ToddSchneider: Ravi, yes. Kyndi     (2D98)

[13:35] David Whitten: Prolog and Java then call the C code. Java is primarily for user interfaces, C is for rarely changed low-level stuff, and an extended Prolog built on SWI-Prolog is for mainstream AI stuff.     (2D99)

[13:36] David Whitten: SWI is free for download. C implementations likewise.     (2D100)

[13:36] David Whitten: Java likewise.     (2D101)

[13:37] RaviSharma: Was the recording done, Todd and Ken?     (2D102)

[13:38] ToddSchneider: Meeting ended 13:37 EST     (2D103)

Previous Meetings     (2E)


Next Meetings     (2F)