ConferenceCall 2023 10 04

Ontolog Forum
|}


= [[OntologySummit2024_FallSeries|Ontology Summit 2024 Fall Series]] {{#show:{{PAGENAME}}|?session}} =


== Agenda ==


== Participants ==
* [[AlexShkotin|Alex Shkotin]]
* [[AndreaWesterinen|Andrea Westerinen]]
* [[ToddSchneider|Todd Schneider]]
* Steve Wartik
* [[BartGajderowicz|Bart Gajderowicz]]
* [[MarkFox|Mark Fox]]
* Seungmin Seo
* JL Valente
* Roberta Ferrario
* Chris Novell
* Emanuele Bottazzi
* Marco Monti
* [[JanetSinger|Janet Singer]]


== Discussion ==
* [[MikeBennett|Mike Bennett]]: Andrea's quote: "Ontologies are the backing definitions behind knowledge graphs" is a great way of describing the distinction between them.
** Emanuele Bottazzi: Or justifications
* [[ToddSchneider|Todd Schneider]]: Many Knowledge Graphs are not based on an ontology.
** [[AlexShkotin|Alex Shkotin]]: but keep it inside
** [[MikeBennett|Mike Bennett]]: I would not characterize such a thing as a knowledge graph, even if it re-uses that label for itself. Whence the claim of 'Knowledge' in KG if not semantics? Might not be an OWL-ology of course.
** [[BartGajderowicz|Bart Gajderowicz]]: An ontology is the “schema” for a knowledge graph, so it may not be designed well but there is a “schema” that defines nodes and edges in some way.
* Steven Wartik: I like to distinguish between a KG and a knowledge base. A KG is a graph. It doesn't necessarily have a schema. A KB is a KG whose schema is an ontology. This is just terminology, but I find it helps my sponsors understand.
* [[AlexShkotin|Alex Shkotin]]: Give me a KG and I'll extract its ontology.
* [[KenBaclawski|Ken Baclawski]]: KGs were covered in Ontology Summit 2020.  The communique has precise definitions: https://ontologforum.s3.amazonaws.com/OntologySummit2020/Communique/OntologySummit2020Communique.pdf
* [[ToddSchneider|Todd Schneider]]: Knowledge =def. “facts, information, and skills acquired by a person through experience or education; the theoretical or practical understanding of a subject” (from New Oxford American Dictionary)
* [[ToddSchneider|Todd Schneider]]: ‘Meaning’ is an ambiguous term.
** Andrew McCaffrey: To "table" a motion means completely the opposite things in the US and the UK. :D
* Ayya Niyyanika Bhikkhuni: As generative AI hallucinations become an issue, there seems a need for credibility scoring.  I am about a decade out-of-the-loop, but know we were talking about this many summits ago.  This is in regards to trust.
** [[BartGajderowicz|Bart Gajderowicz]]: Explanations WITH hallucinations are a huge problem for LLMs. They sound credible, and may be logically sound, but are completely wrong.
** Emanuele Bottazzi: Perhaps all the probabilistic approaches cannot be explanatory, since they “happen” to be wrong or right
** [[BartGajderowicz|Bart Gajderowicz]]: Ideally the explanation would come from explicit knowledge. Most LLMs just don’t have that. Ensemble ML architectures may include explicit knowledge somewhere, but if the underlying processes and representations are probabilistic we reach a hard limit on explainability. Of course you can have an explanation that provides “certainty” about the answer and explanation, which is often sufficient.
** Emanuele Bottazzi: I would add that ideally the explanation would come from the  explicit _use_ of knowledge and principles
* [[BartGajderowicz|Bart Gajderowicz]]: Do LLMs perform natural language understanding (NLU), or just processing (NLP)?
** [[BartGajderowicz|Bart Gajderowicz]]: Given my definition of knowledge I’d say NLP only. Even a simple Word2vec embedding is able to identify similarity between complex objects, but I would not consider it understanding (or knowledge)
* Ayya Niyyanika Bhikkhuni: “What is really true” is the underlying question when translating ancient text.  The project I am working on is taking translations from humans and Generative AI and it is hoped then that people practicing according to their interpretation of the texts would tune the translations based on ‘tacit knowledge.’
* Marco Monti: QUESTION: If neither LLMs nor knowledge graphs allow for compositionality and high contextualization of answers from a chatbot, what are the mechanisms behind the scenes of GPT X that let it answer so punctually and contextually?
* [[JanetSinger|Janet Singer]]: Yes, mimicry is the key characterization of what LLMs do. This parallels the 1950s, when it was thought that mimicry of biological behavior would inevitably lead to a structural model of living systems, and then to artificially generated life itself. See critiques by Robert Rosen.
* [[GaryBergCross|Gary Berg-Cross]]: LLM-based systems can learn on the job, although you wouldn't call it learning based on experience. This has been said about the learning: "When a user interacts with an LLM-based system, the system is able to observe the user's responses and learn from them. This allows the system to improve its ability to generate responses that are relevant to the user's needs.
* [[GaryBergCross|Gary Berg-Cross]]: There are a number of ways that LLM-based systems can be trained using chat responses. One common approach is to use reinforcement learning. In reinforcement learning, the system is rewarded for generating responses that are positive and helpful. This encourages the system to learn what kinds of responses are most likely to be well-received by users."
* [[ToddSchneider|Todd Schneider]]: Could someone explain “links in OWL are not first class objects”?
** Steven Wartik: Todd, a first-class object is uniquely identifiable. A reified triple is a first-class object.
** Asiyah Yu Lin: I think knowledge graph users who don't care much about OWL think at the data or instance level. An ontology is really about classes. There is a blurred line between what is data and what is a class.
** [[MichaelDeBellis|Michael DeBellis]]: @Todd Schneider Suppose you have a model of a highway as a graph where nodes are cities and links are roads. You want to model the time it takes to get between two nodes as information directly on the link. You can do that with Neo4J but not with OWL. With OWL you need to use the design pattern where you reify the relation with a new class.
** [[ToddSchneider|Todd Schneider]]: Michael, thank you for the explanation. Per your example, it could be the case that the representation (of the entities and their relations) was inadequate to support the query (i.e. without reification).
** [[MichaelDeBellis|Michael DeBellis]]: @Todd Schneider Yes. My question is how easy is it to take an OWL ontology where you have reified the relations and use graph theoretic algorithms? I don't know because I haven't used these algorithms in a long time. One thing I'm thinking about is creating an extension to OWL (I mean things like new classes and Python or SPARQL) where when you assert a new property value you have the option to create an instance of a Relation class and store data directly on that instance. That way you could treat the OWL ontology as a true graph.
** [[MichaelDeBellis|Michael DeBellis]]: Often you can even ask GPT-4 to create the KIF or CycL or CLIF .. and it will
* Hayden Spence: RE: Generating ontologies with LLMs: https://github.com/monarch-initiative/ontogpt
* [[JanetSinger|Janet Singer]]: Here, mimicry of knowledge-driven behavior is being promoted as inevitably leading to structural models of knowledge and then to ‘emergent consciousness’. Ontologies are structural (good for modeling within their scope); LLMs are behavioral
** [[JanetSinger|Janet Singer]]: Here as in the hype cycle, not by Andrea 🙂
* [[Douglas Miles|Douglas Miles]]: GPT-3 btw seems useless compared to GPT-4 on this front
* Hayden Spence: From my understanding, GPT-4 is multimodal and multimodel in the sense its training is higher parameter, it incorporates more than just text data, and the actual interface is the interaction of multiple GPT models working together.
* [[ToddSchneider|Todd Schneider]]: What is ‘semantic understanding’?
** [[AndreaWesterinen|Andrea Westerinen]]: It is "natural language understanding"
* Hayden Spence: Is the use of established controlled vocabularies that are under license, like SNOMED CT, MedDRA, ICD10/0, or standards like FHIR, and the mappings between them -- once embedded -- still restricted? At what point does the transformation of an information collection become its own thing, separate from the digested information?
* [[Douglas Miles|Douglas Miles]]: i don't have a question at this point.. but love this talk!
* [[JanetSinger|Janet Singer]]: Symbolic and connectionist theories of cognition are both computationalist. Leaves out 4-E embodied cognition perspective
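Bart's point that a Word2vec-style embedding can surface similarity without any understanding can be illustrated with cosine similarity over toy vectors. The vectors and words below are made up for illustration, not taken from a trained model:

```python
import math

# Toy 3-d "embeddings" (illustrative numbers, not from a trained model).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical direction, near 0.0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# "king" comes out closer to "queen" than to "apple" purely by geometry.
assert cosine(vectors["king"], vectors["queen"]) > cosine(vectors["king"], vectors["apple"])
```

Nothing in the arithmetic knows what a king or an apple is; the "similarity" is geometry over whatever vectors were supplied, which is the sense in which this is processing rather than understanding.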
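The reification pattern Michael DeBellis describes (promoting a link to an instance of a relation class so that data can be stored on the link itself) can be sketched without any RDF tooling as a toy triple store. The `Road`, `fromCity`, and `travelTimeHours` names are hypothetical, chosen only to mirror the highway example:

```python
# A toy triple store: each fact is a (subject, predicate, object) tuple.
graph = set()

# Direct edge: there is no place to attach travel time to the link itself.
graph.add(("Boston", "connectedTo", "NewYork"))

# Reified relation: a node "road1" typed with a hypothetical Road class
# carries the edge's endpoints plus data about the edge.
graph.update({
    ("road1", "type", "Road"),
    ("road1", "fromCity", "Boston"),
    ("road1", "toCity", "NewYork"),
    ("road1", "travelTimeHours", 3.5),
})

def value(g, subject, predicate):
    """Return the object of the first matching triple, or None."""
    return next((o for s, p, o in g if s == subject and p == predicate), None)

print(value(graph, "road1", "travelTimeHours"))  # -> 3.5
```

Because the reified link is an ordinary node, its travel time is queryable like any other fact, which is exactly what the direct city-to-city edge could not offer.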


== Resources ==
* [https://bit.ly/3rCnyC0 Video Recording]
* [https://youtu.be/5uL5HXD4f3w YouTube Video]


== Next Meetings ==
         |?|?Session|mainlabel=-|order=asc|limit=3}}


[[Category:OntologySummit2024_FallSeries]]
[[Category:Icom_conf_Conference]]
[[Category:Occurrence| ]]

Latest revision as of 20:53, 16 February 2024

Session Overview

  • Duration: 1 hour
  • Date/Time: 4 Oct 2023 16:00 GMT (9:00am PDT / 12:00pm EDT / 4:00pm GMT / 5:00pm CST)
  • Conveners: Andrea Westerinen and Mike Bennett

Ontology Summit 2024 Fall Series Overview

Agenda

Andrea Westerinen and Mike Bennett

Title: Fall Series Kickoff and Overview

Abstract: The opening session of the Ontology Summit 2024 Fall Series overviews the LLM, ontology and knowledge graph landscapes, as well as introducing the participating speakers. The goal of the Series is to understand, discuss and debate the similarities, differences and overlaps across these landscapes. In addition, we will use these sessions to help to formulate the full 2024 Summit.

Slides

Video Recording

Conference Call Information

  • Date: Wednesday, 4 October 2023
  • Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
  • Expected Call Duration: 1 hour
  • Video Conference URL: https://bit.ly/48lM0Ik
    • Conference ID: 876 3045 3240
    • Passcode: 464312

The unabbreviated URL is: https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09


Next Meetings

  • ConferenceCall 2023 10 11: Setting the stage
  • ConferenceCall 2023 10 18: A look across the industry, Part 1
  • ConferenceCall 2023 10 25: A look across the industry, Part 2
  • ... further results