
Ontolog Forum


Session: A look across the industry, Part 1
Duration: 1 hour
Date/Time: 18 Oct 2023 16:00 GMT (9:00am PDT / 12:00pm EDT / 4:00pm GMT / 5:00pm BST)
Convener: Andrea Westerinen and Mike Bennett

Ontology Summit 2024: A look across the industry, Part 1

Agenda

  • Kurt Cagle, Author of The Cagle Report
    • Title: Complementary Thinking: Language Models, Ontologies and Knowledge Graphs
    • Abstract: With the advent of Retrieval-Augmented Generation (RAG), a more or less standardized workflow has become available for integrating large language models such as ChatGPT with knowledge graphs. This in turn has raised questions about the nature of the ontologies associated with LLMs, and about how knowledge graphs can be structured and queried to make integrated data access possible between the two types of systems. In this talk, Editor and AI Explorer Kurt Cagle of The Cagle Report looks at this process and discusses how it affects both knowledge portals and ontology design.
    • Slides
  • Tony Seale, Knowledge graph architect and thought leader (LinkedIn)
    • Title: How Ontologies Can Unlock the Potential of Large Language Models for Business
    • Abstract: LLMs have remarkable capabilities; they can craft letters, analyze data, orchestrate workflows, generate code, and much more. Companies such as Google, Apple, Amazon, Meta, and Microsoft are all investing heavily in this technology. Everything indicates that LLMs have enormous disruptive potential. However, there is a problem: they can hallucinate, and for any serious business, that is a deal-breaker. This is where ontologies can come in. In combination with Knowledge Graphs, they can place guardrails around the LLMs, thus allowing organizations to harness the capabilities of LLMs within the framework of a safely controlled ontological structure. (A rough sketch of this retrieve-and-ground pattern appears after the agenda.)
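
To make the RAG workflow described in the two abstracts concrete, here is a minimal, hypothetical Python sketch using rdflib: retrieve the triples about an entity from the knowledge graph, verbalize them, and pass them to the LLM as grounding context. The call_llm function and all names are illustrative stand-ins, not any specific product's API.

    from rdflib import Graph

    def retrieve_context(g: Graph, subject) -> str:
        # Naive "retrieval": collect every triple about the subject and
        # verbalize it; a production system would use embeddings to select
        # relevant subgraphs instead.
        nm = g.namespace_manager
        return "\n".join(f"{s.n3(nm)} {p.n3(nm)} {o.n3(nm)} ."
                         for s, p, o in g.triples((subject, None, None)))

    def answer(g: Graph, subject, question: str) -> str:
        # Ground the model: ask it to answer only from graph-derived facts.
        context = retrieve_context(g, subject)
        prompt = f"Answer using only these facts:\n{context}\n\nQ: {question}"
        return call_llm(prompt)  # hypothetical LLM call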

Slides

Video Recording

Conference Call Information

  • Date: Wednesday, 18 October 2023
  • Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
  • Expected Call Duration: 1 hour
  • Video Conference URL: https://bit.ly/48lM0Ik
    • Conference ID: 876 3045 3240
    • Passcode: 464312

The unabbreviated URL is: https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09

Participants

Discussion

  • Andrea Westerinen: I like the quote, “Ontologies are the shapes of information and knowledge”
  • Ravi Sharma: Can LLMs and/or ontologies be used to detect AI artifacts?
    • Andrea Westerinen: @Ravi Sharma There are some different articles on this, but basically no.
    • Mike Bennett: Not unless you could write an ontology that defines what it is to be truly human
    • Michael Robbins: Or rebuild the web from the bottom up to embed new frameworks for digital identity and content provenance/authenticity (which is what we need to commit to)
    • Andrea Westerinen: Provenance also aids with detecting bots
  • Michael DeBellis: Someone asked about agents in relation to Data Mesh (can't find the original comment). IMO Data Mesh could be implemented in terms of Agents but typically isn't.
  • Ravi Sharma: Is there one kind or multiple kinds of connectivity in KG as well as in Ontologies?
  • Ravi Sharma: Container in the sense of domain overlaps, especially overlapping vocabs as Venn diagrams?
  • Anh: Why is it hard to build LLMs for other languages? Why can't it be replicated easily when translation work (e.g. Facebook, Google Translate, LinkedIn) has already done a somewhat good job?
    • Andrea Westerinen: It is about the training data.
    • Bart Gajderowicz: [Anh], on longer text, translation loses many of the cultural nuances of a language, and loses context. Also, most training data is in English, so most models are trained on English and then translated.
    • Anh: Thank you, @Andrea Westerinen & @Bart Gajderowicz. Does it mean it'd cost the same to build a new LLM for a new language?
    • Andrea Westerinen: I would believe so.
    • Bart Gajderowicz: The volume of English text makes training data cheap; it's just the web. So I'd imagine finding enough text in your target language would be the biggest cost. Librarians are our friends here 🙂
    • Penny Anderson: This context-free aspect is difficult when you start thinking of the utility of UIs.
    • Anh: Could you please elaborate more on utility of UI? @Penny Anderson
    • Penny Anderson: I mean UIs tend to be process-driven; data-centric design is not tightly bound to a particular process, so it is context-free
    • Penny Anderson: other than search?
    • Anh: I see. Thanks.
    • Penny Anderson: Data provenance, ethical AI?
    • Andrea Westerinen: @Penny Anderson Much better declared and reasoned against in KGs.
    • Penny Anderson: Zero trust in networks?
  • Ravi Sharma: Kurt, when you talk of the box, are you essentially stating in-scope and out-of-scope items, or is there a way of capturing the info outside the box and bringing it in?
  • Michael DeBellis: I don't agree that knowledge graphs are a form of data mesh. They are two related but different concepts. Data mesh is essentially an architecture for enterprise data definition and management. Knowledge graphs are a tool that can be used to implement a data mesh. Data mesh IMO brings the philosophy of microservices to data.
    • Alan Morrison: Michael, re: microservices to data, should we be thinking of agents as messengers and KGs as the data resource?
  • Ravi Sharma: What kind of aggregates are these data shapes? Are these only valid for a class of data or can you mix data types in a shape aggregate?
    • Michael DeBellis: Ravi: in SHACL you define similar things as you do with OWL, e.g., whether a property must have exactly one value, the datatypes of a property, other constraints. The difference is that OWL is used for reasoning over large knowledge graphs and uses the Open World Assumption (OWA), while SHACL is for constraining data, so it uses the Closed World Assumption (CWA). E.g., you can define ss_number as a property that must have exactly one value in either OWL or SHACL, but in OWL you will almost never trigger an error if the restriction isn't satisfied, due to the OWA; with SHACL you will get error messages due to the CWA (see the sketch below).
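A small illustration of that OWA/CWA contrast, sketched with rdflib and the pyshacl validator (assumed installed via pip install pyshacl): the shape requires exactly one ex:ss_number, so a Person without one passes silently under OWL semantics but is reported by SHACL.

    from rdflib import Graph
    from pyshacl import validate

    shapes = Graph().parse(data="""
        @prefix sh:  <http://www.w3.org/ns/shacl#> .
        @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
        @prefix ex:  <http://example.com/> .
        ex:PersonShape a sh:NodeShape ;
            sh:targetClass ex:Person ;
            sh:property [ sh:path ex:ss_number ;
                          sh:minCount 1 ; sh:maxCount 1 ;
                          sh:datatype xsd:string ] .
    """, format="turtle")

    data = Graph().parse(data="""
        @prefix ex: <http://example.com/> .
        # No ss_number: under OWL's OWA a reasoner assumes one exists somewhere.
        ex:alice a ex:Person .
    """, format="turtle")

    conforms, _, report = validate(data, shacl_graph=shapes)
    print(conforms)  # False: SHACL's closed world flags the missing value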
  • Ayya Niyyanika Bhikkhuni: When weighting a concept, is that some sort of credibility score?
  • Michael Robbins: LangChains - I'm thinking about what role ConLangs could serve as intermediaries in revolutionizing language models and NLP. Happy to have a follow-up discussion with anyone who is interested. https://en.wikipedia.org/wiki/Constructed_language
  • Ravi Sharma: What is the equivalent of Objects in an LLM? What are these entities called, and can the same onto-entity be different in the LLM context?
  • Ravi Sharma: Kurt, thanks for including the wonderful, valuable background cultural images; these are inspiring. Are you also conveying that there is external (database) knowledge and internal knowledge such as contemplation?
  • Ravi Sharma: Why are you limiting your examples to RDF? Why not MOF also?
  • Todd Schneider: The important question is not mapping(s). It's how well-constructed ontologies can be used in the ingestion/training of LLMs.
  • Sus Vejdemo: Question: what are good case studies of KGE and LLM integrations?
    • Andrea Westerinen: I addressed some of this in the opening session. “Hybrid systems” include both where LLMs help ontologies (actually the Oct 25th session) and where ontologies help LLMs (Oct 4 and Nov 1 sessions).
    • Andrea Westerinen: Also, Tony is highlighting a GREAT integration.
  • Michael DeBellis: One question I have is what happens once you load a knowledge graph into an LLM? I know it can be done, but once you load, say, a Turtle file into the LLM, then what?
    • Bart Gajderowicz: Generally the LLM builds a small knowledge graph instance inside its memory and you can query it: ask it to write SPARQL to get some instances, etc. I have not seen it used for large KGs, just small ones. (A sketch of this flow follows.)
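A rough sketch of that flow under stated assumptions: rdflib loads a hypothetical Turtle file, the prompt hands its serialization to the LLM, and the generated SPARQL is executed locally. call_llm is a stand-in, and the query shown stands in for model output.

    from rdflib import Graph

    g = Graph().parse("ontology.ttl", format="turtle")  # hypothetical file

    prompt = ("Here is a Turtle graph:\n" + g.serialize(format="turtle")
              + "\nWrite a SPARQL query that lists every class used.")
    generated = call_llm(prompt)  # hypothetical LLM call
    generated = "SELECT DISTINCT ?cls WHERE { ?s a ?cls }"  # e.g. model output

    for row in g.query(generated):
        print(row.cls)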
  • Amit Jain: Will the recording be shared with attendees after the summit?
    • Mike Bennett: Yes, the recording will be uploaded to the session page when it is ready.
  • Cedric Berger: Are there any metrics to measure whether KGs combined with LLMs indeed hallucinate less?
    • Andrea Westerinen: I will try to provide these in the summary. I have read papers on this as well as blog posts.
    • Bart Gajderowicz: All the work I've come across relies on the knowledge graph to provide explicit knowledge. So if you can ground the LLM with a graph, you can verify whether the answers the LLM provides are "facts" in the graph. (A minimal sketch of this check follows.)
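A minimal sketch of that grounding check, assuming a local facts.ttl file and invented IRIs: after the LLM asserts a triple, a SPARQL ASK confirms it is explicitly present in the graph before it reaches the user.

    from rdflib import Graph

    g = Graph().parse("facts.ttl", format="turtle")  # hypothetical file

    def is_grounded(subj: str, pred: str, obj: str) -> bool:
        # True only if the claimed triple is explicitly in the graph.
        return bool(g.query(f"ASK {{ <{subj}> <{pred}> <{obj}> }}").askAnswer)

    claim = ("http://example.com/aspirin",
             "http://example.com/treats",
             "http://example.com/headache")
    print(is_grounded(*claim))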
  • Kurt Cagle: Once you load the ontology into a conversation, it will create (to some extent) an LLM conceptual space for that data. Also keep in mind that getting ALL of a knowledge graph via a RAG is usually not feasible.
  • Ravi Sharma: Tony, great diagram for the LLM-ontology loop to improve each other. Why is ontology weak in capturing concepts vs. LLM?
  • Michael DeBellis: One of the most interesting papers I've read is from Lawrence Berkeley Labs on using LLMs to extend an ontology.
  • Ravi Sharma: Tony, does the built-in uncertainty in LLMs give them extra power to apply to the real-life probabilistic world?
  • Ravi Sharma: Tony, LLMs as analog and ontology as quantum? A great way to put it. Thanks.
  • Mike Bennett: Left brain vs. right brain is a great way of talking about ontology vs. LLM. Now you have to create a good corpus callosum.
    • Gary Berg-Cross: A better model than left/right hemispheres is layers: the old brain, the mid-brain (associative), and the neocortex. They interconnect in many ways, some via the limbic system.
    • Bart Gajderowicz: I like the System 1 / System 2 analogy
  • Cedric Berger: Aren’t LLMs also kind of discrete, since they rely on vectors (arrays of numbers) of limited dimensions?
    • Bart Gajderowicz: LLMs are fundamentally probabilistic, not discrete. Some models are trained to provide discrete classifications, but that’s just at the output level. Internally they are probabilistic.
    • Andrea Westerinen: The discreteness pertains to the encoding. But, does the probabilistic nature of LLMs make it more continuous? I am not sure.
    • Mike Bennett: The weightings are a number rather than a logical truth value as in KGs
    • Kurt Cagle: Even with KGs, you can set up reifications that also set up Bayesians that are again more fuzzy (or at least more stochastic).
    • Mike Bennett: If you had an ontology with weightings instead of truth values and could train those values, you would have a semantic network like a brain/mind. (See the sketch after this exchange.)
    • Kurt Cagle: It's where I think we're heading. People in the semantic space have known for years that knowledge is fuzzy / fractal, but getting there has always been the rub.
    • Mike Bennett: I roughed out an idea for this kind of semantic network application back in the 90s.
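A rough sketch of that weighted-edge idea, using standard RDF reification in rdflib; the ex:weight property and the example triple are illustrative assumptions, not anyone's published design.

    from rdflib import Graph, Namespace, Literal, BNode, RDF, XSD

    EX = Namespace("http://example.com/")
    g = Graph()

    def add_weighted_edge(s, p, o, weight: float):
        g.add((s, p, o))              # the plain assertion
        stmt = BNode()                # reified statement carrying the weight
        g.add((stmt, RDF.type, RDF.Statement))
        g.add((stmt, RDF.subject, s))
        g.add((stmt, RDF.predicate, p))
        g.add((stmt, RDF.object, o))
        g.add((stmt, EX.weight, Literal(weight, datatype=XSD.double)))

    # An analog degree of belief rather than a bare true/false:
    add_weighted_edge(EX.tomato, RDF.type, EX.Fruit, 0.7)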
  • Ravi Sharma: What are vectors equivalent to in LLM context?
  • Harvey King: Are the embeddings stored across 3 layers?
  • Anh: Does a KG consider the time stamp of the assertions/objects? Context of my question: could we use it to mark the originality of posts with similar content, to flag plagiarism?
  • Ravi Sharma: Has anyone done a round-trip quality check: learn from the LLM and put it in ontologies, and the other way around?
    • Benoit Claise: In which context/use case?
    • Michael DeBellis: That paper I posted earlier from Lawrence Berkeley Labs used an LLM to extend an ontology but went in only one direction, expanding the ontology, not changing the LLM.
  • Ravi Sharma: Can you do reasoning on the same concept in both to differentiate their respective strengths and weaknesses?
  • Ravi Sharma: Tony, your tree-of-thought and chain-of-thought are great approaches.
  • Ravi Sharma: Tony and Kurt, can you address feature-space vs. training-set learning approaches?
  • Liju Fan: Why are the relations in the ontology explicit?
    • Andrea Westerinen: Relations are defined, and they can be inferred, but this is the essence of ontology. Ontologies are “open world” but do need relationships.
    • Michael DeBellis: Because in the ontology you (usually manually) create the relations. In an LLM the relations are inferred by the ML algorithm and usually can't be manually changed.
    • Liju Fan: It seems there is a need to be able to rename LLM inferred relations for them to be human-understandable and practically useful.
  • Kurt Cagle: Please note the similarity of Tony's slide with biological cells. Hmmm ...
  • Cedric Berger: Has anyone asked an AI to generate an image of a fractal made of network base units?
  • Michael Robbins: Agreed, Tony. And we’ve talked about this on LinkedIn. A constellation of domain-specific and ecosystem-based Community Knowledge Graphs and Language Models. #CKGs and #CLMs
    • Mike Bennett: TLO as mitochondria?
    • Michael Robbins: Language is inseparable from culture and context
  • Harvey King: Do KGs take on a different nature when dealing with math?
    • Andrea Westerinen: Not different, but with more rules?
    • Michael DeBellis: Yes. An ontology uses explicit models, like linear algebra. LLMs use linear algebra but compute answers based on examples; they don't have a theoretical model of math (or other domains) the way an ontology does.
  • Sus Vejdemo: Thanks for a great talk! I love the conceptualization of embeddings as ontologies.
  • Mike Bennett: If you reify every edge you can give each one an analog value
  • Mark Underwood: For those interested in practical cybersec use cases (lots of chatty network data, some NLP), there are lots of narrow, domain-specific emergent ontologies, e.g., Lambda / microservice mesh, etc. mark.underwood@syf.com
  • Gary Berg-Cross: I think the working memory graph is a useful early concept for bringing LLMs and ontologies together, but it is not so easy to capture the context for knowledge in this representation. I would guess this is a subset of the fluid knowledge that human cognition employs. Much remains unconscious, but with research, AI systems may make more of this explicit.
  • Ravi Sharma: My query is what is the relationship among reification, provenance and context history?
  • Andrea Westerinen: Quote from Tony, “LLMs for compute” and “As much data into graph, then translating the graph paths to NL and adding to the LLM”
  • Gary Berg-Cross: Reality may be atomistically discrete but at such a nano-level that continuous models make better predictions than discrete models that are orders of magnitude too gross rather than fine grained.
  • Andrea Westerinen: Quote from Kurt: “Community Language Models - decentralized, federated, ad hoc network of information”
  • Gary Berg-Cross: Models need to be more than federated. Because we center on semantic accuracy and relevance, they need to be semantically harmonized.

Resources

Previous Meetings

  • ConferenceCall 2023 10 11 - Setting the stage
  • ConferenceCall 2023 10 04 - Overview

Next Meetings

  • ConferenceCall 2023 10 25 - A look across the industry, Part 2
  • ConferenceCall 2023 11 01 - Demos of information extraction via hybrid systems
  • ConferenceCall 2023 11 08 - Broader thoughts