
Ontolog Forum


  • Session: Setting the stage
  • Duration: 1 hour
  • Date/Time: 11 Oct 2023 16:00 GMT (9:00am PDT / 12:00pm EDT / 4:00pm GMT / 5:00pm BST)
  • Conveners: Andrea Westerinen, Mike Bennett

Ontology Summit 2024 Fall Series Setting the stage

Agenda

Deborah L. McGuinness
Rensselaer Tetherless World Senior Constellation Chair
Professor of Computer Science, Cognitive Science, and Industrial and Systems Engineering

Title: The Evolving Landscape: Generative AI, Ontologies, and Knowledge Graphs

Abstract: AI is in the news with astonishing regularity and the variety of announcements is often dizzying. In this talk, we will explore some opportunities (as well as threats) from the world of generative AI with respect to semantic technologies. We will explore some questions worth pondering as we plan our ontology and knowledge graph directions and hopefully leave with some mutually beneficial synergies between large language models and classical Ontology Summit topics.

Slides

Video Recording

Conference Call Information

  • Date: Wednesday, 11 October 2023
  • Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
  • Expected Call Duration: 1 hour
  • Video Conference URL: https://bit.ly/48lM0Ik
    • Conference ID: 876 3045 3240
    • Passcode: 464312

The unabbreviated URL is: https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09

Participants

Discussion

  • Ravi Sharma : Deborah, what if the training set is not representative? How do we know whether the errors are due to the lack of a good training set or to bad algorithms, such as a Bayes implementation?
  • Ravi Sharma : Would like to know how much richness beyond ontologies can be added to KGs?
    • Todd Schneider : What about the converse: How can ontologies be leveraged to improve LLMs?
    • Todd Schneider : Ontologies and knowledge graphs provide explicitness, in contrast to LLMs.
  • Bart Gajderowicz : Following this year’s FOIS in Canada, I submitted a proposal for a FOIS Working Group specifically for an ontology benchmark suite, in case anyone is interested in joining me in the effort.
    • Todd Schneider : Bart, ‘ontology benchmark suite’??
    • Bart Gajderowicz : Yes, it’s exactly what it sounds like. At FOIS there were a few presentations that used datasets and a few that found issues with published ontologies. The idea came from those works.
  • Ravi Sharma : Are LLMs aware of graphic ways of understanding, or even able to create image-like information from patterns?
  • Mike Bennett : I would expect a wine ontology to have 'Relative' concepts like terroir (land in wine context); vintage (time in wine context); varietal (grape in wine context) etc. i.e. contextually relevant concepts. The reason I mention it: how does any LLM recognize or relate to contextually relative concepts?
  • Ravi Sharma : Seems like Agile development type template
  • Ravi Sharma : In wine ontology, what would be impact of adding one or two more variables?
  • Sus Vejdemo : I'm creating an ontology for the linguistics of temperature terms (how different language communities divide up the sensation of hot vs cold in different ways). I used chatGPT to help me with OWL syntax - but if I hadn't already had a very firm grasp of what I wanted to know, it would have led me pretty wrong.
  • Douglas Miles : LLMs turn out to be the best version of a search engine at finding obscure data
    • Douglas Miles : (not calling *this* obscure, but it is ideal for finding exactly the works we are looking for sometimes)
  • Sus Vejdemo : Beware - chatGPT can sometimes delete classes without informing you if you run bigger ontologies through it.
    • Ravi Sharma : Sus, that is why context and prior knowledge are good filters?
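The warning above about silently dropped classes can be checked mechanically by diffing class declarations before and after a round trip through the model. A minimal sketch, assuming the ontology is serialized as Turtle and that a naive regex is good enough for a quick sanity check (a real pipeline would parse with an RDF library such as rdflib); the sample class names are hypothetical:

```python
import re

def owl_classes(turtle_text: str) -> set[str]:
    """Naively extract terms declared as owl:Class from Turtle text.
    (For anything beyond a quick diff, use a proper RDF parser.)"""
    return set(re.findall(r"(\S+)\s+(?:a|rdf:type)\s+owl:Class", turtle_text))

# Hypothetical before/after snippets illustrating a dropped class.
before = """
:Hot a owl:Class .
:Cold a owl:Class .
:Lukewarm a owl:Class .
"""
after = """
:Hot a owl:Class .
:Cold a owl:Class .
"""

missing = owl_classes(before) - owl_classes(after)
print(sorted(missing))  # classes silently dropped by the round trip
```

Running the diff after every LLM-assisted edit makes the "deleted without informing you" failure mode visible immediately.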
  • Douglas Miles : Often I will ask, "What information was just left out?"
    • Douglas Miles : and the LLM usually (with enough harassment) will supply me with what I wanted
  • Mike Bennett : I like ChatBS as a term. LLM is technically bullshit.
  • Phil Jackson : Can LLMs perform 'self-talk' yet, e.g. to emulate artificial consciousness?
    • Bart Gajderowicz : There’s a recent paper that evaluates whether the LLM knows it’s hallucinating, which may be close to its internal “thinking” as it explores different text it wants to generate: Azaria, A., & Mitchell, T. (2023). The Internal State of an LLM Knows When It's Lying. http://arxiv.org/abs/2304.13734
    • Phil Jackson : thanks for this reference
    • Sus Vejdemo : We'd want it to have an "evidentiality" signal, like some natural languages do!
  • Douglas Miles : Chat GPT3.5 or 4 there?
  • Mike Bennett : Mansplaining as a service
  • John Sowa : Summary of all these messages: LLMs are flaky. If you're lucky, they're great. If not, you have no idea what went wrong. That is not acceptable for any mission-critical applications.
  • Ravi Sharma : What would have the outcome been if you had experts key inputs to create a new exposures health ontology?
  • Gary Berg-Cross : It will be good to look at these leverage and pain points again in a year
  • John O'Gorman : @Semantium is using a faceted, foundational ontology
  • Douglas Miles : GPT for Python code is kinda hit-or-miss.. mostly garbage. But with Prolog and Lisp, it's useful!.. Maybe it's because there's less bad code out there (only one place: the CMU AI archive).. There will be a big market for LLM curators
  • Todd Schneider : We don’t even know what ‘context’ is.
    • Mike Bennett : Context is the nexus of time, place, role, event etc. i.e. a bunch of concepts in the ontology (or instances of these)
    • John O'Gorman : @Todd Schneider - Context is the way language reduces ambiguity.
      • Todd Schneider : Yes, but that’s an application of [a] ‘context’.
    • Andrea Westerinen : I have tried to provide ‘context’ in my prompts. For example, summarize this news article assuming it was written by a right-wing publication.
  • Douglas Miles : You have to harass them over and over, John .. such as "Give me three completely different translations to CLIF"
  • Andrea Westerinen : My “sandbox” is translating NL to an ontology. Will be talking about it on Nov 1.
    • Mark Underwood (IS Innovation) : Will try to make your talk. May have possible uses for specialized, DSL-type ontologies that are constructed from emerging, fresh domains; e.g., AWS Lambda
  • Gary Berg-Cross : Just as chatGPTs can do some programming they can express concepts in formal languages and thus help with Ontology population.
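The point above about LLMs expressing concepts in formal languages suggests a simple population loop: prompt for structured output, parse it into candidate triples, and queue them for human review. A minimal sketch with a mocked model call (`mock_llm`, the pipe-separated line format, and the wine facts are all assumptions for illustration, not anyone's actual setup):

```python
def mock_llm(prompt: str) -> str:
    """Stand-in for a real model call; substitute your own client here."""
    return "Merlot | isA | GrapeVarietal\nBordeaux | locatedIn | France"

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Parse 'subject | predicate | object' lines into candidate triples,
    skipping anything malformed rather than guessing at its structure."""
    triples = []
    for line in text.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and all(parts):
            triples.append((parts[0], parts[1], parts[2]))
    return triples

prompt = "List facts about wine as 'subject | predicate | object' lines."
candidates = extract_triples(mock_llm(prompt))
print(candidates)
```

Given the reliability concerns raised throughout this discussion, the candidates should be treated as suggestions for an ontologist to vet, not as assertions to load directly.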
  • Douglas Miles : "Would that translation you just gave translate back to the same English?"
  • Bart Gajderowicz : John Sowa: Speaking with those working in classic literature, translation models do a poor job precisely because of the nuances of the classics, where meaning is lost due to the poetic styles
  • Douglas Miles : I am in general agreement.. and admit there is no clear way to resolve LLM issues
  • Ayya Niyyanika Bhikkhuni : The metrics are a concern even more when the “research” being done is about what something from an ancient language means and how to apply what it “means” in today’s world. So the pain point of competency metrics is not just the actual textual validity of the translation but also of “context.”
  • Sus Vejdemo : What's a good way to get on the mailing list? I found this seminar from a Taxonomy LinkedIn group post. (I'm a linguist, semanticist)
  • Gary Berg-Cross : I've used chats for competency Qs but not use cases which seems like an interesting possibility as part of KE sessions.

Resources

Previous Meetings

  • ConferenceCall 2023 10 04: Overview

Next Meetings

  • ConferenceCall 2023 10 18: A look across the industry, Part 1
  • ConferenceCall 2023 10 25: A look across the industry, Part 2
  • ConferenceCall 2023 11 01: Demos of information extraction via hybrid systems
... further results