
Ontolog Forum

Session Ontologies and AI
Duration 1 hour
Date/Time 11 Mar 2026 16:00 GMT
9:00am PDT/12:00pm EDT
4:00pm GMT/5:00pm CET
Convener Gary Berg-Cross

Ontology Summit 2026 Ontologies and AI

  • Randy Goebel A (partial) framework for debugging foundation models
  • Abstract: The currently most popular mechanisms of AI are Large Language Models (LLMs), despite the reality that they are computer programs that produce incorrect results. If any evolution of AI systems is to be trusted, the possible choices of foundation models must be further developed. We propose a simple framework that admits a number of different formalisms for so-called foundation models, and argue that, while the methods for debugging them are varied, the crucial scientific question should focus on how to provide a foundation for their debugging. The overall hypothesis is that if we want to establish trust in AI system behaviour, we must provide mechanisms that ensure their reliable operation.
    Relevant ideas come from discrete mathematics (e.g., Gödel, Turing), logic and logic programming, Bayesian probability, reinforcement learning, and transformers. Overall, we seek to understand how to choose amongst such methods and how to integrate them, depending on expectations about application correctness (or not).
  • Bio: Randy Goebel is a Professor of Computing Science and adjunct Professor in the Faculty of Medicine at the University of Alberta, and Fellow and Co-founder of the Alberta Machine Intelligence Institute (AMII), one of three Canadian federally funded AI research organizations. He has held faculty and visiting faculty appointments at the University of Waterloo, University of Regina, University of Tokyo, Hokkaido University (Sapporo, Japan), Multimedia University (Kuala Lumpur, Malaysia), and Instituto Tecnológico de Monterrey (Monterrey, Mexico), and has been a visiting researcher at the German Research Center for Artificial Intelligence (DFKI), the National Institute of Informatics (NII, Tokyo), and the Volkswagen Data Lab (Munich). His research interests include formal knowledge representation and reasoning (induction, belief revision, explainable AI (XAI)), knowledge visualization, algorithmic complexity, natural language processing (NLP), and systems biology, with applications in clinical medicine, legal reasoning, and automated driving. He recently founded a new open-access research institute, Openmind, a not-for-profit corporation in Canada and Singapore.
  • Slides
  • Video Recording

Conference Call Information

Discussion

12:08:28 Ravi Sharma : Randy, how close are we to having machines be intelligent in terms of knowledge? They seem quite informed, to the extent of the information in LLMs?

12:16:56 Ravi Sharma : If so, then every lack of context is a bug. Do you mean a bug is the same as a lack of context or of complete information?

12:18:53 Ravi Sharma : Randy thanks, this is what we did in a couple of summits, namely context for pre and explanations for post about an ontology result.

12:26:10 Ravi Sharma : I guess fast thinking implies the best inbuilt logic and probability, but how does the mind decide the resulting action?

12:33:59 Ravi Sharma : if RL has dynamics then are actions updated in machine?

12:37:56 Ravi Sharma : RL then allows dynamic action update due to statistical or results-based updates?

12:44:07 Ravi Sharma : My Q is that the objective or target will determine which vertical stack part is important.

12:48:19 Ravi Sharma : Is the role of Agents the selection of the type of foundational framework?

12:51:37 TS : ‘Foundation’ of, or for, what?

12:53:22 Ravi Sharma : what is required of these models to interoperate?

13:18:06 janet singer : Rethinking the ends of the logic-to-neural-net spectrum can be approached in two ways: 1) what distinguishes how they are generated, and 2) how does that constrain (for good or ill) how they are applied

13:19:42 janet singer : Logic end is theory-driven, NN end is data-driven — though both necessarily involve both?

13:21:51 Ram D. Sriram (Section M) : https://techcrunch.com/2026/03/09/yann-lecuns-ami-labs-raises-1-03-billion-to-build-world-models/

13:27:59 janet singer : Agentic AI era is bringing issues of affect/evaluation/BDI/pragmatics/social norms to the fore, though there seems still to be a lot of hopeful labeling that is obscuring a scientific and engineering treatment of those key ideas

13:30:20 janet singer : Agency, trust, economies, etc. can’t just be left hopefully to ‘emergence’

13:31:58 Ram D. Sriram (Section M) : BS = Bull S..t == Hallucinations in LLMs

13:35:57 janet singer : Transparency is the key — as in openmind

13:37:59 janet singer : Great talk

Resources

Previous Meetings

 Session
ConferenceCall 2026 03 04 Ontologies and AI
ConferenceCall 2026 02 25 Retrospective
ConferenceCall 2026 02 18 Overview

Next Meetings

 Session
ConferenceCall 2026 03 18 Ontologies and AI
ConferenceCall 2026 03 25 Ontologies and AI
ConferenceCall 2026 04 01 Foundations and Tools