The Ontology Summit is an annual series of events that brings together the ontology community and the communities related to each year's chosen theme. The Summit was started by Ontolog and NIST, which continue to co-organize the program with the co-sponsorship of other organizations that support the Summit's goals and objectives.
As part of Ontolog’s general advocacy to bring ontology science and related engineering into the mainstream, we endeavor to facilitate discussion and knowledge sharing among stakeholders and parties interested in the use of ontologies. The results will be synthesized and summarized in the Ontology Summit 2024 Communiqué, with expanded supporting material provided on the web and in journal articles.
Process and Deliverables
As with our previous seventeen summits, Ontology Summit 2024 will consist of virtual discourse over our archived mailing lists, together with virtual presentations and panel sessions held as recorded video conference calls. As in prior years, the intent is to synthesize the ideas presented and to draft a communiqué summarizing the major points. This year begins with a Fall Series in October and November; the main summit will begin in February.
Meetings are at Noon US/Canada Eastern Time on Wednesdays and last about an hour.
Fall Series on Ontologies and Large Language Models: Related but Different
Fall Series Co-Chairs: Andrea Westerinen and Mike Bennett
Fall Series Theme
Ontologies and Large Language Models (LLMs) such as OpenAI's GPT-4 represent two different but related concepts within the fields of artificial intelligence and knowledge representation.
Ontologies are representations of a knowledge domain. They define the concepts, relationships, properties, axioms and rules within that domain, providing a framework that enables a deep understanding of that subject area. Ontologies are used to enable machine reasoning and semantic understanding, allowing a system to draw inferences and to derive new information and relationships between entities.
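The kind of inference described above can be sketched in a few lines. This is a minimal illustration, not part of the Summit materials: the class names and the subclass hierarchy are invented, and the walk over `SUBCLASS_OF` stands in for the rdfs:subClassOf-style reasoning a real ontology engine would perform.

```python
# Hypothetical subclass axioms for illustration: child class -> parent class.
SUBCLASS_OF = {
    "Dog": "Mammal",
    "Mammal": "Animal",
    "Animal": "LivingThing",
}

# Asserted fact: an individual and its most specific class.
ASSERTED_TYPE = {"fido": "Dog"}

def inferred_types(individual: str) -> set[str]:
    """Walk the subclass hierarchy to derive every class the individual belongs to."""
    types = set()
    cls = ASSERTED_TYPE.get(individual)
    while cls is not None:
        types.add(cls)
        cls = SUBCLASS_OF.get(cls)  # follow the subclass link upward
    return types

# Only "Dog" was asserted, yet "Mammal", "Animal", and "LivingThing" are derived.
print(inferred_types("fido"))
```

The point is simply that the derived types are never stated explicitly anywhere; they follow from the ontology's axioms, which is the "machine reasoning" contrast with an LLM's purely statistical representation.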
On the other hand, LLMs are machine learning models that aim to generate human-like responses (including text and images) based on an input (“prompt”). They are trained on large corpora of (mostly online) text, learning the patterns and connections between words and images. Hence, although their “knowledge base” is broad, it is also sometimes incorrect and/or biased. LLMs generate new content based on their training data, but they do not explicitly understand the semantics of, or the relationships within, that content. This mini-summit explores the similarities and distinctions between ontologies and LLMs, as well as how they can be used together. In addition, the success of LLMs has generated much interest in AI and machine learning, which can be leveraged to promote the benefits of, and increase awareness of, the value of ontologies.
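One way the two can be used together, as the theme suggests, is to let an ontology validate an LLM's output. The sketch below is hypothetical throughout: the property constraints, individuals, and "extracted" triples are invented, and in practice the domain/range checks would come from a real ontology (e.g. rdfs:domain and rdfs:range axioms) rather than hand-written dictionaries.

```python
# Hypothetical ontology fragment: each property's allowed subject (domain)
# and object (range) classes.
PROPERTY_CONSTRAINTS = {
    "employs": {"domain": "Organization", "range": "Person"},
    "locatedIn": {"domain": "Organization", "range": "Place"},
}

# Known types for individuals (invented for illustration).
TYPES = {"AcmeCorp": "Organization", "Ada": "Person", "Paris": "Place"}

def validate(triple: tuple[str, str, str]) -> bool:
    """Check an extracted (subject, predicate, object) triple against the ontology."""
    subj, pred, obj = triple
    constraint = PROPERTY_CONSTRAINTS.get(pred)
    if constraint is None:
        return False  # unknown property: reject rather than guess
    return (TYPES.get(subj) == constraint["domain"]
            and TYPES.get(obj) == constraint["range"])

# Imagine these triples came back from an LLM's information-extraction prompt:
extracted = [("AcmeCorp", "employs", "Ada"),    # consistent with the ontology
             ("Ada", "locatedIn", "AcmeCorp")]  # violates the domain constraint
print([validate(t) for t in extracted])  # [True, False]
```

The same structure runs in the other direction as well: the constraints can be serialized into the prompt itself, steering the LLM toward extractions the ontology will accept.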
Fall Series Topics
- Semantic Understanding and Knowledge Representation: Both ontologies and LLMs represent “knowledge”. Ontologies explicitly capture data’s semantics regarding entities and their relationships, and are based on a formal, structured, axiom-based/logical representation of a domain of knowledge. LLMs, in contrast, have implicit, probabilistic representations of “knowledge”, based on the patterns in their training data.
- "What do we mean by knowledge representation?” Internal vs external vs formal knowledge, a sliding bar
- Technical Assistance and Hybrid Systems: Both ontologies and LLMs can be used to build question-answering systems, information extraction systems, chatbots and a variety of technical assistants. They can be used together to improve functionality and correctness. For example, LLMs can be used to extract information from text and aid in its mapping to an ontology. LLMs can make ontologies more accessible and usable for non-expert users. In turn, ontologies can be used to formulate prompts to the LLM, or to validate the responses of the LLM.
- 4 October 2023 Kickoff/Overview, Andrea Westerinen and Mike Bennett
- 11 October 2023 Setting the stage, Deborah McGuinness
- Rensselaer Tetherless World Senior Constellation Chair
- Professor of Computer Science, Cognitive Science, and Industrial and Systems Engineering
- Expert in knowledge representation, reasoning languages and systems
- 18 October 2023 A look across the industry, Part 1
- 25 October 2023 A look across the industry, Part 2
- 1 November 2023 Demos of information extraction via hybrid systems
- 8 November 2023 Broader thoughts
- 15 November 2023 Discussion and Synthesis, including questions for the full summit