
Ontology Summit 2013: (Track-D) "Software Environments for Evaluating Ontologies" - Synthesis

Track Co-champions: Michael Denny, Ken Baclawski & Peter Yim

Mission Statement

Through this track, we aim to coordinate the following activities:

  • the provision of a venue that brings together individuals and communities who can help define and advance the state of the art in software and systems for evaluating ontologies
  • the collection and enumeration of software environments and tools for evaluating ontologies (with emphasis on open efforts and on tools that are publicly available)
  • investigations and development work (software prototyping and implementation) focused on the ontology evaluation theme, leading to interim presentations at the symposium, and possibly continued after this Ontology Summit ... (this bullet, which was on our original mission statement, is now handled by the Hackathon-Clinics Activities champions - see: OntologySummit2013_Hackathon_Clinics)

see also: OntologySummit2013_Software_Environments_For_Evaluating_Ontologies_CommunityInput


Work-products from this Track


The following is an initial input from Track D for the Summit Communique. Not all of these points will necessarily be addressed and included; they are provided for comment. --MikeDenny /2013.03.28

Track D, as "Software Environments for Evaluating Ontologies", falls within the current Communique outline under:

C. The State of the Art of Ontology Evaluation (4) What tool support is currently available to support the evaluation of the characteristics (identified in C-2) and the best practices (identified in C-3)?

In this vein, some preliminary Track D concepts that may be developed for inclusion in the Summit Communique are, in no special order:

  • The notion of tool support for quality is broader than the track's title suggests and should include "guidance" as well as "evaluation" of those ontology characteristics that determine an ontology's quality and fitness. Ontology tools and software environments may intentionally constrain, or recommend to the user, proper ontology structure and content.
  • Tools may contribute this "evaluation" or "guidance" function at different points along the ontology life cycle, and, for a given characteristic, a tool that performs well in one life cycle phase may be outperformed in another phase by a different tool. Generally, appreciation of the full life cycle of an ontology is not well established within the ontology community.
  • There are central aspects of ontology quality that may not be amenable to software control or assessment. For example, the need for clear, complete, and consistent lexical definitions of ontology terms is not presently addressed by software beyond identifying where lexical definitions are missing entirely (the first sketch after this list shows such a check). Another area of quality that is difficult for software to determine is the semantic fitness of an ontology to its world domain (reality) or to its application domain. Software guidance may be available for judging the fitness of candidate ontologies for import and reuse, but not for the novel content of a new ontology.
  • The design, implementation, and use requirements of an ontology may affect how quality and fitness on a particular ontology characteristic are determined, as well as how they are interpreted and valued. Perhaps all quality and fitness assessments by software should be traceable to stated ontology requirements (the second sketch after this list shows one possible shape for this).
  • Significant new ontology evaluation tools are now becoming available to users. Forging a link between such tools and existing IT architecture and design tools (e.g., EA and SA) remains a future possibility for integrating ontology into mainstream application software development within enterprise or more focused IT environments. This capability could offer a definitive means of connecting ontology quality/fitness characteristics and measures to use case and application software requirements.
  • Approximate lexical and structural matching of a new ontology or ontology component against the content of a repository of known ontologies may offer an effective means of identifying comparable ontology content (the third sketch after this list illustrates the lexical side) for: 1) demonstrable coding patterns; 2) confirmation of authoring approach; and 3) identification of reuse candidates.
  • Given sufficient results from the Ontology Quality Software Survey, an analysis of the degree to which current tool capabilities align with the ontology quality priorities expressed by Tracks A-C.
  • Discoveries about the state of ontology evaluation stemming from the Hackathon and Clinic experiences.
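
To make a few of the points above concrete, three short sketches follow. None is drawn from any particular tool surveyed by this track. The first is a minimal missing-definition check of the kind described above: it flags OWL classes and properties that carry neither an rdfs:comment nor a skos:definition. It assumes Python with the rdflib library; the file name example.owl is a placeholder.

    from rdflib import Graph, RDF, RDFS
    from rdflib.namespace import OWL, SKOS

    g = Graph()
    g.parse("example.owl", format="xml")  # placeholder file, RDF/XML serialization

    # Terms to check: every declared class and property in the ontology.
    terms = set(g.subjects(RDF.type, OWL.Class))
    terms |= set(g.subjects(RDF.type, OWL.ObjectProperty))
    terms |= set(g.subjects(RDF.type, OWL.DatatypeProperty))

    # Flag terms that carry neither an rdfs:comment nor a skos:definition.
    # Note the limit of such a check: it cannot judge whether a definition
    # that is present is clear, complete, or consistent.
    for term in sorted(terms):
        if g.value(term, RDFS.comment) is None and g.value(term, SKOS.definition) is None:
            print("missing lexical definition:", term)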
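
The second sketch shows one possible shape for the traceability idea above: each automated assessment carries the identifier of the stated requirement it traces to. The check names and requirement identifiers are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Assessment:
        check: str           # the evaluation that was performed
        result: str          # its outcome, e.g. "pass" or a score
        requirement_id: str  # the stated ontology requirement it traces to

    # An invented report: each finding cites the requirement that motivated
    # the check, so results can be interpreted and valued relative to the
    # ontology's own stated requirements.
    report = [
        Assessment("missing-definition scan", "3 terms flagged", "REQ-007"),
        Assessment("logical consistency check", "pass", "REQ-002"),
    ]
    for a in report:
        print(f"[{a.requirement_id}] {a.check}: {a.result}")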
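
The third sketch illustrates the lexical half of approximate matching against a repository of known ontologies, using Jaccard similarity over label token sets. The labels and the threshold are invented; a real matcher would harvest labels (e.g., rdfs:label values) from the ontologies involved and would combine lexical overlap with structural comparison.

    def tokens(label):
        """Lowercase a term label and split it into word tokens."""
        return set(label.lower().replace("_", " ").split())

    def jaccard(a, b):
        """Jaccard similarity: size of intersection over size of union."""
        union = a | b
        return len(a & b) / len(union) if union else 0.0

    new_terms = ["Postal Address", "Delivery_Point"]  # invented labels
    repository_terms = ["Mailing Address", "Street Address", "Delivery Point Code"]

    THRESHOLD = 0.3  # illustrative cut-off for reporting a candidate match
    for new in new_terms:
        for known in repository_terms:
            score = jaccard(tokens(new), tokens(known))
            if score >= THRESHOLD:
                print(f"{new!r} ~ {known!r}  (similarity {score:.2f})")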

--

maintained by the Track-D co-champions: Mike Denny, Ken Baclawski & Peter P. Yim ... please do not edit