

This is a property of type Text.


Pages using the property "Aim"

Showing 4 pages using this property.


TrackA +Ontologies are built to solve problems, and ultimately an ontology's worth can be measured by the effectiveness with which it helps in solving a particular problem. Nevertheless, as a designed artifact, any ontology has a number of intrinsic characteristics that can be measured and that give an indication of how "well-designed" it is. Examples include the proper use of the various relations found within an ontology, proper separation of concepts and facts (sometimes referred to as the class vs. instance distinction), proper handling of data type declarations, embedding of semantics in naming (sometimes called "optimistic naming"), inconsistent range or domain constraints, better class/subclass determination, the use of principles of ontological analysis, and many more. This Track aims to enumerate, characterize, and disseminate information on approaches, methodologies, and tools designed to identify such intrinsic characteristics, with the goal of raising the quality of ontologies in the future.
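As a purely illustrative aside (not part of the Track description), the sketch below shows the kind of automated intrinsic checks the Track has in view, assuming Python with the rdflib library and a hypothetical ontology file named example-ontology.ttl; it flags object properties missing domain/range declarations and terms used both as a class and as an instance.

 from rdflib import Graph, RDF, RDFS, OWL

 # Load the ontology to be checked (hypothetical file name).
 g = Graph()
 g.parse("example-ontology.ttl", format="turtle")

 # Check 1: object properties declared without rdfs:domain or rdfs:range.
 for prop in g.subjects(RDF.type, OWL.ObjectProperty):
     if (prop, RDFS.domain, None) not in g:
         print(f"No rdfs:domain declared for {prop}")
     if (prop, RDFS.range, None) not in g:
         print(f"No rdfs:range declared for {prop}")

 # Check 2: terms used both as a class and as an instance
 # (the class vs. instance distinction mentioned above).
 classes = set(g.subjects(RDF.type, OWL.Class))
 for subj, _, obj in g.triples((None, RDF.type, None)):
     if subj in classes and obj not in (OWL.Class, RDFS.Class):
         print(f"Used as both class and instance: {subj}")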
TrackB +The intent is to explore, clarify, and identify gaps, practical and theoretical, in the evaluation of ontologies from a systems perspective, using the paradigm of black-box evaluation. Extrinsic aspects of ontology evaluation include subjective factors, measures or metrics, and the range of values of quantifiable attributes. In a systems context, evaluations are derived from examination of the inputs or stimuli (to the black box) and the outputs or externally measurable attributes or behaviors, where those behaviors are controlled or influenced by an ontology. The ontology in question may be fully embedded or encapsulated within an entity or system, or may be externally accessible (and potentially shared) among multiple entities or systems. System or entity behaviors that are not governed by an ontology must be separated out and accounted for in any ontology evaluation process.
TrackC +There are two approaches that can be taken to assuring the quality of an ontology: (1) measure the quality of the result against the requirements it should meet, or (2) use a process or methodology that will ensure the quality of the resultant ontology. If you wait until the end of ontology development to measure quality, the costs of correcting any errors are likely to be high. Therefore, using a process or methodology that builds quality into an ontology can have significant benefits. At present, however, it is unclear whether any process or methodology exists that, if followed, is sufficient to guarantee the quality of the resulting ontology, and most of those that do exist are relatively informal and tend to require expert support. A further consideration in evaluating ontologies is the different scenarios in which they are used: for example, one ontology might be used as a formal conceptual model to inform development, while another might be used in an ontology-based application. Both the evaluation criteria and the development methodologies employed may vary widely.
TrackD +Through this track, we aim to coordinate the following: * the provision of a venue that brings together individuals and communities who can help define and advance the state of the art in software and systems for evaluating ontologies * the collection and enumeration of software environments and tools for evaluating ontologies (with emphasis on open efforts and publicly available tools) * investigation and development work (software prototyping and implementation) focused on the ontology evaluation theme, leading to interim presentations at the symposium and possibly continuing after this Ontology Summit