Actions

Ontolog Forum

Ontology Summit 2013: Virtual Panel Session-03 - Thu 2013-01-31

Summit Theme: "Ontology Evaluation Across the Ontology Lifecycle"

Summit Track Title: Track-A: Intrinsic Aspects of Ontology Evaluation

Session Topic: Intrinsic Aspects of Ontology Evaluation: Practice and Theory

  • Session Co-chairs: Dr. SteveRay (CMU) and Dr. LeoObrst (MITRE) ... [ intro slides ]

Panelists / Briefings:

Archives

Abstract

OntologySummit2013 Session-03: "Intrinsic Aspects of Ontology Evaluation: Practice and Theory" - intro slides

This is our 8th Ontology Summit, a joint initiative by NIST, Ontolog, NCOR, NCBO, IAOA & NCO_NITRD with the support of our co-sponsors. The theme adopted for this Ontology Summit is: "Ontology Evaluation Across the Ontology Lifecycle."

Currently, there is no agreed methodology for development of ontologies, and there are no universally agreed metrics for ontology evaluation. At the same time, everybody agrees that there are a lot of badly engineered ontologies out there, thus people use -- at least implicitly -- some criteria for the evaluation of ontologies.

During this Ontology Summit, we seek to identify best practices for ontology development and evaluation. We will consider the entire lifecycle of an ontology -- from requirements gathering and analysis, through to design and implementation. In this endeavor, the Summit will seek collaboration with the software engineering and knowledge acquisition communities. Research in these fields has led to several mature models for the software lifecycle and the design of knowledge-based systems, and we expect that fruitful interaction among all participants will lead to a consensus for a methodology within ontological engineering. Following earlier Ontology Summit practice, the synthesized results of this season's discourse will be published as a Communiqué.

At the Launch Event on 17 Jan 2013, the organizing team provided an overview of the program, and how we will be framing the discourse around the theme of this OntologySummit. Today's session is one of the events planned.

In this 3rd virtual panel session of the Summit, we focus on theory and practice for intrinsic aspects of ontology evaluation. Our speakers will present a number of approaches and frameworks for evaluating the quality of ontologies, and some theoretical discussion of what constitutes intrinsic evaluation. Our main goal in this virtual session is to begin to lay out the criteria for intrinsic evaluation of ontologies, some possible metrics, and the rationale for these. We hope that all of the participants in the open discussion and chat will join us in helping to flesh out intrinsic evaluation criteria and their dimensions.

More details about this Ontology Summit are available at: OntologySummit2013 (homepage for this summit)

Briefings

  • Ms. MariaPovedaVillalon (Universidad Politécnica de Madrid), Dr. MariCarmenSuarezFigueroa (Universidad Politécnica de Madrid) and Dr. AsuncionGomezPerez (Universidad Politécnica de Madrid) - "A Pitfall Catalogue and OOPS!: An Approach to Ontology Validation" ... [ slides ]
    • Abstract: ... One of the advantages of using methodologies for building ontologies is to improve the quality of the resulting ontology. However, due to the difficulties involved in ontology modelling, such quality is not totally guaranteed. These difficulties are related to the inclusion of anomalies or worst practices. Our approach contributes to the ontology validation activity by (1) providing a catalogue of common worst practices, which we call pitfalls, and (2) proposing a web-based tool, called "OOPS!". This approach will help developers in the following two ways: (1) avoiding the appearance of pitfalls in ontologies, and (2) improving ontology quality by automatically detecting potential errors.
  • Dr. SamirTartir (Philadelphia University, Amman, Jordan), Dr. IsmailcemBudakArpinar (University of Georgia), Dr. AmitSheth (Wright State University) - "Ontology Evaluation and Ranking using OntoQA" ... [ slides ]
    • Abstract: ... Ontologies form the cornerstone of the Semantic Web and are intended to help researchers to analyze and share knowledge, and as more ontologies are introduced, it becomes difficult for users to find good ontologies related to their work. Therefore, tools for evaluating and ranking ontologies are needed. In our talk, we present OntoQA, a framework that evaluates ontologies related to a certain set of terms and then ranks them according to a set of metrics that capture different aspects of ontologies. Since there are no global criteria defining what a good ontology should be, OntoQA allows users to tune the ranking towards certain features of ontologies to suit the needs of their applications.
  • Dr. JesualdoTomasFernandezBreis (Universidad de Murcia), Dr. AstridDuqueRamos (Universidad de Murcia), Dr. RobertStevens (University of Manchester), Dr. NathalieAussenacGilles (Institut de Recherche en Informatique de Toulouse (IRIT), Université Paul Sabatier) - "The OQuaRE Framework for Ontology Evaluation" ... [ slides ]
    • Abstract: Many Software Engineering methods have been adapted and applied to ontology engineering in recent decades. This has not so far been the case for ontology evaluation, despite the availability of international software quality standards. Meanwhile, the increasing importance of ontologies has resulted in the development of a large number of them, but the lack of mechanisms for guiding users in making informed decisions about which ontology to use under given circumstances makes it hard for ontology users and tool developers to select ontologies for use and reuse. We propose a framework named OQuaRE for evaluating the quality of ontologies, based on the SQuaRE standard for software quality evaluation. The main objective of OQuaRE is to provide an objective, standardized framework for ontology quality evaluation in order to identify the strengths and weaknesses of ontologies. OQuaRE aims at helping users to make informed decisions and ontology experts to evaluate the quality of their ontologies with the support of automatically calculated metrics. We will describe not only the current version of the framework but also the results and evaluation of its application by external experts.
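A catalogue-driven pitfall scan like the one described in the OOPS! abstract above can be illustrated with a minimal, self-contained sketch. The `Term` record and the two pitfalls shown are hypothetical simplifications for illustration, not the actual OOPS! catalogue or implementation:

```python
# Sketch of a catalogue-driven pitfall scanner in the spirit of OOPS!:
# each pitfall is a named predicate over a simplified term record.
# The Term structure and the two pitfalls are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Term:
    name: str
    label: Optional[str] = None    # human-readable annotation (rdfs:label)
    comment: Optional[str] = None  # documentation (rdfs:comment)
    domain: Optional[str] = None   # for properties
    range: Optional[str] = None

def missing_annotations(t: Term) -> bool:
    # Pitfall: element lacks label and/or comment metadata.
    return t.label is None or t.comment is None

def missing_domain_or_range(t: Term) -> bool:
    # Pitfall: property declared without a domain or a range.
    return t.domain is None or t.range is None

CATALOGUE = {
    "missing annotations": missing_annotations,
    "missing domain or range": missing_domain_or_range,
}

def scan(terms):
    """Return {pitfall name: [offending term names]} for all catalogued checks."""
    report = {}
    for name, check in CATALOGUE.items():
        hits = [t.name for t in terms if check(t)]
        if hits:
            report[name] = hits
    return report
```

A real scanner would parse an OWL file and cover the full pitfall catalogue; the point here is only the shape of the approach: a list of independent, named checks applied uniformly to every term.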

Agenda

OntologySummit2013 - Panel Session-03

  • Session Format: this is a virtual session conducted over an augmented conference call

Proceedings

Please refer to the above

IM Chat Transcript captured during the session

see raw transcript here.

(for better clarity, the version below is a re-organized and lightly edited chat-transcript.)

Participants are welcome to make light edits to their own contributions as they see fit.

-- begin in-session chat-transcript --

[08:23] Peter P. Yim: Welcome to the Ontology Summit 2013: Virtual Panel Session-03 - Thu 2013-01-31

Summit Theme: Ontology Evaluation Across the Ontology Lifecycle

  • Summit Track Title: Track-A: Intrinsic Aspects of Ontology Evaluation

Session Topic: Intrinsic Aspects of Ontology Evaluation: Practice and Theory

Panelists / Briefings:

  • "A Pitfall Catalogue and OOPS!: An Approach to Ontology Validation"

- Ms. MariaPovedaVillalon (Universidad Politecnica de Madrid)

- Dr. MariCarmenSuarezFigueroa (Universidad Politecnica de Madrid)

- Dr. AsuncionGomezPerez (Universidad Politecnica de Madrid)

  • "Ontology Evaluation and Ranking using OntoQA"

- Dr. Samir Tartir (Philadelphia University, Amman, Jordan)

- Dr. IsmailcemBudakArpinar (University of Georgia)

- Dr. Amit Sheth (Wright State University)

  • "The OQuaRE Framework for Ontology Evaluation"

- Dr. JesualdoTomasFernandezBreis (Universidad de Murcia)

- Ms. AstridDuqueRamos (Universidad de Murcia)

- Dr. Robert Stevens (University of Manchester)

- Dr. NathalieAussenacGilles (Institut de Recherche en Informatique de Toulouse (IRIT), Université Paul Sabatier)

Logistics:

  • (if you haven't already done so) please click on "settings" (top center) and morph from "anonymous" to your RealName (in WikiWord format)
  • Mute control: *7 to un-mute ... *6 to mute
  • Can't find Skype Dial pad?
    • for Windows Skype users: it's under the "Call" dropdown menu as "Show Dial pad"
    • for Linux Skype users: please note that the dial pad is only available on v4.1 (or later) or on the earlier Skype 2.x versions; if the dial-pad button is not shown in the call window, you need to press the "d" hotkey to enable it

Proceedings:

[09:06] anonymous1 morphed into MariaPovedaVillalon

[09:14] anonymous1 morphed into Megan Katsumi

[09:16] anonymous1 morphed into Carmen Chui

[09:19] anonymous morphed into Jim Disbrow

[09:20] anonymous morphed into JesualdoTomasFernandezBreis

[09:20] JesualdoTomasFernandezBreis: Hello all

[09:24] anonymous morphed into Mohammad Aqtash

[09:25] anonymous1 morphed into David Makovoz

[09:26] MeganKatsumi1 morphed into Michael Grüninger

[09:28] Leo Obrst: Hi, Jesualdo, Maria, Mohammad and all!

[09:28] anonymous morphed into Samir Tartir

[09:28] MariaPovedaVillalon: Hi!

[09:28] Leo Obrst: Hi, Samir!

[09:28] MariCarmenSuarezFigueroa: Hello all....

[09:30] Mohammad Aqtash: Hi LeoObrst!

[09:30] astridduque morphed into AstridDuqueRamos

[09:31] anonymous morphed into Mike Denny

[09:31] anonymous morphed into Doug Foxvog

[09:31] anonymous1 morphed into Torsten Hahmann

[09:32] anonymous1 morphed into Clare Paul

[09:32] anonymous morphed into Joseph Tennis

[09:32] anonymous morphed into IsmailcemBudakArpinar

[09:34] anonymous morphed into James Odell

[09:35] Ram D. Sriram: I do have a problem viewing slides on a Mac (using VNC). It seems to work on a PC.

[09:50] Steve Ray: @Ram: My theory remains that this problem relates to the latest version of Java, which somehow keeps the VNC from working properly.

[09:52] Todd Schneider: Steve, Ram, I checked and the browser has the Java plug-in disabled.

[09:36] anonymous morphed into JoaoPauloAlmeida

[09:38] Peter P. Yim: == Steve Ray opens the session ... see: [0-Chair] slides

[09:39] List of members: Amanda Vizedom, Anatoly Levenchuk, AstridDuqueRamos, Bobbin Teegarden, Bob Schloss, IsmailcemBudakArpinar, Carmen Chui, Clare Paul, David Makovoz, David Leal, Doug Foxvog, Fabian Neuhaus, Fran Lightsom, Henson Graves, James Odell, JesualdoTomasFernandezBreis, Jim Disbrow, JoaoPauloAlmeida, Joel Bender, Joseph Tennis, Ken Baclawski, Leo Obrst, MariaPovedaVillalon, MariCarmenSuarezFigueroa, Mark Fox, Matthew West, Megan Katsumi, Michael Grüninger, Mike Denny, Mike Dean, Mohammad Aqtash, Peter P. Yim, Ram D. Sriram, Samir Tartir, Steve Ray, Terry Longstreth, Todd Schneider, Torsten Hahmann, vnc2

[09:42] Duane Nickull: Good day

[09:42] Peter P. Yim: Hi Duane, welcome to the session

[09:42] Duane Nickull: Pleased to be here.

[09:45] Peter P. Yim: == MariaPovedaVillalon presenting ... see: [1-Poveda] slides

[09:45] anonymous1 morphed into Trish Whetzel

[09:52] Steve Ray: We are now on slide 7

[09:54] Bob Schloss: As I listen to the approach Maria has started doing, it reminds me of some work I did with my colleague Achille Fokoue-Nkoutche in the very early days of the XML Schema language. We released a tool called IBM XML Schema Quality Checker through the IBM alphaWorks program. It was very widely used because this kind of document / message / vocabulary modeling was unfamiliar to a lot of people, and top quality tools for construction of these XML Schemas (such as from companies such as Altova, Progress Software, IBM Rational) were still not widely available.

[09:56] Bob Schloss: Equally importantly, guidelines, best practices and patterns for XML Schema development, which were later compiled by a number of people, were not yet documented... so our tool warned people when they were using a construct that was strictly legal but might limit evolvability of their schema or reuse by others.

[09:56] Bob Schloss: [I have to leave for another meeting... Will review all slides and the recording of this chat later. Thanks all]

[09:57] MariCarmenSuarezFigueroa: @BobSchloss, interesting work.

[09:56] Steve Ray: @Maria: Question for later: To recognize your Pitfall #5, it would seem that you would need to know the intent of a term such as isSoldIn and isBoughtIn. How does your automated tool do this?

[09:57] MariCarmenSuarezFigueroa: @SteveRay: at this moment our tool OOPS! detects in an automated way a subset of the pitfalls in the catalogue

[09:58] MariCarmenSuarezFigueroa: @SteveRay for those detected by OOPS! there are different approaches, as Maria is explaining

[09:59] MariCarmenSuarezFigueroa: (is going to explain in next slides)

[09:59] Doug Foxvog: @Steve. The inverse relationship between isSoldIn & isBoughtIn can be determined to be inconsistent with having the argument types reversed. By noting that the argument types match (arg1<=>arg1 & arg2<=>arg2) one can suggest that the error is in calling it an inverse relationship instead of being in mis-assignment of argument types.
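Doug's consistency test for declared inverses can be sketched directly: if p is declared the inverse of q, the domain of p should match the range of q and vice versa; if instead the argument types match pairwise, the "inverse" declaration itself is the more likely error. The property records below are a hypothetical simplification (plain dicts rather than parsed OWL axioms):

```python
# Sketch of the inverse-property check described above:
# if p owl:inverseOf q, then domain(p) should equal range(q) and
# range(p) should equal domain(q). When the argument types instead
# match pairwise (domain==domain, range==range), the likely error is
# the inverse declaration, not the domain/range assignments.

def check_inverse(p, q):
    """p, q: dicts with 'name', 'domain', 'range'. Returns a diagnosis string."""
    if p["domain"] == q["range"] and p["range"] == q["domain"]:
        return "consistent"
    if p["domain"] == q["domain"] and p["range"] == q["range"]:
        return "suspect inverse declaration"
    return "domain/range mismatch"

isSoldIn   = {"name": "isSoldIn",   "domain": "Product", "range": "City"}
isBoughtIn = {"name": "isBoughtIn", "domain": "Product", "range": "City"}
print(check_inverse(isSoldIn, isBoughtIn))  # -> suspect inverse declaration
```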

[10:01] Steve Ray: @MariCarmenSuarezFigueroa, Doug Foxvog: Ah yes, I see now - Domain and Range mismatch.

[10:04] Doug Foxvog: Slide 14 suggests possible symmetric or transitive properties if there are equal domain & range. Such suggestions should not be made if the relations are already defined as asymmetric or functional.

[10:04] MariCarmenSuarezFigueroa: yes @DougFoxvog

[10:02] MariCarmenSuarezFigueroa: OOPS! is available at http://www.oeg-upm.net/oops

[10:00] anonymous1 morphed into Bruce Bray

[09:59] Samir Tartir: Hello Dr. @Arpinar, @MohammadAqtash... Nice to see you here.

[10:06] Samir Tartir: Some definitions I will be using in my presentation can be found in this paper:

http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4338348&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4338348

[10:06] Terry Longstreth: Maria- have you considered running OOPS! against public domain ontologies, and publishing the resulting evaluations?

[10:07] MariCarmenSuarezFigueroa: @TerryLongstreth: thanks for the suggestion. Yes, we have already made an experiment in this sense; and our idea is to evaluate more available ontologies in order to see the current state of public ontologies.

[10:09] JesualdoTomasFernandezBreis: @MariCarmenSuarezFigueroa: evaluating OBO and BioPortal ontologies might be interesting

[10:10] MariCarmenSuarezFigueroa: @Jesualdo, thanks! We have already evaluated a subset of OBO and BioPortal ontologies, but our plan is to evaluate more

[10:10] Todd Schneider: Can the OOPS! source code be obtained?

[10:11] MariCarmenSuarezFigueroa: @ToddSchneider: at this moment the source code is not available.

[10:12] Todd Schneider: Maria, Too bad. Can your team consider working with the OOR initiative?

[10:13] MariCarmenSuarezFigueroa: @ToddSchneider: we are analysing how to proceed with our code. Maybe we can consider your suggestion

[10:13] MariaPovedaVillalon: we will provide web services soon Todd

[10:14] Todd Schneider: Maria, For performance reasons having a more 'local' instance of OOPS! would be optimal.

[10:15] MariCarmenSuarezFigueroa: @Todd, yes you are right. As mentioned, we are now analysing different options for our code :-D

[10:15] MariaPovedaVillalon: @ToddSchneider you are right, as Mari Carmen said we need to think about how to proceed with the code

[10:15] MariaPovedaVillalon: @Todd I totally agree with the idea of sharing the code

[10:16] MariaPovedaVillalon: @Todd as a first step we are creating the web services so that everybody can include the features in their code, but sure, we also need to share the code

[10:12] Leo Obrst: @Maria: 3 questions: 1) These are all OWL/RDF ontologies; are you considering other languages, e.g., Common Logic? 2) What if the ontology you are evaluating contains imports or references to other ontologies, do you track down these and evaluate them? 3) What is "P9. Missing basic information"?

[10:13] Amanda Vizedom: @Maria: Very nice. I have seen several large projects spend extensive effort in developing their own, in-house versions of something like this approach. This speaks to the relevance of the approach to real work-flows. However, due to lack of resources and/or expertise, those in-house versions usually end up not as good as OOPS! appears to be. They also often generate repeating cycles of management or collaborator doubt, as their developers cannot point to independent grounding and acceptance. OOPS! seems to me like a valuable contribution to operational use of ontologies.

[10:14] MariCarmenSuarezFigueroa: @Amanda, thank you for your comment.

[10:12] Trish Whetzel: Of the ontologies in BioPortal, do you find errors correlated to any of the groups, such as OBO Foundry or UMLS, and/or ontology format .. which is also somewhat an indicator of ontology design patterns?

[10:15] JesualdoTomasFernandezBreis: @TrishWhetzel: in another project we have done a systematic analysis of the labels of BioPortal ontologies and we found some problems with formats and availability of some files

[10:19] Trish Whetzel: @Jesualdo Thanks, I'm interested in these issues

[10:25] JesualdoTomasFernandezBreis: @Trish: I will send you an email with the details

[10:16] Megan Katsumi: @Maria: Sorry if you mentioned this already, but how do you decide if a particular characteristic qualifies as a pitfall?

[10:17] MariaPovedaVillalon: @Megan we have observed the pitfalls we list as errors in ontologies when we manually analyzed them; however, the same "characteristic" might not be an error in another ontology, so in the end the user decides. Sometimes the error does not need to be checked, but that is not always the case.

[10:15] Amanda Vizedom: @Maria: A few questions: (1) Is it correct that OOPS! works specifically on OWL? Is it further narrowed to specific dialects (such as DL)? Does your group have any plans or interest in extending to other languages (for example, Common Logic)?

[10:19] Doug Foxvog: @Amanda, re your first question. OOPS! only accepted OWL RDF/XML when I looked at it in December.

[10:16] Amanda Vizedom: @Maria: (2) Can OOPS! detect errors (or warnings/suggestions) based on general logical entailments? Does OOPS! make use of, or contain, a general OWL reasoner?

[10:19] MariaPovedaVillalon: @Amanda we think about leaving that decision to the user

[10:20] MariaPovedaVillalon: @Amanda reasoners do already exist and we don't want to reinvent the wheel; however, we can benefit from them to detect more pitfalls, but the computational price would be too high

[10:34] MariaPovedaVillalon: @Amanda in summary, in any case our idea is that we should leave the decision of using reasoners to the user (maybe a checkbox in OOPS!) or point to existing reasoners, giving the user some guidelines about which things to check and common errors

[10:22] Amanda Vizedom: @Maria (3) Have you run into any difficulties concerning what seem to be pitfalls and the behavior of OWL reasoners? An example that comes to mind: if a property is used to relate two things, one of which is not stated to satisfy the range requirements, it will be inferred that the thing *does* meet the range requirements. But in some (many?) cases, the omission is actually indicative of an error. Would OOPS! treat as a pitfall or warning the fact that the thing in the range is not stated to meet the range requirements?

[10:25] MariaPovedaVillalon: @AmandaVizedom excuse me, if I'm not wrong you are talking about instances; right now OOPS! only looks at the schema level

[10:26] MariaPovedaVillalon: @Amanda have I answered your question? I do not think I understood it properly, maybe...

[10:32] Amanda Vizedom: @Maria: That's fine. I think that last question may be confusing if we are accustomed to working at different levels of language expressiveness. If you are working in DL it may be an instance-level issue; less so if working with expressiveness beyond DL. Thanks!

[10:17] Peter P. Yim: @MariaPovedaVillalon @MariCarmenSuarezFigueroa - we are contemplating doing a hackathon exercise; it will be great if your team can join us in that effort (we have yet to refine what exactly that "hackathon" would entail, though, so participant input is solicited)

[10:17] MariCarmenSuarezFigueroa: @Peter, yes, count us in

[10:18] Peter P. Yim: @MariCarmenSuarezFigueroa - fantastic! thank you.

[10:18] MariaPovedaVillalon: @Peter sure :)

[10:19] Peter P. Yim: Thanks, Maria.

[10:19] Ken Baclawski: @Maria: I built a system very similar to yours back in 2004. I called the problems with an ontology symptoms rather than pitfalls, but it is the same idea. It used a rule-based approach which was extendible. One interesting feature was that the symptoms that were generated were in a Symptom ontology and we performed reasoning on the symptoms that were generated to find relationships among symptoms, since we found that a single error can generate many symptoms (which, by the way, is the reason for using the word "symptom"). Here is a reference to the paper: K. Baclawski, C. Matheus, M. Kokar and J. Letkowski. Toward a Symptom Ontology for Semantic Web Applications. In ISWC'04, Lecture Notes in Computer Science 3298:650-667, Springer-Verlag, 2004.

[10:20] MariCarmenSuarezFigueroa: @KenBaclawski, thanks for the reference.

[10:21] MariaPovedaVillalon: @KenBaclawski, thank you very much for the reference, sounds really interesting and familiar what you said about one error many symptoms...

[10:13] Doug Foxvog: (re. Maria's slide#14) One can suggest transitivity if the domain is a subclass of the range. They need not be equal.

[10:28] MariaPovedaVillalon: @DougFoxvog we check what you said about suggestion in slide 14

[10:29] MariaPovedaVillalon: @DougFoxvog we fixed the errors it had in December; OOPS! was supposed to check it, and if everything is fine it should work by now

[10:30] JesualdoTomasFernandezBreis: I think tools like OOPS! are fundamental for ontology engineers, thanks for your work!

[10:31] MariaPovedaVillalon: Thank you @Jesualdo

[10:31] MariCarmenSuarezFigueroa: @Jesualdo, thank you very much for your comment!

[10:12] Peter P. Yim: == Samir Tartir presenting ... see: [2-Tartir] slides

[10:14] anonymous1 morphed into Dennis Wisnosky

[10:14] anonymous1 morphed into YuvaTarunVarmaDatla

[10:17] anonymous2 morphed into NathalieAussenacGilles

[10:20] Joseph Tennis: can someone extract all the citations being shared and put them in one spot?

[10:21] Steve Ray: @Joseph: There is one spot we are collecting references. Amanda can say more about that (and did in an earlier session).

[10:22] Joseph Tennis: sweet! thanks!

[10:23] Steve Ray: @Joseph: I should add that it is our shared responsibility to populate it.

[10:28] Todd Schneider: Samir, Would you provide the definitions for each of the variables in your metrics in the chat?

[10:28] Amanda Vizedom: @Steve and @Joseph I will post a note to the summit list under the {biblio} subject soon. I've been away for a bit and have begun getting the Zotero library caught up. Meanwhile, the library itself is at https://www.zotero.org/groups/ontologysummit2013/items

[10:33] Doug Foxvog: @Samir: many of your metrics are metrics for knowledge bases. It might be useful to distinguish the two classes of metrics, and have different scores for ontologies (which define types and relations) and knowledge bases (which define individuals and provide information about the individuals by asserting relations that apply to them).

[10:36] Doug Foxvog: @Samir: I see that you do distinguish multiple ranking types in slides 16 & 17. But defining different sets of rankings for different types of KBs or ontologies might be useful.

[10:34] Leo Obrst: @all: I think these evaluation tools and metrics would be very useful in the Open Ontology Repository (OOR). Perhaps the speakers would like to join our OOR group and provide potential services?

[10:36] MariCarmenSuarezFigueroa: @Leo, yes, we can consider joining the OOR group and see how we can contribute to it

[10:40] MariaPovedaVillalon: @Leo where can we find information to join and contribute to OOR?

[10:42] Todd Schneider: Maria, http://ontolog.cim3.net/cgi-bin/wiki.pl?OpenOntologyRepository

[10:43] MariaPovedaVillalon: thanks

[10:39] Leo Obrst: @Samir: can you provide definitions for your variables?

[10:43] Samir Tartir: @ToddSchneider & @LeoObrst: There is a large number of variables used. E.g. a set of classes, C; a set of relationships, P; an inheritance function, Hc; a set of class attributes, Att.

[10:44] Samir Tartir: @ToddSchneider & @LeoObrst: The definitions are all included in the paper I referenced right before I started presenting.
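With class, relationship, and inheritance sets like the C, P, and Hc Samir lists, schema-level metrics reduce to simple counting. The sketch below is modeled on OntoQA-style relationship richness (the share of non-inheritance relationships in the schema); treat the exact definition as an assumption and consult the paper Samir cites for the authoritative formulas:

```python
# Schema-level metric sketch in the spirit of OntoQA: relationship
# richness, taken here as |P| / (|P| + |H|), where P is the set of
# non-inheritance relationships and H the set of subclass (inheritance)
# links. The exact definition should be checked against the cited paper.

def relationship_richness(relationships, subclass_links):
    p, h = len(relationships), len(subclass_links)
    return p / (p + h) if p + h else 0.0

# Tiny illustrative schema: one object property, three subclass links.
P = {("Product", "isSoldIn", "City")}
H = {("City", "Place"), ("Product", "Artifact"), ("Place", "Thing")}
print(relationship_richness(P, H))  # 1 / (1 + 3) = 0.25
```

A taxonomy-only ontology scores 0.0 on this metric, while a schema dominated by domain relationships approaches 1.0, which is the intuition behind counting the two link types separately.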

[10:45] Samir Tartir: I will be more than happy to send you the paper if you'd like.

[10:45] Todd Schneider: Samir, I missed that reference.

[10:46] Samir Tartir: Todd, here it is again:

http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4338348&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4338348

[10:46] Todd Schneider: Samir, Great. Thank you.

[10:47] Torsten Hahmann: @Samir: before weighting, are the various metrics standardized (to values between 0 and 1, for example)? Otherwise two metrics with equal weight may still influence the total score differently

[10:47] Samir Tartir: @Torsten: Yes.

[10:50] Samir Tartir: @DougFoxvog: Not sure what you mean here. Maybe discuss this after the current speaker finishes.

[10:51] Torsten Hahmann: @Samir: I take your answer as they are standardized.

[10:53] Samir Tartir: @Torsten. Sorry for not being clear. Yes they are standardized.

[10:53] Torsten Hahmann: @Samir: thanks.
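The standardization Torsten asks about above — rescaling each metric to a common range before applying weights — can be sketched as min-max normalization across the candidate ontologies followed by a weighted sum. This is an illustration of the general idea, not OntoQA's actual procedure:

```python
# Min-max normalization of each metric across candidates, then a weighted
# sum: after rescaling to [0, 1], equal weights contribute comparably to
# the total score. Illustrative only; not OntoQA's actual procedure.

def normalize(values):
    """Rescale a list of raw metric values to [0, 1] (min-max)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def rank(candidates, weights):
    """candidates: {name: {metric: raw value}}; weights: {metric: weight}.
    Returns (name, score) pairs sorted best-first."""
    names = list(candidates)
    scores = {n: 0.0 for n in names}
    for metric, w in weights.items():
        normed = normalize([candidates[n][metric] for n in names])
        for n, v in zip(names, normed):
            scores[n] += w * v
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical candidates with two metrics on very different raw scales.
cands = {
    "A": {"depth": 4,  "richness": 0.9},
    "B": {"depth": 10, "richness": 0.1},
    "C": {"depth": 2,  "richness": 0.3},
}
print(rank(cands, {"depth": 0.5, "richness": 0.5}))
```

Without the normalization step, the `depth` metric (raw values up to 10) would dominate `richness` (raw values below 1) even at equal weights, which is exactly the imbalance Torsten's question points at.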

[10:38] Peter P. Yim: == AstridDuqueRamos presenting ... see: [3-DuqueRamos] slides

[10:53] Joseph Tennis: This was great! Wish I could stay longer. I look forward to using some of these metrics for my work. One question I have is what do these metrics look like on different versions of the same ontology? Do they change dramatically? Or do they not change much at all? It might be something we can look at here. In case you're interested in my work on versioning, you can check out my page: http://joseph-t-tennis.squarespace.com/research-streams/ Perhaps we can collaborate?

[10:53] Joseph Tennis: ciao!

[10:52] Amanda Vizedom: @Astrid (actually, this applies to @Samir's presentation as well): your approach measures some numeric/topographic qualities of an ontology, such as depth of inheritance, breadth of relationships, number of ancestors. I have seen many cases in which ontology teams, or projects of which they are a part, are required to report such metrics upward to management, and are held in some sense accountable for them, but it is very hard to see whether they are actually indicative of quality (and if so how). It may be that they are meaningful given some interpretation, given some requirements in place, or under some other conditions. What do they mean, in your view?

[10:54] Torsten Hahmann: +1 to @Amanda's comment: that should be one of the goals of this track (in my opinion)

[10:54] JesualdoTomasFernandezBreis: @Amanda: one of the next slides has some comments I think related to yours

[10:54] Steve Ray: @Amanda & @Astrid: Amanda, I think you are precisely raising the question about intrinsic vs. extrinsic evaluation. Management often cares about the latter more than the former, and sometimes at the expense of attention to the former.

[10:57] Samir Tartir: @Steve, @Amanda & @Astrid: It's relevant to each user or scenario. I think @Steve's comment is right on target.

[10:58] Leo Obrst: @Steve, Amanda, and all: Yes, because the current application is local and of highest priority to management: extrinsic typically is more valued.

[10:59] Amanda Vizedom: @SteveRay: Although I'm very interested in the intrinsic/extrinsic question, I see *this* question a bit differently. Staying within the intrinsic evaluation topic, there is an independent question about which, of the many intrinsic characteristics an ontology may be said to have, are actually measurements of quality. That a quality exists and can be measured, or even that it has some intuitive or aesthetic appeal, is not enough to establish that it is an aspect of ontology *quality*. The question is: Are these? And if so, why?

[11:02] Doug Foxvog: +1 @Amanda. Many of the mentioned characteristics are *features* of the ontologies. Whether they are measures of *quality* may in some instances be context dependent.

[11:03] Torsten Hahmann: I agree with @Doug: independent of whether intrinsic metrics are valued by management, we have to figure out whether they are correlated to the intended qualities

[11:03] Leo Obrst: @Amanda: perhaps more to your point, an approach such as OntoClean that uses ontological analysis more clearly may have higher real value/quality, but is not necessarily immediately understandable to management, though application value can be demonstrated.

[11:00] Steve Ray: We could discuss (at a later time) whether ontology development environments could hard-wire evaluation during the ontology development process.

[11:00] Steve Ray: @Amanda: Need to think about your question in a few minutes.

[11:03] Amanda Vizedom: @Steve, et al., it may also be that *extrinsically* relevant characteristics are so consistently relevant in some domain / context of application that folks trained in that context believe them to be *intrinsic*. I do not mean to pre-judge the question for the characteristics I mentioned. Rather, I would be interested in the presenters' thoughts on those, and whether they can offer particular reasons for considering those intrinsic measurements to be measures of *quality*.

[11:01] Matthew West: An even better approach is to have a development method that avoids the quality problems.

[11:01] Peter P. Yim: == Q&A and Open Discussion ...

[11:01] Peter P. Yim: question for all panelists - how do some of the more rigorously developed ontologies (like BFO, DOLCE, PSL, SUMO, CYC, etc.) fare when put through your evaluation system/tool; has anyone tried? observations & insights gained?

[11:15] MariCarmenSuarezFigueroa: We have already evaluated DOLCE with OOPS! (among other established ontologies)

[11:17] Terry Longstreth: @MariCarmenSuarezFigueroa - is there a URL for the results of the DOLCE evaluation?

[11:19] MariCarmenSuarezFigueroa: @terry, results are not available yet

[11:15] JesualdoTomasFernandezBreis: We are in the process of evaluating all the BioPortal ontologies with OQuaRE, but we do not have the results yet

[11:21] Amanda Vizedom: Following up on Peter's question: For all panelists: What are the expressivity constraints or expectations of these tools? Are they limited to DL ontologies? OWL-Full? Has anyone applied their techniques to ontologies represented in FOL or higher languages?

[11:22] MariCarmenSuarezFigueroa: (re. Peter's follow-up question on whether they had tried how the tools scale with larger ontologies like SUMO or CYC) We did not try with Cyc yet, for example.

[11:03] anonymous1 morphed into Hashem Shmaisani

[11:03] Duane Nickull: (ref. the reverb/echo when Doug Foxvog tried to patch in) Nice audio effects

[11:03] MariaPovedaVillalon: :)

[11:03] Duane Nickull: Very Dr. Who - ish

[11:04] Duane Nickull: Exterminate, exterminate, exterminate.....

[11:04] MariCarmenSuarezFigueroa: :O

[11:04] Amanda Vizedom: Audio sounds like we have fallen down the rabbit hole!

[11:04] Bobbin Teegarden: @DougFoxvog: 'context dependent'... or, in the eye of the beholder?

[11:05] Matthew West: Some very interesting presentations, but I'm afraid I have to go now.

[11:05] anonymous morphed into AsuncionGomezPerez

[11:08] Jim Disbrow: Steve's first point of a measurement being "well-designed" is: "Proper use of various relations found within an ontology". This has been an issue that has been sorely underrepresented, but may now be breaking through - as demonstrated by the presentations. Insertion of reflexivity in relationships, however, was not mentioned as a criterion. Is there any progress in implementing these ontological concepts? (ref. below [11:28])

[11:11] Leo Obrst: @Doug: I think Samir's analysis of both ontology and KB are very useful for

ontologists, even though KBs will potentially be different across applications, companies, etc.

[11:12] Doug Foxvog: @Leo: I agree. But I'm suggesting that these distinctions should be identified.

[11:15] Terry Longstreth: @Leo - I'm not convinced that there's an objective procedure for separating

the two. Linnean classification requires a 'type instance' to fully describe a species type. Would

that be in the Ontology, or in the KB?

[11:15] Samir Tartir: Thanks Leo. Doug: I agree that it might be useful.

[11:18] Samir Tartir: (re. Amanda's positive verbal remark about the relationship diversity metric)

Thanks @Amanda

[11:19] Leo Obrst: @Terry: to your point, that is an issue. E.g., usually classes are considered universals, and instances particulars, but some ontologies (and metaphysics) don't make those distinctions, e.g., identifying all "ontology" notions as particulars (e.g., tropes, etc.)

[11:20] Doug Foxvog: @Terry: OWL DL does not allow meta-classes, so that the instances of species,

genus, bio-kingdom, etc. are classes, themselves. A system that merely defines these, their

hierarchy, and relations that may apply to them would be, imho, an ontology. However, if data is

provided about these taxons (geological range, endangerment, diet, etc.), then it would be a KB,

even though what is being described are themselves classes.

[11:22] Terry Longstreth: Then I think evaluation has to include the KB if it's required for full

interpretation of the ontology

[11:22] Leo Obrst: @Terry: I agree. Both need to be evaluated.

[11:28] Jim Disbrow: @Steve: In your first slide, your first point of a measurement for being

well-designed is: "Proper use of various relations found within an ontology". This has been an

issue that has been sorely underrepresented, but may now be breaking through - as demonstrated by

the presentations. Insertion of reflexivity in relationships, however, was not mentioned as a

criterion. Similarly, there was no mention of an active "not" verb (not just the English negation

term), concatenated into the middle term of the OWL "triple". A question for the presenters: Is

there any progress in implementing these ontological concepts?

[11:31] Steve Ray: Anybody want to comment on Jim's question about addressing reflexivity?

[11:34] Jim Disbrow: I would offer that an ontology without proper use of relationships cannot claim "quality".

[11:22] Leo Obrst: @Amanda: (re. Amanda's verbal remark questioning some of the metrics and how they

relate to "quality") Is your issue about the definition of "quality"? I think the notion of quality

will vary between an ontologist and an application user/manager.

[11:25] Amanda Vizedom: @Leo, yes, I am asking whether -- and if so, why -- these characteristics are

intrinsic aspects of *quality*. It could be that they are intrinsic metrics, but the relevance to

quality depends on extrinsic factors.

[11:25] Amanda Vizedom: @Samir, I think that nails it. Thank you. (re. Samir's verbal response.)

[11:25] Samir Tartir: @Amanda: Thank you.

[11:25] MariaPovedaVillalon: In addition there is a temporal aspect to that: one class can have few instances today, but may be populated later

[11:28] Amanda Vizedom: @Leo and @Samir: My reason for wanting the relationship addressed may also be

a reason that some reviewers objected: there is a lot of history of these being treated as quality

metrics, without any obvious reason. @Samir, if that's true, then making clear the relationship you

see, as you articulated it, might well satisfy those critics.

[11:29] Doug Foxvog: @Maria & @Mari: if one has several local ontologies, one that includes the

other, can the combined ontologies be analyzed together?

[11:29] MariaPovedaVillalon: @doug you can either make them available online so that the owl:imports

can be resolved or gather them in one file and paste it into OOPS! textbox

[11:31] Leo Obrst: Behind some of this discussion is the presupposition that evaluation is only about

quality.

[11:31] Peter P. Yim: @Leo - is that presupposition proper (in the context of this summit) or not?

[11:32] Amanda Vizedom: Suggestion: So, we see the likelihood that there are (many, I'd say) *intrinsic* characteristics of an ontology such that the relevance of each characteristic to quality / suitability is *extrinsic* (at least partially).

[11:33] Steve Ray: @Amanda: I agree with you.

[11:34] Leo Obrst: @Peter: I think comparison of ontologies is an important issue for ontology

evaluation, and one person's notion of "quality" may vary from another person's, so comparing

different metrics and allowing weighting of various metrics may be useful.
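[editorial note] Leo's suggestion above, that comparison might allow each user to weight the various metrics according to their own notion of "quality", can be sketched as a simple weighted average. The metric names and scores below are purely illustrative, not from any of the tools discussed:

```python
# Hypothetical per-ontology metric scores (each normalized to 0..1).
metrics = {"depth": 0.8, "relationship_diversity": 0.6, "documentation": 0.3}

# User-chosen weights expressing which characteristics this user cares about.
weights = {"depth": 0.2, "relationship_diversity": 0.5, "documentation": 0.3}

def weighted_score(metrics, weights):
    """Weighted average of metric scores; weights are normalized to sum to 1."""
    total = sum(weights.values())
    return sum(metrics[m] * w for m, w in weights.items()) / total

print(round(weighted_score(metrics, weights), 3))  # 0.55
```

Two users running the same intrinsic measurements but different weight vectors would rank the same set of ontologies differently, which is the point of Leo's remark.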

[11:34] Samir Tartir: @Amanda: You mean intrinsic-extrinsic links? That's a good idea.

[11:37] Amanda Vizedom: @Leo, that may be so. Or it may simply be that we want/need to clarify the

relationship. Some evaluations (or evaluation tools) may be designed to rank ontologies by quality

without further information. Those, IMHO, are misguided. What is more promising, IMHO, is a

framework/toolkit with the capability of evaluating many characteristics, perhaps neutrally to their

relevance to quality in specific cases. It could be up to the user to select which characteristics

they care about. Or, in my fantasy system (such as that which Joanne Luciano and others have

proposed), a tool in which the use case could be described and ontologies evaluated according to

relevant metrics.

[11:37] Terry Longstreth: @Leo - any metacharacteristic may be the basis for a quality judgement, depending on what's important to the user community. Size, for example, may be impactful in determining which system resources will need procurement actions.

[11:38] Torsten Hahmann: Some addition to the example that Fabian used in his remark (depth may be

useful only in specific contexts): the context also determines how to measure depth. There are

dozens of ways one could measure depth, for example: average, shallowest, deepest, relative to

breadth, standard deviation, etc. Which of those metrics properly measures the intended quality (a

quality "specificity", for example)? Something to explore in the future.
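[editorial note] Torsten's point, that "depth" alone names a family of metrics, can be illustrated with a small sketch. The class hierarchy and names below are purely illustrative; the functions compute several of the depth variants he lists (deepest, shallowest, average, standard deviation over leaf depths):

```python
from statistics import mean, stdev

# Toy subclass hierarchy: each class maps to its list of direct subclasses.
hierarchy = {
    "Thing": ["Organism", "Artifact"],
    "Organism": ["Animal", "Plant"],
    "Animal": ["Mammal"],
    "Mammal": [],
    "Plant": [],
    "Artifact": [],
}

def leaf_depths(h, root, depth=0):
    """Return the depth of every leaf class reachable from root."""
    children = h.get(root, [])
    if not children:
        return [depth]
    return [d for c in children for d in leaf_depths(h, c, depth + 1)]

depths = leaf_depths(hierarchy, "Thing")
print("deepest:", max(depths))        # 3  (Thing > Organism > Animal > Mammal)
print("shallowest:", min(depths))     # 1  (Thing > Artifact)
print("average:", mean(depths))       # 2
print("std dev:", round(stdev(depths), 2))
```

The same toy ontology scores 3, 1, or 2 depending on which "depth" is measured, so, as Torsten and Doug note, the task context has to pick the variant that actually tracks the intended quality (e.g., "specificity").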

[11:40] Doug Foxvog: @Torsten: that would depend upon the task. It might be interesting to define

desired features of ontologies & KBs using some of the metrics that have been described.

[11:37] Peter P. Yim: great session ... thanks everyone!

[11:37] Samir Tartir: Thank you all. Very interesting, looking forward to more discussions.

[11:37] Jim Disbrow: thanks and bye

[11:37] MariCarmenSuarezFigueroa: Thank you very much for your comments and suggestions.

[11:37] MariCarmenSuarezFigueroa: Bye!!

[11:37] AsuncionGomezPerez: bye

[11:37] Leo Obrst: Thanks all!

[11:37] Mohammad Aqtash: Thanks All Bye!

[11:37] MariaPovedaVillalon: Thank you for your comments :-) bye

[11:37] JesualdoTomasFernandezBreis: Thanks and bye!!

[11:38] Doug Foxvog: This was a very good session! Bye!

[11:38] Peter P. Yim: join us again, same time next week, for Ontology Summit 2013 session-04: "Building

Ontologies to Meet Evaluation Criteria - I" - Co-chairs: Matthew West & Mike Bennett -

http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2013_02_07

[11:37] Peter P. Yim: -- session ended: 11:37 am PST --

[11:38] List of attendees: Amanda Vizedom, Anatoly Levenchuk, AstridDuqueRamos, AsuncionGomezPerez,

Bob Schloss, Bobbin Teegarden, Bruce Bray, IsmailcemBudakArpinar, Carmen Chui, Clare Paul, David Makovoz,

David Leal, Dennis Wisnosky, Doug Foxvog, Duane Nickull, Fabian Neuhaus, Fran Lightsom, Gerald Radack,

Hashem Shmaisani, Henson Graves, James Odell, JesualdoTomasFernandezBreis, Jim Disbrow,

JoaoPauloAlmeida, Joel Bender, Joseph Tennis, Ken Baclawski, Leo Obrst, MariCarmenSuarezFigueroa,

MariaPovedaVillalon, Mark Fox, Matthew West, Megan Katsumi, Michael Grüninger, Mike Denny, Mike Dean,

Mohammad Aqtash, NathalieAussenacGilles, Peter P. Yim, Ram D. Sriram, Samir Tartir, Steve Ray, Terry Longstreth,

Todd Schneider, Torsten Hahmann, Trish Whetzel, YuvaTarunVarmaDatla, vnc2

-- end of in-session chat-transcript --

  • Further Question & Remarks - please post them to the [ ontology-summit ] listserv
    • all subscribers to the previous summit discussion, and all who responded to today's call will automatically be subscribed to the [ ontology-summit ] listserv
    • if you are already subscribed, post to <ontology-summit [at] ontolog.cim3.net>
    • (if you are not yet subscribed) you may subscribe yourself to the [ ontology-summit ] listserv, by sending a blank email to <ontology-summit-join [at] ontolog.cim3.net> from your subscribing email address, and then follow the instructions you receive back from the mailing list system.
    • (in case you aren't already a member) you may also want to join the ONTOLOG community and be subscribed to the [ ontolog-forum ] listserv, when general ontology-related topics (not specific to this year's Summit theme) are discussed. Please refer to Ontolog membership details at: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
      • kindly email <peter.yim@cim3.com> if you have any question.

Additional Resources


How To Join (while the session is in progress)

Conference Call Details

  • Date: Thursday, 31-Jan-2013
  • Start Time: 9:30am PST / 12:30pm EST / 6:30pm CET / 17:30 GMT/UTC
  • Expected Call Duration: ~2.0 hours
  • Dial-in:
    • Phone (US): +1 (206) 402-0100 ... (long distance cost may apply)
      • ... [ backup nbr: (415) 671-4335 ]
      • when prompted enter Conference ID: 141184#
    • Skype: joinconference (i.e. make a skype call to the contact with skypeID="joinconference") ... (generally free-of-charge, when connecting from your computer)
      • when prompted enter Conference ID: 141184#
      • Unfamiliar with how to do this on Skype? ...
        • Add the contact "joinconference" to your skype contact list first. To participate in the teleconference, make a skype call to "joinconference", then open the dial pad (see platform-specific instructions below) and enter the Conference ID: 141184# when prompted.
      • Can't find Skype Dial pad? ...
• for Windows Skype users: the Dial pad is under the "Call" dropdown menu as "Show Dial pad"
• for Linux Skype users: please note that the dial pad is only available on v4.1 (or later, or on the earlier Skype versions 2.x). If the dial-pad button is not shown in the call window, press the "d" hotkey to enable it. ... (ref.)
  • Shared-screen support (VNC session), if applicable, will be started 5 minutes before the call at: http://vnc2.cim3.net:5800/
    • view-only password: "ontolog"
    • if you plan to be logging into this shared-screen option (which the speaker may be navigating), and you are not familiar with the process, please try to call in 5 minutes before the start of the session so that we can work out the connection logistics. Help on this will generally not be available once the presentation starts.
• people behind corporate firewalls may have difficulty accessing this. If that is the case, please download the slides above (where applicable) and run them locally. The speaker(s) will prompt you to advance the slides during the talk.
  • In-session chat-room url: http://webconf.soaphub.org/conf/room/summit_20130131
• instructions: once you have access to the page, click on the "settings" button and identify yourself (by modifying the Name field from "anonymous" to your real name, like "JaneDoe").
    • You can indicate that you want to ask a question verbally by clicking on the "hand" button, and wait for the moderator to call on you; or, type and send your question into the chat window at the bottom of the screen.
    • thanks to the soaphub.org folks, one can now use a jabber/xmpp client (e.g. gtalk) to join this chatroom. Just add the room as a buddy - (in our case here) summit_20130131@soaphub.org ... Handy for mobile devices!
  • Discussions and Q & A:
    • Nominally, when a presentation is in progress, the moderator will mute everyone, except for the speaker.
    • To un-mute, press "*7" ... To mute, press "*6" (please mute your phone, especially if you are in a noisy surrounding, or if you are introducing noise, echoes, etc. into the conference line.)
    • we will usually save all questions and discussions till after all presentations are through. You are encouraged to jot down questions onto the chat-area in the mean time (that way, they get documented; and you might even get some answers in the interim, through the chat.)
• During the Q&A / discussion segment (when everyone is muted), if you want to speak or have questions or remarks to make, please raise your hand (virtually) by clicking on the "hand button" (lower right) on the chat session page. You may speak when acknowledged by the session moderator (again, press "*7" on your phone to un-mute). Please test your voice and introduce yourself before proceeding with your remarks. (Remember to click on the "hand button" again (to lower your hand) and press "*6" on your phone to mute yourself after you are done speaking.)
  • RSVP to peter.yim@cim3.com with your affiliation appreciated, ... or simply just by adding yourself to the "Expected Attendee" list below (if you are a member of the community already.)
  • Please note that this session may be recorded, and if so, the audio archive is expected to be made available as open content, along with the proceedings of the call to our community membership and the public at-large under our prevailing open IPR policy.

Attendees