
Ontology Summit 2013: Virtual Panel Session-02 - Thu 2013-01-24     (1)

Summit Theme: "Ontology Evaluation Across the Ontology Lifecycle"     (1A)

Summit Track Title: Track-B: Extrinsic Aspects of Ontology Evaluation     (1B)

Session Topic: Extrinsic Aspects of Ontology Evaluation: Finding the Scope     (1C)

  • Session Co-chairs: Dr. ToddSchneider (Raytheon) and Mr. TerryLongstreth (Independent Consultant) - intro slides     (1D)

Panelists / Briefings:     (1E)

  • Dr. ToddSchneider (Raytheon) & Mr. TerryLongstreth (Independent Consultant) - "Evaluation Dimensions, A Few" slides     (1F)
  • Mr. HansPolzer (Lockheed Martin Fellow (ret.)) - "Dimensionality of Evaluation Context for Ontologies" slides     (1G)
  • Ms. Mary Balboni et al. (Raytheon) - "Black Box Testing Paradigm in the Lifecycle" slides     (1H)
  • Ms. MeganKatsumi (University of Toronto) - "A Methodology for the Development and Verification of Expressive Ontologies" slides     (1I)

Abstract     (1K)

OntologySummit2013 Session-02: "Extrinsic Aspects of Ontology Evaluation: Finding the Scope" - intro slides     (1K1)

This is our 8th Ontology Summit, a joint initiative by NIST, Ontolog, NCOR, NCBO, IAOA & NCO_NITRD with the support of our co-sponsors. The theme adopted for this Ontology Summit is: "Ontology Evaluation Across the Ontology Lifecycle."     (1K2)

Currently, there is no agreed methodology for development of ontologies, and there are no universally agreed metrics for ontology evaluation. At the same time, everybody agrees that there are a lot of badly engineered ontologies out there, thus people use -- at least implicitly -- some criteria for the evaluation of ontologies.     (1K3)

During this Ontology Summit, we seek to identify best practices for ontology development and evaluation. We will consider the entire lifecycle of an ontology -- from requirements gathering and analysis, through to design and implementation. In this endeavor, the Summit will seek collaboration with the software engineering and knowledge acquisition communities. Research in these fields has led to several mature models for the software lifecycle and the design of knowledge-based systems, and we expect that fruitful interaction among all participants will lead to a consensus for a methodology within ontological engineering. Following earlier Ontology Summit practice, the synthesized results of this season's discourse will be published as a Communiqué.     (1K4)

At the Launch Event on 17 Jan 2013, the organizing team provided an overview of the program, and how we will be framing the discourse around the theme of this OntologySummit. Today's session is one of the events planned.     (1K5)

The area of ontology evaluation is still new, and its boundaries and dimensions have yet to be defined. We propose to ask the community (panelists and participants alike) to provide input during this session on the dimensions of ontology evaluation, and on methodologies that can be applied.     (1K6)

More details about this Ontology Summit are available at: OntologySummit2013 (homepage for this summit)     (1K7)

Briefings     (1K8)

  • Dr. ToddSchneider (Raytheon) & Mr. TerryLongstreth (Independent Consultant) - "A Few Evaluation Dimensions" slides     (1K8A)
    • Abstract: ... The area of ontology evaluation is still new and its boundaries and dimensions have yet to be defined. We propose to ask the community to provide input for the dimensions of ontology evaluation.     (1K8A1)
  • Mr. HansPolzer (Lockheed Martin (ret.)) - "Dimensionality of Evaluation Context for Ontologies" slides     (1K8B)
    • Abstract: ... Evaluation of anything, including ontologies, is done for some purpose within some context. Often much of that purpose and context is left implicit because it is assumed to be shared among the participants in the evaluation process. However, as the number and scope of the things being evaluated grows, and as the contexts in which they are evaluated become more diverse, implicit purpose and context dimensions become problematic. The appropriateness of any given set of evaluation attributes and their valuation depends significantly on evaluation purpose and context. This presentation draws on some past experiences with evaluation context issues in related domains to motivate attention to more explicit representation of evaluation context in ontology evaluation. It also suggests some important evaluation context dimensions for consideration by the ontology community as a starting point for further exploration and refinement by the community.     (1K8B1)
  • Ms. Mary Balboni et al. (Raytheon) - "Black Box Testing Paradigm in the Lifecycle" slides     (1K8C)
    • Abstract: ... One may approach ontology evaluation as testing in a Black Box paradigm (i.e., the ontology exists within said Black Box). What are some basic Black Box testing methods and applications in the Lifecycle? Can testing a large database be akin to testing an ontology? What are some interesting data points regarding large database Black Box testing, especially if they can relate to ontology testing? Are Security concerns already covered by Black Box testing? This paper broaches these subjects from an engineering point of view, to provoke thoughts and ideas on Black Box testing of a system that may include an ontology.     (1K8C1)
  • Ms. MeganKatsumi (University of Toronto) - "A Methodology for the Development and Verification of Expressive Ontologies" slides     (1K8D)
    • Abstract: ... The design and evaluation of first-order logic ontologies pose multiple challenges. If we consider the ontology lifecycle, two issues of critical importance are the specification of the intended models for the ontology's concepts (requirements) and the relationship between these models and the models of the ontology's axioms (verification). This talk presents a methodology in which automated reasoning plays a critical role for the development and verification of first-order logic ontologies. Its focus is on the verification of requirements (intended models), and how the results of this evaluation can be used to both revise the requirements and correct errors in the ontology. The methodology will be illustrated using examples from the Boxworld ontology (available in COLORE). While it is focused on the challenges of the development of first-order logic ontologies, this methodology may also be useful for ontology development in other logical languages.     (1K8D1)

Agenda     (1L)

OntologySummit2013 - Panel Session-02     (1L1)

  • Session Format: this is a virtual session conducted over an augmented conference call     (1L2)

Proceedings     (1M)

Please refer to the above     (1M1)

IM Chat Transcript captured during the session    (1M2)

see raw transcript here.     (1M2A)

(for better clarity, the version below is a re-organized and lightly edited chat-transcript.)     (1M2B)

Participants are welcome to make light edits to their own contributions as they see fit.     (1M2C)

-- begin in-session chat-transcript --     (1M2D)

[09:03] Peter P. Yim: Welcome to the     (1M2E)

Ontology Summit 2013: Virtual Panel Session-02 - Thu 2013-01-24     (1M2F)

Summit Theme: Ontology Evaluation Across the Ontology Lifecycle     (1M2G)

  • Summit Track Title: Track-B: Extrinsic Aspects of Ontology Evaluation     (1M2H)

Session Topic: Extrinsic Aspects of Ontology Evaluation: Finding the Scope     (1M2I)

Panelists / Briefings:     (1M2K)

  • Ms. Megan Katsumi (University of Toronto) - "A Methodology for the Development and Verification of Expressive Ontologies"     (1M2O)

Logistics:     (1M2P)

  • (if you haven't already done so) please click on "settings" (top center) and morph from "anonymous" to your RealName (in WikiWord format)     (1M2R)
    • for Linux Skype users: please note that the dial-pad is only available on v4.1 (or later) or on the earlier Skype versions 2.x;     (1M2V1)

if the dial-pad button is not shown in the call window, you need to press the "d" hotkey to enable it.     (1M2W)

[08:57] anonymous morphed into Donghuan     (1M2AE)

[09:13] Donghuan morphed into PennState:Qais     (1M2AF)

[09:14] PennState:Qais morphed into PennState     (1M2AG)

[09:17] anonymous1 morphed into Max Petrenko     (1M2AH)

[09:21] anonymous2 morphed into Mary Balboni     (1M2AI)

[09:23] anonymous1 morphed into Carmen Chui     (1M2AJ)

[09:23] anonymous1 morphed into Fabian Neuhaus     (1M2AK)

[09:24] PennState morphed into Donghuan     (1M2AL)

[09:24] Donghuan morphed into Qais     (1M2AM)

[09:24] Qais morphed into PennState     (1M2AN)

[09:26] PennState morphed into Qais_Donghuan     (1M2AO)

[09:24] anonymous morphed into Angela Locoro     (1M2AP)

[09:25] Angela Locoro morphed into Angela Locoro     (1M2AQ)

[09:26] anonymous morphed into John Bilmanis     (1M2AR)

[09:26] anonymous morphed into Steve Ray     (1M2AS)

[09:29] Matthew West: Just a note, but the Session page shows the conference starting at 1630 UTC     (1M2AU)

when it is actually 1730 UTC.     (1M2AV)

[09:55] Peter P. Yim: @MatthewWest - thank you for the prompt ... sorry, everyone, the session     (1M2AW)

start-time should be: 9:30am PST / 12:30pm EST / 6:30pm CET / 17:30 GMT/UTC     (1M2AX)

[09:30] anonymous morphed into RosarioUcedaSosa     (1M2AY)

[09:31] anonymous morphed into Ram D. Sriram     (1M2AZ)

[09:33] anonymous1 morphed into Torsten Hahmann     (1M2AAA)

[09:55] anonymous morphed into laleh     (1M2AAB)

[09:59] Peter P. Yim: @laleh - would you kindly provide your real name (in WikiWord format, if you     (1M2AAC)

please) and morph into it with "Settings" (button at top center of window)     (1M2AAD)

[10:01] laleh morphed into Laleh Jalali     (1M2AAE)

[10:03] Peter P. Yim: @LalehJalali - thank you, welcome to the session ... are you one of RameshJain's     (1M2AAF)

students at UCI?     (1M2AAG)

[09:34] Peter P. Yim: == [0-Chair] Todd Schneider & Terry Longstreth (co-chairs) opening the session ...     (1M2AAI)

[09:37] anonymous morphed into Frank Olken     (1M2AAJ)

[09:39] Peter P. Yim: == [2-Polzer] Hans Polzer presenting ...     (1M2AAK)

[09:42] anonymous morphed into Trish Whetzel     (1M2AAR)

[09:44] Mike Riben: are we on slide 5?     (1M2AAS)

[09:47] Jack Ring: Pls stop using "Next Slide" and say number of slide     (1M2AAT)

[09:47] anonymous morphed into GaryBergCross     (1M2AAU)

[09:52] Todd Schneider: Jack, Hans is on slide 7.     (1M2AAV)

[09:45] Jack Ring: Is your Evaluation Context different from Ontology Context?     (1M2AAW)

[09:56] Todd Schneider: Qais, if you have a question would you type it in the chat box?     (1M2AAX)

[09:56] Peter P. Yim: @Qais_Donghuan - we will hold questions off till after the presentations are done,     (1M2AAY)

please post your questions on the chat-space (as a placeholder/reminder) for now     (1M2AAZ)

[09:55] Terry Longstreth: On slide 8, Hans mentions reasoners as an aspect of the ontology, but as     (1M2AAAA)

Uschold has pointed out, the reasoner may be used as a test/evaluation tool     (1M2AAAB)

[09:57] Todd Schneider: Terry, the evaluation(s) may need to be redone if the reasoner is changed.     (1M2AAAC)

[10:03] Terry Longstreth: Sure. I was just pointing out that the reasoner may be a tool for extrinsic     (1M2AAAD)

evaluation.     (1M2AAAE)

[10:04] Todd Schneider: Terry, yes a tool used in evaluation and the subject of evaluation itself     (1M2AAAF)

(e.g., performance).     (1M2AAAG)

[10:08] Steve Ray: @Hans: It would help if you could provide some concrete examples that would bring     (1M2AAAH)

your observations into focus.     (1M2AAAI)

[10:10] Michael Grüninger: @Hans: In what sense is ontology compatibility considered to be a rating?     (1M2AAAJ)

[10:09] Peter P. Yim: == [1-Schneider] Todd Schneider presenting, and soliciting input on Ontology     (1M2AAAK)

Evaluation dimensions ...     (1M2AAAL)

[10:01] Jack Ring: (ref. ToddSchneider's solicitation for input on dimensions) Reusefulness of an     (1M2AAAM)

ontology or subset(s) thereof?     (1M2AAAN)

[10:08] Jack Ring: This is a good start toward an ontology of ontology evaluation but we have a     (1M2AAAO)

loooong way to go.     (1M2AAAP)

[10:10] anonymous morphed into Pavithra Kenjige     (1M2AAAQ)

[10:15] Jack Ring: In systems thinking, the three basic dimensions are Quality, Parsimony, Beauty     (1M2AAAR)

[10:15] Todd Schneider: The URL for adding to the list of possible evaluation dimensions is     (1M2AAAS)

[10:15] MariCarmenSuarezFigueroa: In the legal part, maybe we should consider also license (and not     (1M2AAAU)

only copyright)     (1M2AAAV)

[10:15] Terry Longstreth: Thanks Mari Carmen     (1M2AAAW)

[10:16] Fabian Neuhaus: @Todd, we need more than a list. We need definitions of the terms on your     (1M2AAAX)

evaluation dimensions list, because they are not self-explanatory.     (1M2AAAY)

[10:16] Matthew West: Relevance, Clarity, Consistency, Accessibility, timeliness, completeness,     (1M2AAAZ)

accuracy, costs (development, maintenance), Benefits     (1M2AAAAA)

[10:17] Matthew West: Provenance     (1M2AAAAB)

[10:17] Todd Schneider: Fabian, yes we will need definitions, context, and possibly intent. But first     (1M2AAAAC)

I'd like to conduct a simple gathering exercise.     (1M2AAAAD)

[10:18] Matthew West: Modularity     (1M2AAAAE)

[10:17] Fabian Neuhaus: @Todd: it seems that your "evaluation dimensions" are very different from     (1M2AAAAF)

Hans' dimensions.     (1M2AAAAG)

[10:20] Todd Schneider: Fabian, yes. Hans was talking about context. I'm thinking of things more     (1M2AAAAH)

directly related to evaluation criteria. Both Hans and I like metaphors from physics.     (1M2AAAAI)

[10:48] Leo Obrst: @Todd: your second set of slides, re: slide 4: Precision, Recall, Coverage,     (1M2AAAAJ)

Correctness and perhaps others will also be important for Track A Intrinsic Aspects of Ontology     (1M2AAAAK)

Evaluation. Perhaps your metrics will be: Precision With_Respect_To(domain D, requirement R), etc.?     (1M2AAAAL)

Just a thought.     (1M2AAAAM)

[10:21] Peter P. Yim: == [3-Balboni] Mary Balboni presenting ...     (1M2AAAAN)

[10:21] Terry Longstreth: Mary's term: CSCI - Computer Software Configuration Item - smallest unit of     (1M2AAAAO)

testing at some level (varies by customer: sometimes a module, sometimes a capability ...)     (1M2AAAAP)

[10:23] Terry Longstreth: Current speaker - Mary Balboni - slides 3-Balboni     (1M2AAAAQ)

[10:27] Bobbin Teegarden: @Mary, slide 4 testing continuum -- may need to go one more step: 'critical     (1M2AAAAAR)

testing' is in actual usage (step beyond beta) and that feedback loop that creates continual     (1M2AAAAAS)

improvement. Might want to extend the thinking to 'usage as a test' and ongoing criteria in field     (1M2AAAAAT)

[10:29] Terry Longstreth: @Bobbin - good point and note that in many cases, evaluation may not start     (1M2AAAAV)

until (years?) after the ontology has been put into continuous usage     (1M2AAAAW)

[10:29] Till Mossakowski: how does it work that injection of bugs leads to finding more (real) bugs?     (1M2AAAAX)

Just because there is more overall debugging effort?     (1M2AAAAY)

[10:30] Fabian Neuhaus: @Till: I think it allows you to evaluate the coverage of your tests.     (1M2AAAAZ)

[10:33] Jack Ring: It seems that your testing is focused on finding bugs as contrasted to discovering     (1M2AAAAAA)

dynamic and integrity limits. Instead of "supports system conditions" it should be "discovers how     (1M2AAAAAB)

ontology limits system envelope"     (1M2AAAAAC)

[10:35] Jack Ring: Once we understand how to examine a model for progress properties and integrity     (1M2AAAAAD)

properties we no longer need to run a bunch of tests to determine ontology efficacy.     (1M2AAAAAE)

[10:29] Steve Ray: @Mary: Some of your testing examples look more like what we would call intrinsic     (1M2AAAAAF)

evaluation. Specifically I'm thinking of your example of finding injected bugs.     (1M2AAAAAG)

[10:59] Mary Balboni: @SteveRay: Injected bugs - yes it is intrinsic to those that inject the     (1M2AAAAAH)

defects, but would be extrinsic to the testers that are discovering defects ...     (1M2AAAAAI)

[11:01] Steve Ray: @Mary: I would agree with you provided that the testers are testing via blackbox     (1M2AAAAAJ)

methods such as performance given certain inputs, and not by examining the code for logical or     (1M2AAAAAK)

structural bugs. Are we on the same page?     (1M2AAAAAL)

[11:03] Mary Balboni: @SteveRay - absolutely!     (1M2AAAAAM)

[10:49] Bobbin Teegarden: @JackRing Would 'effectiveness' fall under beauty? What criteria?     (1M2AAAAAN)

[10:58] Jack Ring: @Bobbin, Effect-iveness is a Quality factor. Beauty is in the eye of the     (1M2AAAAAO)

beer-holder.     (1M2AAAAAP)

[10:37] Terry Longstreth: Example of business rule: ask bank for email when account drops below $200.     (1M2AAAAAQ)

Evaluate by cashing checks until balance below threshold.     (1M2AAAAAR)

[10:36] Todd Schneider: Leo, have you cloned yourself?     (1M2AAAAAS)

[10:37] Leo Obrst: No, I had to reboot Firefox and it had some fun.     (1M2AAAAAT)

[10:41] Jack Ring: No one has mentioned the dimension of complexness. Because ontologies quickly     (1M2AAAAAU)

become complex topologies then the response time becomes very important if implemented on a von     (1M2AAAAAV)

Neumann architecture. Therefore the structure of the ontology for efficiency of response becomes an     (1M2AAAAAW)

important dimension.     (1M2AAAAAX)

[10:42] Bobbin Teegarden: At DEC, we used an overlay on all engineering for RAMPSS -- Reliability,     (1M2AAAAAY)

Availability, Maintainability, Performance, Scalability, and Security. Maybe these all apply for     (1M2AAAAAZ)

black box here? Mary has cited some of them...     (1M2AAAAAAA)

[10:56] Mary Balboni: @BobbinTeegarden: re ongoing criteria in field usage - yes during what we call     (1M2AAAAAAB)

sustainment after delivery, upgrades are sent out, acceptance tests are repeated, and depending on how     (1M2AAAAAAC)

much is changed, the testing may only be regression of specific areas in the system.     (1M2AAAAAAD)

[10:43] Leo Obrst: @MaryBalboni: re: slide 14: back in the day, we would characterize 3 kinds of     (1M2AAAAAAE)

integrity: 1) domain integrity (think value domains in a column, i.e., char, int, etc.), 2)     (1M2AAAAAAF)

referential integrity (key relationships: primary/foreign), 3) semantic integrity (now called     (1M2AAAAAAG)

business rules). Ontologies do have these issues. On the ontology side, they can be handled     (1M2AAAAAAH)

slightly differently: e.g., referential integrity (really mostly structural integrity) will be     (1M2AAAAAAI)

handled differently based on Open World Assumption (e.g., in OWL) or Closed World Assumption (e.g.,     (1M2AAAAAAJ)

in Prolog), with the latter being enforced in general by integrity constraints.     (1M2AAAAAAK)

[10:52] Mary Balboni: @LeoObrst - thanks for feedback - since I am not an expert in Ontology it is     (1M2AAAAAAL)

very nice to see that these testing paradigms are reusable - and tailorable.     (1M2AAAAAAM)

[10:44] Peter P. Yim: == [4-Katsumi] Megan Katsumi presenting ...     (1M2AAAAAAN)

[10:53] Leo Obrst: @Megan: Nicola Guarino for our upcoming (Mar. 7, 2013) Track A session will talk     (1M2AAAAAAO)

along the lines of your slides 8, etc.     (1M2AAAAAAP)

[10:52] Till Mossakowski: Is it always clear what the intended models are? After all, initially you     (1M2AAAAAAQ)

will have only an informal understanding of the domain, which will be refined during the process of     (1M2AAAAAAR)

formalisation. Only in this process, the class of intended models becomes clearer.     (1M2AAAAAAS)

[10:54] Michael Grüninger: @Till: At any point in development, we are working with a specific set of     (1M2AAAAAAT)

intended models, which is why we call this verification. Validation is addressing the question of     (1M2AAAAAAU)

whether or not we have the right set of intended models.     (1M2AAAAAAV)

[10:56] Michael Grüninger: We formalize the ontology's requirements as the set of intended models (or     (1M2AAAAAAW)

indirectly as a set of competency questions). It might not always be clear what the intended models     (1M2AAAAAAX)

are, but this is analogous to the case in software development when we are not clear as to what the     (1M2AAAAAAY)

requirements are.     (1M2AAAAAAZ)

[10:56] Till Mossakowski: @Michael: OK, that is similar to software validation and verification.     (1M2AAAAAAAA)

But then validation should be mentioned, too.     (1M2AAAAAAAB)

[10:56] Todd Schneider: Michael, so there's a presumption that you have extensive explicit knowledge     (1M2AAAAAAAC)

of the intended model(s), correct?     (1M2AAAAAAAD)

[10:58] Michael Grüninger: @Todd: since intended models are the formalization of the requirements,     (1M2AAAAAAAE)

extensive explicit knowledge of intended models is equivalent to "extensive explicit knowledge     (1M2AAAAAAAF)

about the requirements"     (1M2AAAAAAAG)

[10:57] Leo Obrst: @Till, Michael: one issue is the mapping of the "conceptualization" to the     (1M2AAAAAAAH)

intended models, right? I guess Michael's requirements are in effect statements/notions of the     (1M2AAAAAAAI)

conceptualization. Is that right?     (1M2AAAAAAAJ)

[10:59] Michael Grüninger: @LeoObrst: I suppose there could be the case where someone incorrectly     (1M2AAAAAAAK)

specified the intended models or competency questions that formalize a particular requirement (i.e.     (1M2AAAAAAAL)

the conceptualization is wrong)     (1M2AAAAAAAM)

[10:59] Till Mossakowski: It seems that two axiomatisations (requirements and design) are compared     (1M2AAAAAAAN)

with each other. The requirements describe the intended models. Is this correct?     (1M2AAAAAAAO)

[11:00] Michael Grüninger: @Till: We would say that the intended models describe the requirements.     (1M2AAAAAAAP)

[11:01] Michael Grüninger: @Till: The notion of comparing axiomatizations arises primarily when we     (1M2AAAAAAAQ)

use the models of some other ontology as a way of formalizing the intended models of the ontology we     (1M2AAAAAAAR)

are evaluating     (1M2AAAAAAAS)

[11:02] Till Mossakowski: @Michael: but you cannot give the set of intended models to a prover, only     (1M2AAAAAAAT)

an axiomatisation of it. Hence it seems that you are testing two different axiomatisations against     (1M2AAAAAAAU)

[11:00] Todd Schneider: All, due to a changing schedule I need to leave this session early. Cheers.     (1M2AAAAAAAW)

[11:02] MariCarmenSuarezFigueroa: We could also consider the verification of requirements     (1M2AAAAAAAX)

(competency questions) using e.g. SPARQL queries.     (1M2AAAAAAAY)

[11:04] Peter P. Yim: @MeganKatsumi - ref. your slide#4 ... would you see some "fine tuning" after the     (1M2AAAAAAAZ)

ontology has been committed to "Application" - adjustment to the "Requirements" and "Design"     (1M2AAAAAAAAA)

[11:06] Terry Longstreth: Fabian suggests that Megan's characterization of semantic correctness is     (1M2AAAAAAAAC)

[11:09] Michael Grüninger: @Till: Yes, when we use theorem proving, we need to use the axiomatization     (1M2AAAAAAAAE)

of another theory. However, there are also cases in which we verify an ontology directly in the     (1M2AAAAAAAAF)

metatheory. In terms of COLORE, we need to use this latter approach for the core ontologies.     (1M2AAAAAAAAG)

[11:10] Torsten Hahmann: @Till: but you can give individual models to a theorem prover. It is a     (1M2AAAAAAAAH)

question of how to come up with a good set of models to evaluate the axiomatization.     (1M2AAAAAAAAI)

[11:11] Till Mossakowski: OK, but this probably means that you have a set of intended models that is     (1M2AAAAAAAAJ)

more exemplary than exhaustive.     (1M2AAAAAAAAK)

[11:11] Fabian Neuhaus: @Till, Michael. It seems to me that Till has a good point. Especially if the     (1M2AAAAAAAAL)

ontology and the set of axioms that express the requirements both have exactly the same models, it     (1M2AAAAAAAAM)

seems that you just have two equivalent axiom sets (ontologies)     (1M2AAAAAAAAN)

[11:12] Torsten Hahmann: Yes, of course, the same as with software verification.     (1M2AAAAAAAAO)

[11:12] Till Mossakowski: indeed, but sometimes it might just be an implication     (1M2AAAAAAAAP)

[11:15] Till Mossakowski: further dimensions: consistency; correctness w.r.t. intended models (as in     (1M2AAAAAAAAQ)

Megan's talk), completeness in the sense of having intended logical consequences     (1M2AAAAAAAAR)

[11:16] Megan Katsumi: @Leo: I'm not sure that I understand your question, can you give an example?     (1M2AAAAAAAAS)

[11:03] Leo Obrst: @Megan: what if you have 2 or more requirements, e.g., going from a 2-D to a 3-D     (1M2AAAAAAAAT)

[11:17] Peter P. Yim: == Q&A and Open Discussion ... soliciting of additional thoughts on Evaluation     (1M2AAAAAAAAV)

[11:17] Bobbin Teegarden: It seems we have covered correctness, precision, meeting requirements, etc     (1M2AAAAAAAAX)

well, but have we really addressed 'goodness' of an ontology? And certainly haven't addressed an     (1M2AAAAAAAAY)

'elegant' ontology, or do we care? Is this akin to Jack's 'beauty' assessment?     (1M2AAAAAAAAZ)

[11:17] Bob Schloss: Because of the analogy we heard with Database Security Blackbox Assessment, I     (1M2AAAAAAAAAA)

wonder if there is an analogy to "normalization" (nth normal form) for database schemas. Are some     (1M2AAAAAAAAAB)

evaluation criteria related to factoring, simplicity, minimalism, straightforwardness.....     (1M2AAAAAAAAAC)

[11:19] Torsten Hahmann: another requirement that I think hasn't been mentioned yet: granularity     (1M2AAAAAAAAAD)

(level of detail)     (1M2AAAAAAAAAE)

[11:21] Leo Obrst: @Torsten: yes, that was my question, i.e., granularity.     (1M2AAAAAAAAAF)

[11:22] Torsten Hahmann: @Leo: I thought so.     (1M2AAAAAAAAAG)

[11:22] MariCarmenSuarezFigueroa: I also think granularity is a very important dimension....     (1M2AAAAAAAAAH)

[11:19] Bob Schloss: I am also thinking about issues of granularity and regularity ... If a program     (1M2AAAAAAAAAI)

wants to remove one instance "entity" from a knowledge base, does this ontology make it very simple     (1M2AAAAAAAAAJ)

to just do the remove/delete, or is it so interconnected that removal requires a much more     (1M2AAAAAAAAAK)

complicated syntax....     (1M2AAAAAAAAAL)

[11:24] Bob Schloss: Although this is driven by the domain, some indication of an ontology's rate of     (1M2AAAAAAAAAM)

evolution or degree of stability or expected rate of change may be important to those using     (1M2AAAAAAAAAN)

organizations. If there are 2 ontologies, and one, by being very simple and universal, doesn't have     (1M2AAAAAAAAAO)

as many specifics but will be stable for decades; whereas another, because it is very detailed, uses     (1M2AAAAAAAAAP)

concepts related to current technologies and current business practices, and therefore may     (1M2AAAAAAAAAQ)

need to be updated every year or two... I'd like to know this.     (1M2AAAAAAAAAR)

[11:29] Matthew West: Yes, stability is an important criterion. For me that is about how much the     (1M2AAAAAAAAAS)

existing ontology needs to change when you need to make an addition.     (1M2AAAAAAAAAT)

[11:24] MariCarmenSuarezFigueroa: Sorry I have to go (due to another commitment). Thank you very     (1M2AAAAAAAAAU)

much for the interesting presentations. Best Regards     (1M2AAAAAAAAAV)

[11:28] Bob Schloss: Another analogy to the world of blackbox testing... the software engineers have     (1M2AAAAAAAAAW)

ideas of Orthogonal Defect Classification and more generally, ways of estimating how many remaining     (1M2AAAAAAAAAX)

bugs there are in some software based on the rates and kinds of discovery of new bugs that have     (1M2AAAAAAAAAY)

happened over time up until the present moment. I wonder if there is something for an ontology...     (1M2AAAAAAAAAZ)

one that has a constant level of utilization, but which is having a decrease in reporting of     (1M2AAAAAAAAAAA)

errors.... can we guess how many other errors remain in the ontology? Again... this is an     (1M2AAAAAAAAAAB)

analogy.... some way of estimating "quality"...     (1M2AAAAAAAAAAC)

[11:27] Michael Grüninger: @Fabian: It would be great if we could also focus on criteria and     (1M2AAAAAAAAAAD)

techniques that people are already using in practice with real ontologies and applications.     (1M2AAAAAAAAAAE)

[11:27] Steve Ray: @Michael: +1     (1M2AAAAAAAAAAF)

[11:29] Leo Obrst: Perhaps the main difference between Intrinsic -> Extrinsic is that at least some     (1M2AAAAAAAAAAH)

of the Intrinsic predicates are also Extrinsic predicates with additional arguments, e.g., Domain,     (1M2AAAAAAAAAAI)

Requirement, etc.?     (1M2AAAAAAAAAAJ)

[11:30] Leo Obrst: Must go, thanks, all!     (1M2AAAAAAAAAAK)

[11:31] Peter P. Yim: wonderful session ... really good talks ... thanks everyone!     (1M2AAAAAAAAAAL)

[11:31] Peter P. Yim: -- session ended: 11:30 am PST --     (1M2AAAAAAAAAAM)

-- end of in-session chat-transcript --     (1M2AAAAAAAAAAT)

Additional Resources     (1N)


For the record ...     (1N6)

How To Join (while the session is in progress)     (1O)

  • Dial-in:     (1O4D)
    • Phone (US): +1 (206) 402-0100 ... (long distance cost may apply)     (1O4D1)
    • Skype: joinconference (i.e. make a skype call to the contact with skypeID="joinconference") ... (generally free-of-charge, when connecting from your computer)     (1O4D2)
      • when prompted enter Conference ID: 141184#     (1O4D2A)
      • Unfamiliar with how to do this on Skype? ...     (1O4D2B)
        • Add the contact "joinconference" to your skype contact list first. To participate in the teleconference, make a skype call to "joinconference", then open the dial pad (see platform-specific instructions below) and enter the Conference ID: 141184# when prompted.     (1O4D2B1)
      • Can't find Skype Dial pad? ...     (1O4D2C)
        • for Windows Skype users: it's under the "Call" dropdown menu as "Show Dial pad"     (1O4D2C1)
        • for Linux Skype users: please note that the dial-pad is only available on v4.1 (or later) or on the earlier Skype versions 2.x; if the dial-pad button is not shown in the call window, you need to press the "d" hotkey to enable it. ... (ref.)     (1O4D2C2)
  • Shared-screen support (VNC session), if applicable, will be started 5 minutes before the call at: http://vnc2.cim3.net:5800/     (1O4E)
    • view-only password: "ontolog"     (1O4E1)
    • if you plan to be logging into this shared-screen option (which the speaker may be navigating), and you are not familiar with the process, please try to call in 5 minutes before the start of the session so that we can work out the connection logistics. Help on this will generally not be available once the presentation starts.     (1O4E2)
    • people behind corporate firewalls may have difficulty accessing this. If that is the case, please download the slides above (where applicable) and run them locally. The speaker(s) will prompt you to advance the slides during the talk.     (1O4E3)
    • instructions: once you have access to the page, click on the "settings" button, and identify yourself (by modifying the Name field from "anonymous" to your real name, like "JaneDoe").     (1O4F1)
    • You can indicate that you want to ask a question verbally by clicking on the "hand" button, and wait for the moderator to call on you; or, type and send your question into the chat window at the bottom of the screen.     (1O4F2)
    • thanks to the soaphub.org folks, one can now use a jabber/xmpp client (e.g. gtalk) to join this chatroom. Just add the room as a buddy - (in our case here) summit_20130124@soaphub.org ... Handy for mobile devices!     (1O4F3)
  • Discussions and Q & A:     (1O4G)
    • Nominally, when a presentation is in progress, the moderator will mute everyone, except for the speaker.     (1O4G1)
    • To un-mute, press "*7" ... To mute, press "*6" (please mute your phone, especially if you are in a noisy surrounding, or if you are introducing noise, echoes, etc. into the conference line.)     (1O4G2)
    • we will usually save all questions and discussions till after all presentations are through. You are encouraged to jot down questions onto the chat-area in the mean time (that way, they get documented; and you might even get some answers in the interim, through the chat.)     (1O4G3)
    • During the Q&A / discussion segment (when everyone is muted), If you want to speak or have questions or remarks to make, please raise your hand (virtually) by clicking on the "hand button" (lower right) on the chat session page. You may speak when acknowledged by the session moderator (again, press "*7" on your phone to un-mute). Test your voice and introduce yourself first before proceeding with your remarks, please. (Please remember to click on the "hand button" again (to lower your hand) and press "*6" on your phone to mute yourself after you are done speaking.)     (1O4G4)
  • An RSVP to peter.yim@cim3.com with your affiliation is appreciated, ... or simply add yourself to the "Expected Attendee" list below (if you are a member of the community already.)     (1O4I)
  • Please note that this session may be recorded, and if so, the audio archive is expected to be made available as open content, along with the proceedings of the call to our community membership and the public at-large under our prevailing open IPR policy.     (1O4K)

Attendees     (1P)


This page has been migrated from the OntologWiki - Click here for original page     (1P5)