
Ontology Summit 2013: Panel Session-05 - Thu 2013-02-14

Summit Theme: "Ontology Evaluation Across the Ontology Lifecycle"

Summit Track Title: Track-D: Software Environments for Evaluating Ontologies

Session Topic: Software Environments for Evaluating Ontologies - I

Session Co-chairs: Mr. PeterYim (Ontolog; CIM3) and Dr. MichaelDenny (MITRE) - intro slides

Panelists / Briefings:

  • Dr. MichaelDenny (MITRE) - "Ontology Quality and Fitness: A Survey of Software Support" slides
  • Professor MichaelGruninger (U of Toronto) - "Ontology Evaluation Workflow in COLORE" slides
  • Ms. JeanneHolm (Data.gov; NASA/JPL) - "Evaluating US Open Data for Discovery, Interoperability, and Innovation" slides
  • Mr. GavinMatthews (Vertical Search Works) - "Assuring broad quality in large-scale ontologies" slides

Archives

Abstract

OntologySummit2013 Session-05: "Software Environments for Evaluating Ontologies-I" - intro slides

This is our 8th Ontology Summit, a joint initiative by NIST, Ontolog, NCOR, NCBO, IAOA & NCO_NITRD with the support of our co-sponsors. The theme adopted for this Ontology Summit is: "Ontology Evaluation Across the Ontology Lifecycle."

Currently, there is no agreed methodology for development of ontologies, and there are no universally agreed metrics for ontology evaluation. At the same time, everybody agrees that there are a lot of badly engineered ontologies out there, thus people use -- at least implicitly -- some criteria for the evaluation of ontologies.

During this Ontology Summit, we seek to identify best practices for ontology development and evaluation. We will consider the entire lifecycle of an ontology -- from requirements gathering and analysis, through to design and implementation. In this endeavor, the Summit will seek collaboration with the software engineering and knowledge acquisition communities. Research in these fields has led to several mature models for the software lifecycle and the design of knowledge-based systems, and we expect that fruitful interaction among all participants will lead to a consensus for a methodology within ontological engineering. Following earlier Ontology Summit practice, the synthesized results of this season's discourse will be published as a Communiqué.

At the Launch Event on 17 Jan 2013, the organizing team provided an overview of the program, and how we will be framing the discourse around the theme of this OntologySummit. Today's session is one of the events planned.

Further to our Track-D Mission, we will, in this 5th virtual panel session of the Summit, start by laying out a plan to survey the landscape of software environments, systems and tools that feature "ontology evaluation." This will be followed by briefings from our panelists, who will share their experience and insights on how a few (exemplary) software environments/systems/tools support ontology evaluation and help assure quality in the development of ontologies or ontology-driven applications. Among them, we will include software environments that have "real needs" to evaluate whether and which ontologies would best meet their needs, and how practitioners are assuring the quality of those ontologies in "real applications". A Q&A and open discussion among all panelists and participants will follow the briefings.

More details about this Ontology Summit are available at: OntologySummit2013 (homepage for this summit)

Briefings

  • Dr. MichaelDenny (MITRE) - "Ontology Quality and Fitness: A Survey of Software Support" slides
    • Abstract: ... A survey will be conducted across the Ontology Summit community and perhaps more broadly to identify software that evaluates and/or promotes ontology quality and fitness. Software providers will be asked to describe those capabilities of their software that measure or assure ontology quality and fitness. The survey will be structured around a checklist of those ontology factors recognized in the other Ontology Summit tracks that could potentially be addressed in a software environment. The overall survey method and an initial list of ontology evaluation factors will be discussed in this session.
  • Professor MichaelGruninger (U of Toronto) - "Ontology Evaluation Workflow in COLORE" slides
    • Abstract: ... COLORE is an open repository of first-order ontologies that serves as a testbed for ontology evaluation and integration techniques, and which supports the design, evaluation, and application of ontologies in first-order logic. This talk will explore the different techniques that are used when an ontology is uploaded into COLORE. In particular, we consider different workflows (sequences of evaluation steps) that are used to verify and modularize an ontology. Although not all of these ontology evaluation workflows are completely automated, we will discuss current directions for implementation.

  • Ms. JeanneHolm (Data.gov; NASA/JPL) - "Evaluating US Open Data for Discovery, Interoperability, and Innovation" slides
    • Abstract: ... With the release of the US Digital Strategy and discussions around new open data policies, the US Data.gov project is working at creating the next generation of open data capabilities. As "Data.gov 2.0" evolves, there are changes ranging from interfaces (like Alpha.Data.gov) to open source (the Open Government Platform). In the meantime, the Data.gov project continues to open up hundreds of thousands of government datasets from hundreds of government agencies. Finding, connecting, and navigating through this growing, connected collection of data and linked resources becomes a challenge. This talk will discuss progress to date and invite ideas on the work underway around the world on government ontologies, taxonomies, and vocabularies.
  • Mr. GavinMatthews (Vertical Search Works) - "Assuring broad quality in large-scale ontologies" slides
    • Abstract: ... In developing and extending ontologies that support a range of semantic processing applications, it is easy to introduce unintended consequences, unnecessarily limited solutions, or hidden errors. From my experience at Cycorp and Vertical Search Works, I will try to provide some answers to the following questions: How can you measure the efficacy of the ontology at solving real-world tasks? How can you use introspection to reason about the validity of assertions, and identify missing information? How can you facilitate effective human review of the ontology?

Agenda

OntologySummit2013 - Panel Session-05

  • Session Format: this is a virtual session conducted over an augmented conference call

Proceedings

Please refer to the above

IM Chat Transcript captured during the session

see raw transcript here.

(for better clarity, the version below is a re-organized and lightly edited chat-transcript.)

Participants are welcome to make light edits to their own contributions as they see fit.

-- begin in-session chat-transcript --

[08:59] Peter P. Yim: Welcome to the Ontology Summit 2013: Virtual Panel Session-05 - Thu 2013-02-14

Summit Theme: Ontology Evaluation Across the Ontology Lifecycle

  • Summit Track Title: Track-D: Software Environments for Evaluating Ontologies

Session Topic: Software Environments for Evaluating Ontologies - I

Panelists / Briefings:

  • Dr. Michael Denny (MITRE) - "Ontology Quality and Fitness: A Survey of Software Support"
  • Ms. Jeanne Holm (Data.gov; NASA/JPL) - "Evaluating US Open Data for Discovery, Interoperability, and Innovation"

  • Mr. Gavin Matthews (Vertical Search Works) - "Assuring broad quality in large-scale ontologies"

Logistics:

  • Refer to details on session page at: http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2013_02_14

  • (if you haven't already done so) please click on "settings" (top center) and morph from "anonymous" to your RealName (in WikiWord format)
  • Mute control: *7 to un-mute ... *6 to mute
  • Can't find Skype Dial pad?
    • for Windows Skype users: it's under the "Call" dropdown menu as "Show Dial pad"
    • for Linux Skype users: please note that the dial-pad is only available on v4.1 (or later) or on the earlier Skype versions 2.x; if the dial-pad button is not shown in the call window, you need to press the "d" hotkey to enable it.

Attendees: Akram, Alan Rector, Amanda Vizedom, Anatoly Levenchuk, Bob Schloss, Bobbin Teegarden, Carmen Chui, Clare Paul, Dalia Varanka, Dan Cerys, Dmitry Borisoglebsky, Doug Foxvog, ElieAbiLahoud, Fabian Neuhaus, Gavin Matthews, Jeanne Holm, Jim Disbrow, Joanne Luciano, JoaoPauloAlmeida, John Bilmanis, Ken Baclawski, Kevin Simkins, Lamar Henderson, Leo Obrst, Marcela Vegetti, MarcosMartinezRomero, Matthew West, Max Petrenko, Michael Denny, Michael Grüninger, Mike Bennett, Mike Dean, Mike Riben, Oliver Kutz, Paul Pope, Peter P. Yim, Ram D. Sriram, Richard Martin, Scott Hills, Simon Spero, Steve Ray, Terry Longstreth, Till Mossakowski, Todd Schneider, Victor Agroskin, Will Burns

Proceedings:

[08:51] marcelaVegetti morphed into Marcela Vegetti

[09:19] MatthewWest1 morphed into Matthew West

[09:19] anonymous1 morphed into Max Petrenko

[09:21] anonymous morphed into Kevin Simkins

[09:23] anonymous morphed into Gavin Matthews

[09:24] anonymous morphed into Michael Denny

[09:27] anonymous morphed into Carmen Chui

[09:27] anonymous morphed into MarcosMartinezRomero

[09:29] anonymous morphed into Jeanne Holm

[09:32] anonymous morphed into Lamar Henderson

[09:33] anonymous morphed into Doug Foxvog

[09:39] anonymous morphed into Paul Pope

[09:40] SimonSpero1 morphed into Simon Spero

[09:42] anonymous morphed into Lamar Henderson

[09:42] Clare Paul morphed into Clare Paul

[09:43] Joanne Luciano: Call for help with software development - is there anyone in the community who knows SADI services? (if not, ok check out http://sadiframework.org) I have a generalized framework approach that can be used for development of ontologies and evaluation, providing context for intrinsic and extrinsic evaluation. If you can help, or know anyone who can, please get in touch with me.

[09:36] Peter P. Yim: == Peter P. Yim opens the session on behalf of the co-chairs ... see: the [0-Chair] slides

[09:44] List of members: Alan Rector, Amanda Vizedom, Anatoly Levenchuk, Bobbin Teegarden, Carmen Chui, Clare Paul, Dalia Varanka, Dmitry Borisoglebsky, Doug Foxvog, ElieAbiLahoud, Fabian Neuhaus, Gavin Matthews, Jeanne Holm, Jim Disbrow, Joanne Luciano, JoaoPauloAlmeida, John Bilmanis, Ken Baclawski, Kevin Simkins, Lamar Henderson, Leo Obrst, Marcela Vegetti, MarcosMartinezRomero, Matthew West, Max Petrenko, Michael Denny, Michael Grüninger, Mike Dean, Oliver Kutz, Paul Pope, Peter P. Yim, Ram D. Sriram, Richard Martin, Simon Spero, Steve Ray, Terry Longstreth, Till Mossakowski, Todd Schneider, Victor Agroskin, anonymous, vnc2

[09:43] Peter P. Yim: == Michael Denny presenting ... see: the [1-Denny] slides

[09:44] anonymous morphed into Dan Cerys

[09:44] anonymous morphed into Akram

[09:50] Amanda Vizedom: @MichaelDenny - cutting out all human-required evaluation factors might miss much opportunity. Why not include all factors, noting that there may be much that the evaluation environment can do to *assist* with non-automated evaluation and with the tracking and management of evaluation of all factors?

[09:54] Joanne Luciano: +1 to Amanda's comment -- use computers to assist humans

[09:51] anonymous morphed into Will Burns

[09:57] Simon Spero: Not perfect, but not bad: http://www.howto.gov/customer-service/collecting-feedback/basics-of-survey-and-question-design

[10:01] Simon Spero: More academic: Tourangeau, R., Rips, L.J., and Rasinski, K. (2000), The Psychology of Survey Response, Cambridge: Cambridge University Press. http://www.amstat.org/sections/srms/ThePsychologyofSurveyResponse.pdf

[10:02] Simon Spero: http://www.amazon.com/Psychology-Survey-Response-Roger-Tourangeau/dp/0521576296

[11:25] Michael Denny: @SimonSpero: Thanks for the pointers to survey methodology. I am hoping you will be able to help field this survey. I would like to come up with the simplest of implementations that will allow continuing contributions and visibility of results.

[09:57] Doug Foxvog: @MichaelDenny: you suggest characterizing breadth, depth, use considerations, etc. of ontologies. Do you have an ontology which can be used to express these features of ontologies?

[09:58] Terry Longstreth: Perhaps an element of the Maintenance phase, but I prefer adding 'Retirement' to include archiving, sequestering, or erasing any component

[09:58] Bob Schloss: [Apologies - last minute unavoidable conflict came up. I will look over the 3 sets of slides from the rest of today, tomorrow or next week]

[09:58] Peter P. Yim: @BobSchloss: thanks for coming (albeit briefly)

[10:01] Amanda Vizedom: Suggestion for additional evaluation factors (very important for some common applications): Interpreting slide 8 #10 to mean core logical inference: inference type, along with power (e.g., integrated support for probabilistic reasoning/uncertainty); integrated lexical coverage (languages, support for NLP); integrated topic-relation coverage (facets, subtopic, support for document classification).

[10:01] Joanne Luciano: What are the research questions that the survey seeks to obtain (I didn't see that - maybe I missed it).

[10:02] Doug Foxvog: Michael Denny brings up "assess[ing] query performance" for an ontology. Isn't this to a large extent a property of the inference engine used, not necessarily of the ontology?

[10:05] Terry Longstreth: @doug: For the extrinsics - Track B - Todd and I have discussed (argued) this. He won. Basically, if we can't see the ontology except as it presents behaviors, the interpreting mechanism (whether a person, inference engine or generated software) is part of the ontology.

[10:04] Amanda Vizedom: @doug I'd say both (that is, it is a characteristic of a specific ontology in a specific reasoning environment).

[10:09] Doug Foxvog: re: Amanda's comment regarding bullet point "Assess the inferencing power of an ontology". The inferencing power of an *ontology* would be a feature of the kinds of expressions used: inheritance, argument constraints, properties of relations, existence and types of rules, expression of uncertainty, etc. Support for actually inferencing using such statements would depend upon the inference engine.

[10:03] Joanne Luciano: suggest adding an "open question" -- to ask what survey respondents think is important but was not asked on survey.

[10:04] Marcela Vegetti: +1 to Joanne's comment

[10:03] Peter P. Yim: Michael Denny: ideas and feedback solicited on the survey design between now and Feb-22

[10:20] Michael Denny: re: all commenters, thank you -- the current set of software capabilities under each ontology development phase is concluded by a solicitation to the software provider to add their software capabilities that are not already covered. The objective of this survey is simply to compile an inventory of software resources that help users evaluate or promote ontology quality and fitness. There is no present intent to develop these findings into an ontology of evaluation/fitness factors or capabilities.

[10:23] Amanda Vizedom: @MichaelDenny: Is there a notion of modularity in the survey, as @MichaelGruninger just discussed? (I didn't see it, but could have missed it.) If not, it seems well worth adding, since it can be important to suitability and there is so much recent and ongoing work on it. Yes?

[10:36] Michael Denny: @AmandaVizedom: Modules are covered under the Management phase (as #4), but should probably appear in the Build phase as well.

[10:05] Peter P. Yim: == Michael Grüninger presenting ... see: the [2-Gruninger] slides

[10:05] anonymous morphed into Mike Bennett

[10:06] Simon Spero: Can you count how many different ontologies you have?

[10:16] Joanne Luciano: @SimonSpero Can you define "different"?

[10:22] Simon Spero: Joanne: no.

[10:08] anonymous morphed into Paul Pope

[10:18] Alan Rector: There is work in Manchester on various similar metrics - contact Bijan Parsia - bparsia [at] cs.man.ac.uk

[10:21] Peter P. Yim: @AlanRector - thank you, Alan, I believe Bijan Parsia is being contacted and invited to present at another track session ... right, Leo Obrst / Steve Ray (Track-A)?

[10:18] Joanne Luciano: @PeterYim (and other organizers) - do we have any slots in the summit for discussion, or any time in the face-to-face set aside for "discussion"?

[10:23] Peter P. Yim: @JoanneLuciano - we will have "discussion" time at the end of each and every virtual session. I am sure there will be plenty of discussion time during the face-to-face Symposium too (although Mike Dean & Ram D. Sriram, our Symposium co-chairs, will be in the best position to provide specifics ... probably closer to the time.)

[10:27] Amanda Vizedom: @MichaelGruninger: Can you say more about "Intended Models" as used in the COLORE sense, and how they might relate to things found "in the wild" - that is, in non-research projects?

[10:38] Michael Grüninger: @AmandaVizedom: Suppose you are looking for an organization ontology with relations such as supervises, authorizes, manages. You could use an org chart from your company as an example of your intended semantics of the relations. We would translate such a chart into a set of relations with their extensions (i.e. a model) and then search the repository.

[10:31] Simon Spero: @MichaelGruninger: is there any way to deal with conservative/definitional extensions without making CLIF's quantification of predicates unhelpful in the cases where they would be most useful? Would making CL sorted be enough?

[10:35] Michael Grüninger: @SimonSpero: all of the work we have done so far has used essentially good old-fashioned FOL, and really hasn't exercised the full quantification of predicates in CL. We translate CLIF to Prover9 and do the theorem proving there. Once there is a full CL theorem prover, we can use it.
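
[Editor's note: as an illustration of the CLIF-to-Prover9 workflow Michael Grüninger describes above - not part of the session materials - the following Python sketch runs a tiny first-order entailment check in Prover9's input syntax. The axioms, predicate and constant names, and the assumption that the LADR "prover9" binary is installed and on the PATH are all hypothetical.]

    import subprocess, tempfile, textwrap

    # Hypothetical axioms in Prover9 syntax, standing in for formulas that a
    # COLORE-style pipeline would obtain by translating CLIF.
    problem = textwrap.dedent("""
        formulas(assumptions).
          all x all y (supervises(x,y) -> manages(x,y)).
          supervises(alice,bob).
        end_of_list.

        formulas(goals).
          manages(alice,bob).
        end_of_list.
    """)

    with tempfile.NamedTemporaryFile("w", suffix=".in", delete=False) as f:
        f.write(problem)
        path = f.name

    # Assumes the prover9 executable is available on the PATH.
    result = subprocess.run(["prover9", "-f", path], capture_output=True, text=True)
    if "THEOREM PROVED" in result.stdout:
        print("Goal is entailed by the assumptions")
    else:
        print("No proof found (or prover9 gave up)")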

[10:31] Alan Rector: On modules - there is an OWL module extractor as a web app on the http://owl.cs.manchester.ac.uk/ site. It is also available by request to download, but new features are being added regularly. [above link updated from original post, thanks to prompt from Simon Spero and ElieAbiLahoud.]

[10:32] Fabian Neuhaus: @Till. Michael mentioned "Mossakowski, T., Codescu, M., Kutz, O., Lange, C., and Gruninger, M. (2013) Proof Support for Common Logic." in his presentation. Is the paper available?

[10:43] Fabian Neuhaus: @Michael: I think we need to scope the notion of "evaluation". Much of your presentation covered an analysis of the relationships between ontologies (e.g., conservative extension, non-conservative extension), etc. This is interesting, but this does not tell you anything about the quality of an ontology or a set of ontologies.

[10:47] Michael Grüninger: @Fabian: First of all, one way in which we evaluate the quality of an ontology is wrt intended models, and we use the relationships between the ontologies in the repository to do this verification. In other words, consistency alone is not enough. Second, in the past few weeks, people have raised the idea of ontology comparison as being part of evaluation; it does not address quality per se, but it does evaluate two ontologies against each other, e.g. what are their similarities and differences?

[10:52] Fabian Neuhaus: @Michael, second point. Yes, this is why I think we need to scope it, in particular for the sake of the communique. If we use the word "evaluation" to mean "analyze ontologies or their use in systems under any conceivable aspect, including their structure, community use, and intellectual property rights", then we will have trouble getting to a reasonable communique.

[10:55] Fabian Neuhaus: @Michael, first point. I understood the point, and agree that one very good way to evaluate ontologies is to use intended models.

[10:55] Will Burns: Intellectual property rights as an issue makes the assumption that the data itself has a context which uniquely identifies it and grants it the ability to be copyrighted. In this manner, I suppose we're still addressing the ontology aspect as a 1:1 storage and retrieval methodology whereby the files themselves are married to context. Unfortunately that doesn't *quite* work out in a situation where the volume of data is no longer inherent with context.

[10:57] Simon Spero: @WillBurns: In the US, Australia, and possibly Canada, "mere data" cannot be copyrighted (beyond selection, coordination, and arrangement).

[10:58] Will Burns: @SimonSpero Correct. Specific and unique arrangements can be copyrighted, but the question becomes what happens when we remove all three up front and still manage to store it all?

[11:02] Michael Denny: @FabianNeuhaus: By and large I was being driven by the full range of evaluation factors mentioned in the three tracks, which includes, for example, "intellectual property rights". Such usage requirements that go into a user's choice to adopt particular ontologies as components may determine the overall quality or fitness of their ontology product. That said, I also hesitated to include evaluation of this factor as a candidate software capability. I chose to include it to be faithful to the tracks' identifications of factors.

[10:27] Peter P. Yim: == Jeanne Holm presenting ... see: the [3-Holm] slides

[10:43] Steve Ray: Looks like the open government github platform shouldn't have a "t". Instead, it is https://github.com/opengovplatform

[this is in reference to JeanneHolm's slide#15, which is now corrected. Thanks, Steve.]

[10:39] Joanne Luciano: Would like to bring everyone's attention to some tools from our lab at RPI (Tetherless World Constellation) that take gov't data and turn it into RDF - there are also data quality tools available. This is the RPI data.gov site: http://data-gov.tw.rpi.edu/wiki Linked Data Quality Reports: Tim Lebo https://github.com/timrdf/DataFAQs/wiki

[10:48] Joanne Luciano: @JeanneHolm Semantic Ecology and Environmental Portal uses gov't data: http://tw.rpi.edu/web/project/SemantEco

[10:49] Jeanne Holm: Thanks Joanne! Great pointer.

[11:13] Scott Hills: @JeanneHolm, on your slide 16, can you offer a link to more info about ADMS-based metadata?

[11:15] Jeanne Holm: @ScottHills You can find out more about ADMS (a European Union initiative) at http://joinup.ec.europa.eu/asset/adms/home

[11:16] Scott Hills: @JeanneHolm, thanks.

[10:51] Peter P. Yim: == Gavin Matthews presenting ... see: the [4-Matthews] slides

[10:58] Doug Foxvog: @Gavin: you show a "RT:" (related term?) link. Does the ontology specify the type of relation? Or is the number of relations very limited?

[10:59] Simon Spero: @doug RT is usually defined as a residual category of all relations that aren't equivalence or hierarchical.

[11:00] Till Mossakowski: slide 6: what does "BT" mean? subconcept? why is "woman musician" a subconcept of "famous musician"?

[11:01] Simon Spero: @Till: That looks like an incorrect BT

[11:02] Simon Spero: @Till: ?N BT ?B means that everything about ?N is also about ?B

[11:03] Doug Foxvog: The ontology on slide 6 shows both "woman musician" and "musical performer" as subtypes of "famous musician". Yet "famous musical performer" is a subtype of "musical performer". The class "famous musician" probably is not mis-named since it is a subtype of "celebrity". There seems to be a problem here.

[11:04] Simon Spero: @Till: Diagnostic sentence frame - "It's about a woman musician but it's not about a famous musician" should be unacceptable

[11:15] Gavin Matthews: @TillMossakowski: BT (Broader Term) means sub-collection/sub-type. I'm not claiming our ontology is perfect. I wanted to show how we detected errors. :)

[11:16] Simon Spero: @Gavin: Is it restricted to ~genls?

[11:18] Simon Spero: @Gavin: In controlled vocabularies, BT can apply to genls, isa, or intentionally necessary parts (where the absence of a part requires explanation)

[11:19] Simon Spero: @Gavin: there are BTG, BTP, and BTI which are spec preds of BT

[11:20] Simon Spero: @Gavin: but undifferentiated broader does not allow you to infer a genls

[11:21] Gavin Matthews: @SimonSpero: We use BT to mean roughly "genls". We distinguish ISA, meaning isa. For practical reasons, we permit a certain underspecification.

[11:24] Amanda Vizedom: During last week's discussion of development methodology, we really talked very little about how evaluation can guide development. The evaluation Gavin discusses is used heavily by ontology developers; we might want to talk about that.

[11:05] Matthew West: Sorry need to drop off now. Considerable food for thought here.

[11:05] Simon Spero: See: http://dc2008.de/wp-content/uploads/2008/10/04_spero_poster.pdf

[11:06] Simon Spero: What's needed is D. Allan Cruse control..

[11:08] Doug Foxvog: @Simon: yes, "related term" might be all that is needed for generic web searching. But it would be useful if a search for owners of a sports team would yield different links than for members of the team, or for supporters of the team.

[11:09] Simon Spero: @doug: Yes - RT can be specialized (he described a few, I think).

[11:24] Gavin Matthews: @DougFoxvog: Yes, "RT" means related term. It is our vaguest relation, and we constantly work to specialise it. Relations like "plays role in", "located in", and "has theme" are considered to be sub-relations of RT, but in practice all of those horizontal links have approximately the same effect on disambiguation.

[11:30] Simon Spero: @Gavin: Underspecification is necessary - there have been attempts to create more specific coverings of associative relationships (RT), but they all ran into the problem that they were incomplete, and not useful for IR purposes

[11:09] Amanda Vizedom: I would call everyone's attention to Gavin's slide 9. This level of testing is something almost no one imagines applying to ontologies, even in contexts where the rest of the software environment is so tested. Yet it's entirely possible, and once established, extremely valuable.

[11:35] Amanda Vizedom: Thanks Gavin!

[11:11] Peter P. Yim: == Q&A and Open Discussion ...

[11:13] Michael Grüninger: @Fabian: A similar comment can be made about ontology metrics, i.e. what do they have to do with ontology quality? I get more insight from knowing that an ontology I have is inconsistent with another ontology than I get from knowing that there are 40 axioms with 5 relations, 12 classes with 3 siblings.

[11:14] Till Mossakowski: @Michael: what does "inconsistent with" mean? The union being inconsistent?

[11:14] Fabian Neuhaus: @Michael. I would (and have) made the same comment about these metrics.

[11:20] Michael Grüninger: @Till: Yes, I meant that two ontologies are mutually inconsistent. For example, one ontology entails that time is discrete and the other ontology entails that time is dense.

[11:26] Joanne Luciano: One simple way evaluation can be used to guide development is with the use of reasoners. Whenever we add a class or an assertion, we run the reasoner and check if we get what we expect. Of course this is by hand, and scaling this is another story.
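
[Editor's note: a minimal sketch (not from the session) of the "add an axiom, run the reasoner, check the result" loop Joanne describes, using the owlready2 Python library and its bundled HermiT reasoner (which requires Java). The ontology IRI and class names are hypothetical.]

    from owlready2 import (get_ontology, sync_reasoner, Thing,
                           default_world, OwlReadyInconsistentOntologyError)

    onto = get_ontology("http://example.org/demo.owl")   # hypothetical IRI

    with onto:
        class Musician(Thing): pass
        class FamousMusician(Musician): pass              # the newly added class

    try:
        with onto:
            sync_reasoner()           # classify the current state with HermiT
    except OwlReadyInconsistentOntologyError:
        print("The last edit made the ontology inconsistent")
    else:
        # also check that no class collapsed to owl:Nothing (unsatisfiable)
        unsat = list(default_world.inconsistent_classes())
        print("Unsatisfiable classes:", unsat or "none")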

[11:27] JoaoPauloAlmeida: yes

[11:32] Steve Ray: Must run. Thanks.

[11:33] Alan Rector: On tracking environments: We have used Redmine successfully and found it easier than Bugzilla because of the good Wiki and other facilities. So many of the issues that come up are more complex than simple "bugs".

[11:35] Simon Spero: @AlanRector: bug report - running the owltoskos web tool on the skos ontology crashed.

[11:38] Alan Rector: @SimonSpero - please email Bijan Parsia - bparsia [at] cs.man.ac.uk - and Sean Bechhofer - sean.bechhofer [at] manchester.ac.uk - they will redirect it to whomever is dealing with it now.

[11:35] Joanne Luciano: Please email me if you'd like to collaborate on a generalized ontology evaluation and development framework. jluciano [at] rpi.edu Thanks everyone

[11:37] Peter P. Yim: join us again, same time next week, for Ontology Summit 2013 session-06: "Ontology Summit 2013: Synthesis-I" - Co-chairs: Michael Grüninger & Matthew West - http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2013_02_21

[11:38] Gavin Matthews: Thanks very much everyone. I've found it very interesting.

[11:40] Peter P. Yim: -- session ended: 11:39 am PST --

-- end of in-session chat-transcript --

  • Further Question & Remarks - please post them to the [ ontology-summit ] listserv
    • all subscribers to the previous summit discussion, and all who responded to today's call will automatically be subscribed to the [ ontology-summit ] listserv
    • if you are already subscribed, post to <ontology-summit [at] ontolog.cim3.net>
    • (if you are not yet subscribed) you may subscribe yourself to the [ ontology-summit ] listserv, by sending a blank email to <ontology-summit-join [at] ontolog.cim3.net> from your subscribing email address, and then follow the instructions you receive back from the mailing list system.
    • (in case you aren't already a member) you may also want to join the ONTOLOG community and be subscribed to the [ ontolog-forum ] listserv, when general ontology-related topics (not specific to this year's Summit theme) are discussed. Please refer to Ontolog membership details at: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
      • kindly email <peter.yim@cim3.com> if you have any question.

Additional Resources


For the record ...

How To Join (while the session is in progress)

Conference Call Details

  • Date: Thursday, 14-Feb-2013
  • Start Time: 9:30am PST / 12:30pm EST / 6:30pm CET / 17:30 GMT/UTC
  • Expected Call Duration: ~2.0 hours
  • Dial-in:
    • Phone (US): +1 (206) 402-0100 ... (long distance cost may apply)
      • ... [ backup nbr: (415) 671-4335 ]
      • when prompted enter Conference ID: 141184#
    • Skype: joinconference (i.e. make a skype call to the contact with skypeID="joinconference") ... (generally free-of-charge, when connecting from your computer)
      • when prompted enter Conference ID: 141184#
      • Unfamiliar with how to do this on Skype? ...
        • Add the contact "joinconference" to your skype contact list first. To participate in the teleconference, make a skype call to "joinconference", then open the dial pad (see platform-specific instructions below) and enter the Conference ID: 141184# when prompted.
      • Can't find Skype Dial pad? ...
        • for Windows Skype users: it's under the "Call" dropdown menu as "Show Dial pad"
        • for Linux Skype users: please note that the dial-pad is only available on v4.1 (or later) or on the earlier Skype versions 2.x; if the dial-pad button is not shown in the call window, you need to press the "d" hotkey to enable it. ... (ref.)
  • Shared-screen support (VNC session), if applicable, will be started 5 minutes before the call at: http://vnc2.cim3.net:5800/
    • view-only password: "ontolog"
    • if you plan to be logging into this shared-screen option (which the speaker may be navigating), and you are not familiar with the process, please try to call in 5 minutes before the start of the session so that we can work out the connection logistics. Help on this will generally not be available once the presentation starts.
    • people behind corporate firewalls may have difficulty accessing this. If that is the case, please download the slides above (where applicable) and run them locally. The speaker(s) will prompt you to advance the slides during the talk.
  • In-session chat-room url: http://webconf.soaphub.org/conf/room/summit_20130214
    • instructions: once you have access to the page, click on the "settings" button, and identify yourself (by modifying the Name field from "anonymous" to your real name, like "JaneDoe").
    • You can indicate that you want to ask a question verbally by clicking on the "hand" button, and wait for the moderator to call on you; or, type and send your question into the chat window at the bottom of the screen.
    • thanks to the soaphub.org folks, one can now use a jabber/xmpp client (e.g. gtalk) to join this chatroom. Just add the room as a buddy - (in our case here) summit_20130214@soaphub.org ... Handy for mobile devices!
  • Discussions and Q & A:
    • Nominally, when a presentation is in progress, the moderator will mute everyone, except for the speaker.
    • To un-mute, press "*7" ... To mute, press "*6" (please mute your phone, especially if you are in a noisy surrounding, or if you are introducing noise, echoes, etc. into the conference line.)
    • we will usually save all questions and discussions till after all presentations are through. You are encouraged to jot down questions onto the chat-area in the meantime (that way, they get documented; and you might even get some answers in the interim, through the chat.)
    • During the Q&A / discussion segment (when everyone is muted), if you want to speak or have questions or remarks to make, please raise your hand (virtually) by clicking on the "hand button" (lower right) on the chat session page. You may speak when acknowledged by the session moderator (again, press "*7" on your phone to un-mute). Test your voice and introduce yourself first before proceeding with your remarks, please. (Please remember to click on the "hand button" again (to lower your hand) and press "*6" on your phone to mute yourself after you are done speaking.)
  • RSVP to peter.yim@cim3.com with your affiliation appreciated, ... or simply just by adding yourself to the "Expected Attendee" list below (if you are a member of the community already.)
  • Please note that this session may be recorded, and if so, the audio archive is expected to be made available as open content, along with the proceedings of the call to our community membership and the public at-large under our prevailing open IPR policy.
    • Caveat: to allow us to share some of the latest in commercial deployment of ontology-based technologies, this session will be featured with a special waiver to commercial organizations on the Ontolog IPR Policy. Panelists from such organizations are welcome to talk about their proprietary (non-open) technologies if they so desire (on the condition that proprietary portions of their presentation are to be specifically stated as such). However, (despite the waiver) it should be noted that we will (as usual) be making available the entire proceedings, including all slides, recorded audio, etc., of the session to the community and the public at large from this Ontolog site (with those organizations' consent.)

Attendees

  • Expecting:
    • please add yourself to the list if you are a member of the Ontolog or Ontology Summit community, or, rsvp to <peter.yim@cim3.com> with your affiliation.
  • Regrets:
    • ...