From OntologPSMW


OntologySummit2015 Track B: Beyond Semantic Sensor Network Ontologies-II - Thu 2015-03-05     (1)

Introduction     (1D)

Sensors are the front end of the IoT and play a big part in it. Sensor-generated data present Big Data challenges such as heterogeneity. Because misunderstanding the data can result in invalid or misrepresented analyses, semantic technologies, such as the Semantic Sensor Network (SSN) ontology and its associated reasoning, represent a seed area for the IoT. We think this is a source of useful work relevant to the IoT and an opportunity for good semantic development. The sensor-network focus, and efforts to go beyond the original model, allow discussion of some of the major challenges in applying semantic technologies to the IoT. For example, there is the issue of data processing after sensing is completed, where networking and data processing need to be coordinated. There is also the question of non-sensing devices such as actuators and concentrators. And there is the inherent heterogeneity of the IoT, with its multiple technologies, standards, and information types.     (1D1)
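As an editorial aside, the kind of structure an SSN-style observation carries (which sensor, which observed property, what result, and when) can be illustrated with a toy sketch in plain Python. The field names below are illustrative assumptions, not the actual SSN vocabulary.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Observation:
    """Toy SSN-style observation: what was sensed, by what, when, and the result."""
    sensor: str              # identifier of the sensing device (hypothetical)
    observed_property: str   # the quality being measured, e.g. air temperature
    result_value: float      # the quantified result
    result_unit: str         # unit of measure for the result
    observed_at: datetime    # time of the observation (part of its provenance)

# A reading from a hypothetical outdoor thermometer:
obs = Observation(
    sensor="thermometer-42",
    observed_property="air_temperature",
    result_value=21.5,
    result_unit="Cel",
    observed_at=datetime(2015, 3, 5, 14, 30, tzinfo=timezone.utc),
)
print(obs.observed_property, obs.result_value, obs.result_unit)
```

Capturing the observation context explicitly in this way is what lets downstream consumers detect the kinds of misinterpretation the intro warns about (e.g. mismatched units or stale readings).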

Agenda     (1E)

Speakers     (1E1)

  • Charles Vardeman, II: Computational Observations Hackathon idea     (1E2)
    • One of the potential foundational pieces of the Internet of Things (IoT) is the work done by the W3C Incubator Group on semantic sensor networks. A core outcome of the group's work was the Stimulus-Sensor-Observation ontology design pattern, which captures the concept of an observation in a quantifiable and qualifiable representation, including the provenance necessary to understand the context of an observation. The DASPOS project, in collaboration with a group from the SoCoP DC GeoVoCamp 2014 (http://vocamp.org/wiki/GeoVoCampSOCoP2014), has started development of an analogous ontology design pattern for Computational Observations, where the observation is the result of some computational model. As part of the Ontology Summit, we are looking for feedback on the model with respect to potential applications to the IoT.     (1E2A)
  • Ingo Simonis: OGC Sensor Web & Semantics     (1E3)
    • The OGC Sensor Web Enablement initiative started in 2001 with the goal of making sensors and sensor data connected to the Internet available through well-defined interfaces, using standardized information models and serializations. For over a decade, attempts to add Semantic Web technologies and techniques failed to break into the market. Only the latest developments around JSON-LD, with additional pushes coming from the Internet of Things domain, appear to be more successful.     (1E3A)
  • Konstantinos Kotis: Managing unknown IoT entities by uncovering and aligning their semantics     (1E4)
    • The talk will focus on research work at VTT (semantic interoperability in IoT) and also on current and future plans related to "semantic interoperability for cyber-physical, big-data-intensive systems".     (1E4A)
  • Jean-Paul Calbimonte: Ontology-based Access to Sensor Data Stream     (1E5)
    • Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations that they produce are made available using heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for other purposes than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform streaming raw data to rich ontology-based information that is accessible through continuous queries for streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details.     (1E5A)
  • Torsten Hahmann, Silvia Nittel: Understanding Group Activities from Movement Sensor Data     (1E6)
    • We will present ongoing work on utilizing narrow application ontologies to inject semantics into sensor data, helping us to identify and describe human-comprehensible concepts from sensor data. This is demonstrated using trajectory information about people moving between rooms in buildings for identifying group activities such as different kinds of meetings.     (1E6A)
  • Barry Smith: Ontology of Sensors: Some Examples from Biology     (1E7)
    • I will sketch how two ontologies, the Ontology for General Medical Science (OGMS) and the Ontology for Biomedical Investigations (OBI), represent the roles played by biotic and abiotic sensors in biomedical research.     (1E7A)
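As an editorial illustration of the idea behind ontology-based access to streaming sensor data (the Calbimonte abstract above), the following sketch shows heterogeneous raw readings being lifted, on the fly, to a shared vocabulary so a consumer can query at the conceptual level. The mapping table, source names, and `ex:` terms are invented for illustration; real systems use mapping languages and SPARQL-based stream query engines rather than plain Python.

```python
# Two hypothetical sources emit readings under different schemas; a mapping
# lifts both to a shared ontology term so one conceptual query serves both.
RAW_TO_ONTOLOGY = {
    ("station-a", "temp_c"): "ex:AirTemperature",
    ("station-b", "airTemp"): "ex:AirTemperature",
    ("station-b", "rh"): "ex:RelativeHumidity",
}

def lift(stream):
    """Rewrite raw (source, field, value) readings as ontology-level facts."""
    for source, fld, value in stream:
        concept = RAW_TO_ONTOLOGY.get((source, fld))
        if concept is not None:
            yield {"concept": concept, "value": value, "source": source}

def continuous_query(stream, concept):
    """A 'continuous query' at the conceptual level: filter by ontology term."""
    return (fact for fact in lift(stream) if fact["concept"] == concept)

raw = [("station-a", "temp_c", 21.5),
       ("station-b", "airTemp", 20.9),
       ("station-b", "rh", 0.61)]
temps = list(continuous_query(raw, "ex:AirTemperature"))
print([t["value"] for t in temps])  # both temperature readings, despite different raw schemas
```

The point of the abstraction is that the consumer never mentions `temp_c` or `airTemp`; adding a third source only requires extending the mapping, not the queries.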

Resources     (1F)

  • Dial-in:     (1G4)
    • Phone (US): +1 (425) 440-5100 ... (long distance cost may apply)     (1G4A)
    • Skype: join.conference (i.e. make a skype call to the contact with skypeID="join.conference") ... (generally free-of-charge, when connecting from your computer ... ref.)     (1G4B)
      • when prompted enter Conference ID: 843758#     (1G4B1)
      • Unfamiliar with how to do this on Skype? ...     (1G4B2)
        • Add the contact "join.conference" to your skype contact list first. To participate in the teleconference, make a skype call to "join.conference", then open the dial pad (see platform-specific instructions below) and enter the Conference ID: 843758# when prompted.     (1G4B2A)
      • Can't find Skype Dial pad? ...     (1G4B3)
        • for Windows Skype users: Can't find Skype Dial pad? ... it's under the "Call" dropdown menu as "Show Dial pad"     (1G4B3A)
        • for Linux Skype users: please note that the dial pad is only available in v4.1 (or later), or in the earlier 2.x versions. If the dial-pad button is not shown in the call window, press the "d" hotkey to enable it. ... (ref.)     (1G4B3B)
    • instructions: once you have access to the page, click on the "settings" button and identify yourself (by changing the Name field from "anonymous" to your real name, e.g. "JaneDoe").     (1G5A)
    • You can indicate that you want to ask a question verbally by clicking on the "hand" button, and wait for the moderator to call on you; or, type and send your question into the chat window at the bottom of the screen.     (1G5B)
    • thanks to the soaphub.org folks, one can now use a jabber/xmpp client (e.g. gtalk) to join this chatroom. Just add the room as a buddy - (in our case here) summit_20150305@soaphub.org ... Handy for mobile devices!     (1G5C)
  • Discussions and Q & A:     (1G6)
    • Nominally, when a presentation is in progress, the moderator will mute everyone, except for the speaker.     (1G6A)
    • To un-mute, press "*7" ... to mute, press "*6" (please mute your phone, especially if you are in noisy surroundings or are introducing noise, echoes, etc. into the conference line.)     (1G6B)
    • we will usually save all questions and discussions till after all presentations are through. You are encouraged to jot down questions onto the chat-area in the mean time (that way, they get documented; and you might even get some answers in the interim, through the chat.)     (1G6C)
    • During the Q&A / discussion segment (when everyone is muted), If you want to speak or have questions or remarks to make, please raise your hand (virtually) by clicking on the "hand button" (lower right) on the chat session page. You may speak when acknowledged by the session moderator (again, press "*7" on your phone to un-mute). Test your voice and introduce yourself first before proceeding with your remarks, please. (Please remember to click on the "hand button" again (to lower your hand) and press "*6" on your phone to mute yourself after you are done speaking.)     (1G6D)
  • An RSVP to gbergcross@gmail.com with your affiliation is appreciated, ... or simply add yourself to the "Expected Attendee" list below (if you are already a member of the community.)     (1G8)
  • Please note that this session may be recorded, and if so, the audio archive is expected to be made available as open content, along with the proceedings of the call to our community membership and the public at-large under our prevailing open IPR policy.     (1G10)

Chat Transcript     (1H)

[09:16] Mark Underwood: Slide decks for today's session downloadable from "Prepared Presentation Material" on http://ontolog-02.cim3.net/wiki/ConferenceCall_2015_03_05     (1H1)

[09:17] Gary Berg-Cross: Hello Chuck!!     (1H2)

[09:17] Charles Vardeman: Greetings!     (1H3)

[09:26] Peter P. Yim: Hi everyone!     (1H4)

[09:28] Michael Grüninger: I won't be able to participate during the session, but I will be starting and ending a recording of the session     (1H5)

[09:39] Konstantinos: Hi Gary, all     (1H6)

[09:40] Gary Berg-Cross: Welcome Konstantinos. You are scheduled as our 3rd speaker.     (1H7)

[09:46] Torsten Hahmann: In my Skype version (6.2) there is a big "Plus symbol" right next to the red "hang up" symbol to add the dialpad. It is only visible when the call is in progress.     (1H9)

[09:48] Tara Athan: A critical component of the description of a computational model about the real world are the regime of validity, which could be expressed either negatively (the model is not valid if the temperature is less than X) or positively (the model assumes the material is at thermal equilibrium). I have long hoped that very expressive KR languages (e.g. Common Logic) could be used to capture this "metadata".     (1H10)

[09:50] Konstantinos: I am in! Thanks a lot     (1H11)

[09:51] Charles Vardeman: @TaraAthan Yes that is my hope. One of the inspirations for the pattern was my experience in working with students who were doing simulations that probably were not valid given their choice of input parameters (temperature is less than X and the model was not parameterized for those conditions).     (1H12)

[09:52] Gary Berg-Cross: @Konstantinos Great. You will be the next speaker. We didn't have chance to test your mike so we will try that first and let you know if we can hear you.     (1H13)

[09:54] Tara Athan: @Charles Do you have a particular approach in mind for capturing model regime of validity or assumptions?     (1H14)

[09:54] Gary Berg-Cross: Welcome Barry. You are our last speaker which should be around 2.     (1H15)

[09:57] Josh Lieberman: so UML in slide 9 should be GML ?     (1H16)

[09:58] Gary Berg-Cross: Simon Cox presented last year, or so, on this O & M work as part of an Ontolog series on Earth Science. You can get his slides there.     (1H17)

[09:59] Charles Vardeman: @TaraAthan I have some rough thoughts based on some toy models (inclined plane) and some work that we did at a recent RDA workshop. We were playing with using value restrictions based on the model and the algorithm. The issue is that there are sets of conceptual and mathematical assumptions built into the computational model as well as assumptions that are built into the algorithmic implementation. One of the issues we need to explore is the relationship between what we call parameter type (algorithm) and AttributeType which is a property of the model.     (1H18)

[09:59] Josh Lieberman: Familiar with Simon's work, just not the point being made by Ingo.     (1H19)

[10:06] Tara Athan: @charles - one issue with value restrictions has to do with what is the required accuracy of the results. Supposing "h" represents a neglected effect, and the error due to neglecting it is bounded by some k * h^n, it could be possible to derive the value restrictions based on the users accuracy requirements.     (1H20)

[10:12] Tara Athan: Does anyone have a link for RDF streams?     (1H22)

[10:13] Mark Underwood: @Tara - Not sure. . . The W3C group is https://www.w3.org/community/rsp/     (1H23)

[10:15] Charles Vardeman: @TaraAthan I agree. We know in many cases how the errors associated with a model are propagated by the algorithm. As I alluded to, I think the computational model sub-pattern may have patterns that capture error associated with a model associating that the model has been captured to sufficient fidelity. I also have a notion that the model could also just be an information object that points to a publication that captures the model which would still be useful.     (1H24)

[10:18] Liana Kiff: What are the performance characteristics of retrieving and processing RDF streams?     (1H25)

[10:18] Tara Athan: The theory of monads (from category theory and functional programming) may be useful in dealing with streams of axioms such as RDF streams, since Stream is a particular kind of monad.     (1H26)

[10:18] Gary Berg-Cross: @jean-Paul People might like some references on the work that you cite.     (1H27)

[10:23] Mark Underwood: @Jean-Paul - Should one hold out hope for interop with CEP standards to add event, query models?     (1H28)

[10:24] Ravi Sharma: @Jean-Paul - great especially CEP. Do you or can you use timestamp on data for streaming? Also very useful for CEP? How does it relate to SBVR and time and calendaring efforts including OMGs?     (1H29)

[10:25] Gary Berg-Cross: @Jean-Paul Does the mapping from data to stream create an identity issue? If the data form has an ID does the stream have a new ID but point back to the original ID?     (1H30)

[10:27] Ravi Sharma: @Tara please send me references as well.     (1H31)

[10:35] Steve Ray: Wolfram Research has made a good start at a semantic registry of IoT devices at http://devices.wolfram.com/     (1H32)

[10:39] Ravi Sharma: @Konstantin - Source of data if on Internet or referenced to Internet is equivalent to big data but big data need not have IoT relationship? Let us know why the two are same?     (1H33)

[10:44] Konstantinos: @SteveRay that is cool, thanks     (1H34)

[10:46] Dennis Wisnosky: did she say woof woof     (1H35)

[10:47] Ravi Sharma: @Nittel - we need to distinguish between entities related to physical objects and then you say dinner, it is a plate with things and also a process? How do we deal this - in prior Knowledge Base?     (1H36)

[10:49] Ravi Sharma: Sorry I meant Silvia Nittel     (1H38)

[10:53] Mark Underwood: @Dennis That's how my parser heard it, unless it was a hidden reference to a highly supportive collaborator     (1H39)

[10:54] Dennis Wisnosky1: My favorite collaborator talks that language.     (1H40)

[10:55] Ravi Sharma: @Torsten, we have webinars as a category?     (1H41)

[10:56] Josh Lieberman: So the activity characterizations come from a combination of prior reasoning and machine learning classification of signals...     (1H42)

[10:57] Silvia Nittel: yes, we say there has to be some sensor data synthesis and fusion to come up with the "observed entity signal"     (1H43)

[10:58] Torsten Hahmann: @Ravi: we should, but currently we don't ... The sensor signals underlying the trajectory data has no video/audio - so there are only limited things we can infer from it. But one could envision adding a "video conference sensor" to take care of this     (1H44)

[10:59] Christi Kapp: @Torsten Are you looking at using any statistical clustering algorithms to interpret the event data and correlate location events into meetings?     (1H45)

[11:00] Torsten Hahmann: @Christi: Not right now - we hope to get by with logical axioms/assertions right now. But we are thinking about using statistics to describe different kinds of meeting in more detail (after classification)     (1H46)

[11:02] Peter P. Yim: Do the activity descriptions come from an existing ontology? If the events, and the sensor data descriptions come from the same ontology (something comprehensive, as one extracted from Wikipedia), there are no alignment problems. ... (please identify yourself, and post to the field on the left of the "send button")     (1H47)

[11:02] Ravi Sharma: @Barry - how mature are ontology interfaces or use from domain to mid level such as healthcare?     (1H48)

[11:02] Gary Berg-Cross: Q from audience Do the activity descriptions come from an existing ontology? If the events, and the sensor data descriptions come from the same ontology (something comprehensive, as one extracted from Wikipedia), there are no alignment problems.     (1H49)

[11:07] Mark Underwood: Is he lexicalizing "biotic" ?     (1H50)

[11:12] Ravi Sharma: @Barry - how would you categorize Phantom sensation (pain) and squeezed nerves that prevent sensing of downstream pain, even though there are secondary nerve networks? I am hinting at correlation between primary and backup sensing? depiction in ontology?     (1H51)

[11:17] Jean-Paul Calbimonte: @Barry: for references, please refer to the RDF Stream processing group wiki, we have lots of resources there: https://www.w3.org/community/rsp/wiki/Main_Page     (1H52)

[11:17] Jean-Paul Calbimonte: sorry, meant @Gary     (1H53)

[11:18] Jean-Paul Calbimonte: @Mark: yes CEP interop is key. In fact some of the systems I mentioned already do. For example EP-SPARQL. or Our SPARQL stream approach, can use a CEP such as Esper behind the scenes     (1H54)

[11:19] Jean-Paul Calbimonte: @Ravi: yes, one way is to add timestamps to triples. But we found lately that it might be better to create RDF graphs and annotate them with timestamps     (1H55)
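[Editorial note] Jean-Paul's point above, annotating whole RDF graphs with timestamps rather than stamping each triple, can be sketched with plain Python structures (no RDF library assumed). Each stream element is a small batch of triples that share one timestamp; the `ex:` identifiers are invented for illustration.

```python
from datetime import datetime, timezone

# Per-graph timestamping: each stream element is a set of
# (subject, predicate, object) triples annotated once with the
# time at which they hold, instead of repeating the annotation
# on every individual triple.
stream = []

def emit(triples, when):
    """Append a timestamped graph (a batch of triples) to the stream."""
    stream.append({"timestamp": when, "graph": frozenset(triples)})

t0 = datetime(2015, 3, 5, 10, 0, tzinfo=timezone.utc)
emit({("ex:obs1", "ex:hasValue", "21.5"),
      ("ex:obs1", "ex:unit", "Cel")}, t0)

# One timestamp now covers both triples of the observation.
print(len(stream[0]["graph"]), "triples stamped at", stream[0]["timestamp"])
```

In RDF terms this corresponds to using named graphs (one graph per stream element) with the timestamp attached to the graph name, which is the approach several RDF stream processing systems take.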

[11:20] Todd Schneider: Josh, is the 'secondary' signal just another measurement(process)?     (1H56)

[11:20] Ravi Sharma: @Barry - thanks for answers.     (1H57)

[11:20] Jean-Paul Calbimonte: @Gary: the mappings should provide these new URIs you mention. What we think is that PROV relationships should be used to link the original streams with the derived ones. But this should be formalized     (1H58)

[11:21] Gary Berg-Cross: @Jean-Paul Thanks for this connection to Prov and the references.     (1H59)

[11:22] Todd Schneider: Barry, can the use of statistical analyzes be represented as processes?     (1H60)

[11:22] Todd Schneider: Barry, can statistical analyzes be represented as processes?     (1H61)

[11:22] Josh Lieberman: Unanticipated relationship between a previously configured signal and a new measurement     (1H62)

[11:26] Konstantinos: I really need to leave the chat now, please forward any questions at kotis@aegean.gr and I will be glad to answer. Thank you for listening.     (1H63)

[11:26] Mark Underwood: Thanks for presenting, Konstantinos     (1H64)

[11:26] Gary Berg-Cross: @Konstantinos thanks very much for an interesting talk and for staying up late...     (1H65)

[11:26] Konstantinos: Thank you for your attention!     (1H66)

[11:26] Konstantinos: and support!     (1H67)

[11:27] Leo Obrst: @SilviaNittel: we do model meetings as events in our ontologies because we need such. They can be complex, with many participants in different roles, and with predecessor and successor events and states.     (1H68)

[11:28] Todd Schneider: Ravi, what about the physical manifestation(s) of a process?     (1H69)

[11:31] Mark Underwood: John - possible topic for the F2F meeting     (1H70)

[11:32] Ravi Sharma: @Todd - winds cause loss of houses in tornadoes? so cause effect relations but often as Barry explained process is effect of entities of first kind.     (1H71)

[11:32] Mark Underwood: Rich set of presentations - Thanks to all the presenters     (1H72)

[11:32] Ravi Sharma: thanks     (1H73)

[11:33] Charles Vardeman: Thanks!     (1H74)

[11:33] Jean-Paul Calbimonte: thanks to all     (1H75)

[11:33] Peter P. Yim: Big THANK YOU to the co-chairs for organizing this great session. ... Thanks to each and every speaker for their well prepared and rich presentations!     (1H77)

[11:33] Torsten Hahmann: Thanks everyone!     (1H78)

[11:33] John Graybeal: Question about how these ontologies become used: Some may be intended simply for the research communities, but getting practical ontologies adopted in operational systems seems to be a big hurdle. Do we have thoughts and/or conclusions about how to make this take place?     (1H79)

[11:33] Charles Vardeman: Bye!     (1H80)

[11:33] Leo Obrst: Thanks, all!     (1H81)

[11:33] Marcela Vegetti: Great Session!     (1H82)

Attendees     (1I)

Alex Mirzaoff     (1I1)

Barry Smith     (1I4)

Bobbin Teegarden     (1I5)

Carl Neilson     (1I7)

Charles Vardeman     (1I8)

Dennis Wisnosky     (1I12)

Frederic de Vaulx (NIST Associate)     (1I14)

Gary Berg-Cross     (1I15)

Jean-Paul Calbimonte     (1I17)

John Graybeal     (1I20)

Judith Gelernter     (1I22)

Konstantinos     (1I24)

Liana Kiff     (1I26)

Mark Underwood     (1I29)

Nicolas Seydoux     (1I31)

Ram D. Sriram     (1I34)

Richard Beatch     (1I36)

Richard Martin     (1I37)

Silvia Nittel     (1I38)

Tom Tinsley     (1I43)

Torsten Hahmann     (1I44)