OntologySummit2015 Track B: Beyond Semantic Sensor Network Ontologies-II - Thu 2015-01-29     (1)

Introduction     (1D)

Semantic technologies and ontologies, such as the Semantic Sensor Network (SSN) ontology and its associated reasoning, play a major role in the IoT because they are increasingly applied to help intelligently process and understand sensor information. This supports the overall IoT vision: varied, ubiquitous, and increasingly small sensors and actuators embedded in physical objects, providing information about the environment and systematic interpretation of collected data via wired and wireless networks linked to the Internet. This information augments our awareness of sensed environments and thus serves as an IoT bridge between the physical and digital realms. The SSN ontology, soon to be an OGC standard, has been a source of good work useful as a starting point for IoT. A good deal of work and publication has grown up around the ontology, and some lessons were learned along the way. A sensor-network focus allows discussion of some of these topics along with the major challenges in utilizing semantic technologies for the IoT.     (1D1)

The track will discuss the major challenges in utilizing semantic technologies for the IoT, and current efforts on developing the next generation of semantic technology for integrating, processing and understanding sensor information and for developing smart sensor networks.     (1D2)

Working with various researchers, we will discuss the use of the SSNO and its various extensions, look at some of the ontology problem space, and explore possible solutions.     (1D3)


Agenda     (1E)

  • Introduction to the track and to the session by Gary Berg-Cross (SOCoP)     (1E1)

Speakers     (1E2)

  • Jeff Voas (NIST): Networks of Things: Pieces, Parts and Data (Sensors Model for IoT)     (1E3)
  • Cory Henson: Semantic Sensor Network Ontology: past, present, and future     (1E4)
    • In the summer of 2011, the Semantic Sensor Network Ontology (SSNO) was born from the sudden and rapid emergence of sensor data on the Web. The SSNO provides a common language to enable more effective interoperation, integration, and discovery of Web-based sensing systems and the data they produce. Over the past three years the SSNO has been used in a wide range of applications, and with this experience the SSNO is now set to become an OGC/W3C standard. The emergence of the IoT, however, presents new challenges and requirements that, perhaps, were not fully appreciated in the summer of 2011. How will this new development affect the semantics of sensing going forward? Stay tuned.     (1E4A)
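
The observation pattern at the heart of the SSNO, an observation linking a sensor to an observed property of a feature of interest, can be sketched in plain Python (no RDF library) as a small set of triples. The term names below mirror the 2011 SSN ontology (ssn:Sensor, ssn:Observation, ssn:observedBy, ssn:observedProperty, ssn:featureOfInterest); the instance names (ex:thermometer1, ex:obs42, etc.) are invented for illustration only.

```python
# Illustrative sketch of the SSN observation pattern as subject/predicate/object
# triples. Class and property names follow the SSN ontology; instances are made up.
triples = [
    ("ex:thermometer1", "rdf:type",              "ssn:Sensor"),
    ("ex:obs42",        "rdf:type",              "ssn:Observation"),
    ("ex:obs42",        "ssn:observedBy",        "ex:thermometer1"),
    ("ex:obs42",        "ssn:observedProperty",  "ex:airTemperature"),
    ("ex:obs42",        "ssn:featureOfInterest", "ex:room101"),
]

def objects(subject, predicate):
    """Return all objects matching a given subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Discover which sensor produced the observation and what it measured.
print(objects("ex:obs42", "ssn:observedBy"))        # ['ex:thermometer1']
print(objects("ex:obs42", "ssn:observedProperty"))  # ['ex:airTemperature']
```

In a real deployment these triples would live in an RDF store and be queried with SPARQL; the point here is only the shape of the linkage between sensor, observation, property, and feature of interest.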

Resources     (1F)

  • Dial-in:     (1G4)
    • Phone (US): +1 (425) 440-5100 ... (long distance cost may apply)     (1G4A)
    • Skype: join.conference (i.e. make a skype call to the contact with skypeID="join.conference") ... (generally free-of-charge, when connecting from your computer ... ref.)     (1G4B)
      • when prompted enter Conference ID: 843758#     (1G4B1)
      • Unfamiliar with how to do this on Skype? ...     (1G4B2)
        • Add the contact "join.conference" to your skype contact list first. To participate in the teleconference, make a skype call to "join.conference", then open the dial pad (see platform-specific instructions below) and enter the Conference ID: 843758# when prompted.     (1G4B2A)
      • Can't find Skype Dial pad? ...     (1G4B3)
        • for Windows Skype users: it's under the "Call" dropdown menu as "Show Dial pad"     (1G4B3A)
        • for Linux Skype users: please note that the dial pad is only available in v4.1 or later (or in the earlier 2.x versions). If the dial-pad button is not shown in the call window, press the "d" hotkey to enable it. ... (ref.)     (1G4B3B)
    • instructions: once you have access to the page, click on the "settings" button and identify yourself (by modifying the Name field from "anonymous" to your real name, like "JaneDoe").     (1G5A)
    • You can indicate that you want to ask a question verbally by clicking on the "hand" button, and wait for the moderator to call on you; or, type and send your question into the chat window at the bottom of the screen.     (1G5B)
    • Thanks to the soaphub.org folks, one can now use a jabber/XMPP client (e.g. gtalk) to join this chatroom. Just add the room as a buddy - (in our case here) summit_20150129@soaphub.org ... Handy for mobile devices!     (1G5C)
  • Discussions and Q & A:     (1G6)
    • Nominally, when a presentation is in progress, the moderator will mute everyone, except for the speaker.     (1G6A)
    • To un-mute, press "*7" ... To mute, press "*6" (please mute your phone, especially if you are in a noisy surrounding, or if you are introducing noise, echoes, etc. into the conference line.)     (1G6B)
    • we will usually save all questions and discussions till after all presentations are through. You are encouraged to jot down questions onto the chat-area in the mean time (that way, they get documented; and you might even get some answers in the interim, through the chat.)     (1G6C)
    • During the Q&A / discussion segment (when everyone is muted), if you want to speak or have questions or remarks to make, please raise your hand (virtually) by clicking on the "hand button" (lower right) on the chat session page. You may speak when acknowledged by the session moderator (again, press "*7" on your phone to un-mute). Please test your voice and introduce yourself before proceeding with your remarks. (Remember to click on the "hand button" again to lower your hand, and press "*6" on your phone to mute yourself after you are done speaking.)     (1G6D)
  • An RSVP to gbergcross@gmail.com with your affiliation is appreciated ... or simply add yourself to the "Expected Attendee" list below (if you are a member of the community already.)     (1G8)
  • Please note that this session may be recorded, and if so, the audio archive is expected to be made available as open content, along with the proceedings of the call to our community membership and the public at-large under our prevailing open IPR policy.     (1G10)

Attendees     (1H)

[09:34] Michael Grüninger: If you can't connect via skype, you can find local numbers to dial in at Local Numbers: http://InstantTeleseminar.com/Local     (1I1)

[09:38] Mark Underwood: Slides from presenters are on this page: http://ontolog-02.cim3.net/wiki/ConferenceCall_2015_01_29     (1I2)

[09:40] Gary Berg-Cross: The speaker will call out the page #s. You have to download them.     (1I3)

[09:45] Jack Ring: Unfortunately, for me, I have just now been recruited to assist with a medical emergency. I want to examine the impact of 'software bugs' (logic, arithmetic, and semantic) in heterogeneous suites of computer programs. I think this will be a major problem in all IoTs, particularly those associated with autonomous systems. Will catch up later.     (1I5)

[09:50] Ravi Sharma: you mean March 5 and not 15th     (1I6)

[09:50] Torsten Hahmann: Yes, the second session is on March 5th     (1I7)

[10:00] Ravi Sharma: what I find missing in slide 7 is the valuable info extraction, unless it is in the thing called observation?     (1I8)

[10:02] Ravi Sharma: unless sensors have it, we need standard interfaces for sensors, maybe categories, so as to be able to integrate (slide 8). An example is XML etc.     (1I9)

[10:10] Mark Underwood: Live twitter stream @ TweetChat.com     (1I10)

[10:11] Michael Uschold: Does anyone have the link to Werner Kuhn's talk distinguishing knowledge modeling language vs. knowledge representation language?     (1I11)

[10:15] Gary Berg-Cross: @Michael U Werner's slides should be at: http://ontolog.cim3.net/file/work/OntologySummit2014/2014-02-06_OntologySummit2014_Overcoming-O> ing-behavior-in-ontology-engineering--WernerKuhn_20140206.pdf That session was at: http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2014_02_06     (1I12)

[10:16] Ravi Sharma: Why can the clusters not be real? We can connect seismometers in a node that is the cluster?     (1I13)

[10:19] Ravi Sharma: It is easy to implement bidirectional capability in communications; this would help tune or change the mode of using a sensor more effectively, including turning it on or off?     (1I14)

[10:20] Mark Underwood: @MichaelUschold I don't have the cite, but most of Kuhn's papers are on ResearchGate if you want to look there     (1I15)

[10:23] Ravi Sharma: Some of the concerns about authentication and security apply to data or bitstream encapsulation etc and existing NIST and other standards can be used rather than invent one for sensors?     (1I16)

[10:26] Bobbin Teegarden: Can't there be concentrators of concentrators, et al...?     (1I17)

[10:28] Ravi Sharma: The summary is that among existing sensor data streams such as healthcare, environment and monitoring (surveillance), the important areas that ontology can contribute to are eUtility and Decision, and emphasis on these would be great for Summit 2015.     (1I18)

[10:28] Ravi Sharma: @Bobbin - yes     (1I19)

[10:30] Ravi Sharma: In addition to self-correcting or auto-repair features such as reset, recalibration, and test against a standard reference, sensors would also benefit from remote programming, and hence a case for 2-way communication!     (1I20)

[10:32] Mark Underwood: Does "decision" here overlap w/ Big Data "analytics" - How is it different (data sources, M2M streams?)     (1I21)

[10:34] Ravi Sharma: Mention of Supply chain could also mean value chain or service hierarchy?     (1I22)

[10:36] Ravi Sharma: @Mark - I see a lot of this as part of big data and linked metadata that help decisions analytics provide you concrete metrics and measures for "decisions".     (1I23)

[10:36] Gary Berg-Cross: Interestingly Device, a physical thing, is not one of the 10 primitives...     (1I24)

[10:37] Ravi Sharma: Device and sensor are interchangeable, even instrument or system as a whole; something like SCADA could just be a node treatable like a sensor?     (1I25)

[10:40] Ravi Sharma: @Gary - IoT is real as all cameras and surveillance data are on the internet today? These could be called devices.     (1I27)

[10:42] Peter P. Yim: @CoryHenson - ref. the copyright statement at the bottom of your slides, please note that the prevailing Open Ontolog IPR Policy will apply in the context of this Ontology Summit session - see: http://ontolog-02.cim3.net/wiki/WikiHomePage#Intellectual_Property_Rights_.28IPR.29_Policy     (1I28)

[10:49] Gary Berg-Cross: @Ravi The Collection idea in the IoT primitives is similar to the SSN System idea but between sensor and system we have the Device concept that seems useful.     (1I30)

[10:52] Bobbin Teegarden: What is the software environment that is doing the eval and making the decisions around this ontologically based morphing info?     (1I31)

[10:54] Ravi Sharma: @Gary - do you mean - more like an appliance? or interface among them?     (1I32)

[10:56] Ravi Sharma: @Cory - slide 17 apex OWL is akin to decision or at least valuable info ready for decision?     (1I33)

[10:57] Michael Grüninger: @Gary Berg-Cross, Torsten Hahmann: I need to leave in 5 minutes; you should wrap-up around 2:30 or so (although the recording should continue after that until 3:00pm, so if the discussion is energetic, feel free to continue a little past)     (1I34)

[10:57] Gary Berg-Cross: @Ravi, you may have many different types of sensors (temp, salinity, position, velocity, etc.) on one device. Even a mobile phone has many sensing possibilities.     (1I35)

[10:58] Torsten Hahmann: I was wondering about this as well - is it treated as a single sensor or as a cluster?     (1I36)

[11:00] Ravi Sharma: @Cory - datacube is also similar concept in NextGen Airspace related atmospheric data cube which is mostly metadata based on sensors and models.     (1I37)

[11:04] Ravi Sharma: @Cory Slide 22, would not query and response give you local information rather than take it global and then filter it down? These would be more important for dedicated communities, not all open.     (1I38)

[11:05] Bobbin Teegarden: Cory: how did you get to the bits to test and vector off of? What SW?     (1I39)

[11:09] Bobbin Teegarden: @Cory: brilliant! thank you.     (1I40)

[11:10] Terry Longstreth: @Gary - many different sensors is a matter of perspective and utility. We haven't defined the 'size' of a sensor, or the extent of its utility. As an imagery scientist, I'd consider Hubble to be a sensor, while the satellite controller might view the individual reaction wheel spin reporters as sensors.     (1I41)

[11:10] Torsten Hahmann: @Ravi: could you restate your question for Cory?     (1I42)

[11:11] Ravi Sharma: @Torsten - iPhone 6 has more than a dozen sensors, inertial etc., and Fitbits are another set of wearables flooding the market now.     (1I43)

[11:12] Peter P. Yim: unmute all is "99" by an admin     (1I44)

[11:14] Leo Obrst: Can everyone who is not talking mute themselves locally?     (1I45)

[11:14] Peter P. Yim: mute all is "88" by an admin     (1I46)

[11:14] jack hodges: That would be nice     (1I47)

[11:14] Mark Underwood: The question was this: Where should the work of ontologists be focused. Referring to a slide from last week, wondering where the focus should be -- domain-specific? Device-specific? Inside professional organizations?     (1I48)

[11:18] Gary Berg-Cross: We may have to use just written Qs to speakers.     (1I49)

[11:20] Ravi Sharma: I feel the value of ontology is in decision making, or aggregating only relevant info and leaving the rest out, i.e. decision support and model implementation     (1I50)

[11:21] Mark Underwood: I need to exit, but please extend my appreciation to the speakers / moderator - these presentations were really good and on-topic     (1I51)

[11:21] Bobbin Teegarden: @Cory Could a whole ontology be treated as a bipartite graph?     (1I52)

[11:22] Ravi Sharma: not leaving out model but filtering yes dynamic filtering say report only 1 deg change in temperature?     (1I53)
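
The dynamic filtering Ravi suggests (report only on a 1-degree change) can be sketched as a simple threshold filter at the sensor node. The function name and threshold below are illustrative, not from any standard:

```python
# Hypothetical sketch of dynamic filtering: a sensor reports a reading only
# when it differs from the last reported value by at least a threshold.
def filter_readings(readings, threshold=1.0):
    """Yield only readings that changed by >= threshold since the last report."""
    reported = []
    last = None
    for r in readings:
        if last is None or abs(r - last) >= threshold:
            reported.append(r)
            last = r
    return reported

print(filter_readings([20.0, 20.3, 20.9, 21.2, 21.3, 22.5]))  # [20.0, 21.2, 22.5]
```

This is the kind of built-in intelligence that, combined with 2-way communication, would let the threshold itself be tuned remotely.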

[11:22] Ravi Sharma: I did mention 2-way and some built-in intelligence     (1I54)

[11:22] CoryHenson: @Bobbin: No just domain-specific knowledge relating observed-properties to features-of-interest (e.g., apples are red, apples are green)     (1I55)
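
Cory's point, that only domain-specific knowledge relating observed properties to features of interest is needed, can be sketched as a bipartite relation. The example pairs below ("apples are red, apples are green") come from his reply; everything else is illustrative:

```python
# Illustrative sketch of a bipartite relation between features of interest and
# the observed properties they can exhibit (e.g., "apples are red, apples are
# green"). The extra "sky"/"blue" pair is invented to show discrimination.
domain_knowledge = {
    ("apple", "red"),
    ("apple", "green"),
    ("sky", "blue"),
}

def candidate_features(observed_property):
    """Features of interest that could explain an observed property."""
    return {feature for feature, prop in domain_knowledge if prop == observed_property}

print(candidate_features("red"))   # {'apple'}
print(candidate_features("blue"))  # {'sky'}
```

Observing more properties narrows the candidate set by intersection, which is the basic inference such a bipartite structure supports.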

[11:24] Ravi Sharma: In 2000 time frame we used queries or repositories in NASA but now we can do it for reporting realtime results such as moisture across a region?     (1I56)

[11:24] Terry Longstreth: My observation is above- response to @gary     (1I57)

[11:24] Ravi Sharma: @Terry please type your Q     (1I58)

[11:25] Terry Longstreth: The aggregation of data may be within the sensor - Like an image     (1I59)

[11:25] Frederic de Vaulx (NIST Associate): for next time     (1I60)

[11:25] Frederic de Vaulx (NIST Associate): :)     (1I61)

[11:25] Mark Underwood: I do wonder whether the BI analytics folks (Qlikview, Tableau) will try ingesting sensor streams & co-op the centralized network model (vs. the local ones). I have asked them this already in the form of "How will you handle Velocity" & most reply they are working on it, but requires different design paradigms     (1I62)

[11:25] Ravi Sharma: One person's sensor is someone's system, yes     (1I63)

[11:26] Ravi Sharma: aggregation and federation of sensors - for me Hubble is a set of sensors, a camera in visible vs in IR, etc.     (1I64)

[11:27] Terry Longstreth: Right, the sensors are targeted to specific purposes. Another example would be a traffic speed camera     (1I66)

[11:28] CoryHenson: Yes, systems of sensors can be sensors themselves     (1I67)

[11:28] Leo Obrst: @PeterYim: can two people simultaneously have admin capabilities on the same call, so that this issue doesn't occur again?     (1I68)

[11:28] Ravi Sharma: different algorithms are required to process them; the scientists who would correlate would most likely do it offline or improve comparative images - @Terry we have to balance intelligent sensors and imaging capabilities.     (1I69)

[11:28] Ravi Sharma: Thanks a good conference     (1I70)

[11:29] Leo Obrst: Thanks, all!     (1I71)

[11:29] CoryHenson: Thanks everyone     (1I72)

[11:29] Mike Bennett: Thanks all (except one person - I wonder if you will ever know who you were?)     (1I73)

[11:30] Torsten Hahmann: Sorry for the audio problems, it was our first time moderating a session ...     (1I75)