From OntologPSMW



OntologySummit2015 Track A Session - Ontology Integration in the Internet of Things - Thu 2015-02-05     (1)

Session Co-chairs: RamSriram & LeoObrst     (1A)

Billions of things will be connected to the Internet. These things span a spectrum of cognitive abilities, from simple sensors to humans. Ontologies will play a significant role in integrating these things at different abstraction levels. The goal of Track A (Ontology Integration in the Internet of Things) is to discuss the various approaches being taken to address integration and interoperability issues. We intend to present case studies of the IoT; discuss current approaches to integration and interoperability, gaps in those approaches, and issues of vertical integration and interoperability across layers of the IoT, including granularity. We also want to propose methods for achieving integration and interoperability through ontologies, and a unified framework for integration and interoperability for multimodal (audio, text, video, etc.) interfaces.     (1C)

Agenda and Presenters     (2)

    • Overview of Track A: Ontology Integration in the IoT - Dr. LeoObrst (MITRE), Dr. RamSriram (NIST)     (2A1)
    • An Ontology-Driven Integration Framework for Smart Communities - Dr. SteveRay (Carnegie Mellon University)     (2A2)
      • Abstract: This presentation describes our work concerning the definition of a neutral, abstract ontology and framework that supports the vision and diverse contexts of a smart community. This framework is composed of a general, core ontology that supports what many are calling the Internet of Things, a scalable number of extension ontologies to describe various application perspectives, and a mapping methodology to relate external data and/or schemas to our ontology. Finally, we show why this ontology is robust, scalable and generic enough to support a wide range of smart devices, systems and people.     (2A2A)
    • Dynamic Semantics for the Internet of Things - Dr. PayamBarnaghi (University of Surrey, UK)     (2A3)
      • Abstract: The rapid increase in the number of network-enabled devices and sensors deployed in physical environments is changing information communication networks and services and applications in various domains. It is predicted that within the next decade billions of devices will generate large volumes of real world data for many applications and services in a variety of areas such as smart grids, smart homes, healthcare, automotive, transport, logistics and environmental monitoring. The technologies and solutions that enable integration of real world data and services into current information networking technologies are often described under the umbrella term of the Internet of Things (IoT).     (2A3A)
      • When dealing with large volumes of distributed and heterogeneous IoT data, issues related to interoperability, automation, and data analytics will require common description and data representation frameworks and machine-readable and machine-interpretable data descriptions. IoT data is heterogeneous, multi-modal and can be of variable quality and is often streamed. Interoperability is a key requirement to support large-scale IoT deployments and multi-provider systems. However, dynamicity, diversity and resource constraints of the IoT environment can hinder using semantic technologies in the way they are used on the web. This talk will provide an overview of the use-case and requirements for semantic interoperability in the IoT with a focus on annotation, processing and information extraction and dynamicity in the IoT environment. Some of the recent and on-going research and development in this domain will be also discussed.     (2A3B)
    • Semantic Integration Prototype for Wearable Devices in Health Care - Dr. JackHodges (Web of Things (WOT) Research Group, Siemens Berkeley Laboratory)     (2A4)
      • Abstract: Semantic technologies and ontologies will be useful in application contexts when they can be integrated with information sources and existing application model formats. Often this requires an approach of using some semantic information in lightweight processing models and accessing richer semantic models when needed, but based on rich and common underlying ontologies. In many application contexts there exist curated domain models which can be leveraged if they can be integrated. In this talk a prototype of one such scenario will be presented – the use of curated biomedical ontologies to assist health care professionals in selecting appropriate wearable devices to monitor diagnosed disorders. This prototype sought to provide access to and search across five kinds of information in mostly curated ontologies: wearable devices, quantities, diseases, symptoms, and anatomical parts. The choices made and issues confronted will be discussed.     (2A4A)

Presentation Material     (2B)

  • Dial-in:     (3D)
    • Phone (US): +1 (425) 440-5100 ... (long distance cost may apply)     (3D1)
    • Skype: join.conference (i.e. make a skype call to the contact with skypeID="join.conference") ... (generally free-of-charge, when connecting from your computer ... ref.)     (3D2)
      • when prompted enter Conference ID: 843758#     (3D2A)
      • Unfamiliar with how to do this on Skype? ...     (3D2B)
        • Add the contact "join.conference" to your skype contact list first. To participate in the teleconference, make a skype call to "join.conference", then open the dial pad (see platform-specific instructions below) and enter the Conference ID: 843758# when prompted.     (3D2B1)
      • Can't find Skype Dial pad? ...     (3D2C)
        • for Windows Skype users: the dial pad is under the "Call" dropdown menu, as "Show Dial pad"     (3D2C1)
        • for Linux Skype users: please note that the dial pad is only available on v4.1 (or later; or on the earlier 2.x versions); if the dial-pad button is not shown in the call window, press the "d" hotkey to enable it. ... (ref.)     (3D2C2)
    • instructions: once you have access to the page, click on the "settings" button and identify yourself (by changing the Name field from "anonymous" to your real name, like "JaneDoe").     (3E1)
    • You can indicate that you want to ask a question verbally by clicking on the "hand" button and waiting for the moderator to call on you; or, type and send your question into the chat window at the bottom of the screen.     (3E2)
    • One can now use a jabber/xmpp client (e.g. gtalk) to join this chatroom: just add the room as a buddy - (in our case here) ... Handy for mobile devices!     (3E3)
  • Discussions and Q & A:     (3F)
    • Nominally, when a presentation is in progress, the moderator will mute everyone, except for the speaker.     (3F1)
    • To un-mute, press "*7" ... To mute, press "*6" (please mute your phone, especially if you are in a noisy surrounding, or if you are introducing noise, echoes, etc. into the conference line.)     (3F2)
    • We will usually save all questions and discussion until after all presentations are through. You are encouraged to jot down questions in the chat area in the meantime (that way they get documented, and you might even get some answers in the interim through the chat.)     (3F3)
    • During the Q&A / discussion segment (when everyone is muted), if you want to speak or have questions or remarks to make, please raise your hand (virtually) by clicking on the "hand button" (lower right) on the chat session page. You may speak when acknowledged by the session moderator (again, press "*7" on your phone to un-mute). Please test your voice and introduce yourself before proceeding with your remarks. (Remember to click the "hand button" again to lower your hand, and press "*6" on your phone to mute yourself after you are done speaking.)     (3F4)
  • RSVP to with your affiliation appreciated, ... or simply add yourself to the "Expected Attendee" list below (if you are already a member of the community.)     (3H)
  • Please note that this session may be recorded, and if so, the audio archive is expected to be made available as open content, along with the proceedings of the call to our community membership and the public at-large under our prevailing open IPR policy.     (3J)

Attendees     (4)