
Ontolog Forum

Ontolog Summit 2017: Summary of Tools, Processes, and Languages – Sessions and Tracks, February-May 2017. Compiled by Dr. Ravi Sharma, May 2017.

Focus: Relationships between AI and ontology

The tracks focus on three specific relationships:

– Track A: Using Automation and ML to Extract Knowledge and Improve Ontologies (Learning → Ontology)

– Track B: Using Background Knowledge to Improve Machine Learning Results (Ontology → Learning)

– Track C: Using Ontologies for Logical Reasoning (Ontology → Reasoning)



Introduction and Overview-Related Presentations

Track A Champion: Gary Berg-Cross

• Ontology engineering is iterative and spotty (progress across its activities and processes is non-uniform).

• There are bottlenecks and obstructions in ontology engineering and ontology development. Ref.: Oscar Corcho.

• Objective: how machine learning (ML) can generate knowledge bases to help develop ontologies, reduce noisy data to improve the quality of the developed ontologies, and harmonize ontologies away from dependence on the peculiarities of the datasets used.

• Tools mentioned: OntoLT, a Protégé-based tool for extracting concepts and relationships from text searches. OntoLearn has been experimented with successfully in several domains (art, tourism, economy and finance, web learning, interoperability).

• Ontology learning is about building domain ontologies through the automatic extraction of concepts and relationships, organized as a layer cake of ontological primitives. Book reference: Paul Buitelaar, Philipp Cimiano & Bernardo Magnini (editors). Also linguistic methods, by Ícaro Medeiros (2009). Concept learning: Jens Lehmann et al., whose algorithms, decision trees, and operator factors are all used to reduce work in ontology engineering. NELL, the Never-Ending Language Learner (2014), uses semi-supervised bootstrap learning to read, reason, and extend an ontology. Discourse Representation Theory (DRT): Semantic Technology Laboratory, Valentina Presutti et al., using frame semantics and design patterns.
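
As a concrete illustration of the concept-and-relationship extraction these linguistic methods perform, here is a minimal sketch of Hearst-style lexico-syntactic pattern matching ("X such as Y and Z"); the pattern and example sentence are illustrative assumptions, not material from the summit.

```python
# A minimal sketch of Hearst-style hyponym extraction, one of the
# lexico-syntactic techniques used in ontology learning from text.
import re

SUCH_AS = re.compile(
    r"(\w+(?: \w+)?),?\s+such as\s+((?:\w+(?:,\s*)?)+(?:\s*and\s+\w+)?)"
)

def extract_isa_pairs(text):
    """Return (hyponym, hypernym) pairs found by the 'such as' pattern."""
    pairs = []
    for m in SUCH_AS.finditer(text):
        hypernym = m.group(1)
        hyponyms = re.split(r",\s*|\s+and\s+", m.group(2))
        pairs += [(h, hypernym) for h in hyponyms if h]
    return pairs

text = "Financial instruments such as bonds, equities and derivatives are traded daily."
print(extract_isa_pairs(text))
# [('bonds', 'Financial instruments'), ('equities', ...), ('derivatives', ...)]
```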

Track B: Co-Champions Mike Bennett and Andrea Westerinen

• With machine learning as the focus, what are the ontology and non-ontology background-knowledge influences on ML?

• ML and NLP activities: how do they use formal ontologies, and how can we design ontologies that are useful to them?


• What are the non-ontology inputs (e.g., vocabularies, hierarchies, taxonomies, thesauri) in ML and NLP?

• Where are these ML and ontology inputs being used? In domains such as financial business, intelligence, text processing, and legal areas.


Track C: Co-Champions Donna Fritzsche and Ram D. Sriram

• Ontology interoperability ranges from low expressivity (e.g., taxonomy) at the syntactic level, through intermediate expressivity (e.g., thesaurus) at the structural level, to high expressivity (e.g., conceptual models and logical theory) at the semantic level, where first-order logic provides strong semantics.

• Tools and technologies follow the same progression (e.g., from ER models and XML Schemas to RDF/S, OWL DL, and FOL).

• Reasoning requires disambiguating terms across domain overlaps and handling the irreducible structure implied by derived information. It progresses from more elementary computational uses of ontologies, such as coordination (agents) and configuration (information management).

• The anatomy of an intelligent agent includes not only sensors, knowledge, and inference but also social and sentiment-related attributes.

• Processes and methods for knowledge acquisition range from visual (pictorial) and linguistic to virtual and algorithmic.

• Design engineering examples applicable to ontology inference relate to structural strategies, requirements, and use cases, including ecosystem requirements.

Overview-Related Presentations

• Paul Buitelaar: Ontology Learning. Similar to the layers mentioned above: the highest layer comprises axioms and logic, and descending layers include schemata, relations, and concept hierarchies, with synonyms and terms at the lowest layers of ontology learning. Multilingual concepts were then used to enhance ontology translation, applied in reverse for automatic translation from English into German and other languages.

• Concepts were demonstrated with graphical examples on Linked Open Data. The concept hierarchy examples were graphs showing triples and/or relationships among taxonomy elements.
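
To make the triple-graph examples concrete, the following minimal sketch builds a tiny SKOS concept hierarchy and queries its broader/narrower links with rdflib; the vocabulary and terms are invented for illustration, not drawn from the presentation.

```python
# A minimal sketch of a concept hierarchy as RDF triples, in the spirit of
# the Linked Open Data examples shown. The tiny SKOS taxonomy is invented.
from rdflib import Graph

ttl = """
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/> .

ex:Instrument a skos:Concept .
ex:Bond       a skos:Concept ; skos:broader ex:Instrument .
ex:Equity     a skos:Concept ; skos:broader ex:Instrument .
"""

g = Graph().parse(data=ttl, format="turtle")
q = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?narrower ?broader WHERE { ?narrower skos:broader ?broader . }
"""
for narrower, broader in g.query(q):
    print(narrower, "-- broader -->", broader)
```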

Track A: Session 1 Presentations

Estevam Hruschka: Overview of the Never-Ending Language Learner (NELL)

We need to build machines that learn over long periods, adding to their learning incrementally, and that attempt to understand the logic in users' queries and statements. NELL extracts facts from the web 24x7, with occasional human interaction; it has been operating at CMU since 2010 and keeps improving on things and relations (800+ ontology categories and relations). NELL has 2,500 functions that use noun-phrase (NP) context distributions over text, morphology, and web contexts. Multi-view semi-supervised functions and classes were discussed and used for building knowledge graphs. The NELL architecture was described in terms of NP categories, NP relations, rules, concept association, discovery of new relations, etc., leading to collections of learned beliefs that are continuously refined. NELL also extends ontologies and has addressed a multilingual learning approach by cross-learning the multilingual web.
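
The semi-supervised bootstrap idea behind NELL can be sketched in a few lines: seed instances yield extraction patterns, and recurring patterns promote new instances. The toy corpus, seeds, and fixed-width context window below are assumptions for illustration, not NELL's actual machinery.

```python
# A minimal sketch of NELL-style semi-supervised bootstrap learning:
# alternate between learning textual patterns from known category
# instances and promoting new instances matched by those patterns.
import re

corpus = [
    "Paris is a city in France.",
    "Berlin is a city in Germany.",
    "Madrid is a city in Spain.",
    "Oxygen is a gas found in air.",
]
seeds = {"Paris", "Berlin"}   # known instances of the category "city"

# 1. Learn right-hand contexts that follow at least two seed instances.
contexts = {}
for sent in corpus:
    for seed in seeds:
        if sent.startswith(seed):
            ctx = sent[len(seed):len(seed) + 13]   # short context window
            contexts[ctx] = contexts.get(ctx, 0) + 1
patterns = {c for c, n in contexts.items() if n >= 2}

# 2. Promote new instances that occur with a learned pattern.
promoted = set()
for sent in corpus:
    for pat in patterns:
        m = re.match(r"(\w+)" + re.escape(pat), sent)
        if m and m.group(1) not in seeds:
            promoted.add(m.group(1))

print(patterns, promoted)   # learns " is a city in" and promotes 'Madrid'
```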

• Valentina Presutti: Fully described FRED, an NLP extraction tool available via RESTful services or a Python library, and generic enough for sub-domain or general extraction of relationships from the Semantic Web by crawling. It uses OWL semantics plus n-ary patterns to create graphs showing entities and relationships. FRED was compared with other NLP engines, such as Stanford's dependency parsing and Babelfy's linking, and it uses Semantic Web resources (e.g., DOLCE-Zero and Schema.org). FRED's features, which turn multiple NLP tasks into knowledge graphs (KGs), include:

o From DRS to RDF/OWL
o Tense, modality, and negation
o Compositional semantics, taxonomy induction, and quality representation
o Periphrastic relations (such as of, with, for)

o Additional NLP integrations, multilingualism, text graph annotations, etc.

o FRED implementation: the capability to parse with and use the multiple NLP engines mentioned above; frame detection comparable to the Semafor tool

o Comparability with other tools

o The final goal of FRED is to support natural language understanding, yet plenty of open issues and challenges remain (e.g., complex multiword extraction, semantics, type coercion to identify a hidden frame, etc.).
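
Since FRED is exposed via RESTful services, a call might look like the hedged sketch below; the endpoint URL, parameter names, and output format are indicative assumptions, and the live service may require registration or an API key, so consult the FRED documentation before use.

```python
# A hedged sketch of calling FRED's RESTful service to turn a sentence
# into RDF. The endpoint URL and parameters are assumptions, not a
# confirmed API contract; check the FRED documentation for the current
# interface and any required authentication.
import requests

FRED_ENDPOINT = "http://wit.istc.cnr.it/stlab-tools/fred"  # assumed URL

def fred_extract(sentence, fmt="application/rdf+xml"):
    resp = requests.get(
        FRED_ENDPOINT,
        params={"text": sentence},   # assumed parameter name
        headers={"Accept": fmt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text  # RDF/OWL graph of entities, frames, and relations

print(fred_extract("Motorola buys Symbol.")[:200])
```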

Alessandro Oltramari: From Machines that Learn to Machines that Know!

• Deep learning is complex, as it involves algorithms, conditional neural networks, and conditional machines for dynamic data flow. It can be made “tractable” by quantitative methods, namely correlating input and output data (metrics), and by semantic transparency, i.e., interlinking the input and output ontologies (e.g., for complex and visual activities).

• The CMU mind architecture has a dynamic cognitive engine that helps reach an output (event, description, environment) from a video input. Both activity and semantic aspects of visual pattern recognition are possible.

• IoT: robots (sensors, KRR, and actuators) will provide 40 ZB of data by 2020, and internal machine intelligence will need context, cognitive states, environment, and social inputs. A user-centric KB will balance IoT sensor data for predictive analyses, forecasting, and discovery from unstructured data. This would help various sectors, e.g., automotive, healthcare, and energy. Cognitive and emotional aspects relate to the internal ontology; the environment and sensors in IoT relate to the external ontology.

• There is a lot beyond ontology learning, but for perception and knowledge one needs an ontology for deep learning with machines.


Track A: Session 2 Presentations

Michael Yu (UCSD): Inferring the Structure and Function of a Cell

• Millions of biological measurements were used to infer the hierarchy of cell structure and thus create a data-driven gene ontology (GO); this knowledge of cell structure was then used to understand cell functions (in budding yeast).

• The data-driven gene ontology overlaps the existing GO but also discovers new subunits. It finds clusters of gene-gene interactions, refines the molecular understanding of autophagy (the cell's internal recycling of its own parts), and includes 23% new processes not previously in GO. It also addresses the central question of genetics: relating genotype to phenotype. For example, double gene deletions make cell growth slower or faster.

• A supervised ML framework was developed for relating genotypes to phenotypes. The “ontotype” is a multi-scale representation of the cell in between genotype and phenotype; it serves as a highly informative set of features for supervised learning of the phenotype. The hierarchical model's predictions of genetic interactions outperform non-hierarchical ones, with better predictions in nuclear-lumen DNA-repair studies.
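
The ontotype idea can be sketched compactly: gene disruptions are propagated up the ontology hierarchy to produce term-level counts that serve as features for a supervised model. The toy hierarchy, gene annotations, and term names below are illustrative assumptions, not the study's data.

```python
# A minimal sketch of the "ontotype": propagate gene disruptions onto
# ontology terms to form features for supervised phenotype prediction.
term_parents = {"dna_repair": "nucleus", "nucleus": "cell"}   # child -> parent
gene_terms = {"RAD51": "dna_repair", "RAD52": "dna_repair", "ACT1": "cell"}

def ontotype(deleted_genes):
    """Count disrupted genes at each term, propagated up the hierarchy."""
    counts = {}
    for gene in deleted_genes:
        term = gene_terms[gene]
        while term is not None:
            counts[term] = counts.get(term, 0) + 1
            term = term_parents.get(term)
    return counts

print(ontotype({"RAD51", "RAD52"}))
# {'dna_repair': 2, 'nucleus': 2, 'cell': 2} -> feature vector for a
# supervised model that predicts the phenotype (e.g., growth rate)
```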

Francesco Corcoglioniti: Frame-Based Ontology Population from Text with PIKES

• Ontology learning and population (from the Semantic Web, NLP, and KR) versus semantic frames (n-ary relations, role properties, and frame-based ontologies).

• PIKES is a knowledge extraction suite for populating frame-based ontologies. An intermediate mention layer uses the resource layer to extract the instance layer. The talk described linguistic feature extraction, knowledge distillation (including the use of background knowledge), and implementation details, along with how evaluations were carried out: for throughput and against gold standards, using Semafor, Mate-tools, and their combination. Comparisons with FRED (described by earlier presenters) were carried out, with PIKES nominal frames converted to FRED's binary relations. Benefits of PIKES include, for example, its two-phase decoupling.

• Related efforts:

o PreMOn: a lemon extension for predicate models
o KE4IR: knowledge extraction for IR
o KnowledgeStore: a store for PIKES data
o KEM: an RDF/OWL model for knowledge extraction (ongoing work)

Evangelos Pafilis: EXTRACT 2.0: Interactive Extraction of Environmental and Biomedical Contextual Information

• Metagenomics is an important branch based on direct environmental sampling; in this case, marine biology, which involves studying taxonomic hits, microbial diversity, gene/protein prediction, functional categorization, biochemical pathways, and molecular functions. Sampling, locations, sequencing, environment, and host organism are all important. Guidelines exist, e.g., from the Genomic Standards Consortium, for environmental descriptors and the Environment Ontology (EnvO). Search sources include in-house documents, literature, and web pages. Synonyms and terms (narrow, related) improve ranking, selection, and matching. EnvO does not support higher levels such as species, food, and negation. Output includes matches (terms and mappings).

• Large-scale analysis: EXTRACT is used to extract metadata interactively. Features include pop-ups for multiple entities and match parameters, including tagging and identifiers from EnvO. Dictionary- and NCBI-based species descriptions are used to relate records to environment, organism, tissue, and diseases. Uses of the tool for record annotation, and its interfaces, were described. Environment of Life was referenced.

Yet to come in Track A: ML and NLP in ontology; domains/terms, integration, overlap, tools, and visualization…


Track B: Session 1 Presentations

Simon Davidson: The Investigator's Toolkit – Deriving Immediate, Actionable Insights from Unstructured Data

• Why ontologies are used: to discover the needle in the haystack, extracting relevant information from multitudes of data; to merge information in real time from multiple languages; and to communicate across closed rooms by descending to common halls. An ontology is a hierarchical map of concepts and instances and of the relationships between instances.

• Functional features of the toolkit: NLP, entity extraction, ontological analysis, categorization, relationship analysis, and sentiment, all for viewing and analyzing insights. Additional uses include legal discovery and political campaigns. The toolkit, named PSONIFY, combines technology, semantics, and expert knowledge; regulation, technology, and semantics are its three pillars. Examples include workflow, a regulatory proof of concept with line-by-line interpretation, and a financial ontology.


Ken Baclawski: Combining Ontologies and Machine Learning to Improve Decision Making

• Motivation from John Sowa: “For intelligent systems, the cognitive cycle is more fundamental than any particular notation or algorithm.” “By integrating perception, learning, reasoning, and action, the cycle can reinvigorate AI research and development.”

• Ontology for decision making: situation awareness and the decision-making loop.

• Use cases: simple (healthcare, finance), multilevel (cloud services), and recursive (emergency).

• OODA described: the Observe, Orient, Decide, and Act loop architecture developed by U.S. Air Force Colonel John Boyd.

• The relevance of data, and processing it toward a goal, are part of situation awareness (SA), which is fundamental to decision making and is described by the perception of elements in an environment (including space and time), the elements' meanings, and their likely change in the near term. A Situation Awareness Ontology exists, along with its data. A situation is understood as a limited part of reality, described by relevant data; situations relate to other situations and are transformed by the decision process.

• KIDS is a Knowledge Intensive Data System based on OODA and on the categorization of data, and it includes reasoning for decision support. Its architecture was described in terms of Data (FIHD: Facts, Information, Hypotheses, and Directives) and Knowledge (CARE: Classification, Assessment, Resolution, and Enactment). Interaction with the world and reasoning were described, as was the CARE loop.
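
A minimal skeleton of such an OODA/CARE cycle is sketched below; the stub functions are placeholders standing in for KIDS's actual classification, assessment, resolution, and enactment components, and the sensor scenario is invented.

```python
# A minimal sketch of a CARE-style decision loop over an OODA cycle.
# The stubs are placeholders, not the KIDS implementation.
def classify(facts):            # Classification: facts -> information
    return [f for f in facts if "stuck" in f]

def assess(information):        # Assessment: information -> hypothesis
    return "sensor_stuck" if information else "nominal"

def resolve(hypothesis):        # Resolution: hypothesis -> directive
    return "recalibrate" if hypothesis == "sensor_stuck" else "continue"

def enact(directive):           # Enactment: act on the world
    print("enacting:", directive)

def care_loop(observe, cycles=3):
    for _ in range(cycles):
        facts = observe()                         # Observe
        enact(resolve(assess(classify(facts))))   # Orient, Decide, Act

care_loop(observe=lambda: ["pressure reading stuck at 99.9"])
```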

• The KIDS ontology, based on this framework and the situation theory ontology and integrated with the Provenance ontology, allows for flexibility and diversity. The KIDS architecture (hierarchy, properties, and commutative conditions) was described, and use cases and solutions with SA and control were shown. The ML technique is the Multivariate State Estimation Technique (MSET); decision making uses the Sequential Probability Ratio Test (SPRT). Visuals of MSET were provided for a stuck sensor; the approach was also applied to a customer service use case, and efforts toward a self-adaptive decision-making process were described. Sully's US Airways Flight 1549 landing on the Hudson River was analyzed as a rare decision-support emergency use case.
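
For the decision step, SPRT itself is compact enough to sketch: accumulate log-likelihood ratios over observations and stop as soon as a threshold is crossed. The Gaussian hypotheses and error rates here are illustrative choices, not those of the presented system.

```python
# A minimal sketch of the Sequential Probability Ratio Test (SPRT):
# accumulate the log-likelihood ratio of H1 vs. H0 and stop when it
# crosses a Wald threshold. Hypotheses and error rates are illustrative.
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)     # accept H1 above this
    lower = math.log(beta / (1 - alpha))     # accept H0 below this
    llr = 0.0
    for i, x in enumerate(samples, 1):
        # log [ N(x; mu1, sigma) / N(x; mu0, sigma) ]
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1 (anomaly)", i
        if llr <= lower:
            return "H0 (nominal)", i
    return "undecided", len(samples)

print(sprt([0.9, 1.2, 0.8, 1.1, 1.0, 0.95, 1.05]))  # drifts toward H1
```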

• Provenance, reasoning, and learning were described, and the roles of ML (in optimizing and detecting) and of ontologies (in reasoning, SA, and verification, e.g., of regulations) were enumerated.


Bryan Bell & Elisa Kendall: Leveraging FIBO with Semantic Analysis to Perform On-Boarding, Know-Your-Customer (KYC) and Customer-Due-Diligence (CDD)

• The evolution of language has always provided value; now machines can read and understand content through word disambiguation and semantic reasoning. Using an API, a Disambiguator, a cognitive computing capability aligned with FIBO, was created for KYC and CDD. Through morphological, grammatical, and logical analysis of text, plus semantic analysis via the Disambiguator, yielding an understanding of word forms, parts of speech, and their relationships to each other, the Cogito API can provide FIBO and related services: FIBO search, categorization, content clustering, targeted entity extraction, recommendation engines, expert-finder tools, web-site ad placement, self-help solutions, and social media analysis.


Tatiana Erekhinskaya: Converting Text into FIBO-Aligned Semantic Triples

• Described Lymba, a very powerful tool for text, entity, concept, and temporal awareness and for knowledge extraction. It works with unstructured data, also in combination with RDBMSs. It helps extract triples and attempts to construct ontologies from such processing and knowledge engineering; it also extracts information from social media. The steps and the types of items extracted were enumerated, from text to knowledge, together with applications.

• With FIBO as the use case, the tool was shown to perform semantic relationship extraction and discovery of contractual clauses, including currency, amounts, time constraints, items in contracts, etc. After entities, the other steps were:

o Semantic relations (26 basic types): agent, theme, instrument, location, etc.

o Custom relations and entities

o RDF/TriX representation of knowledge

The tool could also analyze compliance risk and money movements.
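
A minimal sketch of the final representation step, not Lymba's implementation: extracted relations expressed as RDF triples and serialized as TriX, the format named above. The contract relation instances are invented examples.

```python
# A minimal sketch of representing extracted semantic relations as RDF
# and serializing them as TriX with rdflib. The instances are invented.
from rdflib import ConjunctiveGraph, Literal, Namespace

EX = Namespace("http://example.org/contract#")
g = ConjunctiveGraph()
g.bind("ex", EX)

# Triples of the kind a relation extractor might emit: agent, theme, amount.
g.add((EX.payment1, EX.agent, EX.PartyA))
g.add((EX.payment1, EX.theme, EX.PartyB))
g.add((EX.payment1, EX.amount, Literal("10000 USD")))
g.add((EX.payment1, EX.deadline, Literal("2017-12-31")))

print(g.serialize(format="trix"))   # TriX serialization of the graph
```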


Track C: Session 1 Presentations

Donna Fritzsche and Ram D. Sriram: Introduction to the Reasoning and Ontologies Track

The theme is to discuss techniques developed for reasoning using various ontological foundations.

Anatomy of an Intelligent Agent: A View

Basic components: sensors, knowledge, inference (with machine learning), and actuators.

Additional attributes: feedback mechanisms, emotion/sentiment analysis, and social network.

Channels for Describing Knowledge from Engineering perspective:

Pictorial => Symbolic => Linguistic => Virtual => Algorithmic (e.g. Equations, solvers, computer algorithms)

The design process requires knowledge in the above categories: mostly pictorial for requirements, moving toward virtual and symbolic for detailed design.

Inference is largely based on symbolic knowledge, e.g., constraint satisfaction and uncertainties (Bayes, fuzzy, etc.).

The reasoning spectrum ranges from taxonomy to logical theory, as shown in the diagram.

Requirements for design include those that are ecosystem-driven (mappings and crosswalks) and those that are use-case- and agent-driven (learning increments over time, decision expression, graceful degradation, etc.).


Pascal Hitzler: Roles of Logical Axiomatizations for Ontologies

• Axioms and semantics were illustrated through classes and described in terms of assumptions, models, and logical consequence in monotonic logics, e.g., first-order predicate logic and description logic (OWL).

• Rich axiomatization helps disambiguation, e.g., of graph structure and location. Hitzler recommends rich axiomatization using scoped range and domain, while avoiding reuse of vocabularies, to reach better logical consequences after ontologies are merged; e.g., the Rule OWL (ROWL) plug-in for Protégé improved accuracy for hard sentences. The OWLAx plug-in for Protégé was also discussed.

• Monotonicity (FOL) is a first test of expressibility in OWL, but with dynamic feature enrichment, such as the addition of species, the monotonicity assumption fails. Non-monotonic additions to Protégé have been proposed, such as SHACL for shapes, etc.

• Inference is easier when instance-based on OWL profiles than when schema-based, which requires consistency and coherence in the ontologies.

• Role of axiomatizations: we are not yet exploiting the full power of axiomatization for reasoning and ontologies.
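
A hedged sketch of how an axiom yields a logical consequence after reasoning, using Owlready2: the classes and property are invented, and the plain domain/range axioms below stand in for the scoped variants recommended in the talk.

```python
# A minimal sketch of axioms producing entailments. Invented classes;
# plain domain/range axioms used for illustration. Running the reasoner
# (HermiT, bundled with Owlready2) requires Java on the PATH.
from owlready2 import *

onto = get_ontology("http://example.org/demo.owl")

with onto:
    class Person(Thing): pass
    class Building(Thing): pass
    class livesIn(ObjectProperty):
        domain = [Person]
        range = [Building]

    alice = Person("alice")
    home = Thing("home")          # type initially unknown
    alice.livesIn.append(home)

sync_reasoner()                    # classify with HermiT
print(home.is_a)                   # the range axiom entails home is a Building
```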



Jans Aasman: Cognitive Probability Graphs need an Ontology

• This presentation on probability graphs and ontologies covered probabilistic graph databases and their enrichment by feedback from machine learning over structured and unstructured data, and by inputs from knowledge (domain knowledge, LOD, ontologies, etc.) and probabilistic inferences. By putting everything into one distributed semantic graph, one realizes cognitive computing.

• Application examples included healthcare, eCommerce and brand protection, logistics, police intelligence, and tax fraud.

• Healthcare: Franz and Montefiore are building a cognitive computing platform for healthcare without a data mart, using 2.7 million patients' data over 10 years to create one healthcare analytics platform. With personalized healthcare at the true cost of care as the objective, a nine-layer cognitive health computing maturity diagram was shown; feedback from queries (SPARQL, SQL) flows back into the graph and provides traceability, which is especially important for healthcare and diagnostics.

• Structured patient data combined with complex integrated terminology allows queries to be simplified. Also described were odds ratios, association rules, and the division of the feature space into Voronoi cells using vector quantization and k-means clustering, i.e., partitioning the cluster space into prototype clusters.
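
Vector quantization with k-means is easy to sketch: each learned centroid is a prototype, and the nearest-centroid regions partition feature space into Voronoi cells. The random placeholder features below are an assumption, not the Montefiore data.

```python
# A minimal sketch (not the Franz/Montefiore pipeline) of vector
# quantization with k-means: each centroid is a prototype, and assigning
# a point to its nearest centroid places it in that centroid's Voronoi cell.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # placeholder patient feature vectors

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)
prototypes = kmeans.cluster_centers_     # one prototype per Voronoi cell
cells = kmeans.predict(X)                # quantization: nearest prototype
print(prototypes.shape, np.bincount(cells))
```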

• A use case presented as a ProofCheck summary for respiratory failure found many parametric associations with risk for the patients and could help diagnose and predict before ICU intubation of a patient would be required.

• There does not yet seem to be an ontology for analytical results in such healthcare scenarios. Requirements for such an ontology would include:

o Who (person, organization)
o Typing (do we need a taxonomy of types?)
o Why (do we need to provide a reason?)
o How (R scripts, methods, algorithms, queries)
o What (what data did we use: sample size, training sets; store all of it, or a description of how we got the data?)
o Results and outcomes depend on the type of analytics performed (odds ratio, lift, cluster, topological similarity, etc.)
o In a separate graph, what is the metadata for this graph?



Track C: Session 2 Presentations

Dr. Lise Getoor: Combining Statistics and Semantics to Turn Data into Knowledge

• Described Probabilistic Soft Logic (PSL), which combines statistics and semantics to create knowledge representation (KR) graphs using data and machine learning (ML). Improvements to KR from each alone, probabilistic or semantic, are possible, but their combination yields more useful knowledge graph identification (KGI).

• A PSL program = rules + an input database; these use predicates, atoms (random variables), rules, and sets (aggregates). Ontology alignment was discussed: comparing entities and relationships, determining the extent to which ontologies are similar (or not), and assigning probabilities. Formally, PSL uses hinge-loss Markov random fields, which make large-scale reasoning scalable. Getoor claims PSL makes inference fast and is flexible when applied to image segmentation, activity recognition, stance detection, sentiment analysis, document classification, drug-target prediction, latent social groups and trust, engagement modeling, ontology alignment, and knowledge graph identification (KGI).

• Using NELL (described in earlier tracks and sessions) and selecting relevant facts, knowledge graphs (KG) are inferred, and KGI is achieved through entity extraction, link prediction, collective classification, ontology constraints, and source uncertainties. Examples of disambiguation in extracted graphs to construct KGI were provided, as were examples of improvement in three KGs from combining statistics and semantics. Formal equations for KGI using ontology rules were provided.
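
The hinge-loss machinery of PSL can be illustrated with its Lukasiewicz "distance to satisfaction": a weighted rule is penalized to the extent its body's truth exceeds its head's. The rule, weight, and truth values below are invented for illustration.

```python
# A minimal sketch of the Lukasiewicz distance-to-satisfaction that PSL's
# hinge-loss MRFs penalize. Rule, weight, and truth values are invented.
def lukasiewicz_and(a, b):
    """Soft conjunction of two truth values in [0, 1]."""
    return max(0.0, a + b - 1.0)

def distance_to_satisfaction(body, head):
    """For a rule body -> head: satisfied when head >= body, else hinge."""
    return max(0.0, body - head)

# Rule: SimilarName(x, y) & SimilarAttrs(x, y) -> SameEntity(x, y), weight 2.0
weight = 2.0
body = lukasiewicz_and(0.9, 0.7)   # soft truth values of the body atoms
head = 0.4                          # current soft truth of the head atom
penalty = weight * distance_to_satisfaction(body, head)
print(penalty)   # inference lowers the total penalty over all ground rules
```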


Dr. Yolanda Gil: Reasoning about Scientific Knowledge with Workflow Constraints: Towards Automated Discovery from Data Repositories

• Yolanda Gil began with an example of human colorectal cancer proteomics (genomics), using it to illustrate how hypotheses are improved through workflows and provenance. The suggestion is to let machines automatically select candidate hypotheses, run them through a library of workflows, and improve or modify them iteratively. Current proteogenomic data types are so varied that ML can be effectively applied via workflows. Long-term goal: a human directs an automated intelligent system to explore hypotheses of interest. A typical template for discovery and validation includes a query, analytical workflows, meta-workflows (for confidence estimation, etc.), and refinement or a change to new hypotheses; generalization to scientific discovery was indicated, with reference to the work of ontology experts (Lenat and others). DISK: Automated Discovery of Scientific Knowledge. References on hypothesis vocabularies were provided.

• DISK ontologies for workflow and provenance are being developed on top of OPMW, the W3C PROV standard, and P-Plan for plan ontologies. The ontology-based DARPA WINGS workflow system was described, with reasoners and meta-workflow examples for human proteomics. Integrative studies of multi-source data are very rare: data resources are constantly growing, but studies are done once. Reproducibility and explanation are key when multiple collaborators participate.
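
A minimal, illustrative sketch (not the DISK implementation) of how provenance reasoning over such workflows might look: a small W3C PROV graph queried for which workflow activity generated a result and what data it used.

```python
# A minimal sketch of querying a W3C PROV graph with rdflib.
# The workflow run and dataset below are invented examples.
from rdflib import Graph

ttl = """
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/> .

ex:result1 a prov:Entity ;
    prov:wasGeneratedBy ex:workflowRun1 .
ex:workflowRun1 a prov:Activity ;
    prov:used ex:proteomicsData .
"""

g = Graph().parse(data=ttl, format="turtle")
q = """
PREFIX prov: <http://www.w3.org/ns/prov#>
SELECT ?activity ?input WHERE {
  ?result prov:wasGeneratedBy ?activity .
  ?activity prov:used ?input .
}
"""
for activity, input_ in g.query(q):
    print(activity, "used", input_)   # trace a result back to its inputs
```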


Dr. Spencer Rugaber: Applications of Ontologies to Biologically Inspired Design

• Purpose: designing engineering systems by using analogies from the natural world. Georgia Tech tools and projects were used to address challenges such as different vocabularies, similar vs. exact matches, and relevance ranking:

o DOWSER, for software engineering, which constructs UML classes from textual descriptions using ontologies from UML and WordNet plus rule-based reasoning, and
o KA, for artificial intelligence, using SBF ontologies (Structure-Behavior-Function notation for mechanical designs).

• Approach: domain-independent ontologies for structures, functions, and behaviors; mapping of biosystem vocabularies onto them; document analyses; SBF models; analogical reasoning; and subsumption.

• IBID: an interactive tool for Intelligent Biologically Inspired Design that showed value-added results.

• Extensions are planned for the finance and cancer areas. Future work will address limitations of UML and SBF as target notations, the use of natural language instead of controlled vocabularies, and question answering within the document.

NEXT:

Ontology Summit 2017: Symposium Keynote Presentations

• Tools, processes, and ontology domains referenced in Ontology Summit 2017