Blog:Improving Machine Learning using Background Knowledge

From OntologPSMW
Revision as of 00:05, 14 March 2017


Purpose

Use of Ontologies to Improve Machine Learning Techniques and Results

Organized by Mike Bennett and Andrea Westerinen

Machine Learning (ML) is based on defining and using mathematical models to perform tasks, predict outcomes, make recommendations, and more. Initial models can be specified by a data scientist, and/or constructed through combinations of supervised and unsupervised learning and pattern analysis. However, it has been noted that if no background knowledge is employed, the results may not be understandable [1]. Background knowledge can also improve the quality of machine learning results by using reasoning techniques to select learning models, clean data, or improve data selection (reducing large, noisy data sets to manageable, focused ones).
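As a minimal sketch of the data-selection idea above, the snippet below uses a tiny hand-written subclass taxonomy as background knowledge to focus a noisy record set on one category before training. The taxonomy, record structure, and labels are all invented for illustration; they are not part of the Summit materials or any particular ontology.

```python
# Hypothetical background knowledge: a small subclass hierarchy,
# mapping each type to its parent type (invented for illustration).
TAXONOMY = {
    "bond": "debt_instrument",
    "loan": "debt_instrument",
    "equity": "share",
    "debt_instrument": "financial_instrument",
    "share": "financial_instrument",
}

def is_a(label, target):
    """Walk the taxonomy upward to test whether `label` falls under `target`."""
    while label is not None:
        if label == target:
            return True
        label = TAXONOMY.get(label)  # None once we leave the taxonomy
    return False

def focus(records, target):
    """Keep only records whose type falls under `target`; drops noise
    the taxonomy knows nothing about."""
    return [r for r in records if is_a(r["type"], target)]

records = [
    {"id": 1, "type": "bond"},
    {"id": 2, "type": "equity"},
    {"id": 3, "type": "recipe"},   # noise: unknown to the taxonomy
    {"id": 4, "type": "loan"},
]

debt_only = focus(records, "debt_instrument")
print([r["id"] for r in debt_only])  # → [1, 4]
```

In a real pipeline the taxonomy would come from an ontology and the reasoning step from a reasoner, but the shape of the operation is the same: background knowledge turns a large, noisy set into a focused one before the learning model ever sees it.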

The objective of this Ontology Summit 2017 track is to explore the problem space by way of a couple of use cases, drawn from the financial space among others, and to look at possible architectures for using ontologies, vocabularies, or other resources in processing natural language text. We will consider the kinds of ontologies that would be needed for the example use cases, the challenges in using multiple source ontologies, and some possible ground rules for what kinds of ontologies or vocabularies would be needed. We will also explore other aspects of an ontology-driven natural language architecture, such as the application of semantics to neural learning functionality and the possible role of statistical analysis in this. A recurring question in this and other comparable problem spaces is: where does the human fit in the loop, and what do they do? This will set the scene for the second Track B session on April 12, when we hope to have some examples of these architectures put into practice.
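One small step of such an ontology-driven natural language architecture can be sketched as follows: a vocabulary maps surface terms to ontology classes, and any sentence the vocabulary cannot annotate is queued for human review, which is one concrete answer to the human-in-the-loop question. The vocabulary entries and the `fibo:`-style class names are assumptions made up for this sketch, not actual identifiers from any source ontology.

```python
# Hypothetical term-to-class vocabulary; the terms and class names
# are illustrative only, loosely styled after financial-ontology IDs.
VOCABULARY = {
    "interest rate swap": "fibo:Swap",
    "counterparty": "fibo:Counterparty",
    "collateral": "fibo:Collateral",
}

def annotate(sentence):
    """Return (term, class) pairs found in the sentence, or None when
    nothing matched and a human should review the sentence."""
    lowered = sentence.lower()
    found = [(term, cls) for term, cls in VOCABULARY.items()
             if term in lowered]
    return found or None

sentences = [
    "The counterparty posted collateral against the interest rate swap.",
    "Quarterly totals were restated.",   # no vocabulary term matches
]
for s in sentences:
    tags = annotate(s)
    if tags is None:
        print("REVIEW:", s)              # human-in-the-loop hook
    else:
        print(sorted(tags))
```

A production pipeline would replace the literal-substring lookup with proper tokenization and entity linking against the full ontology, but the division of labor is the point: the ontology supplies the semantics, and the human handles what the ontology does not yet cover.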

References

Buitelaar, Paul, Philipp Cimiano, and Bernardo Magnini. "Ontology learning from text: An overview." Ontology learning from text: Methods, evaluation and applications 123 (2005): 3-12.
