Actions

Ontolog Forum

Session: Track B Session 1
Duration: 1.5 hours
Date/Time: Mar 15 2017 16:30 GMT (9:30am PDT / 12:30pm EDT / 4:30pm GMT / 5:30pm CET)
Conveners: MikeBennett and AndreaWesterinen

Ontology Summit 2017 Track B Session 1

Note that Daylight Saving Time is in effect in the US, but Europe is still on standard time.

Improving Machine Learning Using Background Knowledge

Video Teleconference: https://bluejeans.com/768423137

Meeting ID: 768423137

Chat room: http://bit.ly/2lRq4h5

Please use the chat room above. Do not use the video teleconference chat, which is only for communicating with the moderator.

When you use the Video Conference URL above, you will be given the choice of using the computer audio or using your own telephone. Some attendees had difficulties when using the computer audio choice. If this happens to you, please leave the meeting and reenter it using the telephone choice. You will be given a telephone number to call along with an access code.


Proceedings

[12:11] TerryLongstreth: Testing use of 'Chrome' to extend the life of my Windows XP system

[12:15] TerryLongstreth: Seems to support Bluejeans pretty well

[12:36] MikeBennett: My other machine is really not liking the BlueJeans install

[12:38] KenBaclawski: @MikeBennett: In my experience, google-chrome is the best for BlueJeans

[12:44] ToddSchneider: Building an ontology, as an engineered artifact, requires requirements.

[12:46] MikeBennett: Slides are changing fine here, there might be a lag.

[12:48] AndreaWesterinen: @Mike, if I missed any highlights, please chime in here in the chat. Thanks!

[12:49] MikeBennett: @Andrea Thanks - you covered it great!

[12:54] ToddSchneider: Simon, if not covered in your slides, could you tell us something about the ontological analysis used?

[13:00] ravisharma: who is speaking?

[13:01] AndreaWesterinen: @Ravi Simon Davidson is speaking

[13:02] ravisharma: thanks

[13:05] MikeBennett: Looking at the demo - we should give this to Louise Mensch, who is doing a similar thing for Trump/Putin as the demo shows for Obama, i.e. who is linked to whom. Just for curiosity, not to be political :)

[13:06] David Hay: What is the source of all of these documents?

[13:10] Simon Davidson: Hi David, these are emails and tweets from an Enron data set and a Sony WikiLeaks data set

[13:13] Mark Underwood: PSA FYI Twitter @ontologysummit

[13:14] ravisharma: what happens when there is a gap in achieving situation awareness?

[13:16] Simon Davidson: Hi Todd... it might be a bit lengthy, but let me give you an answer.

[13:18] Simon Davidson: Ontology

The core of the technology is a patented (in the US and Israel) method for integrating statistical NLP algorithms with ontology-based input to imitate the processes of human reading of texts. This method, Artificial Intuition (cf. Daniel Kahneman's "fast thinking"), uses algorithms that apply to the processed texts the combined knowledge of seasoned subject matter experts regarding texts of the same domain used in training.

One of the major difficulties in natural language processing is polysemy: words, idioms, quotations and phrases that have different meanings in different contexts. For example, whether "Waterloo" means a place in Belgium, a metaphor for a final defeat, a London railway station, or a song by ABBA depends on the context and the source. A verse in the Quran may mean one thing to a moderate Muslim and the exact opposite to a radical.

A human reaches intuitive conclusions - even by perfunctory reading - regarding the authorship and intent of a given text, subconsciously inferring them from previous experience with similar texts or from extra-linguistic knowledge relevant to the text. Then as he accumulates more information through other features (statements, spelling, and references) in the text, he either strengthens his confidence in the initial interpretation or changes it.

Our technology extracts such implicit meaning from a text - the hermeneutics of the text. It employs the relationship between lexical instances in the text and an ontology: a graph of unique, language-independent concepts and entities that defines the precise meaning and features of each element and maps the semantic relationships between them.

The ontology and lexicon, including user-generated watch lists and packages, are structured as follows (a minimal data sketch appears after the list):

A. The ontology is a hierarchical structure of concepts, sub-concepts and specific instances of ideas, entities, etc. Each instance can include cross-references to other entities that are used by the system to define the semantic affinities between entities.

B. The lexicon contains language elements of the source language that are necessary for analysis of structured and unstructured inputs. The lexicon entries are annotated with additional linguistic information and linked to the ontology either by default links or by rules that determine to which ontology entity they are to be linked and under which circumstances.
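(For illustration only - not IntuScan's actual data model, which is not public - the two structures above might be sketched in Python as follows; all class, field and instance names are hypothetical.)

  from dataclasses import dataclass, field
  from typing import Dict, List, Optional

  @dataclass
  class Concept:
      """A. A node in the hierarchical ontology."""
      name: str
      parent: Optional[str] = None                          # sub-concept relation
      cross_refs: List[str] = field(default_factory=list)   # semantic affinities

  @dataclass
  class LexiconEntry:
      """B. A language-specific surface form linked to the ontology."""
      surface: str
      pos: str                       # linguistic annotation, e.g. part of speech
      default_link: str              # default ontology concept
      context_rules: Dict[str, str] = field(default_factory=dict)  # context -> concept

  ontology: Dict[str, Concept] = {
      "Defeat": Concept("Defeat", parent="Event"),
      "BattleOfWaterloo": Concept("BattleOfWaterloo", parent="Battle",
                                  cross_refs=["Defeat"]),
  }
  lexicon: Dict[str, LexiconEntry] = {
      "waterloo": LexiconEntry("waterloo", pos="NOUN",
                               default_link="BattleOfWaterloo",
                               context_rules={"railway": "WaterlooStation"}),
  }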

The platform identifies statements in the texts that correspond to meaningful concepts in the ontology. These statements may be ostensibly ambiguous and are disambiguated by the system based on their context. IntuScan then builds a digest of unambiguous statements in the text based on the ontology; this digest serves as a model for further extraction of meaning through statistical categorization. IntuScan also extracts "ideas" from the text: specific concepts that do not represent a named entity but an action or event that may be important for the user. An idea may be a political, business, social, religious or cultural idea that is expressed by a set of statements, frequently not transparent even to the uninitiated human reader. The statement is then linked to the ontological instance that reflects its meaning. As in the example of Waterloo: the instance Battle of Waterloo is linked to the concept defeat (even though one man's defeat is another man's victory, Waterloo is not used as a figure of speech for victory, whereas, in English, the Battle of Trafalgar could be).
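(A toy sketch of this kind of context-based disambiguation, using the Waterloo example; the sense inventory and cue words are invented for illustration - the real system links lexicon entries to ontology instances via rules.)

  SENSES = {
      "BattleOfWaterloo (defeat)": {"defeat", "battle", "final", "napoleon"},
      "Waterloo, Belgium":         {"belgium", "town", "near", "brussels"},
      "Waterloo station, London":  {"station", "train", "platform", "london"},
      "Waterloo (ABBA song)":      {"song", "abba", "album", "sang"},
  }

  def disambiguate(text: str) -> str:
      """Pick the ontology sense whose contextual cues best overlap the text."""
      tokens = set(text.lower().split())
      return max(SENSES, key=lambda sense: len(SENSES[sense] & tokens))

  print(disambiguate("he met his waterloo a final crushing defeat"))
  # -> "BattleOfWaterloo (defeat)"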

The platform categorizes each document according to clusters of categories: type of document, priority (risk) level, ideological affiliation and domain. In addition, it identifies topics within the domain. The topic identification algorithm is based on statistical machine-learning models adapted for domain-specific topics. The statistical algorithm (SVM or CRF) takes a set of texts that have been pre-determined by the user as belonging to a given category and creates a vector that describes the textual features of those texts. Then, when a new text is processed, a model of the new text is created based on its features and compared with the model of the large batch of texts. The closer it is to the category vector (e.g., the closer the score is to 1), the more likely it is that it belongs to that category. The methods differ in the type of statistical algorithm used, supervised versus unsupervised learning, the amount of pre-tagged data necessary to create a model, etc., and can behave very differently depending on those variants. The topic identification can be customised by taking documents that have been processed and turning their tag set into the basis for a new categorisation model.
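(As an illustration of the vector-comparison step - not IntuScan's actual SVM/CRF pipeline, whose features and training data are proprietary - a minimal supervised categorization sketch in Python with scikit-learn; the toy texts and labels are invented.)

  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.svm import LinearSVC

  # Texts pre-determined by the user as belonging to given categories
  # (toy data; a real model needs a large pre-tagged batch).
  train_texts = [
      "quarterly earnings beat expectations",
      "the board approved the merger terms",
      "protest march planned in the capital",
      "the candidate's rally drew large crowds",
  ]
  train_labels = ["business", "business", "political", "political"]

  vectorizer = TfidfVectorizer()            # each text -> feature vector
  X = vectorizer.fit_transform(train_texts)
  model = LinearSVC().fit(X, train_labels)  # supervised learning step

  # A new text is vectorized the same way; the decision score plays the
  # role of the "closeness to the category vector" described above.
  new_doc = vectorizer.transform(["the board approved the acquisition"])
  print(model.predict(new_doc), model.decision_function(new_doc))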

Enhancement of the ontology is done through our Content Manager, which allows the user to add (but not delete) ontological concepts. Once this is done, the system identifies those concepts in processed texts. Usually the time necessary to tweak an ontology for a specific case is no more than a few days to a week.
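(The add-only constraint could be enforced with something as simple as the following hypothetical API; the actual Content Manager is a user-facing tool, not this code.)

  def add_concept(ontology: dict, name: str, parent: str) -> None:
      """Add a user-defined concept; existing concepts cannot be deleted or replaced."""
      if name in ontology:
          raise ValueError(f"'{name}' already exists and may not be overwritten")
      if parent not in ontology:
          raise ValueError(f"unknown parent concept '{parent}'")
      ontology[name] = {"parent": parent}

  onto = {"Event": {"parent": None}}
  add_concept(onto, "HostileTakeover", parent="Event")
  # Newly added concepts are then matched in subsequently processed texts.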

Obviously, no documents or conclusions are passed on by a user - internally or to a court/regulator - without a legal or HR team having read them. The categorisation is therefore a support mechanism that allows the legal team to address the relevant documents at an earlier stage in the process. The user can explain the process to the court as elaborated above; however, the process cannot serve as legal proof in lieu of the actual document. Does this answer your question?

[13:22] David Hay: Sorry I missed it: what does "KIDS" stand for?

[13:23] MikeBennett: The Knowledge Intensive Data System

[13:24] ToddSchneider: Simon, yes. Thank you. So your tool/capability does not use things like rigidity or ontological commitment as part of your ontological analysis?

[13:24] Jose Parente de Oliveira: Do you use an ontology of the types of entities involved in situations?

[13:25] anonymous1 morphed into Bill DeSmedt

[13:25] ravisharma: hierarchy is more understandable for supervised learning, but it is not so straightforward to understand relationships for clustered or unsupervised

[13:43] Jim Disbrow: Yes, I enjoyed the presentation, thanks.

[13:44] ravisharma: Ken, pretty comprehensive talk.

[13:45] ChristiKapp: Didn't the pilots just apply a previously learned pattern (routine) to a new situation? Per the neurological loop at the core of habit, instead of learning.

[13:45] Simon Davidson: Jose - we absolutely do: an entity to us is the TYPE - a person versus a corporation versus a place versus a theatre

[13:45] BobbinTeegarden: Ken: loved the fractal nature of the architecture: loops within loops within... itself a loop.

[13:45] AndreaWesterinen: @Jose ... Situation Theory Ontology is not OWL-DL. It has more meta-levels, which require OWL Full or a more general framework like Common Logic. One then has the possibility to talk about the ontology as data that has to be manipulated.

[13:45] Jose Parente de Oliveira: Ok for me

[13:46] ravisharma: how do you handle situation awareness when the information to reach awareness is not complete?

[13:47] Simon Davidson: Todd - I'm afraid I do not know the term rigidity... could you describe it? We do run statistical models which measure tolerance and relevance - we use this in predictive coding (supervised learning)

[13:47] AndreaWesterinen: @Ravi ... Good point. That is why you need a cycle. If you don't have full awareness, you try to cycle to keep "observing" (find more facts, ask questions of the user, ...).

[13:48] ravisharma: are there any insights into gaps to be filled, as decisions cannot wait?

[13:48] BobbinTeegarden: So is the meaning of things in the 'context' (the situation here), and do refining loops present more meaning?

[13:49] ravisharma: any aids such as extrapolations, algorithms

[13:50] AndreaWesterinen: @Ravi Maybe no insights, but the process can highlight where to improve tooling, sensors, logic, ...

[13:51] AndreaWesterinen: Just to be clear, I am transcribing Ken's words and not answering the questions independently. :-)

[13:51] ToddSchneider: Simon, the notion of rigidity was introduced in papers on OntoClean by Nicola Guarino and Christopher A. Welty, e.g., Proceedings of the 12th Int. Conf. on Knowledge Engineering and Knowledge Management, Lecture Notes in Computer Science, Springer Verlag, 2000

[13:52] AndreaWesterinen: @Mike asks what sort of ontologies work best for learning

[13:52] ravisharma: Is ontology not difficult for unsupervised learning?

[13:53] Jim Disbrow: First Order Logic seems to have been incorporated everywhere. But... if and until reflexive operators within active relationships are incorporated as a segment of the ontological structures - as a part of 3rd Order Logic - none of the ontologies will function as well as they might. When might this happen?

[13:53] AndreaWesterinen: @Simon responded with his own question on how to distinguish between ontology SMEs and the users who need/enrich the ontology.

[13:54] AndreaWesterinen: @Ken responded with looking back at Track A to "learn" an ontology - maybe there is a cycle - that the ontology is produced by ML and then enhanced by ML.

[13:55] ravisharma: Jim - great observation; it is of course hard for someone like me to get beyond first order

[14:01] AndreaWesterinen: @Gary asked if one could get around the fact that relevance reasoning is more at a meta-level (second-level properties). Perhaps by using a different/separate system for the relevance reasoning and its "individuals", where these become first-level properties in this "different" system.

[14:01] ravisharma: Is ontology not difficult for unsupervised learning? This is why my hand is up

[14:02] ravisharma: for clustering

[14:02] ravisharma: constructing ontology is easier for supervised learning

[14:02] ToddSchneider: To Simon's quandary, there should be 'layers' of ontologies that represent different levels of 'detail' (i.e., the more detail constrains interpretation). Also the way the ontologies are used to mitigate 'differences' in understanding plays a role (e.g., system design).

[14:02] Alan Rector: In the issues around OWL vs other representations, do you distinguish the different sorts of background knowledge - whether to be learned or used? OWL's open world reasoning...

[14:03] ravisharma: Ken, is it not harder to associate hierarchies and other relations in the training set, but for clusters relations are harder to establish?

[14:05] ravisharma: i meant to say it is harder for clustering?

[14:05] ravisharma: for training set it is easier?

[14:05] Alan Rector: OWL-DL's open world semantics is closely related to modal logic with everything preceded by 'necessary' - whether it is possible to build a set-theoretic model that conforms to the axioms. This is very close to saying that truth and falsity in DLs is truth or falsity in "all possible worlds" as opposed to facts about this particular world.

[14:06] Alan Rector: On the issue of making it comprehensible to ordinary non-knowledge engineers: we have always found it necessary to have an "intermediate representation" overseen by the experts, accompanied by translation patterns/grammar for known patterns and reference to the knowledge engineers for novel ones.

[14:07] ravisharma: clusters are natural groupings of data, but associating meaning to a cluster is harder?

[14:08] ravisharma: let alone relations among clusters, as some of the clusters are not always associable with the entities that are to be mapped to them?

[14:09] Jim Disbrow: OWL used to refer back to RDF for its relationship definitions. RDF never had the capability to express reflexivity in its subject-operator-object protocol. This is sufficient for First Order but not for 3rd Order.

[14:10] ravisharma: Ken - I will email this to you at more length, but I thought ontologies can be more easily constructed with supervised learning, compared to clustering?

[14:11] Alan Rector: In the biomedical world, many concepts change; many are controversial; definitions have to be argued about. As the understanding of a disease changes, its classification may change.

[14:13] BobbinTeegarden: Is sideways 'horizontal' clustering just (re)adjusting knowledge contexts, vs vertical levels of abstraction (fractally) more adjusting depth of knowledge? And in reality does it happen in both directions at once?

[14:14] MikeBennett: @Bobbin good idea - the vertical would include introducing partitions such as the form versus function example we just talked about for landing places.

[14:15] AndreaWesterinen: My apologies, but I will have to leave the call to attend a meeting.

[14:17] MikeBennett: Thanks Andrea!

[14:19] MikeBennett: Ken describes the use of different kinds of reasoning e.g. abductive versus inductive - not limiting things to the DL world.

[14:24] ToddSchneider: One method for representation is to assume all (possible) relations exist and then apply application context or domain to select the 'most' relevant.

[14:25] BobbinTeegarden: Alan, are you implying that human knowledge must complete any ontology we create? Almost a Peter Senge concept - that what happens in an ontology is only 20% of what is really in context?

[14:26] Alan Rector: I'll try and move this to the blog.

[14:28] MikeBennett: http://ontologforum.org/index.php/Blog:Improving_Machine_Learning_using_Background_Knowledge

[14:28] MikeBennett: is the blog page for Track B

[14:30] ravisharma: thanks folks

[14:31] Jim Disbrow: thanks folks

[14:32] Alan Rector: Currently it is four hours from EDT to London time and five hours to Continental Europe.

Attendees

Resources

Audio Recording

Previous Meetings
