mcodescu opened 9 years ago
The task is to write a method. The input is an ontology (represented in our data structure); the output is a file that can be read as input by the monster renderer.
The input file won't be an arbitrary ontology, but rather look like https://github.com/ConceptualBlending/conceptual_blending_project/blob/master/documents/horseV3.owl which describes the anatomy of some animal/monster.
The output (= input format for the renderer) format is explained here https://github.com/ConceptualBlending/monster_render_system under the heading Markup Files. Examples are here https://github.com/ConceptualBlending/monster_render_system/tree/release/MonsterRenderer/Medusa/Documents/MonsterRenderer/MonsterMarkup
The output consists of two parts: the definition part and the relation part. For this purpose you need to create two lists.
Here is a rough outline of an approach that I would take:

1. Create a list of all pairs (p1, p2) such that "Individual: p1 Facts: meets p2" is a sentence in the ontology (I'll call these (p1, p2) meets-pairs).
2. For each meets-pair (p1, p2):
   - 2a. Search the ontology for the sentence patterns "Individual: p1 Types: PT1", "Individual: p2 Types: PT2", "Individual: i1 Facts: has_fiat_boundary p1" and "Individual: i2 Facts: has_fiat_boundary p2". If you don't find matching sentences and, thus, values for all of the variables PT1, PT2, i1, i2, create a warning and skip to the next meets-pair.
   - 2b. For i1 and i2, search the ontology for the sentence patterns "Individual: i1 Types: T1" and "Individual: i2 Types: T2". If you don't find matching sentences and, thus, values for T1 and T2, create a warning and skip to the next meets-pair.
   - 2c. Add (i1, T1) and (i2, T2) to DefList, and add (i1, PT1, i2, PT2) to RelList.
3. Generate the output text file based on DefList and RelList.
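The outline above can be sketched roughly as follows. This is only an illustration, assuming the ontology has already been parsed into a flat array of sentence strings; every method and variable name here (`find_one`, `extract_lists`, etc.) is my own invention, not code from the repository.

```ruby
# Return the first capture group of the first sentence matching regex, or nil.
def find_one(sentences, regex)
  sentences.each do |s|
    m = regex.match(s)
    return m[1] if m
  end
  nil
end

def extract_lists(sentences)
  def_list = []
  rel_list = []

  # Step 1: collect all meets-pairs (p1, p2).
  meets_pairs = sentences.map { |s|
    m = /\AIndividual: (\S+) Facts: meets (\S+)\z/.match(s)
    m && [m[1], m[2]]
  }.compact

  meets_pairs.each do |p1, p2|
    # Step 2a: point types and the individuals owning the boundaries.
    pt1 = find_one(sentences, /\AIndividual: #{Regexp.escape(p1)} Types: (\S+)\z/)
    pt2 = find_one(sentences, /\AIndividual: #{Regexp.escape(p2)} Types: (\S+)\z/)
    i1  = find_one(sentences, /\AIndividual: (\S+) Facts: has_fiat_boundary #{Regexp.escape(p1)}\z/)
    i2  = find_one(sentences, /\AIndividual: (\S+) Facts: has_fiat_boundary #{Regexp.escape(p2)}\z/)
    if [pt1, pt2, i1, i2].any?(&:nil?)
      warn "incomplete pattern for meets-pair (#{p1}, #{p2}); skipping"
      next
    end

    # Step 2b: types of the owning individuals.
    t1 = find_one(sentences, /\AIndividual: #{Regexp.escape(i1)} Types: (\S+)\z/)
    t2 = find_one(sentences, /\AIndividual: #{Regexp.escape(i2)} Types: (\S+)\z/)
    if t1.nil? || t2.nil?
      warn "no types found for (#{i1}, #{i2}); skipping"
      next
    end

    # Step 2c: record definitions (|= avoids duplicates) and the relation.
    def_list |= [[i1, t1], [i2, t2]]
    rel_list << [i1, pt1, i2, pt2]
  end

  [def_list, rel_list]
end

# The meets-pair from horseV3, reduced to flat sentences for illustration:
sentences = [
  "Individual: h-nb Facts: meets t-nb",
  "Individual: h-nb Types: ProximalBoundary",
  "Individual: t-nb Types: TrunkNeckBoundary",
  "Individual: h Facts: has_fiat_boundary h-nb",
  "Individual: t Facts: has_fiat_boundary t-nb",
  "Individual: h Types: Horsehead",
  "Individual: t Types: HorseTrunk",
]
def_list, rel_list = extract_lists(sentences)
```

In a real implementation the sentence matching would of course run over our ontology data structure rather than plain strings, but the control flow (find the pattern, warn and skip on a miss) would be the same.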
For horseV3, if we just consider the meets-pair that is generated from "Individual: h-nb Facts: meets t-nb", the output should look something like this:
```json
{
  "Definitions": [
    { "Identifier": "h", "Type": "Horsehead" },
    { "Identifier": "t", "Type": "HorseTrunk" }
  ],
  "Relations": [
    {
      "Individual1": "h",
      "Point1": "ProximalBoundary",
      "Individual2": "t",
      "Point2": "TrunkNeckBoundary"
    }
  ]
}
```
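Step 3 (generating this file from DefList and RelList) is then just serialization. A minimal sketch, assuming the two lists have the shapes built in step 2c; the helper name `render_markup` is my own:

```ruby
require "json"

# Turn DefList [[id, type], ...] and RelList [[i1, pt1, i2, pt2], ...]
# into the Medusa markup format.
def render_markup(def_list, rel_list)
  JSON.pretty_generate(
    "Definitions" => def_list.map { |id, type|
      { "Identifier" => id, "Type" => type }
    },
    "Relations" => rel_list.map { |i1, p1, i2, p2|
      { "Individual1" => i1, "Point1" => p1,
        "Individual2" => i2, "Point2" => p2 }
    }
  )
end

markup = render_markup(
  [["h", "Horsehead"], ["t", "HorseTrunk"]],
  [["h", "ProximalBoundary", "t", "TrunkNeckBoundary"]]
)
puts markup
```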
Seems fine to me; this transformation should work.
Just one little note on the implications, which is not important for the transformation between OWL and the Medusa JSON input format:
The input file created above implies that Horsehead and HorseTrunk are items inside the repository, each assigned to an image file, where Horsehead contains at least one point ProximalBoundary and HorseTrunk contains at least one point TrunkNeckBoundary. Hopefully this is okay?!
To sum up: the repository has to contain all classes as items (e.g. HorseHoof and Horsehead, but also Horse) in order to have a robust system that responds with a combined image for every possible class/subclass definition. In the worst case this requires a complete Horse image, because there is no recursive definition of how to build a Horse if someone states inside the input file
```json
{ "Definitions": [ { "Identifier": "h", "Type": "Horse" }, ...
```
From the perspective of the renderer everything is fine as long as nobody requires a complete Horse image; if someone does require one, we have to provide an entire Horse image as well.
I just noticed that GitHub does not print text written in angle brackets, which made parts of my description of the approach unreadable. I will fix it now.
@pinnecke
Yes, sure. We need to link up the anatomical terms (and their boundaries) with the repository.
Hi Fabian, I have created the render method in the file "Render.rb" and committed it to "https://github.com/ConceptualBlending/conceptual_blending_project/tree/master/Exercise/write_ontologies/Raj". The sample ontology with a meets-pair is taken as input. Please check and let me know if any changes are required.
I assign you for the moment, until we decide who should do this.