Good question, if it is possible at all. Options:
1) As a first thought, a set of rules describing which grammatical relation (function) dominates/governs which other grammatical relation(s) is necessary. Such information may be extracted from the question tests used when analysing a sentence to find out what modifies what, e.g.:
"Peter is buying a black coat."
"What is Peter buying?" -> "a black coat" is a dependency of "buying"
"Who is buying a black coat?" -> "Peter" is a dependency of "buying"
"What is the coat like?" -> "black" is a dependency of "coat"
Generalising these into rules like "the object in sentences with the same structure is an argument of the predicate" (or "the adjective phrase is an argument of the object") could produce a set of rules to guide the logic of a new tool that generates the dependencies to be stored in depolex (see the sketch below). The problem is that such analyses must be carried out manually for each sentence structure found in the ML corpus.
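A minimal sketch of what such a rule-driven tool could look like, in Python. The rule table, the parse representation and the output shape are all assumptions made up for illustration; only the idea that generalised question tests drive the dependency generation comes from these notes, and real depolex rows would of course need more columns.

```python
# Sketch of option 1): a hand-written rule table derived from question
# tests drives dependency generation. RULES, the constituent format and
# the (governor, dependent) output pairs are all hypothetical.

# One rule per governing function: which functions it may govern.
RULES = {
    "predicate": ["subject", "object"],   # "Who/What is ... buying?"
    "object": ["adjective_phrase"],       # "What is the coat like?"
}

def generate_dependencies(constituents):
    """constituents: list of (word, grammatical_function) pairs from
    one analysed sentence structure. Returns (governor, dependent)
    pairs as candidates for depolex."""
    deps = []
    for gov_word, gov_fn in constituents:
        governed = RULES.get(gov_fn, [])
        for dep_word, dep_fn in constituents:
            if dep_fn in governed:
                deps.append((gov_word, dep_word))
    return deps

# "Peter is buying a black coat."
sentence = [
    ("buying", "predicate"),
    ("Peter", "subject"),
    ("coat", "object"),
    ("black", "adjective_phrase"),
]
print(generate_dependencies(sentence))
# [('buying', 'Peter'), ('buying', 'coat'), ('coat', 'black')]
```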
2) Look up existing dependency treebanks for the language in question and convert them to depolex form.
3) Make use of https://universaldependencies.org (relation inventory: https://universaldependencies.org/u/dep/); a conversion sketch covering both this and 2) follows below.
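A sketch of how 2)/3) could start: reading a Universal Dependencies treebank in CoNLL-U format with the conllu Python package (pip install conllu) and emitting (governor, dependent, relation) triples as raw material. The triple shape is an assumption; mapping it onto the actual depolex schema is the open part.

```python
# Sketch of options 2)/3): extract head-dependent pairs from CoNLL-U
# input. The conllu package and its fields are real; the output format
# is only a placeholder for a depolex conversion.
from conllu import parse

SAMPLE = """\
1\tPeter\tPeter\tPROPN\t_\t_\t3\tnsubj\t_\t_
2\tis\tbe\tAUX\t_\t_\t3\taux\t_\t_
3\tbuying\tbuy\tVERB\t_\t_\t0\troot\t_\t_
4\ta\ta\tDET\t_\t_\t6\tdet\t_\t_
5\tblack\tblack\tADJ\t_\t_\t6\tamod\t_\t_
6\tcoat\tcoat\tNOUN\t_\t_\t3\tobj\t_\t_
"""

for sentence in parse(SAMPLE):
    by_id = {tok["id"]: tok for tok in sentence}
    for tok in sentence:
        if tok["head"]:  # head == 0 marks the root, which has no governor
            gov = by_id[tok["head"]]["form"]
            print((gov, tok["form"], tok["deprel"]))
# ('buying', 'Peter', 'nsubj'), ('buying', 'is', 'aux'),
# ('coat', 'a', 'det'), ('coat', 'black', 'amod'), ('buying', 'coat', 'obj')
```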
4) An intermediate solution may be to generate dependencies generalised by grammatical category (noun, adjective, etc.), which seems easier. That would at least allow interpreting far more sentences than manually crafted parse trees do. However, it would only be sufficient to parse sentences, not to act upon them (e.g. imperatives), as all words belonging to the same grammatical category would share the functor of that category's dependency. Nevertheless, it is sufficient for tagging constituents/phrases in indicative sentences, as such functors could add c_values to the analyses_deps, which also means that asking questions and information retrieval would work as well (see the sketch below). Afterthought: generating the dependencies this way would reflect the grammar 1:1. It may work when combined with 1), but then 1) on its own may be better.
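A sketch of the category-generalised dependencies of 4). CATEGORY_DEPS, the functor names and the row shape are all hypothetical; the point is that the output over-generates, which is exactly the shared-functor limitation described above.

```python
# Sketch of option 4): every word inherits the functor of its
# grammatical category, so the rules are few but imprecise.

# Which category governs which, with one shared functor per pair.
CATEGORY_DEPS = {
    ("verb", "noun"): "ARG",        # any noun may depend on any verb
    ("noun", "adjective"): "MOD",   # any adjective may depend on any noun
}

def category_dependencies(tagged_words):
    """tagged_words: list of (word, category) pairs.
    Returns (governor, dependent, functor) rows; enough to tag
    constituents (add c_values), not to distinguish word meanings."""
    rows = []
    for gov, gov_cat in tagged_words:
        for dep, dep_cat in tagged_words:
            functor = CATEGORY_DEPS.get((gov_cat, dep_cat))
            if functor:
                rows.append((gov, dep, functor))
    return rows

print(category_dependencies([
    ("buying", "verb"), ("Peter", "noun"),
    ("coat", "noun"), ("black", "adjective"),
]))
# [('buying', 'Peter', 'ARG'), ('buying', 'coat', 'ARG'),
#  ('Peter', 'black', 'MOD'), ('coat', 'black', 'MOD')]
# The spurious ('Peter', 'black', 'MOD') row shows the precision lost
# when every noun shares the same functor.
```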