Rostlab / LocText

Relation Extraction (RE) of: Proteins <--> Cell Compartments
https://www.tagtog.net/-corpora/LocText
Apache License 2.0

‼️Crush the Baseline #11

Closed juanmirocks closed 7 years ago

juanmirocks commented 8 years ago

Implement Features Anew

"DependencyFeatureGenerator::18_LD_bow_N_gram_LD_2_<treatment ~~ with>_[0]",  # 264
"DependencyFeatureGenerator::22_PD_bow_N_gram_PD_2_<treatment ~~ with>_[0]",  # 805


Sentence Features

All Tokens Features

Selected Tokens Features

Token features are extracted for tokens that are part of the entities and for tokens that are in a linear dependency with the entities.
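
As an illustration, a minimal sketch of what such per-token features could look like (the `Token` class, the feature names, and the lowercasing-as-stem shortcut are all made up here; the actual generators live in the project's feature-generator modules):

```python
from dataclasses import dataclass

@dataclass
class Token:
    word: str
    pos: str  # part-of-speech tag

def token_features(prefix, token):
    """Emit simple binary per-token features under a common prefix."""
    return {
        f"{prefix}_word_{token.word}": 1,
        f"{prefix}_word_lower_{token.word.lower()}": 1,  # stand-in for a real stemmer
        f"{prefix}_pos_{token.pos}": 1,
    }

# e.g. for the head tokens of the two entities of a candidate relation
features = {}
for tok in (Token("COP1", "NN"), Token("cytoplasmic", "JJ")):
    features.update(token_features("entity_token", tok))
```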

Linear Context and Dependency Chain

Token features are also extracted for tokens that are present in the linear and dependency contexts. A linear context of length 3 is considered, i.e., features are extracted for the 3 preceding and the 3 following tokens relative to the token under consideration.
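
A minimal sketch of that linear context, assuming a sentence is just a list of token strings (all names here are illustrative only):

```python
def linear_context(tokens, i, width=3):
    """Return the `width` tokens before and after position i."""
    return tokens[max(0, i - width):i], tokens[i + 1:i + 1 + width]

tokens = "COP1 is localized to the cytoplasmic compartment".split()
prev3, next3 = linear_context(tokens, tokens.index("cytoplasmic"))
# prev3 == ['localized', 'to', 'the'], next3 == ['compartment']
```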

A dependency chain of length 3 is considered for the dependency context. Both incoming and outgoing dependencies are considered for dependency-related features. For example, for an incoming dependency, features are extracted for its source/from token, and that token's own incoming and outgoing dependencies are followed in turn, up to a dependency depth of 3. In addition to the token features, the dependency edge types are also used when extracting dependency-chain features.
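
A sketch of that expansion, assuming the parse is available as a list of (head index, dependent index, label) edges; the breadth-first walk and the `in:`/`out:` labels are only meant to illustrate the idea, not the project's actual generator:

```python
def dependency_context(edges, start, max_depth=3):
    """Tokens reachable from `start` within `max_depth` dependency hops,
    following both incoming and outgoing edges, with the edge label used."""
    seen = {start: (0, None)}
    frontier = [start]
    for depth in range(1, max_depth + 1):
        next_frontier = []
        for head, dep, label in edges:
            for node in frontier:
                if head == node and dep not in seen:      # outgoing dependency
                    seen[dep] = (depth, "out:" + label)
                    next_frontier.append(dep)
                elif dep == node and head not in seen:    # incoming dependency
                    seen[head] = (depth, "in:" + label)
                    next_frontier.append(head)
        frontier = next_frontier
    return seen

# invented parse of "COP1 localizes to the cytoplasm" (indices are token positions)
edges = [(1, 0, "nsubj"), (1, 2, "prep"), (2, 4, "pobj"), (4, 3, "det")]
dependency_context(edges, start=0)
# {0: (0, None), 1: (1, 'in:nsubj'), 2: (2, 'out:prep'), 4: (3, 'out:pobj')}
```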

Dependency Features

Many features are extracted from the dependency graph. Using the Floyd-Warshall algorithm, the shortest path between a protein entity and a location entity in a potential PL relation is calculated. For the purpose of extracting the shortest path, an undirected graph of dependencies is considered. Figure 4.6 shows the shortest path from the protein entity "COP1" to the location entity "cytoplasmic"; the path is shown in bold. Note that the undirected graph is used only for extracting the shortest path: while extracting the features, the original direction of the edges in the shortest path is also taken into account. Most of the dependency-based features depend on this shortest path. The length of the shortest path contributes an integer-valued feature, in addition to a binary feature for each possible length.
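
A sketch of the path extraction, using networkx instead of a hand-rolled Floyd-Warshall (on an unweighted, undirected graph both give the same shortest path); the sentence, token indices and labels below are invented:

```python
import networkx as nx

G = nx.Graph()  # undirected, as described above
for head, dep, label in [(1, 0, "nsubj"), (1, 2, "prep"), (2, 4, "pobj"), (4, 3, "det")]:
    G.add_edge(head, dep, label=label)

protein_head, location_head = 0, 4        # e.g. head tokens of "COP1" and "cytoplasm"
path = nx.shortest_path(G, protein_head, location_head)   # [0, 1, 2, 4]
path_length = len(path) - 1                                # integer-valued feature
edge_labels = [G.edges[a, b]["label"] for a, b in zip(path, path[1:])]
```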

Token features are extracted for the terminal tokens of the shortest path, which are the head tokens of the entities. Some of the other path-related features include token features of the internal tokens of the path, features for every edge in the path, features for the internal edges of the path, etc.

N-gram Dependency Features

The protein and location entities are represented by their respective head tokens, so the shortest path between two entities is actually a shortest path between their head tokens. However, there need not be a single shortest path between two entities: there can be multiple paths with the same minimum distance. All such minimum-distance paths are computed and features are extracted for each of them.

For every minimum-distance path in that set, contiguous parts of the path are considered for N-gram features. For example, all windows of 2 consecutive tokens are considered for 2-gram features, all windows of 3 consecutive tokens for 3-gram features, and so on; 2-, 3- and 4-gram features are extracted from all such paths. These features also include the token features of the tokens in the corresponding window, the dependencies within the window, the directions of those dependencies, etc.
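
A sketch of those sliding n-gram windows over the minimum-distance paths, again with the invented graph from the sketch above (`nx.all_shortest_paths` enumerates all paths of minimum length):

```python
import networkx as nx

def path_ngrams(path, n):
    """All windows of n consecutive nodes along a path."""
    return [tuple(path[i:i + n]) for i in range(len(path) - n + 1)]

G = nx.Graph()  # same invented parse as in the previous sketch
for head, dep, label in [(1, 0, "nsubj"), (1, 2, "prep"), (2, 4, "pobj"), (4, 3, "det")]:
    G.add_edge(head, dep, label=label)

for p in nx.all_shortest_paths(G, source=0, target=4):
    for n in (2, 3, 4):
        for gram in path_ngrams(p, n):
            pass  # emit token, dependency and direction features for this window
```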

Other Features

Features Specific to DSModel

Some features are specific to the DSModel since it involves processing a pair of sentences as a combined sentence along with extra links. Some of those features include bag of words/stem/POS of tokens in individual sentences, binary tests like the presence of an entity in the first sentence or second sentence, etc. Importantly, the DSModel also uses the predictions of the SSModel. The features depending on SSModel predictions include binary tests like whether the entities considered in the potential relations have a predicted same-sentence relation or not. The intuition behind using same-sentence predictions is that entities that already have a same-sentence relation are unlikely to have a different-sentence relation in most cases.
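
A sketch of those binary tests, assuming the same-sentence predictions are available as a set of (protein id, location id) pairs; all names here are hypothetical and stand in for the actual DSModel feature generator:

```python
def ds_model_extra_features(protein_id, location_id, protein_sent_idx, location_sent_idx,
                            ss_predicted_pairs):
    return {
        "protein_in_first_sentence": int(protein_sent_idx == 0),
        "location_in_first_sentence": int(location_sent_idx == 0),
        # entities already linked by a predicted same-sentence relation are assumed
        # unlikely to also participate in a different-sentence relation
        "has_same_sentence_prediction": int((protein_id, location_id) in ss_predicted_pairs),
    }
```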



Reported:

August 4th 2016

November 10th 2016

logs/training/1478708868440160526/loctext_id1478708868440160526_m1_u0.30_c0.0085.log:Computation(precision=0.6157205240174672, precision_SE=0.002796955995324513, recall=0.6238938053097345, recall_SE=0.004141255122679181, f_measure=0.6197802197802198, f_measure_SE=0.0028700805881961096)

logs/training/1478708868440160526/loctext_id1478708868440160526_m1_u0.90_c0.0080.log:Computation(precision=0.6624365482233503, precision_SE=0.0028419058353217194, recall=0.5787139689578714, recall_SE=0.004008789406529725, f_measure=0.6177514792899409, f_measure_SE=0.002730519497460233)

juanmirocks commented 7 years ago

As of now we are >1 percentage point below Shrikant's reported performance. We will now continue with the DS model, and then (with the combined models) move on to feature selection & hyperparameter optimization.

@MadhukarSP @shpendm

juanmirocks commented 7 years ago

For now I'm gonna leave Run2's parameters as the defaults since it has much better precision. Run1 is really almost like StubSameSentenceRelationExtraction, predicting everything as positive except for 14 negative edges. Run2 predicts 133 edges as negative.

juanmirocks commented 7 years ago

Note:

In comparison:

That is, relna takes much more time for training --> maybe because relna produces many more features that are actually helpful and should perhaps be added to LocText

See #16

juanmirocks commented 7 years ago

Ponder:

Let ' s introduce *Juan Miguel* : is *awesome* !

OW1 = introduce s ' Let
IW1 = : is awesome !

OW2 = !
IW2 = is : Miguel Juan

LD = : is

Yes, possible problems if I introduce the inner window -- but at the moment I'm not
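
For the record, a small sketch that reproduces the OW/IW/LD token sets of the example above, assuming the windows are capped at 4 tokens (which is what the example appears to use) and that entity spans are given as (start, end-exclusive) token indices; names are made up:

```python
def entity_windows(tokens, e1, e2, width=4):
    """OW = outer window, IW = inner window (both capped at `width` tokens);
    LD = the tokens lying strictly between the two entities."""
    (s1, end1), (s2, end2) = e1, e2
    ow1 = tokens[max(0, s1 - width):s1][::-1]    # left of entity 1, walking outwards
    iw1 = tokens[end1:end1 + width]              # right of entity 1, towards entity 2
    iw2 = tokens[max(0, s2 - width):s2][::-1]    # left of entity 2, towards entity 1
    ow2 = tokens[end2:end2 + width]              # right of entity 2, walking outwards
    ld  = tokens[end1:s2]                        # strictly between the entities
    return ow1, iw1, iw2, ow2, ld

tokens = "Let ' s introduce Juan Miguel : is awesome !".split()
ow1, iw1, iw2, ow2, ld = entity_windows(tokens, (4, 6), (8, 9))
# ow1 = ['introduce', 's', "'", 'Let']   iw1 = [':', 'is', 'awesome', '!']
# iw2 = ['is', ':', 'Miguel', 'Juan']    ow2 = ['!']   ld = [':', 'is']
```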

juanmirocks commented 7 years ago

Maybe ponder about:


```
(BLR1, OW1, B) | (are, OW1, B) | (receptors, OW1, B) | (novel, OW1, B)
(BLR1, IW1, F) | (are, IW1, F) | (receptors, IW1, F) | (novel, IW1, F)

...

(BLR1, LD, F) | (are, LD, F) | (receptors, LD, F) | (novel, , )
```

juanmirocks commented 7 years ago

```
16:57:30|LocText$ grep tokens_count_before run.log
Feature map: 6 == SentenceFeatureGenerator::8_tokens_count_before_[0] -- _1st_ value: 4
Feature map: 6 == SentenceFeatureGenerator::8_tokens_count_before_[0] -- _1st_ value: 0
Feature map: 6 == SentenceFeatureGenerator::8_tokens_count_before_[0] -- _1st_ value: 0
Feature map: 6 == SentenceFeatureGenerator::8_tokens_count_before_[0] -- _1st_ value: 2
Feature map: 6 == SentenceFeatureGenerator::8_tokens_count_before_[0] -- _1st_ value: 2

16:58:21|LocText$ grep tokens_count_after run.log
Feature map: 7 == SentenceFeatureGenerator::9_tokens_count_after_[0] -- _1st_ value: 10
Feature map: 7 == SentenceFeatureGenerator::9_tokens_count_after_[0] -- _1st_ value: 1
Feature map: 7 == SentenceFeatureGenerator::9_tokens_count_after_[0] -- _1st_ value: 1
Feature map: 7 == SentenceFeatureGenerator::9_tokens_count_after_[0] -- _1st_ value: 1
Feature map: 7 == SentenceFeatureGenerator::9_tokens_count_after_[0] -- _1st_ value: 16
```

juanmirocks commented 7 years ago

Tanzeem links

thesis.pdf

https://push-zb.helmholtz-muenchen.de/deliver.php?id=7114 or this same: Wachinger_B-2013-Next_generation_knowledge_extraction_from_biomedical_literature_with.pdf

juanmirocks commented 7 years ago

Other (from Tanya)

weka Ranker, big variation
pca / 10

[20170112, 14:14:38] Tatyana Goldberg: Search:weka.attributeSelection.RankSearch -S 1 -R 0 -A weka.attributeSelection.GainRatioAttributeEval --
[20170112, 14:14:48] Tatyana Goldberg: Evaluator:    weka.attributeSelection.CfsSubsetEva

juanmirocks commented 7 years ago

@goldbergtatyana would it be possible for you to generate the same list as human_localization_all.tab but also for the organisms: {arabidopsis, Saccharomyces cerevisiae (yeast)}?

And other model organisms if you think appropriate?

goldbergtatyana commented 7 years ago

Hi @juanmirocks, I need to see the file human_localization_all.tab to know what to generate for you.

juanmirocks commented 7 years ago

? I don’t understand, that’s the file you generated.

As a refresher, the file looks like:

(Last time you added an extra column with the GO identifiers; that's exactly what I need, together with the PubMed ids, which are also included)

- Entry: P04637
- Entry name: P53_HUMAN
- Protein names: Cellular tumor antigen p53 (Antigen NY-CO-13) (Phosphoprotein p53) (Tumor suppressor p53)
- Gene names: TP53 P53
- Subcellular location [CC]: SUBCELLULAR LOCATION: Cytoplasm. Nucleus. Nucleus, PML body. Endoplasmic reticulum. Mitochondrion matrix. Note=Interaction with BANP promotes nuclear localization. Recruited into PML bodies together with CHEK2. Translocates to mitochondria upon oxidative stress.; SUBCELLULAR LOCATION: Isoform 1: Nucleus. Cytoplasm. Note=Predominantly nuclear but localizes to the cytoplasm when expressed with isoform 4.; SUBCELLULAR LOCATION: Isoform 2: Nucleus. Cytoplasm. Note=Localized mainly in the nucleus with minor staining in the cytoplasm.; SUBCELLULAR LOCATION: Isoform 3: Nucleus. Cytoplasm. Note=Localized in the nucleus in most cells but found in the cytoplasm in some cells.; SUBCELLULAR LOCATION: Isoform 4: Nucleus. Cytoplasm. Note=Predominantly nuclear but translocates to the cytoplasm following cell stress.; SUBCELLULAR LOCATION: Isoform 7: Nucleus. Cytoplasm. Note=Localized mainly in the nucleus with minor staining in the cytoplasm.; SUBCELLULAR LOCATION: Isoform 8: Nucleus. Cytoplasm. Note=Localized in both nucleus and cytoplasm in most cells. In some cells, forms foci in the nucleus that are different from nucleoli.; SUBCELLULAR LOCATION: Isoform 9: Cytoplasm.
- Gene ontology (cellular component): cytoplasm [GO:0005737]; cytosol [GO:0005829]; endoplasmic reticulum [GO:0005783]; mitochondrial matrix [GO:0005759]; mitochondrion [GO:0005739]; nuclear chromatin [GO:0000790]; nuclear matrix [GO:0016363]; nucleolus [GO:0005730]; nucleoplasm [GO:0005654]; nucleus [GO:0005634]; PML body [GO:0016605]; protein complex [GO:0043234]; replication fork [GO:0005657]


goldbergtatyana commented 7 years ago

seems to me like a simple uniprot search:

  1. go to http://www.uniprot.org/
  2. click on Advanced next to the search bar
  3. select in the left drop down Organism [OS] and type in the organism name you're interested in (e.g. human)
  4. in the result window select Filter By "Reviewed"
  5. then click on columns, a button that is shown above the results table
  6. then select entry name, protein name, gene name (all in names & taxonomy section), Subcellular location [CC] (in Subcellular Location section) and Gene ontology (GO) (in Gene ontology (GO) section). Unselect everything else
  7. in the result view you can then download the result in a tab separated format

goldbergtatyana commented 7 years ago

As for organisms, I'd suggest going as in the linked annotations article for:

juanmirocks commented 7 years ago

@goldbergtatyana oh I see -- you generated it this way

unfortunately the output from uniprot doesn't normalize the subcellular localizations when they are extracted from citations, as in:

Cytoplasm {ECO:0000305|PubMed:16410549}. Nucleus {ECO:0000269|PubMed:16410549}.

Also, sometimes I don't see the relation between the CC column and the GO column (which should be almost the same, or at least congruent?). For example, in:

`Q8WZ42 TITIN_HUMAN Titin (EC 2.7.11.1) (Connectin) (Rhabdomyosarcoma antigen MU-RMS-40.14) TTN SUBCELLULAR LOCATION: Cytoplasm {ECO:0000305|PubMed:16410549}. Nucleus {ECO:0000269|PubMed:16410549}. condensed nuclear chromosome [GO:0000794]; cytosol [GO:0005829]; extracellular exosome [GO:0070062]; extracellular region [GO:0005576]; I band [GO:0031674]; M band [GO:0031430]; muscle myosin complex [GO:0005859]; striated muscle thin filament [GO:0005865]; Z disc [GO:0030018]`

goldbergtatyana commented 7 years ago

@juanmirocks please upload the original file for me to be able to reconstruct the logic then. By now I unfortunately do not remember what was done. Thanks

juanmirocks commented 7 years ago

The file is in your public_html folder.

juanmirocks commented 7 years ago

organisms:

"4679": 1,
"7955": 1,
"9913": 2,
"562": 5,
"3888": 5,
"10116": 6,
"4097": 7,
"7227": 7,
"4577": 14,
"10090": 44,
"3702": 179,
"9606": 222,
"4932": 302,

UniProt query to get the reviewed proteins of all the organisms mentioned in the corpus:

(organism:human OR organism:yeast OR organism:arabidopsis OR (organism:"Allium cepa (Onion) [4679]" OR organism:"Danio rerio (Zebrafish) (Brachydanio rerio) [7955]" OR organism:"Bos taurus (Bovine) [9913]" OR organism:"Escherichia coli [562]" OR organism:"Pisum sativum (Garden pea) [3888]" OR organism:"Rattus norvegicus (Rat) [10116]" OR organism:"Nicotiana tabacum (Common tobacco) [4097]" OR organism:"Drosophila melanogaster (Fruit fly) [7227]" OR organism:"Zea mays (Maize) [4577]" OR organism:"Mus musculus (Mouse) [10090]")) AND reviewed:yes

that is:

http://www.uniprot.org/uniprot/?query=%28organism%3Ahuman+OR+organism%3Ayeast+OR+organism%3Aarabidopsis+OR+%28organism%3A%22Allium+cepa+%28Onion%29+%5B4679%5D%22+OR+organism%3A%22Danio+rerio+%28Zebrafish%29+%28Brachydanio+rerio%29+%5B7955%5D%22+OR+organism%3A%22Bos+taurus+%28Bovine%29+%5B9913%5D%22+OR+organism%3A%22Escherichia+coli+%5B562%5D%22+OR+organism%3A%22Pisum+sativum+%28Garden+pea%29+%5B3888%5D%22+OR+organism%3A%22Rattus+norvegicus+%28Rat%29+%5B10116%5D%22+OR+organism%3A%22Nicotiana+tabacum+%28Common+tobacco%29+%5B4097%5D%22+OR+organism%3A%22Drosophila+melanogaster+%28Fruit+fly%29+%5B7227%5D%22+OR+organism%3A%22Zea+mays+%28Maize%29+%5B4577%5D%22+OR+organism%3A%22Mus+musculus+%28Mouse%29+%5B10090%5D%22%29%29+AND+reviewed%3Ayes&sort=score

with all columns:

http://www.uniprot.org/uniprot/?query=%28organism%3Ahuman+OR+organism%3Ayeast+OR+organism%3Aarabidopsis+OR+%28organism%3A%22Allium+cepa+%28Onion%29+%5B4679%5D%22+OR+organism%3A%22Danio+rerio+%28Zebrafish%29+%28Brachydanio+rerio%29+%5B7955%5D%22+OR+organism%3A%22Bos+taurus+%28Bovine%29+%5B9913%5D%22+OR+organism%3A%22Escherichia+coli+%5B562%5D%22+OR+organism%3A%22Pisum+sativum+%28Garden+pea%29+%5B3888%5D%22+OR+organism%3A%22Rattus+norvegicus+%28Rat%29+%5B10116%5D%22+OR+organism%3A%22Nicotiana+tabacum+%28Common+tobacco%29+%5B4097%5D%22+OR+organism%3A%22Drosophila+melanogaster+%28Fruit+fly%29+%5B7227%5D%22+OR+organism%3A%22Zea+mays+%28Maize%29+%5B4577%5D%22+OR+organism%3A%22Mus+musculus+%28Mouse%29+%5B10090%5D%22%29%29+AND+reviewed%3Ayes&sort=score
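
For reproducibility, a sketch of fetching the same table programmatically. It assumes the legacy uniprot.org query interface (the one the URLs above use) with its `format=tab` and `columns=` parameters; the column identifiers and the output filename are guesses and may need adjusting:

```python
import requests

query = (
    '(organism:human OR organism:yeast OR organism:arabidopsis OR '
    'organism:"Mus musculus (Mouse) [10090]") AND reviewed:yes'   # shortened; use the full query above
)

params = {
    "query": query,
    "format": "tab",
    "columns": "id,entry name,protein names,genes,"
               "comment(SUBCELLULAR LOCATION),go(cellular component)",
}

response = requests.get("http://www.uniprot.org/uniprot/", params=params)
response.raise_for_status()

with open("corpus_organisms_localization_all.tab", "w") as f:
    f.write(response.text)
```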

juanmirocks commented 7 years ago

@goldbergtatyana finally I get good performance 😀

juanmirocks commented 7 years ago

Goal of the task achieved:

We are now going to concentrate on different-sentence models with different distances; see #34