Open StrohD opened 11 months ago
Hi @StrohD, thanks for the suggestion, appreciate it. Set operations will certainly speed up the program. However, there is one thing to note: set operations produce slightly different output from the original code, which uses lists so that a protein can be counted multiple times. For example, if protein A is the protein of interest, with proteins B and C as its first-level neighbors, and protein D is a neighbor of both B and C, the original code counts D twice (once via B and once via C) when scanning the second-level neighbors of A, which increases D's weight. The set version will not.
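The difference can be shown on a toy graph matching the example above (a plain adjacency dict here is just for illustration; the repo itself works on a graph object):

```python
# Toy PPI graph from the example: B and C are first-level neighbors
# of A, and D neighbors both B and C.
adj = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

first_nbs = adj["A"]

# List-based scan of second-level neighbors (original behavior):
# D is reached via both B and C, so it is counted twice.
second_list = [n for nb in first_nbs for n in adj[nb] if n != "A"]

# Set-based scan: each neighbor is counted at most once.
second_set = {n for nb in first_nbs for n in adj[nb]} - {"A"}

print(second_list)  # ['D', 'D']
print(second_set)   # {'D'}
```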
Therefore, if you plan to use the trained models we provide for prediction, I would recommend using the original code to generate the PPI features to maintain consistency.
Also, if you intend to train your own PPI model (SONAR3.0) from scratch, you can definitely use the set version for speed. Based on our previous experiments, the set version does not hurt performance much (its AUC was only slightly lower than the original classifier's, though I don't recall the exact values). Feel free to try it if you are interested.
Once again, thanks for the suggestion! More discussion is also welcome!
Unfortunately these functions are incredibly slow, making the tools hard to use with PPI and PPA data. Here is a proposed change that increases speed by orders of magnitude compared to the original implementation, using Python sets and removing unnecessary computations and nested loops.
```python
def get_PPI_features(prot, G, RBP_set, PPI_1stNBs=None, num_cut=5):
    NBhood1_total = set(G.subgraph(PPI_1stNBs).nodes()) if PPI_1stNBs else set(G.neighbors(prot))
    NBhood2_total = set()
    NBhood3_total = set()

def get_PPI_feature_vec(prot, G, RBP_set, num_cut=5, PPI_1stNBs=None):
    print(prot)
```
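The snippet above only shows the function signatures and the first few lines. As a minimal self-contained sketch of the set-based idea (the function name, BFS structure, and dict-based graph here are illustrative, not the repo's actual implementation), collecting each neighborhood level as a set makes membership tests O(1) and ensures no protein is revisited:

```python
def get_neighborhood_sets(prot, adj, depth=3):
    """Return [1st-, 2nd-, ..., depth-level] neighbor sets of prot.

    adj maps each protein to an iterable of its neighbors.
    """
    visited = {prot}
    frontier = {prot}
    levels = []
    for _ in range(depth):
        # Expand the frontier one hop, dropping anything already seen.
        frontier = {n for p in frontier for n in adj.get(p, ())} - visited
        visited |= frontier
        levels.append(frontier)
    return levels

adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(get_neighborhood_sets("A", adj))  # [{'B', 'C'}, {'D'}, set()]
```

Each level is computed once from the previous frontier, replacing the original nested loops over neighbor lists.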