oalieno / asm2vec-pytorch

Unofficial implementation of asm2vec using pytorch (with GPU acceleration)
MIT License

model reports low cosine similarity for identical functions #6

Closed · markgllin closed this 3 years ago

markgllin commented 3 years ago

Using a single function as the training dataset, I'm able to generate a model with train.py. With that same function as both target function 1 and target function 2, compare.py reports cosine similarity values close to 0, when the expected value is closer to 1 (i.e. almost identical).

# asm/ contains a single file with one function
python scripts/train.py -i asm/ -o model.pt --epochs 100

# asm/function is used for both training + comparison
python scripts/compare.py -i1 asm/function -i2 asm/function -m model.pt
=> cosine similarity : 0.019504

Am I misunderstanding the usage/purpose of asm2vec-pytorch?

Attached is the example function used for both training and comparison in the model (although I found this to be true of every function I've tested): function.txt

If it's relevant, this is a function extracted from a statically linked busybox binary.

markgllin commented 3 years ago

Testing with Lancern's implementation, the cosine similarity is consistently reported as 0.99+, which aligns with expectations. I'll see if I can determine why I'm getting different results with asm2vec-pytorch.

oalieno commented 3 years ago

Hi @markgllin, great question. scripts/compare.py also needs to train the embeddings, and the default is only 10 epochs. The embeddings are initialized to random vectors, so it is reasonable for the cosine similarity score to start out near 0. You can set the epochs higher to get a better result.
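
For example, something like this (assuming compare.py accepts an -e/--epochs option; check python scripts/compare.py --help for the exact flag name):

# re-train the two function embeddings for more epochs before comparing
python scripts/compare.py -i1 asm/function -i2 asm/function -m model.pt -e 2000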

I also tested this case. After raising the epochs to 2000, I get a cosine similarity score of around 0.4, which is still quite low compared to 0.99. Needs more digging.

oalieno commented 3 years ago

It seems that initializing the embedding vectors with make_small_ndarray is crucial. Not sure why 🤔

markgllin commented 3 years ago

Is make_small_ndarray a function from pytorch or numpy? A quick search turns up no documentation for it :S

markgllin commented 3 years ago

I also have a very minimal understanding of ML, but wouldn't training for 2000 epochs result in overfitting?

oalieno commented 3 years ago

@markgllin make_small_ndarray is a function from Lancern's implementation. I found the reason.

Lancern's implementation initializes the embeddings with very small vectors, like [0.002, ...] and [-0.001, ...]. The gradient updates are about the same magnitude or even larger, like [0.004, ...] and [0.003, ...]. So after only one epoch, which includes several updates, both vectors are roughly equal to their accumulated gradients, and those gradients should match because the two inputs are the same function and differ only in the random walk.

This implementation, on the other hand, uses random vectors ranging from 0 to 1 as the initial vectors, like [0.65, ...] and [0.22, ...], and the gradients are small in comparison, like [0.0017, ...] and [0.0012, ...]. So after one epoch, which includes several updates, the vectors are still roughly equal to [0.65, ...] and [0.22, ...], because they are not fully trained yet.

In conclusion, if we train for only one epoch, the embeddings in Lancern's implementation end up roughly equal to the gradient vectors, while in this implementation they stay roughly equal to random vectors. It comes down to how the learning rate is set and how to train the model well; I still need to research how to train word2vec/doc2vec-style models properly.

As for overfitting, see https://www.reddit.com/r/MachineLearning/comments/3rcnfl/overfitting_in_word2vec/ and https://groups.google.com/g/gensim/c/JtUhgUjx4YI. I am not worried about overfitting right now.
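
Here is a toy numerical sketch of why the initialization magnitude matters. This is not the actual code from either repo; the dimension, the scales, and the zero-mean "large" init are made up for illustration only.

import torch

torch.manual_seed(0)
dim = 200  # illustrative embedding dimension

# Small init (Lancern-style): components on the order of 1e-3.
small_a = (torch.rand(dim) - 0.5) * 0.004
small_b = (torch.rand(dim) - 0.5) * 0.004

# Large init (roughly this repo's situation): components on the order of 1.
big_a = torch.rand(dim) - 0.5
big_b = torch.rand(dim) - 0.5

# Both inputs are the same function modulo the random walk, so pretend they
# accumulate (nearly) the same total update over one epoch of training.
update = (torch.rand(dim) - 0.5) * 0.04

cos = torch.nn.functional.cosine_similarity
print(cos(small_a + update, small_b + update, dim=0))  # ~0.99: the update dominates
print(cos(big_a + update, big_b + update, dim=0))      # near 0: the random init dominates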

markgllin commented 3 years ago

Ahh, I understand. In that case this sounds like a non-issue and more a matter of determining the appropriate parameters. I'll continue tinkering with those. Feel free to close this issue. Thanks!

oalieno commented 3 years ago

I will push a new commit to change the default settings for the initial vectors and the learning rate.

oalieno commented 3 years ago

Oh wait, I see your pull request. Yeah, a custom learning rate also works.

oalieno commented 3 years ago

19e4e2505d7093dd9969dfb7ca35a6963e8be2ed The default Adam learning rate seems a bit small; setting the default to 0.02.
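
For reference, that change essentially amounts to the following (the Embedding here is just a stand-in, not the repo's actual model class):

import torch

model = torch.nn.Embedding(1000, 200)  # stand-in for the asm2vec model
# PyTorch's Adam defaults to lr=1e-3, which moves these embeddings too slowly;
# the new project default passes lr=0.02 explicitly.
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)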