WM-SEMERU / ds4se

Data Science for Software Engineering (ds4se) is an academic initiative to perform exploratory and causal inference analysis on software engineering artifacts and metadata. It covers data management, analysis, and benchmarking for deep learning and traceability.
https://wm-csci-435-f19.github.io/ds4se/
Apache License 2.0

[representation] iS2S: Structure 2 Sequence Code Embeddings #94

Open danaderp opened 3 years ago

danaderp commented 3 years ago

Description

Code embeddings are abstract representations of source code used in several software engineering automation tasks, such as clone detection, traceability, and code generation. This abstract representation is a mathematical entity known as a tensor. Code tensors allow us to manipulate code snippets in semantic vector spaces instead of in complex data structures like call graphs. Initial attempts focused on deep learning strategies that compress code into lower-dimensional vectors (code2vec). Unfortunately, these approaches do not consider autoencoder architectures for representing code. The purpose of this project is to combine a structural language model of code with autoencoder architectures to compress source code snippets into lower-dimensional tensors. The lower-dimensional tensors must be evaluated in terms of semantics (clone detection).
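To make the compression idea concrete, here is a minimal sketch of an autoencoder that maps toy "snippet" vectors into lower-dimensional code tensors and back. It is not the project's model: the snippets are random bag-of-token count vectors, the encoder/decoder are plain linear layers trained with gradient descent on reconstruction error, and all dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "code snippets": bag-of-token count vectors over a 20-token vocabulary,
# normalized to relative frequencies (purely illustrative data).
X = rng.poisson(1.0, size=(64, 20)).astype(float)
X = X / (X.sum(axis=1, keepdims=True) + 1e-9)

d_in, d_z = 20, 4          # compress 20-dim snippet vectors into 4-dim code tensors
W_enc = rng.normal(0, 0.1, (d_in, d_z))
W_dec = rng.normal(0, 0.1, (d_z, d_in))

def forward(X):
    Z = X @ W_enc           # encoder: lower-dimensional "code tensor"
    X_hat = Z @ W_dec       # decoder: reconstruction of the snippet vector
    return Z, X_hat

lr = 0.5
losses = []
for _ in range(200):
    Z, X_hat = forward(X)
    err = X_hat - X
    losses.append(float((err ** 2).mean()))
    # gradients of the mean squared reconstruction error
    gW_dec = Z.T @ err / len(X)
    gW_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc

Z, _ = forward(X)
print(Z.shape)   # each snippet is now represented as a 4-dim vector
```

In the project itself the snippet vectors would come from a structural language model of code rather than token counts, and the encoder/decoder would be nonlinear, but the compress-then-reconstruct training loop has the same shape.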

Disentanglement of Source Code Data with Variational Autoencoders

The performance of deep learning approaches for software engineering generally depends on the source code data representation. Bengio et al. show that different representations can entangle the explanatory factors of variation behind the data. We hypothesize that source code data contains explanatory factors useful for automating many software engineering tasks (e.g., clone detection, traceability, feature location, and code generation). Although some deep learning architectures in SE are able to extract abstract representations for downstream tasks, we are unable to verify such features because the underlying data is entangled. The objective of code generative models is to capture the underlying generative factors of the data. A disentangled representation, however, would allow us to manipulate a single latent unit that is sensitive to a single generative factor. Separate representational units help explain why deep learning models are able to classify or generate source code without posterior knowledge (i.e., labels). This project aims to identify single representational units from source code data. We will use the CodeSearchNet datasets and Variational Autoencoders to implement the approach.
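The two VAE ingredients that the paragraph relies on can be sketched in a few lines: the reparameterization trick (sampling a latent z while keeping it differentiable in the encoder outputs) and the KL divergence that pulls the posterior toward a factorized standard-normal prior. Weighting that KL term with a coefficient beta > 1, as in beta-VAE, is one common way to encourage disentangled latent units; the batch size, latent size, and beta value below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps, so gradients can flow through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ): summed over latent dims, averaged over batch."""
    return float(np.mean(np.sum(0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var), axis=1)))

# Pretend the encoder produced these for a batch of 8 snippets with 4 latent units.
mu = rng.normal(0, 0.3, (8, 4))
log_var = rng.normal(0, 0.1, (8, 4))

z = reparameterize(mu, log_var, rng)
kl = kl_to_standard_normal(mu, log_var)

beta = 4.0  # beta > 1 (beta-VAE) pressures the posterior toward the factorized prior
penalty = beta * kl
print(z.shape, kl >= 0.0)
```

The full training loss would be reconstruction error plus this penalty; a factorized prior is what makes "one latent unit per generative factor" a meaningful target.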

Project Goals

Implement an interpretability module to test edge cases of the autoencoder
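One possible shape for such an interpretability module is a battery of round-trip checks on degenerate snippets. The sketch below is an assumption about what "edge cases" could mean here: it uses a hypothetical linear encoder/decoder pair and measures reconstruction error on an empty snippet, a single-token snippet, and a uniform snippet.

```python
import numpy as np

def reconstruction_error(x, encode, decode):
    """Round-trip a snippet vector through the autoencoder and report MSE."""
    return float(np.mean((decode(encode(x)) - x) ** 2))

# Hypothetical stand-ins for a trained encoder/decoder (here: a random linear pair).
rng = np.random.default_rng(2)
W = rng.normal(0, 0.1, (20, 4))

def encode(x):
    return x @ W

def decode(z):
    return z @ W.T

# Edge cases: an "empty" snippet, a one-token snippet, and a uniform snippet.
edge_cases = {
    "empty": np.zeros(20),
    "single_token": np.eye(20)[0],
    "uniform": np.full(20, 1.0 / 20),
}
report = {name: reconstruction_error(x, encode, decode) for name, x in edge_cases.items()}
print(report["empty"])  # a zero vector reconstructs exactly to zero under a linear model
```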

Recommended Readings

danaderp commented 3 years ago

For Sam:

m13253 commented 3 years ago

Updated work plan:

m13253 commented 3 years ago

Meeting note 2021-03-18:

References:

m13253 commented 3 years ago

Meeting note 2021-03-25:

m13253 commented 3 years ago

Update 2021-04-01:

m13253 commented 3 years ago

Meeting note 2021-04-02:

Tasks:

  1. Complete sampling
  2. Decide how to incorporate code2vec

Sampling (2 types):

  1. Focus on the encoder and obtain the middle vectors (current focus)
  2. Create random noise and examine what code it generates
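The two sampling types above can be sketched side by side: type 1 pushes real snippets through the encoder and keeps the middle (latent) vectors, while type 2 draws random noise in the latent space and decodes it. The weights here are random placeholders standing in for a trained model, and the vocabulary/latent sizes are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical trained weights (random here): 20-token vocab, 4-dim latent space.
W_enc = rng.normal(0, 0.1, (20, 4))
W_dec = rng.normal(0, 0.1, (4, 20))

# Type 1: run real snippets through the encoder, keep the middle vectors.
snippets = rng.poisson(1.0, (10, 20)).astype(float)
middle_vectors = snippets @ W_enc              # shape (10, 4)

# Type 2: draw random noise in the latent space and decode it to token scores.
noise = rng.standard_normal((5, 4))
generated_scores = noise @ W_dec               # shape (5, 20)
generated_tokens = generated_scores.argmax(axis=1)  # most likely token per sample
print(middle_vectors.shape, generated_scores.shape)
```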

Case studies:

The experiments we are going to run.