Hello!
I found an AI-Specific Code smell in your project. The smell is called: Memory not Freed
You can find more information about it in this paper: https://dl.acm.org/doi/abs/10.1145/3522664.3528620.
According to the paper, the smell is described as follows:
Problem
If the machine runs out of memory while training the model, the training will fail.
Solution
Deep learning libraries provide APIs to alleviate the out-of-memory issue. TensorFlow's documentation notes that if models are created in a loop, clear_session() should be called in the loop. Meanwhile, the GitHub repository pytorch-styleguide recommends using .detach() to free a tensor from the graph whenever possible. The .detach() API prevents unnecessary operations from being recorded and can therefore save memory. Developers should check whether they use these APIs to free memory whenever possible in their code.
Impact
Memory Issue
Example:
### TensorFlow
import tensorflow as tf

for _ in range(100):
+   tf.keras.backend.clear_session()  # frees the global Keras state left over from previous iterations
    model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])
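Here is a minimal PyTorch sketch of the .detach() suggestion from the Solution above; the model, loop, and loss accumulation are illustrative assumptions, not code from your project or from the paper.

### PyTorch
import torch

model = torch.nn.Linear(10, 1)
total_loss = 0.0
for _ in range(100):
    x = torch.randn(32, 10)
    loss = model(x).pow(2).mean()
    loss.backward()
-   total_loss += loss           # keeps every iteration's computation graph reachable
+   total_loss += loss.detach()  # a detached tensor records no operations, so each graph can be freed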
You can find the code related to this smell in this link: https://github.com/autorope/donkeycar/blob/c0d4eb310b4aab4915a655f7545a2aa8bf983e50/donkeycar/management/base.py#L380-L400.
I also found instances of this smell in other files, such as:
.
I hope this information is helpful!