Running the NLP models requires many more dependencies, in addition to downloading the models (each several GB in size) and longer loading times.
Based on the analysis performed, there should be two install options to make packaging and installation easier:

- `default` — performs issue detection and issue-type classification with minimal dependencies (only the scancode-toolkit dependencies) and is faster.
- `deep` — uses the NLP models.

Also, enable GPU computation based on whether the required libraries (CUDA) are installed and a GPU is available.
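One way to package the two options is with setuptools extras, so that `deep` layers the NLP stack on top of the minimal default install. This is a sketch only: the package name and the exact dependency list for the NLP models are assumptions, not decided here.

```python
# Hypothetical packaging sketch for the two install options.
# The "default" install keeps only the scancode-toolkit dependencies;
# the "deep" extra pulls in the NLP stack on top of them.
install_requires = [
    "scancode-toolkit",  # minimal dependencies for the default mode
]

extras_require = {
    "deep": [
        # assumption: a transformer-based NLP stack; the actual
        # model dependencies may differ
        "torch",
        "transformers",
    ],
}
```

These dicts would be passed to `setuptools.setup(install_requires=..., extras_require=...)`, and users would opt in with something like `pip install <package>[deep]`, keeping the default install small and fast.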
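The GPU check could be done at runtime, falling back to CPU whenever the deep extra is not installed or no CUDA-capable GPU is visible. A minimal sketch, assuming PyTorch is the NLP backend (an assumption, not specified above):

```python
import importlib.util


def gpu_available() -> bool:
    """Return True only if the CUDA backend (assumed here: PyTorch) is
    importable and reports a usable GPU; otherwise compute on CPU."""
    # If the "deep" extra is not installed, torch is absent: use CPU.
    if importlib.util.find_spec("torch") is None:
        return False
    import torch

    # True only when the CUDA libraries are installed and a GPU is visible.
    return torch.cuda.is_available()
```

Using `find_spec` first keeps the default install working without `torch` present, since the import only happens once the deep dependencies exist.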
Related - #21