Ly0n opened 1 year ago
This would be fascinating to explore; I could see trying to either leverage the data from the landscape directly for this model or look at how to have this be a landscape feature.
So, after talking with @Ly0n about the scope of this specific part of the project, here's my two cents:
My main problem with this use case would be a whitelist search for topics/keywords: if you have to match keywords and topics against a predefined database to find out what the project you are analysing is really about, the approach has already failed.
0. Provide a service that performs the steps below and analyses a git project
1. Preprocess the README (not only the abstract, but every relevant text)
2. Use BERT (AI) to find metrics, keywords/technologies used, as well as topic classifications
3. Define heuristics for highly correlated topics as well as "side" topics
4. Feed the analysis (project meta info + AI content info) into a database for further tools that can utilize this information (a big graph with all projects and their best-fit topics/technologies used, etc.)
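Step 1 of this pipeline could start as a plain-text cleanup pass. A minimal sketch, assuming the README is markdown; the function name and the cleanup rules are illustrative, not a fixed spec:

```python
import re

def preprocess_readme(text: str) -> str:
    """Strip markdown noise so only prose reaches the topic model (illustrative rules)."""
    text = re.sub(r"```.*?```", " ", text, flags=re.DOTALL)   # fenced code blocks
    text = re.sub(r"`[^`]*`", " ", text)                      # inline code
    text = re.sub(r"!\[[^\]]*\]\([^)]*\)", " ", text)         # images / badges
    text = re.sub(r"\[([^\]]*)\]\([^)]*\)", r"\1", text)      # links -> keep link text
    text = re.sub(r"<[^>]+>", " ", text)                      # raw HTML tags
    text = re.sub(r"[#*_>|-]+", " ", text)                    # markdown punctuation
    return re.sub(r"\s+", " ", text).strip()

readme = "# Demo\nA ROS node for [mapping](https://example.org).\n```bash\nmake\n```"
print(preprocess_readme(readme))  # -> "Demo A ROS node for mapping."
```

Real READMEs will need more rules (tables, badges in HTML, non-English text), but something this small is enough to start iterating.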
Sounds like a doable (first-shot) use case (without magic bloat), while simultaneously allowing us to further experiment with what we really want to do here :)
Thanks @kikass13. In the first step, this would be exactly what you are describing. Theoretically, it should be possible to check every new open source project for sustainability relevance. Ecosyste.ms records all open source projects anyway. If we filter them on certain basic criteria, it could be possible to capture a large part of the new open source projects in this area with little computational effort. I am currently in contact with AI experts in this field who might be able to assist us here.
We should aim for a workshop in early September to clarify the main points.
If the analysis is easy/cheap to run I'd be up for attempting to run it on many projects and store the results against each one and then allow filtering by the results.
We have to learn which kind of metric/information is needed for later analysis, so processing each project separately and storing the results in a database is key (and should not take that long) ... The problem is that we will probably need multiple iterations per project, as we don't know yet what we need. Projects also change over time, so re-running the analysis is necessary :)
So I guess my point is: we need a working mockup implementation to see how this would look/work (you already have a notebook as a starting point). I will look at that stuff starting in September and fool around with BERT for a bit :)
We have already started initial experiments with the Natural Language Processing (NLP) tool BERTopic. First attempts can be found here: https://colab.research.google.com/drive/1Y5exVWXYFvbYp0yzbJcpya6FL8HejYRu?authuser=0&pli=1#scrollTo=BG8S-Qz_Lmy8 Please contact me if you want permission to edit the notebook.
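BERTopic itself needs the `bertopic` package plus an embedding model, so as a dependency-free stand-in here is a crude keyword-frequency sketch of the "find keywords per README" idea; the stop-word list, token pattern, and threshold are invented for illustration and are nowhere near what BERTopic actually does:

```python
import re
from collections import Counter

# Tiny illustrative stop-word list; a real run would use a proper one.
STOPWORDS = {"the", "a", "an", "and", "for", "of", "to", "in", "is", "this", "with", "into"}

def top_keywords(text: str, n: int = 5) -> list[str]:
    """Crude keyword extraction: most frequent non-stopword tokens (BERTopic stand-in)."""
    tokens = re.findall(r"[a-z][a-z0-9+-]{2,}", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(n)]

readme = ("ROS package for lidar mapping. The mapping node fuses lidar scans "
          "into a map for robot navigation.")
print(top_keywords(readme, 3))
```

The point of starting with BERTopic instead is exactly that it finds latent topics from embeddings rather than raw word counts, but a baseline like this is useful for sanity-checking its output.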
Such a development could have a variety of benefits for our project:
Better and consistent labels across all projects —> much better plots showing the distribution of projects based on topics.
Graph showing connections between topics —> showing how the ecosystem is connected based on topics and which projects provide important nodes.
Creation of a dictionary with the relevant words/topics within the open source sustainability ecosystem —> very important for further investigations and for gap analytics: which topics are still missing from our analytics?
Automatic discovery of new projects based on the dictionary
Investigate the role of AI within the ecosystem based on AI related topics within the READMEs. Related to #155
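The topic-connection graph mentioned above could start as a simple co-occurrence count: two topics are linked whenever they are assigned to the same project. A dependency-free sketch with made-up project names and topic labels:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-project topic assignments (e.g. future BERTopic output; labels invented).
project_topics = {
    "proj-a": ["solar", "monitoring"],
    "proj-b": ["solar", "forecasting"],
    "proj-c": ["monitoring", "solar"],
}

# Edge weight = number of projects in which both topics co-occur.
edges = Counter()
for topics in project_topics.values():
    for pair in combinations(sorted(set(topics)), 2):
        edges[pair] += 1

for (t1, t2), weight in edges.most_common():
    print(f"{t1} -- {t2}: {weight}")
# -> monitoring -- solar: 2
#    forecasting -- solar: 1
```

The resulting weighted edge list drops straight into any graph library or visualization tool once real topic assignments exist in the database.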
This task is certainly not easy, but the developer of BERTopic has offered us his support. Please contact me if you would like an introduction.
Another open source project which could potentially be interesting here is PandasAI.