sdsc-ordes / gimie

Extract linked metadata from repositories
https://sdsc-ordes.github.io/gimie/
Apache License 2.0

make list_files recursive #76

Closed rmfranken closed 4 months ago

rmfranken commented 1 year ago

This is rare, but when license(s) are hosted in a folder that is not the root directory of the repository, Gimie currently does not pick them up. The fix should include changing the list_files function to look inside "license-like" folders instead of only the root dir. The rest of the script should remain mostly untouched.

rmfranken commented 1 year ago

Test case: https://gitlab.com/inkscape/inkscape

cmdoret commented 1 year ago

Suggestion: list_files should probably not look for license-specific folder names, as it is a general method. It could just work recursively (maybe up to a fixed depth, to avoid issues with extremely large repos).

Note: GitExtractor.list_files() is already recursive, but does not limit the depth.

rmfranken commented 1 year ago

Agreed, I had not yet updated this issue after our train conversation. A recursion limit definitely seems smart. If someone puts a license more than three layers deep, it's their own fault for making it so hard to find.

The use case I'm scared of without a recursion limit is something like a zarr folder structure, where the hierarchy of data is represented as folders within folders.

rmfranken commented 1 year ago

Another idea: a recursion limit plus an exclude-list for file formats. If I'm an imaging tool, I might not put a 4K mp4 of cell microscopy footage in the root folder, but I could provide it in an examples folder. If we limit the search to text-format files, we could bypass most of my fears, and we would only have to worry about the number of files. Optionally, a file size limit of a few kB...

And that too we could cover with an if statement: only look through a folder if it has fewer than 100 files (or 568, for the edge case where someone incorporates all 568 unique SPDX licenses). It's a bit dirty but should catch 99% of cases?

cmdoret commented 1 year ago

I think filtering on file-type should be the job of the consumer (e.g. filtering for a license-like filename, or a specific extension).
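Such a consumer-side filter could be as small as a filename pattern match (a hypothetical sketch; the pattern and function name are illustrative, not Gimie's API):

```python
import re
from typing import Iterable, List

# License-like filenames: LICENSE, LICENCE, COPYING, NOTICE, with any extension.
LICENSE_PATTERN = re.compile(r"^(license|licence|copying|notice)", re.IGNORECASE)

def license_like(filenames: Iterable[str]) -> List[str]:
    """Keep only filenames that look like license files."""
    return [f for f in filenames if LICENSE_PATTERN.match(f)]
```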

list_files() returns Resources, which only have the name and a pointer to read the file. The size of a Resource is therefore independent of file size, yet I see how it could cause an issue in the consumer function if the file takes ages to parse. Filtering on file size makes sense if size is available from GitHub/GitLab's GraphQL API, but not if we need to download the files to know their size.

I agree that the number of files might be an issue. I suspect GraphQL responses are already paginated to 50 or 100, btw.

I guess the method could then look something like:

```python
def list_files(self, max_n_files: int = 100, max_file_size_kb: int = 2048) -> List[Resource]:
    ...
```

cmdoret commented 1 year ago

After more thought: I feel like filtering on file size should not be part of list_files. However, it could populate a size field in the output Resources, thus letting the consumer make its own choice.
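That split of responsibilities could look roughly like this (a minimal sketch; the Resource fields and the consumer helper are assumptions for illustration, not Gimie's actual classes):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resource:
    """A listed file; size is populated only when the API provides it."""
    name: str
    size: Optional[int] = None  # bytes, or None if unknown

def wants_file(res: Resource, max_size_kb: int = 2048) -> bool:
    """Consumer-side filter: keep files whose size is unknown or small enough."""
    if res.size is None:
        return True
    return res.size <= max_size_kb * 1024
```

list_files stays a dumb lister, and each consumer decides what "too big" means for its own parsing.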

cmdoret commented 1 year ago

To provide a concrete example, I tried to expand on what a consumer may look like here: https://github.com/SDSC-ORD/gimie/issues/72#issuecomment-1772207044

cmdoret commented 1 year ago

@rmfranken while I agree that list_files should be recursive, it seems that for licenses the recommended practice is to put the license in the root folder; moreover, some repositories (e.g. kafka and airflow) store their dependencies' licenses in a folder.

For this reason, I propose not to look for licenses in subdirectories, at the risk of a few false negatives.

Note: in your inkscape example, there's a COPYING file in the root directory; this one would still get scanned, and licenses could perhaps be identified from it.

I guess that would make this issue lower priority. Some notes for when we decide / need to tackle it:

rmfranken commented 1 year ago

Haha! Some midnight inspiration?

I am also tempted, especially because false negatives are much less consequential in our context than false positives. I also have not had much time for it lately, so the lower priority is fine for me. :)

I also guess there are bigger fish we can fry to provide end-user value than going from 95% to 100% on licenses: a CFF parser, ML code quality scoring, input/output file format extraction (that one is probably very tricky, but very useful).