Open pnacht opened 7 months ago
Note: there are also some checks that use `fileparser.OnAllFilesDo`, which even more of the checks rely on:

> Replacing `fileparser.OnMatchingFileContentDo` with a function that only loads the first 262 bytes h2non/filetype needs

Similarly, we check the first 1024 bytes in another part:
https://github.com/ossf/scorecard/blob/83ff808f0def7d05c24b7f67024cee07467d2921/checks/raw/binary_artifact.go#L172-L177
I can't say I've been a fan of the read-it-all-at-once approach, instead of using `io.Reader` and `io.Writer` more. I wonder if there's something we can do instead, although the `[]byte` runs deep throughout the code, including the `RepoClient` interface:
https://github.com/ossf/scorecard/blob/83ff808f0def7d05c24b7f67024cee07467d2921/clients/repo_client.go#L31-L39
When profiling the weekly cron with `pprof`, file I/O is one of our largest chunks of time.
Not every `GetFileContent` call needs the whole file; sometimes we don't need any of it. The `io.Reader` approach lets callers choose how much of the file to read (whether that's all of it via `io.ReadAll`, 1 KB, none, etc). Additionally, `io.ReadCloser` is a better choice, so the caller can close the file when they're done:
```go
GetFileContent(filename string) (io.ReadCloser, error)
```
There are a lot of existing usages of `GetFileContent` which would need to be changed, but @adg offered an incremental, non-breaking way of introducing the change:
> first add [a new] method to the concrete client implementations, then you can have a separate interface (even defined internally) that you test for with a type assertion, and use the new method when it is available
```go
type fileReader interface {
	ReadFile(string) (io.ReadCloser, error)
}

fr, ok := client.(fileReader)
if ok {
	// use the new method fr.ReadFile
} else {
	// use client.GetFileContent
}
```
@pnacht I didn't see a crash on my machine for the repo, but my VM may have more resources. Does this prototype branch eliminate the crash for Binary-Artifacts?
https://github.com/spencerschrock/scorecard/tree/reader-partial-interface
Nope. But I noticed you added the new behavior to the `githubrepo` client; this is a local repo, so I believe it runs on the `localdir` client?
doh! pushed another commit. I'm noticing a significant speedup now
Yep, just ran it on my localdir and it runs!
The breaking change was already made in #3912, so removing this from the v5 milestone, but there are still some callers of `fileparser.OnMatchingFileContentDo`, so parts of the issue are still applicable.
**Describe the bug**
Running Scorecard with Binary-Artifacts and/or Pinned-Dependencies on a repo with large files crashes entirely.

**Reproduction steps**
I stumbled on this while trying to run Scorecard on a local clone of a HuggingFace model repository.
Steps to reproduce the behavior:
After deleting the very large files (including the `.git` folder), the checks pass. (There may be other checks that would also fail; I only tested those that run with `--local`.)

**Expected behavior**
The checks should work even with large files.
As described below, Binary-Artifacts doesn't need to load the entire file, and it's unlikely an actual script will ever be big enough to be a problem.
**Additional context**
I believe I understand why these checks are failing: both have at least one function (`BinaryArtifacts` and `collectShellScriptInsecureDownloads`) that runs `fileparser.OnMatchingFileContentDo` with `Pattern: "*"` (i.e. all files). As the function name implies, this function sequentially opens and loads all matching files. I assume one of the files was simply too large.
This should be fixable, though:

- `BinaryArtifacts` uses `fileparser.OnMatchingFileContentDo` to call `checkBinaryFileContent`. That loads the file and then uses https://github.com/h2non/filetype to determine the file's type. This can be replaced by a call that only loads the first 262 bytes h2non/filetype needs.
- `collectShellScriptInsecureDownloads` could be set to only run on files with common script extensions (i.e. `.sh`, `.bash`, `.ps`, and no extension).