jefflembeck opened this issue 6 years ago
This was a protection we added to the system. Some packages were saturating the I/O because they had a lot of files in them. I'm not sure what we could do here. //cc @bcoe
@satazor Saturating in what sense? Too many open file descriptors? Too CPU intensive? Too expensive to analyze?
Too many file descriptors, as well as filesystem writes/entries. For instance, it's usually faster to write a single large file than multiple smaller files that total the same size.
What happens is that the I/O isn’t fast enough and it causes the whole system to lag.
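A rough way to see the single-large-file vs. many-small-files difference on your own machine (an illustrative sketch only; the file count and sizes below are arbitrary, not the analyzer's values):

```ts
// Sketch: write the same total amount of data as one large file and as many
// small files, and compare wall-clock time. Values are arbitrary; lower FILES
// if you just want a quick run.
import * as fs from "fs/promises";
import * as os from "os";
import * as path from "path";

const FILES = 32000;        // number of small files (arbitrary)
const CHUNK_SIZE = 1024;    // 1 KB per small file (arbitrary)

async function timeIt(label: string, fn: () => Promise<void>): Promise<void> {
  const start = process.hrtime.bigint();
  await fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(0)} ms`);
}

async function main(): Promise<void> {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), "io-sketch-"));
  const chunk = Buffer.alloc(CHUNK_SIZE, "x");
  const big = Buffer.alloc(CHUNK_SIZE * FILES, "x"); // same total size

  await timeIt("1 large file", () =>
    fs.writeFile(path.join(dir, "big.bin"), big)
  );

  await timeIt(`${FILES} small files`, async () => {
    for (let i = 0; i < FILES; i++) {
      await fs.writeFile(path.join(dir, `small-${i}.bin`), chunk);
    }
  });

  await fs.rm(dir, { recursive: true, force: true });
}

main().catch(console.error);
```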
Moreover, a tarball could contain an almost unlimited number of small files. This would be an attack vector, because a well-crafted tarball could exhaust the filesystem's inodes. We can revisit the threshold for the maximum number of files, but it was already quite generous.
The current limit on the total number of files is 32000. We may increase it; what value do you think would be reasonable?
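For reference, a minimal sketch of how such a pre-extraction guard could look, assuming the node-tar package; the MAX_FILES value and error handling are placeholders, not the analyzer's actual implementation:

```ts
// Sketch of a pre-extraction guard: count tarball entries and refuse to
// extract when the count exceeds a configurable limit.
// Assumes the "tar" (node-tar) package; MAX_FILES is a placeholder value.
import * as tar from "tar";

const MAX_FILES = 32000; // hypothetical limit, matching the one discussed above

async function countEntries(tarballPath: string): Promise<number> {
  let count = 0;
  // tar.t() lists entries without writing anything to disk.
  await tar.t({ file: tarballPath, onentry: () => { count++; } });
  return count;
}

export async function extractIfWithinLimit(tarballPath: string, dest: string): Promise<void> {
  const count = await countEntries(tarballPath);
  if (count > MAX_FILES) {
    throw new Error(`tarball has ${count} entries, above the ${MAX_FILES} limit; skipping extraction`);
  }
  await tar.x({ file: tarballPath, cwd: dest });
}
```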
If a tarball has too many files, the analyzer will not update.
See: https://github.com/Microsoft/Typescript (50537 files)
This is, unfortunately, preventing Typescript from updating in search results:
see: https://npms.io/search?q=typescript or: https://www.npmjs.com/search?q=typescript
vs. https://www.npmjs.com/package/typescript