Closed: sh1ng closed this 5 years ago
It was added for a good reason. As I mentioned, cub is a massive repo and cloning it takes a long time. The speed is also unreliable; it sometimes takes much longer because GitHub throttles us, and it's another point of failure for the build process.
There has never been a need to do a recursive clone of the entire repo. If you're concerned about the extra scripts, I'm sure one can find some simpler way to achieve the same result.
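For reference, shallow cloning is what keeps that transfer bounded even when a repo's history is huge; a minimal sketch against the same repo (the depth value is illustrative):

```bash
# Fetch only the tip commit instead of the full history.
git clone --depth 1 https://github.com/NVlabs/cub.git
```

By contrast, the session below times a full clone, which now finishes quickly: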
```
jon@pseudotensor:~/Downloads$ time git clone https://github.com/NVlabs/cub.git
Cloning into 'cub'...
remote: Enumerating objects: 29, done.
remote: Counting objects: 100% (29/29), done.
remote: Compressing objects: 100% (27/27), done.
remote: Total 32671 (delta 7), reused 22 (delta 2), pack-reused 32642
Receiving objects: 100% (32671/32671), 16.55 MiB | 0 bytes/s, done.
Resolving deltas: 100% (28627/28627), done.
Checking connectivity... done.

real    0m1.848s
user    0m2.112s
sys     0m0.192s
jon@pseudotensor:~/Downloads$
```
Actually, it looks like cub finally cleaned up its history and removed the giant files that made cloning so slow before. So you may be right that this is effectively fixed.
I think we don't even need shallow cloning any more:
```bash
if [ "$spath" = "xgboost" ] || [ "$spath" = "LightGBM" ] || [ "$spath" = "tests/googletest" ]
```
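For illustration, here is a minimal sketch of how a per-submodule check like that could drive the clone step; the loop, the `--depth 1` value, and which branch gets the shallow treatment are assumptions, not the project's actual script:

```bash
#!/bin/bash
# Hypothetical sketch: shallow-clone only the large submodules named in the
# check above; everything else gets a normal full checkout.
set -e
for spath in xgboost LightGBM tests/googletest cub; do
    if [ "$spath" = "xgboost" ] || [ "$spath" = "LightGBM" ] || [ "$spath" = "tests/googletest" ]; then
        # Large histories: fetch only the tip commit.
        git submodule update --init --depth 1 "$spath"
    else
        # Small repos (e.g. cub after its history cleanup): a full clone is cheap.
        git submodule update --init "$spath"
    fi
done
```

Dropping the special case would collapse this to a single `git submodule update --init` for every path.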