AllenDowney / ThinkStats2

Text and supporting code for Think Stats, 2nd Edition
http://allendowney.github.io/ThinkStats2/
GNU General Public License v3.0

Suggestion to use shallow git clone to download repository #58

Closed dlorch closed 2 years ago

dlorch commented 7 years ago

The repository with its full history is quite big, so cloning it locally as you suggest in chapter 0.2 might transfer more data than needed. Someone who just wants to go through the exercises might be better advised to shallow clone your repository:

$ git clone --depth 1 https://github.com/AllenDowney/ThinkStats2.git
Cloning into 'ThinkStats2'...
remote: Counting objects: 420, done.
remote: Compressing objects: 100% (275/275), done.
remote: Total 420 (delta 150), reused 368 (delta 145), pack-reused 0
Receiving objects: 100% (420/420), 133.64 MiB | 1.50 MiB/s, done.
Resolving deltas: 100% (150/150), done.
Checking connectivity... done.

In fact, downloading the ZIP might be the best option altogether.
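A minimal sketch of the ZIP route in Python, for readers who want to script it: the archive URL below assumes GitHub's branch-snapshot endpoint and a branch named "master", neither of which is stated in this thread.

```python
import io
import urllib.request
import zipfile

# Assumed archive URL: GitHub serves a ZIP snapshot of a branch at this
# endpoint; the branch name "master" is an assumption, not from the thread.
ZIP_URL = "https://github.com/AllenDowney/ThinkStats2/archive/master.zip"

def download_and_extract(url=ZIP_URL, dest="."):
    """Fetch a ZIP archive and unpack it into the dest directory."""
    with urllib.request.urlopen(url) as resp:
        archive = zipfile.ZipFile(io.BytesIO(resp.read()))
    archive.extractall(dest)
```

This transfers only the current snapshot, with no git history at all, which is even less data than a shallow clone.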

AllenDowney commented 7 years ago

I did not know about shallow cloning. That's a good idea. Thanks!


Shumakriss commented 6 years ago

I would like to make an alternate suggestion. The repository contains only 264 commits, which is actually a rather short history. Eventually users will need to update the code, and updates that include changes to the datasets will be slow to download. The pattern I have seen most often is for the code or the build tools to download the data files automatically from a separate hosting site. This also reduces the complexity of merges, since binary files like zip/gz/tar do not mix well with text-based version control.
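A minimal sketch of that download-on-demand pattern, with a hypothetical URL and helper name (neither the URL nor the function comes from the book or this thread):

```python
import os
import urllib.request

# Hypothetical hosting location for a dataset; the real book data ships
# inside the repository, so this URL is purely illustrative.
DATA_URL = "https://example.com/data/2002FemPreg.dat.gz"

def fetch_if_missing(url, filename):
    """Download a data file only if it is not already present locally."""
    if not os.path.exists(filename):
        urllib.request.urlretrieve(url, filename)
    return filename
```

With a helper like this, the repository can carry only code, and each dataset is fetched once on first use rather than on every clone or pull.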

AllenDowney commented 6 years ago

That is also a good suggestion. But here's my problem: most of the email I get about this book (and my other books) comes from people who have not used Git before, people who don't use command line tools, and people who, after they have downloaded a file, don't know where in their file system to find it. So I am trying to keep this process very, very simple. Right now, worrying about the size of the download is not the highest priority, but I will keep it in mind.
