GuillaumeGomez closed this 7 months ago
@tempdragon: I guess you could do the git clone manually with --depth 1
in the correct directory or do it once where you have good internet connection.
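The suggested manual shallow clone can be sketched as below. To keep the example self-contained and offline, it clones from a throwaway local repository instead of GitHub; the repository names are illustrative only:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Build a tiny local repository standing in for the remote (illustrative).
git init -q origin-repo
git -C origin-repo -c user.email=a@b -c user.name=a commit -q --allow-empty -m "first"
git -C origin-repo -c user.email=a@b -c user.name=a commit -q --allow-empty -m "second"

# --depth 1 downloads only the most recent commit, not the full history,
# which keeps the transfer small on a slow connection.
git clone -q --depth 1 "file://$workdir/origin-repo" shallow-copy
cd shallow-copy
git rev-list --count HEAD   # prints 1: history is truncated to one commit
```

The same `git clone --depth 1 <url>` invocation works against GitHub; only the URL changes.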
Well, I found a better way to access the internet, so reaching GitHub is no longer a problem for me. However, this may still cause problems for those with poor connections.
I propose offering an option in the script, for example one that can be specified manually (e.g. separating the downloading/cloning logic instead of hard-coding the download command), to give people on poor network connections a better choice, while keeping the full clone as the default. If this is going to be refused, please tell me here so that neither of us wastes time on it. (I guess I will start working on it after my work on debuginfo is done or proved wrong.)
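The proposed option could look something like the following. This is purely a hypothetical sketch, not the actual build-script code: the clone_rust helper, its "shallow"/"full" flag, and the printed command are all illustrative.

```shell
# Hypothetical sketch of the proposal: a configurable clone helper, so
# users on slow connections can opt into a shallow clone while a full
# clone stays the default. Names and flags are illustrative.
clone_rust() {
    # $1: "shallow" or "full" (illustrative flag)
    if [ "$1" = "shallow" ]; then
        depth_args="--depth 1"
    else
        depth_args=""
    fi
    # Print the command instead of running it, so this sketch needs no network.
    echo "git clone $depth_args https://github.com/rust-lang/rust"
}

clone_rust full      # default: full history
clone_rust shallow   # opt-in: small download for poor connections
```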
I do respect the conclusion, even though my first-ever contribution to a real open source project was reverted (though I still wish to see better reasons or evidence of what exactly happened in the case mentioned by GuillaumeGomez, just out of curiosity). Could you please consider disclosing the discussions so that I may learn from them? @antoyo @GuillaumeGomez
Sorry, we didn't mean to be rude or anything; we should have explained more.
Well, it's more a question of opinions than anything else, so there won't be much to learn here.
It is already possible to do the clone manually as explained in these instructions. So, not including this PR should not prevent you from doing what you wanted (please tell us if this is not the case). Or perhaps you just wanted to automate these steps?
For the PR, I first wanted to ask for a comment in the code, as I didn't know what one of the git commands would do, just to make sure people in the same situation would not have to go read the documentation to understand what the code was doing. However, the PR was merged before that. That's when I had a discussion with @GuillaumeGomez, who somehow hadn't read all the code, and we concluded that it makes the build script more complex to do something that is already possible to do manually.
Again, sorry for not explaining more the first time.
Btw, I just realized I didn't answer the proposal: given that it's already possible to do this manually, is it still something you want? If so, we can discuss the proposal.
Or perhaps we could make the instructions in the readme clearer?
Well, the point is not gcc, but Rust. The script allows using a custom build of gcc, but even if you do a manual shallow clone and then fetch and check out the desired commit by hand, the script will still re-clone the whole repo unless you change it. On a poor connection it is not possible to receive more than 40 MiB of data from GitHub per clone, so cloning the whole Rust repository is completely impossible. That is the reason for my original commit.
Oh, sorry, I was confused.
You should be able to shallow clone the Rust repo manually: the script is supposed to do a git fetch and then a git checkout; if this doesn't work, let me know and we'll find a solution.
Or maybe the git fetch is the operation that is too slow and you would like to make it faster?
If so, would having a command to give you the git commit hash of the Rust repo be sufficient for your needs (so you could do the git clone manually for the right commit)?
If not, we could talk about having a command to clone the repo or something.
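For what it's worth, the fetch-and-checkout flow can itself be kept shallow once you know the commit hash: git can fetch a single commit by SHA with --depth 1, provided the server permits fetching unadvertised objects (uploadpack.allowAnySHA1InWant; GitHub is generally understood to allow this). A minimal sketch against a throwaway local repository, with illustrative names:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Tiny local repo standing in for rust-lang/rust (illustrative).
git init -q origin-repo
git -C origin-repo -c user.email=a@b -c user.name=a commit -q --allow-empty -m "old"
git -C origin-repo -c user.email=a@b -c user.name=a commit -q --allow-empty -m "wanted"
git -C origin-repo -c user.email=a@b -c user.name=a commit -q --allow-empty -m "newer"
wanted=$(git -C origin-repo rev-parse HEAD~1)

# Let the "server" serve arbitrary commits by hash (GitHub enables this).
git -C origin-repo config uploadpack.allowAnySHA1InWant true

# Shallow-fetch exactly the desired commit, then check it out.
git init -q work
cd work
git remote add origin "file://$workdir/origin-repo"
git fetch -q --depth 1 origin "$wanted"
git checkout -q FETCH_HEAD
git log -1 --format=%s   # prints "wanted"
```

This downloads one commit's worth of data instead of the whole history, which is the scenario a poor connection cares about.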
Reverts rust-lang/rustc_codegen_gcc#432
After further discussions with @antoyo, we came to the conclusion that, rather than using extra complex commands, downloading the whole git history takes more disk space but is simpler to maintain as a whole. Like I said, I'm not a big fan of --depth because of issues I encountered with it in the past, so at least that's off my mind. Thanks in any case @tempdragon!