Certainly! But it's convenient, so we are migrating from `http` to `https`. Well, I am trying to figure out how to do that and I will (unless someone wants to take a stab at teaching me how to do it in layman's terms), as it would be very nice to have this before or when 1.0 ships!
I am closing this in favor of https://github.com/fisherman/fisherman/issues/82
Please let's continue the discussion there. :smile:
Respectfully, HTTP vs. HTTPS is an orthogonal problem. In fact, if software is distributed with cryptographically secure signatures, HTTPS is not strictly necessary. And considering the well-documented problems with CAs, some people would prefer to avoid HTTPS altogether.
> we are migrating from `http` to `https`, well I am trying to figure out how to do that and I will (unless someone wants to take a stab at teaching me how to do it in layman's terms)
That's another part of the problem: running a secure HTTPS server is not something a layman can do. Many, if not most, HTTPS servers on the Internet are not correctly configured. Even experienced sysadmins and developers find it difficult, because it's very complicated (poorly designed/overengineered protocols tend to be that way).
The best solution is something that uses signed packages and/or signed git tags. cf. Debian package repositories, The Update Framework, etc.
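To give a concrete (if simplified) picture of the git-tag half of that, the maintainer-side steps are roughly the following; the tag name and messages here are only illustrative, not anything Fisherman-specific:

```sh
# One-time setup: generate a signing key (interactive).
gpg --gen-key

# Create a GPG-signed tag for the release (-s signs with your default key).
git tag -s 1.0 -m "fisherman 1.0"

# Publish the tag so users can fetch and verify it.
git push origin 1.0
```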
> But it's convenient
Security is never convenient. That's the point: having your system compromised is decidedly inconvenient.
I don't mean this in a disrespectful way, but telling people to pipe to shell shows that you don't understand the severity of the problem, and thinking that using HTTPS will solve it shows that you don't understand the problem to begin with.
I hope you'll change your mind. As for myself, I will be staying far away from your project, because any project whose purpose is to install executable code on other people's systems, and which doesn't take security seriously, is a project that won't touch my systems.
I am trying to help, by drawing attention to the problem and pointing toward good solutions, but it's up to you to make the right call. Good luck.
Not at all, I am all for improvements and moving forward, but:

> The best solution is something that uses signed packages and/or signed git tags. cf. Debian package repositories, The Update Framework, etc.
Can you help with setting that up? I have zero experience with that.
Now, I am most definitely not trying to be disrespectful, but you may also want to consider staying away from the following projects (if I look hard enough I am pretty sure I could find a lot more):
- Rust: `curl -sSf https://static.rust-lang.org/rustup.sh | sh`
- Meteor: `curl https://install.meteor.com/ | sh`
- OhMyZsh: `sh -c "$(curl -fsSL https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"`
> I hope you'll change your mind...
I am not following. What is the proposed change again?
> Can you help with setting that up? I have zero experience with that.
Well, if you wanted to distribute fish stuff as a repository of Debian packages, that would work, but it would also mean, of course, practically limiting your audience to those who use Debian or a Debian-based Linux distro.
I haven't used The Update Framework myself, but there is lots of documentation available on their web site if you wanted to integrate it into your project.
I pointed to those as examples and inspiration, not as drop-in solutions. There aren't any drop-in solutions, really, although using a Debian repo would probably be the closest you could get.
But again, that's sort of the point: it's a hard problem.
> you may also want to consider staying away from the following projects (if I look hard enough I am pretty sure I could find a lot more):
>
> - Rust: `curl -sSf https://static.rust-lang.org/rustup.sh | sh`
> - Meteor: `curl https://install.meteor.com/ | sh`
> - OhMyZsh: `sh -c "$(curl -fsSL https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"`
I don't follow Rust, but I did post a similar bug report on the Meteor project nearly two years ago, and it was met with a similar response: that TLS and CAs solve the problem. And I don't use zsh, but if I did, that is indeed a reason I would not touch OhMyZsh.
> I am not following. What is the proposed change again?
My proposal is basically this: projects which exist to install executable code on other people's systems should take security very seriously. China's Great Cannon attack shows that this is more important than ever, and we would be naive to assume that other nations and agencies don't have similar capabilities.
Using signed git tags isn't difficult or complicated, and doing something like using a Makefile to build tarballs and signing them with GPG isn't either.
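For instance, a rough sketch of the tarball route (file and tag names are only examples, and the archive step could just as well live in a Makefile target):

```sh
# Build a release tarball from a tag.
git archive --format=tar.gz --prefix=fisherman-1.0/ -o fisherman-1.0.tar.gz 1.0

# Create a detached, ASCII-armored signature next to it.
gpg --armor --detach-sign fisherman-1.0.tar.gz

# Users then verify the download before unpacking it.
gpg --verify fisherman-1.0.tar.gz.asc fisherman-1.0.tar.gz
```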
I'm just trying to explain how important the problem is. It's your project, so, of course, it's up to you what you want to do about it. :) Good luck.
@alphapapa
You said "Good Luck" twice which makes me think you are probably thinking I am either trying to evade the problem or scare you away from the project. So, let me start by saying, that's not so, please stay and help me understand the problem better.
Here are some things I could work on:

- Recommending users to try https://github.com/p-e-w/maybe
- Providing a `-n`/`--dry-run` option or a `DRY_RUN` env variable that users can set when running the script just to see what will happen.

I have a few questions, if you don't mind. What is the difference between:
```sh
curl url > install
# study the code
chmod +x install
./install
```

and

```sh
# study the code in url
curl url | sh
```
And what do "signed git tags" and "signing them with GPG" have to do with distributing the system? This project is on GitHub and will continue to be in the future. Are those things still relevant?
One more thing: since I obviously can't expect you to do any of this for me, can you point to some projects you follow (or not) that use the things you are recommending? Otherwise, there is no way I could possibly learn how to fix the "problem".
@alphapapa I was able to read the comments in the issue you created in the Meteor repository here:
I also read what @gschmidt had to say about it and noticed there was no reply to your last comment.
I read Sean Cassidy's article here, and the case about an incomplete script can be solved trivially by doing the same trick Meteor does. The problem about someone hacking install.fisherman.sh so that, based on the user agent, people installing Fisherman end up installing something else can be alleviated by having an `https` certificate and by trusting that Fisherman is not trying to ruin your computer.
Now, "trusting" Fisherman is an unavoidable step. Like, if you ever meet someone in person, how do you trust that person will not do something to harm you?
But the biggest problem, at least in my case, is that I just don't have a clue about the technical things you are talking about.
> Using signed git tags isn't difficult or complicated, and doing something like using a Makefile to build tarballs and signing them with GPG isn't either.
:confused: What?
Just like there is a Fisherman QuickStart Guide here, would you be so kind as to provide a "GPG and tarball signing tutorial to distribute your programs" quick start guide? That would be most helpful.
@alphapapa Would it be possible to set the User-Agent with `curl` so that we always get the same version of the script that users can verify in their browsers?
You said "Good Luck" twice which makes me think you are probably thinking I am either trying to evade the problem or scare you away from the project. So, let me start by saying, that's not so, please stay and help me understand the problem better.
Actually, I was trying to provide a sort of "graceful exit", implying that I respect your freedom to run your project as you see fit, and that I'm not going to try to browbeat you into changing your mind. :) But I'm happy to continue discussing it with you.
> Here are some things I could work on:
>
> Recommending users to try https://github.com/p-e-w/maybe
That's an interesting project! Thanks for sharing that. It doesn't sound like it's ready for "production" use yet, but it's definitely something to keep an eye on.
It reminds me of the `checkinstall` project, which is a mature, `LD_PRELOAD`-based solution that creates packages which can be installed by distro package managers. It's not especially useful for software which is intended to be installed in user homedirs, but it's very useful for system-wide packages.
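For reference, the usual checkinstall workflow is just a drop-in replacement for the final `make install` step (assuming an autotools-style project):

```sh
./configure
make
sudo checkinstall   # builds and installs a distro package instead of running a bare "make install"
```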
> Providing a `-n`/`--dry-run` option or a `DRY_RUN` env variable that users can set when running the script just to see what will happen.
That could definitely be useful, but it only protects against one attack vector, that of the install script. It doesn't protect against the code which gets installed being compromised. For example, if your install script copies a `.fish` file that contains a function, then someone who wanted to compromise users' systems could compromise that `.fish` file, which the dry run wouldn't detect. Then, when the user executed that function, it could do whatever it wanted.
> I have a few questions, if you don't mind. What is the difference between:
>
> ```sh
> curl url > install
> # study the code
> chmod +x install
> ./install
> ```
>
> and
>
> ```sh
> # study the code in url
> curl url | sh
> ```
Well, the difference is significant: in the first example, the code is downloaded once, and the code which is studied is stored locally. In the second example, the code is downloaded twice, and the code which is actually piped from `curl` is never inspected; there is no guarantee that the same code will be received both when it is studied in a browser and when it is downloaded by `curl`. And for an attacker, it would be trivial to have the server serve one file to a browser and a different file to `curl`.
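To make the first pattern concrete, the point is that the bytes you read are exactly the bytes you execute (the URL here is a placeholder):

```sh
# Download once, to a local file.
curl -o install.fish https://example.com/install.fish

# Read the same bytes you are about to execute.
less install.fish

# Run that exact local copy.
fish install.fish
```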
And what does "signed git tags" and "signing them with GPG" have to do with distributing the system? This project is on GitHub and will continue to be in the future. Are those things still relevant?
I'm a bit confused by your question. Signed git tags can be used to verify that a certain git commit was signed by a certain key and has not been tampered with. Assuming that the signer keeps his key secure and that his system is not compromised, this provides a degree of certainty that the code in a certain git commit is what the developer intended to release, rather than what an attacker wants to inject.
Without them, you rely on GitHub's internal security processes, on TLS, on CAs, on your browser's own correctness, etc. If any one of those is compromised, then it could be possible to compromise the code received by users.
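Concretely, the user-side check is only a couple of commands; the tag name here is hypothetical, and the signer's public key still has to be obtained and fingerprint-checked out-of-band:

```sh
git clone https://github.com/fisherman/fisherman
cd fisherman

# Check the tag's signature, and only check it out if verification succeeds.
git verify-tag 1.0 && git checkout 1.0
```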
Now, I wouldn't necessarily disagree with you if you said that it's unlikely that any of that would happen, and that it's even less likely that a relatively minor project such as this would be targeted. But that's beside the point. It's not likely that either of us will get into a car accident, but we still buckle our seatbelts when we go somewhere (I hope!).
> One more thing: since I obviously can't expect you to do any of this for me, can you point to some projects you follow (or not) that use the things you are recommending? Otherwise, there is no way I could possibly learn how to fix the "problem".
That's a good question. Please realize that I'm far from an expert myself! I'm just a relatively informed user and hobbyist developer who tries to keep these things in mind and stay somewhat current.
I think the most important thing to keep in mind is that "security is a process, not a product." Security is never "done." There is no silver bullet.
For general info, I recommend reading LWN.net and Bruce Schneier's blog. These will help keep you apprised of developments in the security community and help you develop a security-conscious mindset.
For specific technical examples, I recommend checking out The Update Framework. They have a lot of documentation there and are trying to provide a general solution to the problem of secure software distribution.
I also recommend studying Debian packaging systems. Debian has been doing secure software distribution for a long time. It's not simple, but if you can learn some of the principles their system applies to defend against attack vectors, you can apply those to any solution you choose or develop. There's extensive documentation available in the Debian manuals on the Debian site.
> Just like there is a Fisherman QuickStart Guide here, would you be so kind as to provide a "GPG and tarball signing tutorial to distribute your programs" quick start guide? That would be most helpful.
I'm not sure if I understand what you're asking for. I mean, you use `tar` to create tarballs, and you use `gpg` to generate signatures for files. If you haven't used either of those utilities before, then you should be able to find some good docs and examples with Google. If you have specific questions, I might be able to help answer those.
> Would it be possible to set the User-Agent with `curl` so that we always get the same version of the script that users can verify in their browsers?
That's a good, technical question. :) Let's see:
```
$ man curl | grep -i agent
       -A, --user-agent <agent string>
              (HTTP) Specify the User-Agent string to send to the HTTP server. Some badly done CGIs fail if this field isn't set to "Mozilla/4.0". To encode blanks in the string, surround the string with single quote marks. This can also
       See also the -A, --user-agent and -e, --referer options.
              user-agent = "superagent/1.0"
```
So, sure, you could do that. But what does that really prove? How do you know the user-agent you choose will match the user's browser? Are you going to update it every time a new browser is released? Are you going to make users customize it?
From an attacker's perspective, if they see an outdated user-agent, can they guess that it's a trick? What if they see a mismatch, like a "Linux" user-agent from an IP address that just made a request with a "Chrome/Windows" user-agent? Can they guess that it's a Windows user using Cygwin, and that the Linux user-agent is fake?
The point is simply that, while that might be a good idea, it doesn't solve the problem; it's not a silver bullet. And it could actually provide a false sense of security, or raise a red flag and give a more suspicious appearance.
I hope some of this is useful or helpful. :)
@alphapapa Thank you! This is the kind of reply I was looking for. In the future, I want to make my software more secure and reliable.
Like you said, there is no "silver bullet", but I would feel more comfortable facing a werewolf with an M134 Minigun loaded with regular bullets if I could not procure silver ones.
At the same time, I want to provide an "easy to install" option, one which is also "good looking", because I believe a successful solution must be both good and good looking, so I will look into getting `https` anyway. I've never done that, so it's a learning experience.
For the time being, I will also work on a `DRY_RUN` option and be more explicit about security in the install guide.
https://github.com/fisherman/fisherman/wiki/Installing-Fisherman#manual-install
The original commit introducing the `curl _ | fish` option: have a look, it's funny.
Yarr, a lad with a sense o' humor be a fine cap'n. ;)
Yeah, using TLS is not a bad idea if you can get it working, as long as people keep in mind that it's only one layer of security.
I agree that appearance and usability are important, of course. The one concrete suggestion I have is to encourage people to prefer the manual installation method, and to warn them that the "easy way" is also the least secure way. I guess that could put some people off, but at the same time, I think it's important to educate people and teach them to think about where the executable code they run on their systems comes from. And most people who are using Fish should be able to do that. :)
Keep up the good work! I'll keep an eye on your project. I haven't used any of the other fish frameworks, for various reasons, but if you can solve this issue, I might give it a try someday. :)
Thanks! I will work on some of the aforementioned fixes and give you a ping. In the meantime, and if you have time, you can try Fisherman using the manual install (`git clone` and `make`), which poses no risk AFAIC.
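Roughly like this (see the wiki link above for the exact steps; the `make` target may differ):

```sh
git clone https://github.com/fisherman/fisherman
cd fisherman
make
```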
:+1:
Well, of course, there's always some risk when installing software, even signed software. Someone could slip a malicious plugin into the index, past the review process (assuming there's a review process?), your GitHub account could be compromised and used to insert malicious code, a vuln in GitHub's software could allow malicious access, etc. Unlikely, of course, but always something to be aware of. :)
Again, I'm no expert, so I'm hardly qualified to review your work. But I look forward to seeing what you come up with! :)
Ha! There's always a way. Making Fisherman more secure when installing plugins is also on the roadmap.
For example, forbidding plugins from running any code after install (by default, or via a flag), and adding some mechanism for linting the code, both in terms of style and syntax.
I heard @colstrom is cooking something.
I have started working on a dry run for the installer, not a huge improvement, but it shows we are moving forward a little.
Added a new `try-me` mode to the installer:

```
curl -sL install.fisherman.sh | env TRY_ME=true fish
```

or

```
set -x TRY_ME true
curl -sL install.fisherman.sh | fish
```
Another minor but relevant update is wrapping the install script in a function, which prevents the script from partially executing if it starts running before it has been entirely downloaded.
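The idea is roughly this (a simplified sketch, not the actual installer): nothing executes until the final call, so a half-downloaded script just stops short instead of running a truncated prefix.

```fish
function __fisherman_install
    echo "installing fisherman..."
    # ...all of the real installation steps would go here...
end

# If the download was cut off, the closing "end" above and this call
# never arrive, so nothing has run at all.
__fisherman_install
```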
@bucaran, this looks like a nice feature; it lets users try fisher before actually installing it.
Well, actually, when I ran this thing, I only got:

```
fish_indent: src/proc.cpp:129: int get_is_interactive(): Assertion `is_interactive >= 0' failed.
fish_indent: src/proc.cpp:129: int get_is_interactive(): Assertion `is_interactive >= 0' failed.
fish_indent: src/proc.cpp:129: int get_is_interactive(): Assertion `is_interactive >= 0' failed.
fish_indent: src/proc.cpp:129: int get_is_interactive(): Assertion `is_interactive >= 0' failed.
fish_indent: src/proc.cpp:129: int get_is_interactive(): Assertion `is_interactive >= 0' failed.
```
And there is one more feature that I hope for (maybe I should open a new issue): a method to uninstall fisher, to give those who dislike it a warm goodbye.
@pickfire I know this could be a lot of work, but would you mind trying to run the above again using fish `2.2.0`, or checking out a commit closer to it?
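Something along these lines (assuming you build fish from source and that the release tag is named `2.2.0`):

```sh
# Check which fish is actually being run.
fish --version

# In a fish-shell source checkout, switch to the release tag and rebuild.
git checkout 2.2.0
```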
Closing, as the `TRY_ME` option was added; the move from `http` to `https` should be tracked in #82.
Well, I'll leave this last comment, and then I'll leave you to your project. :)
The only way to fix this problem is to distribute via GPG-signed tarballs and/or GPG-signed git tags. This is a common method used by the FOSS community, as well as widely used Linux distros like Debian. Anything else is wide-open for tampering, whether via MITM attacks (which TLS helps mitigate, but doesn't necessarily fix, especially if you distrust CAs), or via compromised servers.
I mean, you're not even providing unsigned checksums of tarballs, which is very commonly used in the FOSS community. It's hardly secure, but at least it provides another hurdle for an attacker to clear, as well as helping to guard against accidental corruption.
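Even that much is a one-liner on each side (the file name here is only an example):

```sh
# Maintainer: publish a checksum file next to the tarball.
sha256sum fisherman-1.0.tar.gz > SHA256SUMS

# User: check the downloaded tarball against the published checksums.
sha256sum -c SHA256SUMS
```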
Frankly, the fact that you're not taking any of these obvious, straightforward, and commonly used measures, even after being informed of them, shows that you aren't taking your users' security seriously. I don't say this to be rude, just to be frank. Why should anyone trust you to install code on their systems when you aren't taking their faith in you seriously? It's a weighty responsibility, but you seem to be shrugging it off as if it were nothing.
Well, it's your project, so of course it's your prerogative. I tried. :)
@alphapapa I think it is nice to maintain the security of an open source project. Thanks for this nice idea.
Fisherman consists mostly of shell scripts. During the installation, people could just create another shell script and remove the verification part, since we are using automatic installation; this is probably the challenging part we are facing.
> During the installation, people could just create another shell script and remove the verification part, since we are using automatic installation; this is probably the challenging part we are facing.
I don't understand what you mean. If an attacker could edit your scripts and remove the validation code, it would mean that he had already compromised the system being installed on, and he could do whatever he wanted. It wouldn't be necessary to use your code as an attack vector; he would have already broken in.
@alphapapa Does it make any difference that this project is shell scripts only?
EDIT: I see you answered in the other issue, so please disregard this comment.
@alphapapa That's what I am talking about: he could spoof the shell script and replace the validation code with something nasty.
@pickfire Who is "he"? If "he" has the ability to replace the validation code, that means that he already has access to the user's filesystem, which means he can already do whatever he wants. It means he can already execute whatever code he wants. The game is already over at that point.
The issue we are discussing here is securing the process of downloading code to fisherman users' systems.
@alphapapa I don't even know who is "he". "He" can be anybody.
If the process of downloading code is disrupted by "he" ("he" has no access to the user's filesystem), if "he" can spoof the shell script or do something like DNS poisoning (I am a newbie at cracking), "he" could make the code nasty like I said and execute whatever he wants on the user's system.
So why just secure the process of downloading code? The process of `curl get.fisherman.sh | fish` might have already destroyed your system if "he" knows how to compromise that in the first place.
@pickfire I feel like we're talking past each other to some extent, but also you don't seem to understand what I'm saying.
> So why just secure the process of downloading code? The process of `curl get.fisherman.sh | fish` might have already destroyed your system if "he" knows how to compromise that in the first place.
That's the whole point here: to secure the code distribution process. Piping from curl to the shell makes that impossible to begin with. But even switching to a download-then-execute model isn't inherently secure. That's why it requires strong crypto.
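Concretely, the minimum I'm describing looks something like this; the URLs and file names are placeholders, and the signing key still has to be obtained and checked separately:

```sh
# Fetch the script and a detached signature for it.
curl -sLO https://example.com/install.fish
curl -sLO https://example.com/install.fish.asc

# Verify the signature, and only then execute the verified file.
gpg --verify install.fish.asc install.fish && fish install.fish
```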
Here's a comprehensive document I just discovered. It was written years ago and covers the principles used to e.g. secure Linux distros' software distribution. I highly recommend reading at least section 1 to get an overview of the principles involved: http://www.cryptnet.net/fdp/crypto/strong_distro.html
'Tis a bad idea, mateys.
"Prepare to be boarded"? Yarr, lads, it even sounds scurvy!