timmc opened this issue 8 years ago
Cc: @mlsteele!
Hi @timmc, thanks for the input!
Those are definitely some good points; isolating the proof verification process would be a good thing. I've been working on something that is a step in that direction. Right now the way proofs are validated on the client is described in Go files like proof_support_coinbase.go. Going forward, validation will be driven by a small DSL interpreted by the client. The JSON-backed language will describe the steps a keybase client should go through to validate a proof. So in order to vet the HTTP fetching and inspection for all the services we support, you would only have to audit two things: the active DSL code and the (small) interpreter for the DSL.
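To make that a bit more concrete, here's a rough sketch in Go of what a JSON-backed script plus a tiny interpreter could look like. This is not the actual format (the real spec is linked further down); the field names like fetch, assert_regex, and assert_contains, and the stubbed fetcher, are invented for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
	"strings"
)

// Instruction is one step a client executes while validating a proof.
// The field names here are made up; see the real spec for the actual format.
type Instruction struct {
	Fetch          string `json:"fetch,omitempty"`           // URL to fetch
	AssertRegex    string `json:"assert_regex,omitempty"`    // pattern the fetched body must match
	AssertContains string `json:"assert_contains,omitempty"` // literal the fetched body must contain
}

// runScript interprets a list of instructions. The caller supplies the
// fetcher, so all network-facing code stays behind one small function value.
func runScript(script []Instruction, fetch func(url string) (string, error)) error {
	var body string
	for _, ins := range script {
		switch {
		case ins.Fetch != "":
			b, err := fetch(ins.Fetch)
			if err != nil {
				return err
			}
			body = b
		case ins.AssertRegex != "":
			re, err := regexp.Compile(ins.AssertRegex)
			if err != nil {
				return err
			}
			if !re.MatchString(body) {
				return fmt.Errorf("proof body did not match %q", ins.AssertRegex)
			}
		case ins.AssertContains != "":
			if !strings.Contains(body, ins.AssertContains) {
				return fmt.Errorf("proof body missing %q", ins.AssertContains)
			}
		}
	}
	return nil
}

func main() {
	raw := `[{"fetch": "https://gist.example/raw/proof.txt"},
	         {"assert_contains": "keybase-proof"}]`
	var script []Instruction
	if err := json.Unmarshal([]byte(raw), &script); err != nil {
		panic(err)
	}
	// Stubbed fetcher for the example; a real client would do the HTTP GET here.
	err := runScript(script, func(url string) (string, error) {
		return "... keybase-proof ...", nil
	})
	fmt.Println("validation result:", err)
}
```

The structure buys exactly what's described above: to audit the HTTP handling for every supported service, you read the active scripts plus one small interpreter loop.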
The other thing this gets us is that we can quickly adapt to changes in remote services. The plan is to have clients pull the validation DSL from the keybase server. You may not like the sound of that at first, but bear with me. If one of the services like Twitter changes their HTML layout, we will be able to quickly adapt to that change and get proof validations working again without everyone needing to upgrade their clients. This is especially important for iOS, which has roughly a one-week upgrade cycle. We will hash the DSL chunk into one of our published merkle trees, so clients will only execute new validation instructions if they can be sure they are seeing the same instructions as everyone else.

The threat model here is that if keybase servers are compromised, the attacker can push bad validation instructions causing clients to validate illegitimate proofs, but that action will be detectable by auditing the publicly published updates to the validation instructions. The merkle trees are all written to append-only logs, so a compromised server would never be able to show clients one version and publish another.
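A sketch of the hash-pinning part of that plan, under the assumption that the published merkle tree commits to a hash of the DSL chunk (the hash algorithm and function names below are my own stand-ins, not the real implementation):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
	"fmt"
)

// acceptValidationScript refuses to run a freshly downloaded script unless
// its hash matches the one the client extracted from the (already verified)
// published merkle tree.
func acceptValidationScript(script []byte, hashFromMerkleTree string) error {
	sum := sha256.Sum256(script)
	if hex.EncodeToString(sum[:]) != hashFromMerkleTree {
		return errors.New("validation script does not match the published hash; refusing to run it")
	}
	return nil
}

func main() {
	script := []byte(`[{"fetch": "..."}]`)
	published := fmt.Sprintf("%x", sha256.Sum256(script)) // stand-in for the merkle-tree lookup
	fmt.Println(acceptValidationScript(script, published))             // <nil>
	fmt.Println(acceptValidationScript([]byte(`tampered`), published)) // error
}
```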
As soon as I put the DSL spec into the repo, which will serve as a much better explanation, I'll refer to it in this issue.
@mlsteele The question of:

"A large part of this issue is simply the question: How many types of proof are we likely to see?"

is still somewhat orthogonal to the DSL, I think. We're still going to have proof code that knows how HTTP works and proof code that knows how DNS works, and perhaps others.
I expect we'll support quite a few future services for proofs, but the list of protocols we use to talk to those services won't need to grow much. We currently use only a couple (HTTP and DNS, as mentioned above), and that seems to cover all of what we want to do.
Here's the DSL spec: https://keybase.io/docs/client/pvl_spec
It would also be nice to be able to manually provide a proof to the client for verification via a file or stdin.
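The input side of that is small plumbing; here's a sketch of what reading a proof from a file or stdin could look like (the command and function names are invented for illustration, not part of the existing client):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// readProof loads raw proof material either from a file path or, when the
// path is "-", from stdin, so a user can hand the client a proof to check.
func readProof(path string) ([]byte, error) {
	if path == "-" {
		return io.ReadAll(os.Stdin)
	}
	return os.ReadFile(path)
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: verify-proof <file|->")
		os.Exit(1)
	}
	proof, err := readProof(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Hand the raw proof to whatever verification entry point the client
	// exposes (hypothetical here).
	fmt.Printf("read %d bytes of proof material\n", len(proof))
}
```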
As the list of proof types grows (see https://github.com/keybase/keybase-issues/issues/518) the keybase client will have an increasing quantity of code that goes out to the network, has an interaction, pulls down data, and inspects it. Code that handles untrusted data has a reputation for security holes, so as the quantity of this code grows, there should be some effort to isolate it from the rest of the application.
For the moment, most or all of the proof verifications involve grabbing a file over HTTP, parsing out a proof, and then inspecting it, which means the risk is mostly shared between the proof types. However, other types of proofs (e.g. blockchain proofs, such as Ethereum contracts) would require new code.
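One lightweight way to start on that isolation (a sketch of the general idea, not how the keybase client is actually structured) is to push all of the fetch-and-parse code behind a single narrow interface, so that it could later be moved into a separate, sandboxed helper process without changing any callers:

```go
package main

import (
	"context"
	"fmt"
)

// ProofResult is the only data allowed back across the trust boundary:
// a verdict plus a short human-readable reason, never raw remote content.
type ProofResult struct {
	Valid  bool
	Reason string
}

// ProofChecker is the narrow interface the rest of the application sees.
// Everything that touches the network or parses untrusted bytes lives
// behind it, so it could later run in a separate sandboxed process.
type ProofChecker interface {
	Check(ctx context.Context, proofURL string, expectedSig string) (ProofResult, error)
}

// httpProofChecker is a stand-in implementation; a hardened version could be
// the same code compiled into a helper binary with minimal privileges.
type httpProofChecker struct{}

func (httpProofChecker) Check(ctx context.Context, proofURL, expectedSig string) (ProofResult, error) {
	// Fetch proofURL, parse out the signature, compare with expectedSig...
	// (omitted here; this is where all the untrusted-input handling would go).
	return ProofResult{Valid: false, Reason: "not implemented in this sketch"}, nil
}

func main() {
	var checker ProofChecker = httpProofChecker{}
	res, err := checker.Check(context.Background(), "https://example.com/proof.txt", "deadbeef")
	fmt.Println(res, err)
}
```

The key constraint is the return type: only a verdict and a short reason cross the boundary, so a bug in the parsing code has less of the application within reach.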
A large part of this issue is simply the question: How many types of proof are we likely to see?
(See https://github.com/keybase/keybase-issues/issues/518#issuecomment-242433478 for the ill-placed discussion I inadvertently started on this topic -- some relevant chunks there.)