I'm proposing that we add an access control model to the documentation. Since most of us here are technically minded, we want to dig into the code and get right into programming. While that's great for productivity, it's not so great for security, because it keeps us from stepping back to take a higher-level look at what we're trying to achieve. If we don't take the time to examine our access control model and really think about what makes Tox "secure", we're inviting security problems down the road, and we'll never have that gut feeling of trust in our own software.
The concept of access control models is relatively new and gets some criticism because writing one takes a lot of time, but I think it's necessary for any claim that software is secure. When people claim their software is "secure" without a logical basis to back up the claim, others become immediately skeptical. What makes software "secure", and how do you even define it? Every system makes assumptions, and with every assumption there's the possibility that it can be violated.

For instance, every time I leave my house, I lock the front door. In my access control model I'm assuming that the front door is the only way into the house, that the only way to open the front door is with a key, and that only I have access to the key. So now if someone asks whether my house is "secure", I can say yes and disclose these assumptions of trust. Once I disclose them, it's up to YOU to decide whether the model is trustworthy. If you look at those trust assumptions and think they're reflected well in my implementation (I don't have any windows or a back door, I have a very expensive lock, and I take good care of my key), then you can have that feeling of trust in my system and can visit my house without being nervous about someone breaking in. However, if you think my trust assumptions are unsound (maybe I have a window someone can throw a rock through, or a lock that can easily be knocked off with a hammer, or lots of duplicate keys I don't keep track of), then you can say you don't find the system trustworthy and can identify the trust assumptions I need to fix before you're willing to sleep over at my house.
What I'm talking about is defining security so that we're not just using the word as a buzzword; it will actually have a defined meaning. Writing an access control model will force us to state every assumption of trust the system relies on to be "secure", so we'll have an actual definition behind the claim. Once the assumptions are in place, we can look at each one and identify the damage that would be caused if it were violated. The model will also identify all the entry points into the system, so we'll know where it's vulnerable and where to focus our energy on making it secure. This kind of disclosure will add an increased level of trust in the system, not just for us programmers but for everyone who uses it.
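For illustration, an entry in such a model might look something like this. The wording, and the assumption itself, are hypothetical examples of mine, not anything we've agreed on:

```
Entry points:  key file on disk, DHT packets, friend requests, A/V streams

Assumption A1: a friend's public key obtained out-of-band really belongs
               to the person it claims to.
If violated:   an attacker can man-in-the-middle the friend-adding step
               and chat as someone they are not.
```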
Another benefit of a formal access control model is that it puts us all on the same page. Recently in the IRC channel there was a debate over whether a public key is sufficient to identify a user, or whether the key material should additionally be wrapped in symmetric encryption so that the system models two-factor authentication instead. Some people thought the public key alone was secure, while others believed the added encryption was necessary. If we define the access control model up front, we can settle on the base trust assumptions we all agree on, then commit to those assumptions and enforce them as best we can. Otherwise, a couple of months down the road we'll be saying "maybe this public key ISN'T secure by itself" when we're already so far into the programming that changing it would mean significantly redesigning the system. The sooner we make this model, the better. (See the sketch below for what the two-factor option might look like.)
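To make the two-factor side of that debate concrete, here's a minimal sketch assuming modern libsodium. None of this is Tox's actual code, and every name in it is hypothetical: factor one is possession of the key file, factor two is a passphrase whose derived key wraps the secret key before it ever touches disk.

```c
/* Hypothetical sketch: wrap an identity secret key with a
 * passphrase-derived symmetric key (two-factor authentication). */
#include <sodium.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (sodium_init() < 0) return 1;

    /* Factor one: the long-term identity keypair. */
    unsigned char pk[crypto_box_PUBLICKEYBYTES];
    unsigned char sk[crypto_box_SECRETKEYBYTES];
    crypto_box_keypair(pk, sk);

    /* Factor two: derive a symmetric key from a passphrase. */
    const char *passphrase = "correct horse battery staple";
    unsigned char salt[crypto_pwhash_SALTBYTES];
    randombytes_buf(salt, sizeof salt);

    unsigned char key[crypto_secretbox_KEYBYTES];
    if (crypto_pwhash(key, sizeof key,
                      passphrase, strlen(passphrase), salt,
                      crypto_pwhash_OPSLIMIT_INTERACTIVE,
                      crypto_pwhash_MEMLIMIT_INTERACTIVE,
                      crypto_pwhash_ALG_DEFAULT) != 0) {
        return 1; /* key derivation ran out of memory */
    }

    /* Wrap the secret key; only salt, nonce, and ciphertext get stored. */
    unsigned char nonce[crypto_secretbox_NONCEBYTES];
    randombytes_buf(nonce, sizeof nonce);

    unsigned char wrapped[crypto_secretbox_MACBYTES + crypto_box_SECRETKEYBYTES];
    crypto_secretbox_easy(wrapped, sk, sizeof sk, nonce, key);

    /* Unwrapping requires BOTH the stored file and the passphrase. */
    unsigned char recovered[crypto_box_SECRETKEYBYTES];
    if (crypto_secretbox_open_easy(recovered, wrapped, sizeof wrapped,
                                   nonce, key) != 0) {
        puts("wrong passphrase or corrupted key file");
        return 1;
    }
    puts("secret key unwrapped");
    return 0;
}
```

In a real implementation the salt and nonce would be stored alongside the wrapped key, and the symmetric key would be re-derived from the passphrase at load time. The point is only that identifying as the user then requires both the key file and the passphrase, which is exactly the trust assumption the model would have to record.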
Cryptocat has a model along these lines, but my criticism of it is that it doesn't capture the vulnerabilities of the encryption itself. It never states the assumption that only the user has access to the corresponding private key, or that only authorized users in a conversation have the symmetric key. It wouldn't be sound to take confidentiality or authentication for granted, so these are exactly the kinds of assumptions an access control model needs to capture.
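Written out as model entries, the assumptions I think Cryptocat's model is missing (and that ours should state explicitly) might look like this, again with hypothetical wording:

```
Assumption A2: only the user has access to the private key matching
               their public key.
If violated:   authentication fails; anyone holding the key can
               impersonate the user.

Assumption A3: only the authorized participants of a conversation hold
               its symmetric key.
If violated:   confidentiality fails; outsiders can read messages even
               though the traffic "looks" encrypted.
```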
I'm going to start writing up a basic model to be included in the documentation. If anyone wants to help me think through the trust assumptions our software relies on, that would be great. It'll also help if someone is willing to make diagrams to visualize the model. There are also lots of details of the system I'm not familiar with (like whether we make any trust assumptions inherent in the DHT protocol), so I'll need to chat with people who know more about how that works.
By having this access control model, we'll be able to say "Tox is secure" with a straight face, because we'll have a logical basis to back up the claim. Without it, we're just relying on intuition and gut feeling, which have a tendency to be flawed, and we'll have a hard time convincing other people that our system is secure.
TL;DR: I think we should think more about what we're assuming in terms of security and write those assumptions up in the documentation. This will put us all on the same page about how we define security, and it will make us aware of our system's vulnerabilities so we know where to focus our energy on making it secure.
Reposting from https://github.com/irungentoo/toxcore/issues/267
@sodwalt commented on Aug 2, 2013