An important detail in tech codes of conduct is spelling out consequences: https://www.chromium.org/conduct https://geekfeminism.org/about/code-of-conduct/
It's also good to be constructive:
If we want to keep it a simple sentence, I think that's fine – I'd definitely call it a "pledge", though.
I agree with this. @jfly and I have discussed this category of things many times, but I'll put my vote here as well.
I've never been under any "code of conduct", and it's hard for me to see what it would do in practice. Does anyone have real-world experience of the opposite?
The suggested phrasing doesn't seem to prohibit or prescribe any concrete conduct.
The Geek Feminism code deals with interpersonal interactions. Those have an offended party who can make a complaint. The same is not true if a Software Team member tampers with the DB, so I don't think that model applies to us.
The 5 "increasingly terrifying" vulnerabilities are real, and it would probably be useful to do something about them. I'm thinking logging of who does what, storing backups in read-only locations, making some powers available only to a smaller group, etc. Requiring more than one person to sign off on certain actions would be the gold standard, but I don't know of any practical way to do such things.
Hi! In my experience, having a code of conduct is a way to make things clear. It won't stop somebody from taking the website down and stealing the domain, but it will make clear what we commit to and what we have to take care of.
@larspetrus: Requiring more than one person to sign off on certain actions would be the gold standard, but I don't know any practical way to do such things.
You can do such things using ssss (Shamir's Secret Sharing Scheme) or similar tools, but that is definitely a harder way to work. I do not think the WCA threat model needs such a thing, yet.
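For the curious, here's a minimal sketch of the idea behind ssss: (t, n) threshold sharing, where any t of n shares reconstruct the secret but fewer than t reveal nothing. The prime, parameters, and function names below are just illustrative:

```python
# A minimal sketch of what ssss does: (t, n) Shamir secret sharing over a
# prime field. The prime and parameters here are purely illustrative.
import random

P = 2**127 - 1  # a Mersenne prime; the secret must be smaller than this

def split(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Reconstruct the secret via Lagrange interpolation at x = 0."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=123456789, t=2, n=3)
assert combine(shares[:2]) == 123456789  # any 2 of the 3 shares suffice
```

The real ssss tool handles all of this for you; the point is just that no single person ever holds the whole secret.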
The 5 "increasingly terrifying" vulnerabilities are real, and it would probably be useful to do something about them. I'm thinking logging of who does what, storing backups in read-only locations, making some powers available only to a smaller group, etc.
So, I think it will always be the case that a software team member can modify/leak information, and it's probably not easy to enforce audits in a way that can't be bypassed with a little work.
Registration transfer is certainly a risk, but there's legal recourse for that. (I wouldn't mind clamping down access to an account that can, say, only be accessed by Jeremy and one Board member, though.)
I think we can address the "everything can be deleted" issues by having some team members make cold storage backups. If it's easy to dump all the backups into a folder, I'd be happy to burn backups to an (encrypted) Blu-Ray once or twice a year. Since team members can access all the data anyhow, I think the "break glass" availability of cold backups definitely outweighs the risks.
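Encrypting the dump before burning could be as simple as this rough sketch (assuming the third-party `cryptography` package; the file names are made up):

```python
# A rough sketch of encrypting a backup dump before burning it to disc.
# Assumes the third-party `cryptography` package; file names are made up.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store this key somewhere safe, NOT on the disc
with open("wca_dump.sql", "rb") as f:
    ciphertext = Fernet(key).encrypt(f.read())
with open("wca_dump.sql.enc", "wb") as f:
    f.write(ciphertext)
```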
Also, if we version and hash/blockchain/Git-repo the public exports, we can make changes to existing results easier to see. That's good for general accountability, and we could possibly make it hard to make those changes without indicating who made the change and what reason they gave.
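As a rough illustration (not a concrete proposal; the file name and entry fields are made up), chaining each export's hash to the previous one would look something like:

```python
# A rough sketch (not actual WCA tooling) of hash-chaining each public export
# to the previous one, so retroactive changes to past results are detectable.
import hashlib, json, os, time

CHAIN_FILE = "export_chain.jsonl"  # hypothetical ledger, one JSON entry per export

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def append_export(export_path, author, reason):
    """Record a new export, linking it to the previous entry's hash."""
    prev = "0" * 64  # genesis value for the very first export
    if os.path.exists(CHAIN_FILE):
        with open(CHAIN_FILE) as f:
            prev = json.loads(f.read().splitlines()[-1])["entry_hash"]
    entry = {
        "export_sha256": sha256_of(export_path),
        "prev_entry_hash": prev,
        "author": author,    # who made the change
        "reason": reason,    # and the reason they gave
        "timestamp": int(time.time()),
    }
    # Hash the entry itself, so every later entry pins everything before it.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(CHAIN_FILE, "a") as f:
        f.write(json.dumps(entry) + "\n")
```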
Here are some of the practices we have in our company regarding elevated access to data:
1) Persons only get access to what they really need access to (for example, you could create different groups with different levels of elevation, based on trust).
2) The data owner and the manager of each person with elevated access receive a monthly overview of the persons with elevated access, and should take action if the situation has changed.
3) Persons with elevated access can only log in via single sign-on with two-factor authentication (this seems impractical here).
4) All usage is monitored: we create usage logs in a database that these persons do not have access to, and these logs are periodically reviewed (see the sketch below).
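A minimal sketch of point 4, an append-only usage log the monitored persons can't quietly rewrite. SQLite triggers are used here purely for illustration; in production you'd put the log in a separate database and grant the monitored accounts INSERT but not UPDATE/DELETE:

```python
# Illustrative append-only audit log. The table name, columns, and the sample
# row are made up; SQLite triggers stand in for proper per-account DB grants.
import sqlite3

conn = sqlite3.connect("audit.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS audit_log (
    id     INTEGER PRIMARY KEY AUTOINCREMENT,
    actor  TEXT NOT NULL,
    action TEXT NOT NULL,
    at     TEXT NOT NULL DEFAULT (datetime('now'))
);
-- Reject any rewrite of history at the schema level.
CREATE TRIGGER IF NOT EXISTS audit_no_update BEFORE UPDATE ON audit_log
BEGIN SELECT RAISE(ABORT, 'audit_log is append-only'); END;
CREATE TRIGGER IF NOT EXISTS audit_no_delete BEFORE DELETE ON audit_log
BEGIN SELECT RAISE(ABORT, 'audit_log is append-only'); END;
""")
conn.execute("INSERT INTO audit_log (actor, action) VALUES (?, ?)",
             ("some_admin", "viewed private registration data"))
conn.commit()
```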
I am not afraid of WST members changing roles of persons. That is easy to reverse. I am also not afraid of persons changing results. I am also not afraid of persons deleting our resources and data (assuming we have backups and can reverse stuff).
The biggest risks I see are:
1) persons copying and distributing private data or data that is owned by the WCA
2) persons modifying our software to create backdoors, initiate financial transactions, and so on
Do we believe this is still necessary, or is this already covered by the Code of Ethics?
I think we can let this go in favor of a WCA Code of Ethics =)
(Just to clear the air, there is no incident that occurred that caused me to create this issue. @lgarron suggested this and I think it's a good idea.)
Members of the software team actually have a lot of power. In increasing terrifyingness:
I want to make it clear that each and every one of these powers is necessary for us to do our jobs, and I'm not willing to reduce the number of people who have these powers. The fewer people who have these powers, the harder it is for improvements to be made, and more importantly, the smaller our bus factor becomes.
What do people think of a simple pledge that all incoming software team members have to make? Something cheesy but hopefully not too cheesy, like: