First, it does not deal with sharing of measurement data. Sharing of data sets (or making them available to reviewers) is increasingly expected. Ensuring that the data is protected, and that users whose information may be in the data set are protected, is important -- even when their presence is not obvious: timing analyses have gotten better over time, so data you thought was safe to release can be used to harvest unexpected information. Being clear that the researcher sharing the data is responsible for thinking about such questions and ensuring safety would help (and if the draft already says this and I missed it, my bad). In addition, some of the information the draft suggests removing may make it harder for third parties to audit the data, and I believe we need to think more carefully about that question.
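As a rough illustration of what taking that responsibility might look like in practice (my sketch, not anything the draft prescribes), a researcher could pseudonymize client addresses with a keyed hash before release: identities are hidden from recipients, yet an auditor who is later given the key can reproduce the mapping and spot-check rows. The column name and CSV layout are invented for the example, and note that this alone does nothing about the timing side channels mentioned above.

    import csv, hmac, hashlib

    def pseudonymize_ip(ip: str, key: bytes) -> str:
        # Stable keyed pseudonym: the same address and key always map to the
        # same token, so an auditor holding the key can verify the mapping.
        return hmac.new(key, ip.encode(), hashlib.sha256).hexdigest()[:16]

    def sanitize_flow_log(in_path: str, out_path: str, key: bytes) -> None:
        # Rewrites the (hypothetical) "client_ip" column; every other column
        # is copied as-is, which is exactly why timing and size fields still
        # deserve a separate review before the data set is shared.
        with open(in_path, newline="") as fin, open(out_path, "w", newline="") as fout:
            reader = csv.DictReader(fin)
            writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                row["client_ip"] = pseudonymize_ip(row["client_ip"], key)
                writer.writerow(row)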
Second, a lot of the concerns about active measurement center on implied consent. A number of thoughtful observers felt that efforts in the mid-2010s to understand censorship systems placed unknown individuals at risk: the experiments involved trying to send forbidden information to random IP addresses within the censored space, with no knowledge of what those IP addresses belonged to (e.g. someone's laptop) and thus the possibility that an individual would be flagged by the censorship system as a possible consumer of forbidden information. I think the community still lacks a consensus, but perhaps a good starting point is that implied consent is not acceptable for active measurements that may cause harm to individuals. This allows active measurement of infrastructure (web servers, etc.) but prohibits sending active measurements to individuals' devices (laptops, phones, smart watches, etc.). This would conform with the Kantian edict not to use a person for your ends without their consent.
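One way to operationalize that starting point -- offered only as a sketch of my own, where the policy check and the vetted-target list are hypothetical rather than anything the draft defines -- is to gate every outgoing probe on a check that the target is vetted infrastructure and not an arbitrary, unknown endpoint:

    import ipaddress

    def may_probe(target: str, vetted_infrastructure: set[str]) -> bool:
        # vetted_infrastructure is a researcher-curated set of infrastructure
        # addresses (web servers, resolvers, ...) identified -- and, where
        # possible, consented to -- before the experiment begins.
        addr = ipaddress.ip_address(target)
        if addr.is_private or addr.is_loopback or addr.is_multicast:
            return False  # never probe non-routable or special-use space
        # No probes to random, unknown endpoints that might be an
        # individual's laptop, phone, or watch.
        return str(addr) in vetted_infrastructure

    # Usage (send_probe stands in for whatever the measurement tool provides):
    # if may_probe(candidate, vetted):
    #     send_probe(candidate)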
Third, it does not deal with using data previously collected by others using questionable techniques (this relates to the first point). I note that, again, the larger community does not agree on this topic; the medical community still uses data taken in concentration camps in the 1940s. But, at a minimum, a recommendation to disclose that the data set is the subject of ethical concern makes sense.
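At the level of mechanism, such a disclosure could simply travel with the data set as provenance metadata. The following is a minimal sketch with invented field names, not a format the draft proposes:

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class DatasetProvenance:
        name: str
        original_collector: str
        collection_method: str
        ethical_concern: bool        # True if the collection method is contested
        concern_note: str = ""       # short pointer to the fuller ethics statement

    record = DatasetProvenance(
        name="example-reused-measurement-data",   # hypothetical data set
        original_collector="third party",
        collection_method="active probes sent without consent",
        ethical_concern=True,
        concern_note="Collection method is the subject of ethical debate; see the accompanying ethics statement.",
    )
    print(json.dumps(asdict(record), indent=2))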
I would also emphasize that a lot of these rules are starting points. A thoughtful experimental protocol, reviewed by others, may find better answers that enable certain important experiments.
To cite: https://dl.acm.org/doi/10.1145/2896816