OWASP / ASVS

Application Security Verification Standard
Creative Commons Attribution Share Alike 4.0 International

new requirement - limit/check sending sensitive data to 3rd parties #1274

Closed: elarlang closed this issue 9 months ago

elarlang commented 2 years ago

Sensitive data must be classified into protection levels - covered by 1.8.1:

V1.8.1 Verify that all sensitive data created and processed by the application has been identified and classified into protection levels, and ensure that a policy is in place on how to deal with sensitive data.

I would like to have a separate and clear requirement for checking that sensitive data is not sent to 3rd (or, in general, untrusted) parties, such as Google Analytics and other user-tracking solutions.

The risk is that data will be collected outside the controlled application, similar to logging sensitive data to logs.

The requirement should probably belong to V8.1.

csfreak92 commented 2 years ago

I like this requirement. The only question I have is about its implementation. I cannot think of any clear solutions on first reading, but I think it is important to prevent any form of information leakage.

yosignals commented 2 years ago

I have some big thoughts on telemetry, and I think there should be a declaration schema. That might be an external thing to ASVS (pedantically, privacy isn't a security concern), but having thought about this problem, I believe a declaration schema is the only useful, viable way to approach telemetry / data extraction / excessive sharing.

elarlang commented 2 years ago

Implementation - first you need documentation of what kind of data is allowed to be transferred to a 3rd party and what kind is not (requirement 1.8.1). You need a business-logic decision / requirement / metric that answers at least the question: WHY does this data need to be sent to a 3rd party?

I think this is close to never being done. Google Analytics and other similar tools are just added to the application by default. And then it comes as a surprise when some "telemetry" collects authentication form submissions, i.e. user credentials in plain text.

So, if you cannot set limits with configuration, you should not use it.
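One way to "set limits with configuration" at the browser level (not discussed in the thread, just an illustrative sketch with hypothetical origins) is a Content-Security-Policy `connect-src` allowlist: the browser refuses fetch/XHR/beacon requests to any origin not listed, so an embedded tracker cannot ship form contents to an unlisted endpoint.

```python
# Minimal sketch: build a CSP `connect-src` allowlist header value.
# The browser enforces it: outgoing fetch/XHR/sendBeacon calls to origins
# not on this list are blocked. Origins below are hypothetical examples.

ALLOWED_ORIGINS = [
    "'self'",                          # the application's own origin
    "https://analytics.example.com",   # the one vetted telemetry endpoint
]

def csp_connect_src(origins):
    """Build the Content-Security-Policy value for a connect-src allowlist."""
    return "connect-src " + " ".join(origins)

# Attach this as the Content-Security-Policy response header:
print(csp_connect_src(ALLOWED_ORIGINS))
# connect-src 'self' https://analytics.example.com
```

This does not replace the documentation step in 1.8.1; it only enforces the "where" once the allowed destinations are decided.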

elarlang commented 2 years ago

Help needed for word-smithing the requirement.

yosignals commented 2 years ago

My thoughts here are: limiting telemetry isn't a security concern, it's a privacy issue, dependent on what the business wants and what the user accepts. Personally I love zero telemetry, but telemetry isn't the sign of a bad app; it's just the sign of a business capitalising on data, and that might be what funds the app 🤷‍♂️. Perhaps a subset of privacy-specific goals would be nice, but businesses and their partner integrations want to scoop it all up for good and bad subjective reasons. If the goal is to stimulate conversation, it might be worth keeping in, in terms of declaring the owner of that telemetry piece - I struggle to place it in ASVS though.

elarlang commented 2 years ago

If an application leaks sensitive information to a 3rd party, it is a clear security concern - it's a confidentiality issue: someone who is not authorized to own or see the data is able to do so.

I have seen how user-password form content travels as "telemetry" to 3rd party. Privacy issue?

Or maybe I just need to point again to the issue title, which focuses on sensitive data - data that is defined as sensitive by the documentation (see requirement 1.8.1)?

yosignals commented 2 years ago

We have security officers for security issues and privacy officers for privacy issues.

Leaking information isn't the same as allowing information.

This is a permission issue if it's a vulnerability such as IDOR; that has nothing to do with telemetry.

A privacy issue is a privacy issue; an information leak is a control-boundary failure.

I think we're in the refinement phase

elarlang commented 2 years ago

I have seen how user-password form content travels as "telemetry" to 3rd party. Privacy issue?

yosignals commented 2 years ago

Qualifying the weight of data collected and data processed is a task for the organisation's data protection officers. The role of an application defender is to ensure the integrity, confidentiality, and availability of information (generally speaking). If the scope creeps into data governance, it deviates from its focus and strength and will need a DPO, a role that generally doesn't reside in the agile/appsec space.

elarlang commented 2 years ago

In general - if leaking passwords (as an example of sensitive data) is not a security issue, we can say that most misconfigurations are not security issues either.

I struggle to place it in ASVS tho

take a look here: https://github.com/OWASP/ASVS/blob/master/5.0/en/0x16-V8-Data-Protection.md#v83-sensitive-private-data

"That's out of scope" - Said no attacker ever.

From a security (tester) or attacker perspective, it does not matter to me who in the organization is responsible for the problem - if sensitive data is leaked from the application, it is a problem in the application.

yosignals commented 2 years ago

I think you're confusing scope with responsibility.

No one said it's not a concern.

I don't agree that *privacy* teams' issues are *security* teams' issues.

The link you provided on CIA is attesting to the CIA of the data. Telemetry is the actual agreed data as per DPO sign-off; security ensures that agreed data gets there safely. If that telemetry data exceeds the agreed scope of information leaving, privacy teams need to ensure with 3rd parties that they know before the change happens.

Have you ever worked on 3rd-party data-sharing agreements? They are quite explicit about the collection, why it happens, and how it benefits business goals. Unless you're an org that blindly installs without considering privacy - and if you're in that team, I worry they'll be years away from appreciating ASVS.

elarlang commented 2 years ago

We don't need to keep going back and forth with this.

We don't need to, but if you have this vision, someone else may have it as well, so I keep going.

In the issue description:

The risk is that data will be collected outside the controlled application, similar to logging sensitive data to logs.

Based on your statements so far, should we remove all log-related and, for example, GDPR-related requirements from ASVS?

if that telemetry data exceeds the agreement of information leaving

Who is going to check or verify that? Is the privacy team going to check the application technically?

yosignals commented 2 years ago

Let me make it easier.

Do you think it's security's responsibility to do privacy's job?

If your answer is yes, you fail to recognise the systemic challenge of washing away the obnoxiousness of security professionals who are too scared to let others do their jobs within their respective domains - or, better yet, to work with them enough to worry less about the quality.

Telemetry is a privacy issue. That's not to say security can't help; it's the ownership and inclusion that you've yet to understand.

elarlang commented 2 years ago

I may fail at a lot of things; one of them is understanding your agenda here :)

Can you please answer my questions from my previous comment?

elarlang commented 2 years ago

Do you think it's security's responsibility to do privacy's job ?

you fail to recognise the systemic challenge of washing away the obnoxiousness of security professionals too scared to let others do their job

With this attitude we might say that we don't need any security team at all - because we trust our developers and administrators, so why recheck them?

If you are on the security verification side, you really don't care WHO made the mistake (which caused the sensitive information leakage) or WHY it was made - your responsibility is to detect it and report it. The problem is spotted in the application, and it does not matter whether it was the responsibility of the developer, the administrator, the privacy team, or the security team.

We are here in the ASVS repository; it stands for Application Security Verification Standard. The point of a standard is to spot the weaknesses.

Let me make it easier.

Yes, it's easier now. It's clearer now why we need this requirement - for those who have failed to understand.

cmlh commented 2 years ago

@elarlang states:

I would like to have a separate and clear requirement for checking that sensitive data is not sent to 3rd (or, in general, untrusted) parties, such as Google Analytics and other user-tracking solutions.

This would cause conflict when it has a direct dependency on revenue such as https://twitter.com/thezedwards/status/1528808790845362176 for example.

Therefore, I'd support this new requirement as a fork.

elarlang commented 2 years ago

No, it's not a conflict.

1st line in issue: "Sensitive data must be classified into protection levels - covered by 1.8.1"

An example: you (your privacy team!) have mapped and declared what IS sensitive data and what is not, and what can travel to 3rd parties and what cannot.

My proposed requirement addresses the situation where the application leaks sensitive data that is NOT allowed to travel to 3rd parties. It does not apply to the entire dataflow to trackers.
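The two-step logic above (first declare, then verify against the declaration) can be sketched as data. Everything below is a hypothetical illustration, not ASVS material: the field names, protection levels, and party hostnames are invented; the point is only that a deny-by-default lookup against the declared policy is what the proposed requirement would verify.

```python
# Hypothetical declaration per requirement 1.8.1: each field gets a
# protection level and an explicit set of 3rd parties it may be sent to.
CLASSIFICATION = {
    "password": {"level": "secret",   "allowed_parties": set()},
    "email":    {"level": "personal", "allowed_parties": {"crm.example.com"}},
    "page_url": {"level": "public",   "allowed_parties": {"analytics.example.com"}},
}

def may_send(field, party):
    """Deny by default: only fields explicitly declared for this party pass."""
    entry = CLASSIFICATION.get(field)
    return entry is not None and party in entry["allowed_parties"]

assert not may_send("password", "analytics.example.com")       # never leaves
assert may_send("page_url", "analytics.example.com")           # declared flow
assert not may_send("session_token", "analytics.example.com")  # unclassified -> deny
```

The deny-by-default branch is the part that matters: anything the privacy team did not classify simply cannot travel.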

... and in general, it's quite interesting to read these kinds of statements. In ASVS we have requirements (see 6.1.*) to encrypt sensitive data to protect it better, but then some guys come and say it's OK to send it to a 3rd party...

cmlh commented 2 years ago

... and in general, it's quite interesting to read these kinds of statements. In ASVS we have requirements (see 6.1.*) to encrypt sensitive data to protect it better, but then some guys come and say it's OK to send it to a 3rd party...

Probably marketing and sales who have a greater influence with the CEO.

elarlang commented 2 years ago

I need some word-smithing and validating help here, but I'll try to move this forward. For a starter: Verify that defined sensitive data is not sent to untrusted parties (e.g. user trackers).

csfreak92 commented 2 years ago

@elarlang, I like the requirement, but it sounds too general and vague in its current form, and I'm afraid it would be hard for anyone to understand and implement: Verify that defined sensitive data is not sent to untrusted parties (e.g. user trackers). Maybe we can expand it more? I just don't know what else we have to add, but my thoughts would be: how is sensitive data defined in the context of the application/system? How do we define which parties are untrusted and which are not, as some 3rd-party libraries could be sending out telemetry without our knowledge? What are we preventing, and how do we prevent it?

I probably made it more confusing, but I would reword the requirement to Verify that defined sensitive data from the application architecture is not sent to untrusted parties (e.g. user trackers) to prevent unwanted collection of data outside of the application's control. I don't know if that sounds better; just my thoughts on word-smithing it.

elarlang commented 2 years ago

My too-general proposal was just to "get things moving / accelerate the discussion" and gather some more feedback - you proved that it worked :)

Question 1: how is sensitive data defined in the context of the application/system?

It is defined by a separate requirement:

| # | Description | L1 | L2 | L3 | CWE |
| :---: | --- | :---: | :---: | :---: | :---: |
| **1.8.1** | [MODIFIED, MERGED FROM 8.3.4, LEVEL L2 > L1] Verify that all sensitive data created and processed by the application has been identified and classified into protection levels, and ensure that a policy is in place on how to deal with sensitive data. | | | | 213 |

Question 2: how do we define which parties are untrusted and which are not, as some 3rd-party libraries could be sending out telemetry without our knowledge? What are we preventing, and how do we prevent it?

My logic here is simple - if you don't know how some tracker works or what kind of data it collects, you should not use it in an application that contains sensitive data. I think some GDPR-like regulations require you to give your users a full description of tracking activities, including details of what kind of data is collected - you need to know those answers in detail.

For testing: usually it's enough to analyze network traffic.
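As a concrete (hypothetical) sketch of that testing step: export the browser session as a HAR file and flag any request to a non-first-party host whose URL or body contains the known sensitive values used during the test, such as the test account's password. The hostnames and marker value below are invented for illustration.

```python
import json
from urllib.parse import urlparse

FIRST_PARTY_HOSTS = {"app.example.com"}       # hosts we control
SENSITIVE_VALUES = {"S3cr3t-test-password"}   # values typed during the test session

def find_leaks(har):
    """Yield (host, value) for sensitive values seen in 3rd-party requests."""
    for entry in har["log"]["entries"]:
        req = entry["request"]
        host = urlparse(req["url"]).hostname
        if host in FIRST_PARTY_HOSTS:
            continue  # first-party traffic is expected to carry the data
        body = (req.get("postData") or {}).get("text", "")
        for value in SENSITIVE_VALUES:
            if value in req["url"] or value in body:
                yield host, value

# Usage: leaks = list(find_leaks(json.load(open("session.har"))))
```

A plain substring scan like this misses encoded or hashed exfiltration, so it is a first pass, not proof of absence.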

Proposal update: I took your (@csfreak92) proposal and removed the "application architecture" part from it, as it may be confusing in a separate, independent requirement.

Latest proposal: Verify that defined sensitive data is not sent to untrusted parties (e.g. user trackers) to prevent unwanted collection of data outside of the application's control.

csfreak92 commented 2 years ago

@elarlang, fair enough that 1.8.1 answers question number 1, so I agree - let's leave it out of the new requirement to avoid confusion. I like the latest proposal for this requirement. I'm not sure if we are still missing something, and I would like to hear from others too, but for me this conveys the message well: "basically, if you don't know how some tracker works or what kind of data it collects, then you should not use it in an application which contains sensitive data".

Glad that I got the discussion going. :)

tghosth commented 1 year ago

My inclination is that we need to consider this within the wider data protection chapter, so I will allocate it there for now.

elarlang commented 9 months ago

Waking up the dragons here... any feedback or comments, or shall I go for the PR?

| # | Description | L1 | L2 | L3 | CWE |
| :---: | --- | :---: | :---: | :---: | :---: |
| **8.1.8** | [ADDED] Verify that defined sensitive data is not sent to untrusted parties (e.g. user trackers) to prevent unwanted collection of data outside of the application's control. | | | | 200 |

The question here is whether to limit it to "defined sensitive data" only, or to general "authorized data". Data sent to trackers often contains PII.

jmanico commented 9 months ago

Thumbs up!

tghosth commented 9 months ago

Go for the PR :)