pkp / pkp-lib

The library used by PKP's applications OJS, OMP and OPS, open source software for scholarly publishing.
https://pkp.sfu.ca
GNU General Public License v3.0

Create SECURITY.md #7362

Closed: zidingz closed this issue 3 years ago

zidingz commented 3 years ago

Hey there!

I belong to an open source security research community, and a member (@haxatron) has found an issue, but doesn’t know the best way to disclose it.

If not a hassle, might you kindly add a SECURITY.md file with an email, or another contact method? GitHub recommends this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future.

Thank you for your consideration, and I look forward to hearing from you!

(cc @huntr-helper)

NateWr commented 3 years ago

Thanks for responsibly disclosing, @zidingz! I found this post which suggests sending reports to an email address, but I believe this might have changed recently. @asmecher will know for sure and will get back to you today.

Thanks for the tips on a SECURITY.md file. Once I get confirmation from @asmecher I'll make sure we've got those in each of our application repositories.

NateWr commented 3 years ago

Just recording here that GitHub has a setup tool with a template for the SECURITY.md file here: https://github.com/pkp/ojs/security (must have project access). Here's the template:

# Security Policy

## Supported Versions

Use this section to tell people about which versions of your project are
currently being supported with security updates.

| Version | Supported          |
| ------- | ------------------ |
| 5.1.x   | :white_check_mark: |
| 5.0.x   | :x:                |
| 4.0.x   | :white_check_mark: |
| < 4.0   | :x:                |

## Reporting a Vulnerability

Use this section to tell people how to report a vulnerability.

Tell them where to go, how often they can expect to get an update on a
reported vulnerability, what to expect if the vulnerability is accepted or
declined, etc.
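
Filled in, the template above might read something like the following sketch. The version rows and the contact address are purely illustrative, not PKP's actual support matrix or contact details:

```markdown
# Security Policy

## Supported Versions

| Version | Supported          |
| ------- | ------------------ |
| 3.3.x   | :white_check_mark: |
| 3.2.x   | :x:                |

## Reporting a Vulnerability

Please report suspected vulnerabilities privately by email to
security@example.org rather than via the public issue tracker.
You can expect an acknowledgement within a few business days, and
further updates as the report is triaged, accepted, or declined.
```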

asmecher commented 3 years ago

@NateWr, I've added a SECURITY.md to OJS: https://github.com/pkp/ojs/blob/main/SECURITY.md

I based this on the internal documentation we have in Notion (where it does the community no good at all). Your feedback is welcome -- I'll add similar descriptors to OMP and OPS. I based our end-of-life window for OJS 2.x on the time we removed the upgrade code (to keep things consistent -- I'll mention this to the tech committee).

@zidingz and @Haxatron, I'm just waiting for confirmation that pkp.contact@gmail.com still goes somewhere, but in the meantime you can reach me by email at alec@smecher.bc.ca.

NateWr commented 3 years ago

The SECURITY.md looks good! I think this can be closed?

JamieSlome commented 3 years ago

Just a heads up that these are the two reports:

https://huntr.dev/bounties/4b0299f3-d2bf-4728-8a37-ce18deebd115/
https://huntr.dev/bounties/caaecacf-00c2-494b-9247-fc08aff82a1f/

asmecher commented 3 years ago

Thanks, all -- I'll close this issue and look over those entries.

asmecher commented 3 years ago

@zidingz, it looks like you're in the process of paying out your VC funding to a swarm of folks grepping GitHub for trivial issues without putting in the work. I'm regularly receiving duplicate low-value reports since engaging with Huntr, and I suspect my early interactions are encouraging more users to look to my repos for easy bounties. I've now started receiving private emails asking me to validate these entries. I'm not sure I can really fault the reporters, as it seems they aren't able to see previously declined entries on the same repos.

I welcome reports if they're bona fide, but this is spam, and I hope you'll be able to add mechanisms to the system soon to disincentivize it. The quality of bounties rewarded on the Huntr leaderboard is not encouraging.

I support your goals, but Huntr badly needs tools to prevent this.

(I also raised this on Twitter a few days ago, but I'm raising it here too as the messages keep coming in.)

Haxatron commented 3 years ago

As a researcher on the platform, just thought I'd like to share some POV.

With the current system of huntr, low-priority issues like missing HttpOnly and Secure cookie flags get rewarded the same as RCEs. That is why, as a researcher, I report every security issue I find in repositories, no matter the severity (I believe that is what any pentester would do in the real world too). Of course, I take my time to search for higher-severity issues as well, but other researchers may not. Hence, I agree there may be some issue with the effort-vs-reward ratio on huntr.

In terms of low-value reports, it depends on how you define low-value. Is the possibility of clickjacking (no X-Frame-Options: SAMEORIGIN header on pages with buttons that can modify data) a low-value issue? It is a valid vulnerability, albeit of lower severity, and it gets reported in penetration tests. It can also be solved at the application level via headers. So I cannot agree with calling those low-value reports; I'd rather call them low-priority / low-severity than low-value, though I agree that huntr should reduce the noise (emails) generated by these kinds of reports.
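
The application-level fix mentioned above can be sketched generically. This is not PKP's code, just a minimal WSGI middleware (names are illustrative) that stamps the `X-Frame-Options: SAMEORIGIN` header onto every response so the pages cannot be framed by another origin:

```python
# Minimal sketch, assuming a generic WSGI application (not PKP's actual
# codebase): wrap the app so every response carries X-Frame-Options.

def clickjacking_protection(app):
    """Wrap a WSGI app so all responses include X-Frame-Options."""
    def wrapped(environ, start_response):
        def patched_start(status, headers, exc_info=None):
            # Drop any existing X-Frame-Options header, then set ours.
            headers = [h for h in headers if h[0].lower() != 'x-frame-options']
            headers.append(('X-Frame-Options', 'SAMEORIGIN'))
            return start_response(status, headers, exc_info)
        return app(environ, patched_start)
    return wrapped

def demo_app(environ, start_response):
    # A trivial endpoint standing in for a page with state-changing buttons.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

protected = clickjacking_protection(demo_app)
```

In practice, frameworks usually expose a configuration hook for this instead of a hand-rolled middleware, but the effect is the same: the header is attached centrally rather than per page.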

But if you meant reports without a proper PoC or explanation of the vulnerability, then I certainly agree with you; I have read some of those on Hacktivity and definitely think huntr should put more measures in place to discourage this.

If you are planning to continue with huntr in the long term, then to help mitigate duplicate reports, you may want to draft a security policy in your repository's SECURITY.md listing the bugs you currently accept and the bugs you are not looking for. Some of us do check SECURITY.md before testing an application.

Once again, thank you for taking the time to review and fix the issues described in the reports. Hopefully, the issues regarding huntr you mentioned will get resolved soon. :)

asmecher commented 3 years ago

@JamieSlome and @zidingz, in this entry the researcher provided a long list of endpoints that they claimed were vulnerable to CSRF attacks, along with reproduction code. It turns out they were incorrect, and they presumably never even ran the reproduction code. It cost me quite a bit of time to review each endpoint and confirm that it was secure. This is one of your top-10 researchers according to the leaderboard. Regrettably, I approved the entry on the assumption that the researcher would at least try their own reproduction code.
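
For context, the server-side check being audited at each endpoint is roughly the following. This is a generic sketch of CSRF token validation, not PKP's actual implementation: the session stores a random token, and every state-changing request must echo it back:

```python
import hmac
import secrets

# Generic sketch (not PKP's code): session-bound CSRF token validation.

def new_session():
    """Create a session carrying a fresh random CSRF token."""
    return {'csrf_token': secrets.token_hex(32)}

def is_csrf_valid(session, submitted_token):
    """Check the token echoed back by a state-changing request.

    Uses a constant-time comparison so the check itself doesn't leak
    information about the expected token.
    """
    expected = session.get('csrf_token', '')
    return bool(submitted_token) and hmac.compare_digest(expected, submitted_token)
```

A forged cross-site request cannot read the victim's token, so `is_csrf_valid` fails for it; verifying that an endpoint actually enforces this check is what makes auditing each claimed-vulnerable endpoint time-consuming.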

JamieSlome commented 3 years ago

@asmecher - apologies for the delayed response here!

It is our intention when sharing these reports that they give you some security insight. We are taking steps to ensure that new reports receive more thorough quality checks before disclosure: for example, ensuring that reports do not overlap with previously invalidated reports (i.e. bad permalinks), and the same for valid reports (i.e. noise). The goal is to reduce spam and save time for both researchers and maintainers, as you mentioned in your penultimate comment.

We are still in the early stages of the platform, so we don't get everything right today. Your feedback is appreciated, and I have shared it with the team.

Let me know if you have any other questions or thoughts.

asmecher commented 3 years ago

@JamieSlome, that sounds positive. Almost all the reports on my repositories don't provide correct code permalinks, probably because the majority of our code is in a common submodule and the researchers aren't looking for those. So duplicate checking based on code permalinks wouldn't help us. But a flagging mechanism for incorrect permalinks and clear expectations articulated to the researchers would help.
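
The permalink-based duplicate check discussed above could be sketched as follows. This is hypothetical, not huntr's actual implementation: normalize a GitHub permalink to a `(repo, path, line)` key so two reports pointing at the same code can be flagged. The submodule limitation raised in the thread is visible immediately, since the same file vendored under a different repository yields a different key:

```python
import re

# Hypothetical sketch of permalink-based duplicate detection (not
# huntr's real code): reduce a GitHub permalink to a comparable key.
PERMALINK_RE = re.compile(
    r'https://github\.com/(?P<repo>[^/]+/[^/]+)/blob/(?P<ref>[0-9a-f]{7,40})/'
    r'(?P<path>[^#]+)(?:#L(?P<line>\d+))?'
)

def permalink_key(url):
    """Return (repo, path, line) for a GitHub permalink, or None.

    None signals an unparseable permalink that should be routed to
    manual review rather than silently auto-deduplicated.
    """
    m = PERMALINK_RE.match(url)
    if not m:
        return None
    return (m.group('repo'), m.group('path'), m.group('line'))
```

Because the key includes the repository name, a report against `pkp/ojs` and one against the shared `pkp/pkp-lib` submodule never collide, which is exactly why permalink-based deduplication alone would not help here.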