Closed rhaning closed 1 year ago
I thought we agreed on approaching a shortlist of projects first to see if they would be receptive to the idea? @jenniferfernick used the example of a phase0.
@lukehinds yes, that is correct. We agreed that defining our approach to engaging with project maintainers and performing this due diligence work would be added to the proposal, and with those updates in mind, we would continue with the vote.
I love this proposal - another bold and visionary contribution by @scovetta!!
Approve, conditional upon an initial idea-validation phase in which we get feedback on the proposed idea (especially the "Alpha" piece) from a small representative set of OSS maintainers. (This is of course the "phase 0" we discussed (@lukehinds, @rhaning) in yesterday's TAC meeting).
Strengths:
Considerations for project success:
FYI, I've reached out to a handful of representative projects and will summarize the responses as they come in, hopefully over the next few days. If anyone else has connections to critical OSS projects that you think would be relevant, please send me a note and I can include them.
Update: I've received three responses so far.
From a representative Python project:
Yes, we'd be up for entertaining this!
From a security organization:
In our experience no one has ever rejected the help. Most projects we've worked with have welcomed us with open arms... The only resistance I've ever encountered was more so dealing with perception. Projects were worried that a review meant "getting a bunch of issues and bug reports dumped on us and then that's it". This perception has been addressed by communicating up front that the security review is a collaboration between the audit team and the project where issues are being discussed and mitigated.
From a security organization:
...When we come with the approach “where can we help?” there is no push back. There might be later on some “don’t have time to focus on that right now” when it comes to actually implementing things, but they were all willing to at least discuss opportunities.
I approve.
Who's the representative of interest, @scovetta? Someone from the Python foundation, perhaps?
@lukehinds I'd prefer to not make the names of the folks I reached out to public, but it was the maintainer of one of the most popular/recognizable Python modules. I'd be happy to share all exchanges with the TAC privately.
To be transparent, I reached out to 7 projects last Wednesday and received a response from one of them. I'm "friendly ping"-ing the remaining ones today and will anonymize and post their response here if they respond.
I also connected with two security organizations (quotes above) and started the following thread on Twitter over the weekend:
Granted, the Twitter conversation was not targeted at the "most critical" projects, nor was it asking directly, "would you accept help?", but I also didn't see anything to suggest that we were far off the mark strategically. I've integrated this feedback into the proposal, but the main point, and I think this is critical for Alpha, is for us to approach each engagement with an open, learning mindset, and be willing to actively help, not simply generate work for the maintainers. I haven't seen anything to suggest that a significant number of projects would reject help offered in good faith.
My vote is yes if we test the idea with a small set of projects in a phased approach. Overall I like the proposal, but it is quite grand, and I'd love to see it take an agile approach of starting small.
I think the doc needs to be reworked to reflect what the TAC has requested, and we should hold off on the vote until that happens.
I personally believe proper due diligence needs to be carried out before we start discussing budgets, hiring resources, and making announcements, press releases, etc.
If we can refactor the document to clearly outline that this vote is to approve a preparation phase ("phase 0"), you have my approval; however, right now we are voting on the entirety of the doc, budget and all.
I am not alone here; @raolakkakula and @jenniferfernick have, I believe, outlined the same concern, but I can't see any amendments reflecting this (do point out if I am mistaken).
In terms of "general alignment with the OpenSSF", I'm a +1.
I think the budget stuff needs more work, but my understanding is that's the role of the GB.
I am going to abstain, so the GB can make its decision.
My points still stand, though. I think due diligence upfront is lacking, and I agree with @raolakkakula: the grandeur and scope would be better reduced, to test the idea first and see if it has legs.
Hi all. @iamamoose mentioned this project to me in a related discussion a while ago, and I finally found the time to read the proposal document referenced by @rhaning here. I don't know if it's appropriate for me (someone not involved with OpenSSF) to barge in your discussion (although it's been silent for 2 weeks now, so hopefully I won't distract you from the voting), but FWIW I have a few thoughts to contribute:
I like that the program is multi-year, presumably with intent to prolong it while it works. Is that the intent? Can it be spelled out, and with reason (funding commitments)? One of the issues with previous related efforts (such as CII) was that they were one-off, so a potential contributor/grantee would know that no matter how they perform they'd need to find another project/grant/client/job soon. I heard that this aspect was discouraging to some, resulting in potential collaboration never even starting (why start something potentially long-term if it would be forced to end in a year).
I like Alpha - this is what has been needed for years, where people/teams/companies offering software security audits and projects needing to respond to security findings can both receive funding for that work. If something like this had been available 20 years ago and throughout the years since, I personally and we at Openwall would have done a lot more in this area during those 20 years (we did a little bit anyway, including a few paid security audits), in either or both of these roles (we can help others, and we maintain projects of our own).
I think the Alpha-Omega split is reasonable.
As to Omega, I like the idea in principle, but I am concerned:
Just how many projects would realistically be covered by Omega? In one place, the proposal document mentions starting with 500. I think that's sane. However, in another place it mentions 750k+/year. I think that's insane, or at least incompatible with the document also saying that Omega would provide assistance with fixes. No way the available funding and the current market of related security services would accommodate providing assistance on fixes for this many issues per year... and here I assume it'd be 1+ identified issue per project (probably more). I think realistic goals need to be set, limited to where Omega's benefits outweigh the costs and risks.
Then, the proposal document suggests setting up a super-secure triage portal for Omega issues. Nothing can be perfectly secure, and besides the technical setup there are people. In my experience running the (linux-)distros list, besides and more importantly than a secure setup, there must exist and be enforced a policy to make issues public after a predetermined maximum time period, regardless of availability of fixes. For (linux-)distros, the maximum is currently 14 days. Google's Project Zero also does this, with a maximum of 90 days. This should be chosen such that the value of the still-embargoed information to an attacker (an APT) is diminished sufficiently that they wouldn't bother or at least wouldn't make much use of the information before the general public has an equal chance at fixing the issue or otherwise using the information.
OTOH, both (linux-)distros and Google P0 are more like Alpha than Omega. This means two things: we're reminded by these examples that a maximum embargo time should be set in Alpha as well, but we don't yet know whether the same approach works as well for Omega. Yet I think that setting up a database of forever-unpublished likely-security findings doesn't convincingly provide a good benefit vs. risk balance.
I think an example closer to Omega could be syzbot. That one makes everything public right away. So maybe that's the way to go?
I understand that making likely-security issues public without fixes might not play well with companies funding this effort. However, at least Google is OK with P0 and syzbot.
I don't know if it's appropriate for me (someone not involved with OpenSSF) to barge in your discussion (although it's been silent for 2 weeks now, so hopefully I won't distract you from the voting),
This is perfectly welcome! Thank you for the detailed feedback - this is exactly the type of thing we're looking for here.
Agree with @dlorenc -- thanks for the feedback, and please feel free to stick around (we have budget for snacks! :smile:)
To your comments: yes, Alpha is intended to involve multi-year commitments -- we'll work out the details over time, and the first year of the program will be a lot of learning and adjusting, but directionally, we don't want this to be "in-and-out".
For Omega, I understand the concern -- from a sizing perspective, my thoughts are that once we have a high-quality, repeatable tool suite for identifying critical vulnerabilities, we can just throw cheap compute and scale up arbitrarily. There will be some challenges along the way (building arbitrary projects where needed, correlating source vs. packages vs. repo vs. binaries, tags vs. commits vs. releases, etc.) -- there's no shortage of deep, interesting challenges that I hope will attract folks who want to help solve them. But we obviously still need to do something with the results.
From a "protect the eggs" perspective, yes, this came up a few times -- I really understand what a data breach would mean for the data we collect. We'll be keeping this top-of-mind as we think about design. If you or anyone has additional thoughts on this, I'd love to listen.
For disclosure -- I'm hoping that we can adjust the "spout" based on the folks we have doing the triage/analysis. But I haven't thought much about "well what if we fall behind, and we effectively become a stockpile of untriaged but likely serious vulnerabilities?" -- that would be a big problem, so we'll need to think about what to do in this case. I suspect/hope that if Omega can essentially be a "critical vulnerability finding/fixing machine", then it should be easy to justify additional investment or approaches.
If you (or anyone else reading this) is interested in participating, please join us at our ("Identifying Security Threats") workgroup meeting on Oct 27th at 10 AM PT (calendar).
Thanks for the feedback and the community perspective Solar; this is VERY valuable for the folks that would be heading up these efforts! Feedback is a gift, no matter when we get it!
Cheers,
CRob Director of Security Communications Intel Product Assurance and Security
Thank you guys for the encouragement. Regarding the budget "for snacks" @scovetta, I might actually be able to contribute more in a paid capacity than as a volunteer, and Openwall might be one of those "service providers" mentioned in the proposal. I'm open to discussing this separately.
I kept thinking of the proposal after I posted, and now I also have concerns about Alpha:
$5M/year for 100 top projects isn't much. That's $50k/project/year. Now, the description of Alpha rightly puts security review last - after engagement plan and advice on security practices, threat modeling, and documentation for an outsider to build and understand code. For non-trivial projects, the $50k may run out before even reaching security review of the code. This isn't to say a budget like this is necessarily too low to provide any benefit - but it'd take some very careful and project-specific adjustments to use it well.
The top 100 would probably include huge projects like the Linux kernel. I think the wording in the proposal mostly doesn't apply to those, and they'd need either exclusion or separate treatment.
Another concern is availability of quality service providers. Since funding like this was mostly unavailable for OSS so far, not that many capable businesses were set up. I know of only a handful. If I read it right, the proposal suggests the 100 projects would be processed in 23 months. With 4 or 5 service providers, this means each of them needs to maintain a throughput of 1 project per month. That's not necessarily unrealistic, but it's definitely tough.
Nitpick: IIRC, the proposal suggests that having a security audit done via OpenSSF avoids the need for every company to do their own. Maybe so for compliance purposes, but the budget is too low for comprehensive security audits (except for the smallest projects). Besides, in cases where projects have been independently audited by different capable providers, the audit reports differed substantially and complemented each other (uncovering many non-overlapping issues), mostly stemming from differences in what each auditor chose to keep in scope, which in turn varied with their specific expertise.
For Omega:
a high-quality, repeatable tool suite for identifying critical vulnerabilities, we can just throw cheap compute and scale up arbitrarily.
Maybe you do have some break-through idea that I don't, but I don't see how the above is possible. "For identifying critical vulnerabilities", we (or the tool) need to know (and somehow specify in machine-readable form!) the program's inputs and threat model (at least partially). This isn't something that can be "scaled up arbitrarily" with no or very little human involvement.
For example, Google offers projects bounties of IIRC up to $20k for adding the projects to their OSS-Fuzz, and even then most haven't yet done so (we haven't for ours, but are considering it). That's precisely because it takes conscious effort to integrate a project into a fuzzer meaningfully.
For another example, Coverity offers Open Source projects the ability to use their hosted (SaaS) static code analysis for free. Quite some do, most don't. (We tried once, and processing the findings was very significant effort, yet the most critical known bug we had in the codebase at the time was missed by their analysis.)
"well what if we fall behind, and we effectively become a stockpile of untriaged but likely serious vulnerabilities?" -- that would be a big problem
Not only that, but also "what if we have triaged vulnerabilities that projects aren't fixing, or not fixing fully". This will also be happening for Alpha. So both need a policy on what to do in those cases, and I suggest having and enforcing a maximum embargo time.
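To make the maximum-embargo suggestion concrete, here is a minimal sketch of such a policy, assuming a 90-day cap (Project Zero style); the function name and cap value are illustrative, not from the proposal:

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative cap; (linux-)distros uses 14 days, Project Zero uses 90.
MAX_EMBARGO = timedelta(days=90)

def disclosure_date(reported: date, fixed: Optional[date]) -> date:
    """An issue goes public when it is fixed, or when the embargo cap
    expires -- whichever comes first, regardless of fix availability."""
    deadline = reported + MAX_EMBARGO
    if fixed is not None and fixed < deadline:
        return fixed
    return deadline
```

For example, an issue reported on 2021-10-01 and fixed on 2021-11-01 is published at the fix date, while one still unfixed is forced public when the 90 days run out; the point is that the deadline is enforced even when no fix exists.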
Overall, as described and without further knowledge, Omega currently looks to me like magic that just isn't going to happen. With that in mind, you may want to repurpose its funds to Alpha (which is rather low on funds for as many as 100 projects, if you choose actually important ones).
Alternatively, redefine Omega as an effort more similar to Alpha, just with reduced scope per-project - e.g., with the service providers encouraged to limit scope to running automated tools (which they'd nevertheless need to configure in custom ways for every project, so this is some effort). In fact, a hybrid approach might be possible - OpenSSF creates and manages a platform, which service providers would (learn to and) use to custom-configure then-automated testing of projects (both Alpha and Omega ones; for Alpha this would be just one task of many, but for Omega the main or only one). With this, a ratio of e.g. 100 Alpha + 400 Omega projects might make sense, thus giving some coverage for 500 projects for a budget of only 2x that of 100 Alpha projects.
I hope this isn't too much of me thinking out loud.
Here's a different look at:
"well what if we fall behind, and we effectively become a stockpile of untriaged but likely serious vulnerabilities?" -- that would be a big problem
Actually, if you publish the full, reusable, easy-to-deploy source code behind Omega's setup and don't use any external randomness (only published PRNG seeds, if any), so that others could and hopefully would use it too, then your own stockpile of untriaged bugs wouldn't be of much value: anyone who wanted to match your results could obtain exactly the same findings from their own deployment of Omega's software (with the same PRNG seeds), for the cost of contributing their own computing resources (which a state-sponsored APT could easily afford). Following this line of logic further, you might as well make the purely automated findings public right away, syzbot style.
It's triaged yet unfixed bugs that are of greatest value to attackers.
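The "published PRNG seed" idea can be illustrated with a toy sketch (all names here are hypothetical, not part of any Omega design): if the tool's only source of randomness is a seed published alongside the results, anyone re-running the same code reproduces the run bit-for-bit, so the operator's private copy of the raw findings confers no special advantage.

```python
import random

def pick_targets(corpus, n, seed):
    """Select n scan targets using only a published seed -- no external
    randomness -- so any third party can reproduce the exact same run."""
    rng = random.Random(seed)
    return [rng.choice(corpus) for _ in range(n)]

corpus = ["pkg-a", "pkg-b", "pkg-c", "pkg-d"]
run1 = pick_targets(corpus, 3, seed=42)
run2 = pick_targets(corpus, 3, seed=42)
assert run1 == run2  # identical runs from the same published seed
```

Under this assumption, withholding the automated output buys little secrecy, which is the argument for publishing it immediately.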
This is absolutely perfect feedback -- please keep it coming. I'll need a little time to digest and think about this, but a couple things off the top of my mind:
Thanks again, keep the feedback coming! 👍
Should we close this now, as this has since gone to the GB, which has approved it?
It's not clear to me exactly what got approved here, is Alpha Omega an official "top level project" now?
Ref #78
+1 to @dlorenc's comment. Can we close it out, or are there further actions for the TAC to take?
@scovetta do you have any answers to the questions from @lukehinds and @dlorenc above?
IIRC, the (old) TAC approved Alpha-Omega as a top-level project & the governing board approved its funding. I don't remember the dates offhand, but they should be clearly recorded in the meeting minutes. So from that perspective this specific issue should be closed.
There are some great discussions in this issue about Alpha-Omega more generally; they probably belong somewhere else, though I'm glad they've happened.
I don't remember an explicit approval in a meeting, I think this issue was the approval mechanism.
I thought the TAC did approve it. Some more recent text in the TAC meeting minutes assume it's an OpenSSF top-level project; See 2021-11-16 in: https://docs.google.com/document/d/18BJlokTeG5e5ARD1VFDl5bIP75OFPCtzf77lfadQ4f0/edit
It's not clearly documented in the meeting minutes... maybe we need to review the YouTube videos?
This issue was opened on September 21st to hold a vote; it looks like Alpha-Omega was discussed the same day in the TAC call. Then it was discussed again on 10-05:
but it doesn't seem a decision was reached there; voting continued here afterward.
Alpha-Omega has now been running for some time!
The question of where exactly it lives continues, see https://github.com/ossf/tac/issues/161#issuecomment-1583284190. As part of that issue, we'll sort out if it's an Associated Project (and if so, what that means) or if it's a Project in the Identifying Security Threats Working Group (with Special Interest Funds).
The Alpha-Omega project has been proposed to the TAC and requires a vote to move forward to the Governing Board for budget approval.
TAC Reps, please provide your vote by leaving a comment on this issue with either an 'Approve' or 'Reject'.
Proposal Document