Closed by GoogleCodeExporter 9 years ago
first post
Original comment by taviso
on 6 Jan 2015 at 7:21
#100: strange, I tried it with a simple domain user and it asks for different
credentials.
Original comment by mpange...@gmail.com
on 7 Jan 2015 at 9:24
I think what a lot of you guys are forgetting is that this isn't Google, nor is
it Google's OS development team, nor is it the team behind Android; it's their
security research team, whose sole purpose is to search for and find security
exploits like these. Everyone saying that Google is just finding 0-days in
Windows for the publicity, or that they should "concentrate on fixing security
issues in their own products", is stupid. Google's research team finds 0-days
in thousands of different products every day; the only reason people are
interested in this one is because it's "Windows". These events happen every day.
Whilst you can debate whether the timeframe was appropriate in this case,
Google did the right thing in not making different rules for different
companies. Some vendors NEED this deadline to get off their lazy asses and
actually fix the bugs; some (maybe Microsoft) don't. Google can't apply
different policies to different people, so they found one that works across the
board. Just because "it's Microsoft" doesn't give them any right to be exempt
from Google's policy, which was clearly outlined in the original bug report. If
Microsoft needed more time, they should have contacted Google and requested it.
Original comment by jduncanator
on 7 Jan 2015 at 10:41
[deleted comment]
@102: It asks for credentials, but if you type the same simple domain user in
the prompt, it works with elevated privileges.
Don't forget to launch an auto-elevated executable (ComputerDefaults.exe) for
it to work.
Original comment by reclad
on 7 Jan 2015 at 11:20
Didn't work on Windows Server 2012 / 64-bit, with UAC enabled, cmd disabled,
taskmgr disabled, from under a restricted user account, with PowerShell 1.0.
The executables ran smoothly (modified with the XVI hex editor from calc to
notepad, etc.), but didn't get elevated privileges... anyone have any ideas?
Original comment by mailo...@gmail.com
on 7 Jan 2015 at 1:36
Despite the precedent and all of the arguments for the positive benefits of
establishing a deadline (in the absence of a federal requirement), those
arguments fall completely flat once actual damages are done to innocent third
parties (businesses and users of the vulnerable product) as a direct result of
publishing the vulnerability. A 13-year precedent means nothing in today's
infosec ecosystem, where it's not a matter of *if* damages will be done, but
*when*. I'm certain Google knows that, but they don't seem to mind the
collateral damage. The public is getting fed up with all of the attacks, so
it's really terrible PR on the part of Google to aid such attacks in any way,
regardless of the points they may have in favor of using this tactic to
pressure a software developer.
What really needs to be done:
1) A legal time limit must be established for remediation of certain
vulnerabilities. It can and should include an ability to be extended on a case
by case basis. Until our society can agree on and implement a specific legal
limit, third party research teams must exercise patience and restraint.
2) The law should require that vulnerabilities be reported to the responsible
party and the U.S. government within a short time (e.g., 14 days) of their
discovery. Beyond this reporting requirement, it should be legally mandatory
to secure the details (for at least two months past the public release of the
patch) about any vulnerability that could be exploited for malicious use, and
the law should impose penalties and assign civil liability for any organization
that leaks such details.
3) In the absence of such a law, Google's security research team should stop
releasing any vulnerability information until a patch has been publicly
available for at least two months. Trying to force a specific remediation
effort by imposing penalties on the users of a third-party's products is not
appropriate.
4) Until a law is passed that addresses items 1 and 2 above, legal firms need
to drive changes via civil lawsuits against entities that published information
that was used to compromise a business's or person's computer.
5) Every major news outlet in the U.S. needs to showcase how entities such as
Google's security research team are making it easier to compromise either your
computer or the systems of entities that have your personal information. Think
about it. Let's say the host of a major news outlet starts off their next
story with 'scary' statistics of all the various cyberattacks, all of the
horrible consequences, and the terrible experiences people have gone through to
recover from a stolen identity or deal with the exposure of very personal
private information/photos, then she/he concludes with this statement: "And the
people at Google just made it even easier for hackers to do such things.
GOOGLE, of all people." Then she/he explains Google's practice of releasing
detailed vulnerability information, and EVEN providing a platform for the
discussion of exactly how to exploit the vulnerability immediately below the
announcement. Who do you think the general public will blame, Google, or
Microsoft? I think we could count on one hand the number of days it would take
Google to change their policy after stories like this are broadcast. Any other
security company with similar practices that could be negatively impacted by
bad publicity would follow suit.
Original comment by jmjones8...@gmail.com
on 7 Jan 2015 at 2:54
#107 - jmj,
I mean this in the most respectful way possible but I think the premise of your
argument is completely flawed. While I agree that responsible disclosure is
something that should be legally addressed, not allowing disclosure is equally
dangerous.
Try to think of this vulnerability in a different way, maybe in terms of a
defective airbag in a vehicle with your child in the front seat. Just because
the manufacturer doesn't know about it, or knows but doesn't tell you that
there is a defect in an airbag firing mechanism (for example) does not mean
that the flaw does not exist. It exists and just because you do not know it
exists doesn't change reality.
Not sure if you are familiar with the idea of recombinant conceptualization, or
multiple independent discovery but it is highly likely if not certain that
another less well-intentioned entity has found this vulnerability (and any
other future vulnerability the Project Zero group discovers) and is already
using it to maliciously exploit systems.
Knowing that the airbag is faulty gives us the opportunity to move our kid to
the backseat, replace the faulty airbag mechanism, or drive a different car -
for lack of a better metaphor.
Google letting us know that this vulnerability exists after giving the good guy
90 days to fix it, and the bad guys 90 days to continue to exploit it is more
than generous if the intention is to help ensure a more secure computing
environment.
R/s,
Lx
Original comment by alexande...@gmail.com
on 7 Jan 2015 at 3:40
Thank you Google for pressuring Microsoft into fixing this vulnerability, as
Microsoft never would have, even with a 90 day head start.
Original comment by KARMA...@gmail.com
on 7 Jan 2015 at 4:29
James,
On Windows 7, you are able to use Cache Flags 4 and 8, which correspond to
Telemetry and Event without TCB. Same goes for the result flags.
However, your PoC uses Cache Flag 1 (AppCompat) and Result Flags 0xFF. While
the Result Flags probably don't matter, I believe without Cache Flag 1, you're
only able to insert the entry with the Telemetry and Event flags, so when
CreateProcess later queries the SDB/cache, it will see that this is not an
"AppCompat" entry, but rather a Telemetry entry, and probably ignore it.
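(A toy sketch of the gating described above. The constant names and values are
illustrative assumptions for this thread, not the actual ahcache.sys internals.)

```python
# Toy model of the cache-flag gating: on Win7, a non-TCB caller can
# supposedly only insert entries carrying the Telemetry/Event flags,
# and process creation only honours entries marked as AppCompat.
# All constants below are hypothetical illustrations.

CACHE_FLAG_APPCOMPAT = 0x1  # hypothetical: full app-compat entry
CACHE_FLAG_TELEMETRY = 0x4  # hypothetical: insertable without TCB
CACHE_FLAG_EVENT     = 0x8  # hypothetical: insertable without TCB

def process_creation_uses_entry(cache_flags: int) -> bool:
    """Process creation consults an entry only if it is an AppCompat entry."""
    return bool(cache_flags & CACHE_FLAG_APPCOMPAT)

# A telemetry-only entry (all a non-TCB caller could insert) is ignored:
assert not process_creation_uses_entry(CACHE_FLAG_TELEMETRY | CACHE_FLAG_EVENT)
# An AppCompat entry is consulted:
assert process_creation_uses_entry(CACHE_FLAG_APPCOMPAT)
```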
To everyone else: this bug has nothing to do with UAC.
Original comment by aione...@gmail.com
on 7 Jan 2015 at 4:57
Thanks Alex,
That's pretty much what I thought would be the case. It seemed likely that the
TCB check would cover adding a cache entry which would likely cause a security
issue. Of course why the developers removed the check is beyond me. But then
again why the actual check was so broken (it's worse in 7 as SeTokenIsAdmin
also doesn't take into account the impersonation level) is also beyond me. At
the very least it wasn't really worth spending more time verifying on Windows 7
considering the 2 or 3 days spent RE'ing 8.1 to prove the vulnerability so that
MS would likely accept it. I'm sure you could have saved me the hassle.
Original comment by fors...@google.com
on 8 Jan 2015 at 12:38
Historically App Compat was 100% the purview of the AppInfo service up until
Windows Server 2003 (which has a delicious unfixed bug in how it impersonates
the caller... back in the LPC days many components had similar issues). This
delayed process creation, because it meant that every single new process had
not only to block on CSRSS, but also on AppCompat.
In Vista, they moved App Compat to the kernel (and introduced 'deferred' CSRSS
process notifications, etc...) as part of an effort to reduce contention on
user-mode services during process creation. But it was a pretty botched
attempt, because when it came to some operations, you still needed a service to
manage app compatibility, and because it bloated the kernel with SDB and app
compat code. Since a service was used, they put that TCB check in there.
In Windows 8, they cleaned things up and put app compat into its own driver
(part of the kernel MinWinization), and I believe completely got rid of the
service such that all app compat actions can come from processes. They still
kept sensitive actions as 'admin-only', but TCB is no longer required. This
also meant that the processes that manage app-compat can run with reduced
privileges, which sounds like a good idea.
In fact, you'll find many places in Win7+ where they removed/reduced privilege
checks in the kernel, all under the guise of "reducing privileges held by
user-mode apps".
A really good example is DWM, which ran with very high privileges before (and
had tons of bugs), but now runs as its own virtual service account. Now all its
user-mode bugs become boring since it can't do much. Oh but of course, nobody
thought that this now means all the undocumented Win32k.sys DWM internal APIs
no longer have the privilege checks... ;-)
Original comment by aione...@gmail.com
on 8 Jan 2015 at 4:27
For anyone thinking this is only a UAC bypass (and ignoring any comments to
the contrary): I do have a PoC which gets arbitrary localsystem
execution from any user account regardless of UAC status which works on Windows
8.1 Update 32bit. The rationale for providing a UAC only PoC to Microsoft was
as a demonstration of the specific issue without unnecessary effort being
expended.
The vendor is typically best placed to make the assessment on whether something
is a security issue or not, our responsibility is to provide what information
we can and a PoC which we feel adequately demonstrates the issue. If Microsoft
had responded stating that this was only a UAC bypass vulnerability, then
further work would have been necessary to prove it wasn't. In this case
Microsoft confirmed the issue and planned a fix, so it wasn't required.
Original comment by fors...@google.com
on 9 Jan 2015 at 12:59
Thanks for coming back to clear that up! Keep up the good work.
Original comment by misterfi...@gmail.com
on 9 Jan 2015 at 11:38
[deleted comment]
(Comments relating to the disclosure debate are welcome on this issue, but any
comments that are not constructive, or involve ad-hominem attacks, etc. may be
deleted.
See https://code.google.com/p/google-security-research/issues/detail?id=118#c5
as the first example of a reasonable and respectfully constructed argument
against the 90-day disclosure.
See https://code.google.com/p/google-security-research/issues/detail?id=118#c12
as the first example of a reasonable and respectfully constructed argument for
the 90-day disclosure.)
Original comment by cev...@google.com
on 12 Jan 2015 at 8:40
Security not obscurity.
Original comment by joshua.a...@gmail.com
on 12 Jan 2015 at 2:09
It's unbelievable how people think that letting lazy devs get away with not
fixing a vulnerability after 3 months is fine.
As time goes on, this will happen less and less, because Microsoft will know
that they actually can't get away with not fixing a vulnerability for so long
after being aware of its existence.
The 90-day disclosure is necessary and it will help make the internet safer.
Original comment by rod.jun...@gmail.com
on 12 Jan 2015 at 2:45
One would think that, after seeing and finding the same security issues in
EVERY release of the Windows operating system, they would have a process in
place for banging on each (already) exposed vulnerability BEFORE a release. So sad that
these are being promulgated from version to version. Well done, Google.
Original comment by ed.ott...@gmail.com
on 12 Jan 2015 at 2:47
[deleted comment]
I support Google, and advise everyone to choose another operating system where
security is taken seriously.
It is unacceptable that Microsoft thinks it is OK to wait until the last
minute, instead of fixing it in the NEXT Patch Tuesday.
Original comment by ovaci...@gmail.com
on 12 Jan 2015 at 3:36
Reading the news articles about this is quite fun. A lot of the early articles
have absolutely no clue what this "Zero" initiative is supposed to be about and
pretty much the only thing of substance they contain is a quote from comment #5.
Many articles say "We contacted Google and got no response", while there are
already "Project member" comments right here starting from #25 on Dec 31, 2014.
This is a much better source of official information than "we contacted
their PR division"; why don't journalists use it?
Now this article on BBC http://www.bbc.com/news/technology-30779898 says "On 11
January, Google publicised the flaw. Microsoft said it had requested that
Google wait until it released a patch on 13 January." I wonder where they got
the January 11 date.
Anyways, that BBC article has a link to Microsoft's senior director of research
Chris Betz's blog post "A Call for Better Coordinated Vulnerability Disclosure"
http://blogs.technet.com/b/msrc/archive/2015/01/11/a-call-for-better-coordinated-vulnerability-disclosure.aspx
Original comment by jd1...@student.uni-lj.si
on 12 Jan 2015 at 4:14
Remedy for this is easy. Until MS patches this, run a batch at startup that
checks the tmp and temp path, if they differ from what is expected write the
correct path to the registry and alert the user of the deviation.
This is no real big deal.
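(A minimal sketch of the detection half of the check proposed above. The
expected paths, and the use of Python rather than a batch file, are assumptions
made for illustration; a real deployment would read the expected values from
policy rather than hard-coding them.)

```python
# Compare the current TMP/TEMP environment values against expected paths
# and report any deviation, per the startup-check idea above.
import os

# Placeholder expected values -- an assumption, not a recommendation.
EXPECTED = {
    "TMP": r"C:\Users\user\AppData\Local\Temp",
    "TEMP": r"C:\Users\user\AppData\Local\Temp",
}

def check_temp_paths(env=os.environ):
    """Return a list of (name, actual) pairs that deviate from EXPECTED."""
    deviations = []
    for name, expected in EXPECTED.items():
        actual = env.get(name, "")
        # Case-insensitive comparison on Windows via normcase.
        if os.path.normcase(actual) != os.path.normcase(expected):
            deviations.append((name, actual))
    return deviations

if __name__ == "__main__":
    for name, actual in check_temp_paths():
        print(f"warning: {name} deviates from expected path: {actual!r}")
```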
Google has their 90 day policy, for good or for bad. Microsoft knew that this
was the case, they decided to wait. Reading a little more into it, if Microsoft
did not think it a big deal, then it might not really be.
Go back and read the bug. You will find that while it is a gaping hole, a
short-term fix like the one I propose above will act as a finger in the dike
until Microsoft has a patch ready. If this bug affects your security, then you should
not be using Microsoft for whatever you are doing.
If you want real security use another OS. If your security needs do not exceed
the needs of the NSA, then Microsoft is fine!
Original comment by k...@ku5e.com
on 12 Jan 2015 at 4:15
At everyone complaining over this:
Imagine the 90 day disclosure policy would not be adhered to, and the message
this would send.
The message would be something like "There's no 90 day disclosure policy, just
a 90 day disclosure empty threat" which would remove the entire point of the
policy; to pressure software developers into actually fixing their problems
within a reasonable time.
Original comment by osingan...@gmail.com
on 12 Jan 2015 at 4:27
How about, instead of disclosing a bug completely, have a 2-step deadline?
You could disclose all information necessary for sysadmins to find and apply a
patch without giving away every detail on how the bug can be exploited.
This could be done after 90 days, with the full disclosure happening a month
later or so.
Original comment by rschuet...@gmail.com
on 12 Jan 2015 at 4:42
Totally irresponsible by Google.
Google are not the "net security police", and after a request by MSoft to delay
the public announcement, Google decided to tell everyone anyway. If my system
were attacked because of this irresponsibility, who would be at fault? Clearly
MSoft for the flaw, and Google for publicising it. For me, Google clearly has
some responsibility to end users. It is not always so easy to patch software in
a short time, as code changes may introduce more vulnerabilities. Although I
agree with highlighting vulnerabilities, there does need to be control over
what is actually made public.
Original comment by DH.Aviat...@gmail.com
on 12 Jan 2015 at 4:59
#122: For background to the BBC article information please refer to issue 123.
It has a PublicOn-2015-Jan-11 flag, and January 13th is the first 2015 patch
day for Microsoft Windows.
Fwiw, it looks to me like this could have been planned by MS all along to force
this discussion. On the other hand, that thought instantly becomes irrelevant
when applying Hanlon's razor.
Anyway, in order to ensure a more sophisticated discussion of the matter
(especially in the media) I'd vote for the Google Security Research Team /
Project Zero to have an updated statement (not the 2010 one with a 60-day
disclosure still written in it) about their responsible disclosure policy and
timeline, probably like a "media-relations"-page. This should also address the
first paragraph of comment #122. It can be as simple as an update to the 2010
article with inclusion of some thoughts from the "Announcing Project Zero"
Blogpost, written into a background/media-relations page that doesn't "just get
lost" like a blogpost but has several links to it, so even the laziest / most
time-limited journalists can get a grasp of the background and the greater
cause of this project.
And while I'm here: Thank you Google Security Research Team for making the
Internet a safer place for all of us, even for those who (have to) use software
from one of your main competitors.
Original comment by johannes...@gmail.com
on 12 Jan 2015 at 5:00
#127 Oh, there are two privilege escalation entries! Did not notice the second
one.
Since this has become sort of an ad-hoc discussion group: what do you people
think about the other side of this issue -- patching and planned obsolescence.
Regarding especially
https://community.rapid7.com/community/metasploit/blog/2015/01/11/google-no-longer-provides-patches-for-webview-jelly-bean-and-prior
My take on it is that phone vendors are not delivering the updates to customers
anyway, so it does not really matter whether Google is making them or not.
Microsoft here is at least doing _some_ patching. And Apple managed to keep the
iPhone ecosystem under their control so they can also guarantee security for
their customers. This is the aspect where "united, but not the same" does not
pay off.
Original comment by jd1...@student.uni-lj.si
on 12 Jan 2015 at 6:13
What happened to "do no evil"? Listen, you could have posted this bug without
all the detail that allows anyone to use the bug to elevate themselves.
Why post all the detail?
This is absolutely disgraceful behaviour for an individual to post let alone
for a supposedly responsible company.
Original comment by downhill...@gmail.com
on 12 Jan 2015 at 7:43
While 90 days seems adequate, high value issues in Q4 should probably get some
extra time, no? It's unlikely that fixes in Q4 could get the same QA resources
as other quarters.
Original comment by Len.Latt...@gmail.com
on 12 Jan 2015 at 8:14
[deleted comment]
While I fully agree that public disclosure is a good thing, releasing this POC
just a couple of days before the fix was to be released during Microsoft's
(very well established) "Patch Tuesday" cycle is bad, bad form - especially if
they had specifically requested you not to.
I understand that you are trying to hold yourselves to higher standards, but in
truth, you've just come across as a bit petty and unprofessional as a result.
If Microsoft failed to adhere to their deadlines, then fine, go ahead and
release. But to not even give them a chance, is pretty poor in my book.
I like the fact that you have referred to your own internal defined schedules
that you agreed "earlier in 2014". Patch Tuesday was introduced in 2003 as a
way to reduce workload and overhead for IT departments when releasing security
patches to large infrastructures, rather than having them come through
piecemeal during the month. Of course, being important security researchers, I
guess you'd know that...
Sorry guys, but I'm pretty disappointed here. I'd have thought that you'd have
taken a more mature attitude to this :-/
Original comment by daernsin...@gmail.com
on 12 Jan 2015 at 9:06
I kind of agree that automatic disclosure is bad; there should be a person who
makes the decision based on certain facts. For example, if 90 days have passed
and the next release is within a few business days, then hold off on disclosure. But
that would require companies to have well documented release cycles (arguably
MS does), however if the next release is another 15-20 days later, then Google
should release the details. 90+/- a few days is more than enough to fix issues.
These companies are large corporations, so "lack of/tight on resources"
argument doesn't fly. If the disclosure is made flexible, no business/project
manager is ever going to find a resource to work on the issue immediately. I
have been in the software industry long enough to realize that without the
spotlight on a particular issue, there will always be another project that will
have higher priority.
Should there be an allowance for "Holiday" season? Probably not, all of these
companies have a global workforce, it is going to be "Holidays" somewhere in
the world, does that mean we have a perpetual delay in disclosure? Even if it
is not a global company, the workforce is diverse enough. These companies
already staff crews during the holidays (with additional incentives), why can't
developers (I am one), be given a choice to work during holidays?
Original comment by Surabh.S...@gmail.com
on 12 Jan 2015 at 10:35
#132 The problem is that they are not manually releasing these. The PoC is
attached to the post at the time of reporting and these tickets are
automatically set to public after the 90-day deadline. Microsoft has released
emergency patches before regarding certain vulnerabilities, the 90-day
disclosure policy with attached PoC is more than adequate.
They also may have asked Google to hold off releasing; however, that itself is
bad form. Holding back a vulnerability report is as bad as it not being fixed,
and if Google extended it for Microsoft, why shouldn't they do it for other
companies? It sets a bad precedent that requires Google to change their
automatic 90-day disclosure policy simply because developers/vendors would
rather have a vulnerability hidden under a rug, so to speak. Not only that, but
if Google can find these vulnerabilities, then who's to say that hackers don't
already have them?
Original comment by dreamcas...@gmail.com
on 12 Jan 2015 at 10:41
hackers already have them.
Original comment by lauri.l...@gmail.com
on 12 Jan 2015 at 11:26
[deleted comment]
I am under the impression that the 90 day limit is to encourage vendors to fix
vulnerabilities. Is there reason to believe MSFT isn't/wasn't doing this?
MSFT has a well publicized, repeatable, and reliable release schedule for
fixes. It fits poorly with the automatic reveal that GOOG uses, but by not
being sensitive to this, GOOG is arbitrarily appointing itself as the keeper of
release standards.
I can understand GOOG releasing information if MSFT had not/does not release a
patch on their publicized patch date that is APPROXIMATELY 90 days after
report. Enforcing without sensitivity to reality is arrogant at the least and
evil at the worst.
Original comment by aleyne.e...@gmail.com
on 13 Jan 2015 at 5:45
#134 I'm quite sure that Google could have manually extended the automatic
release by a couple of days without hurting anyone.
The thing is that these are complicated pieces of software and it can take time
to evaluate, fix and test a release. And that's before you even consider the
numerous versions of Windows that Microsoft have to manage - even a small patch
can result in multiple different versions, each one having to be tested and
validated individually.
And from an IT point of view, it's much more preferable to be able to bulk-test
patches once per month and release them in one go, rather than the enormous
overhead of having them come through in an ad-hoc manner. Microsoft introduced
monthly patch cycles as a response to requests from IT professionals who much
preferred to handle patches in batches rather than one by one.
While I appreciate that 90 days /is/ a reasonable time period to resolve
issues, a failure to be flexible on this (especially when the vendor was
clearly engaged and working on the issue) is, frankly, amateur behaviour. It's
the sort of thing that a spiteful script-kiddie might choose to do, not an
employee of a trusted IT business.
I'd like to think otherwise, but I do wonder if there was an ulterior motive
here - to release the vulnerability 2 days before the patch, hide behind an
arbitrary deadline and point the finger at a vendor who didn't patch the
problem.
Google should have been congratulated for finding this bug, but I'm afraid that
I find myself unable to do so because of the way they did it. Sorry guys :-/
Original comment by daernsin...@gmail.com
on 13 Jan 2015 at 10:33
Disclosure and patch policies have been commented on in great detail. This just
made radar:
https://community.rapid7.com/community/metasploit/blog/2015/01/11/google-no-long
er-provides-patches-for-webview-jelly-bean-and-prior
Original comment by jwal...@securityevaluators.com
on 13 Jan 2015 at 5:50
Fixed in https://technet.microsoft.com/en-us/library/security/ms15-001.aspx
Original comment by fors...@google.com
on 13 Jan 2015 at 6:43
#134 Holding back a vulnerability REPORT is a bad thing indeed, but releasing
the full exploit for a vulnerability because the vendor did not release a patch
within a 3rd party's self-imposed deadline is irresponsible.
Original comment by superhum...@gmail.com
on 13 Jan 2015 at 7:21
So what's the appropriate amount of time to give when a vendor asks for an
extension?
Considering that MS already had 3 "regularly scheduled" Patch Tuesdays (fine...
1 of which was one day after the report, so let's just say 2, where 1 of them
was with 30 days' notice), how much more time do you provide MS (and others) on
request?
2 days? 20 days? 40 days? Always round up to their next regularly-scheduled
patch date (even if that's in 29 days)? What if regularly scheduled patches are
every 2 months?
And what do you do if you have reason to believe that the exploit has already
been discovered by black-hat hackers and is being used in the wild? Let's say
Google found this because it was being used against them. How does that affect
your responsibility to alert the public? Is it responsible to not make it
public (allowing sysadmins to detect and prevent attacks) for 90 days? 110 days?
Original comment by jamesc...@gmail.com
on 13 Jan 2015 at 9:03
Damn! Who has CVE-2015-0001 then?
Original comment by cev...@google.com
on 13 Jan 2015 at 10:48
wordpress, obvs.
Original comment by lauri.l...@gmail.com
on 14 Jan 2015 at 1:28
[deleted comment]
Original comment by fors...@google.com
on 14 Jan 2015 at 10:02
Original comment by fors...@google.com
on 14 Jan 2015 at 10:04
You all respect.
Original comment by abu.ja...@gmail.com
on 14 Jan 2015 at 12:55
[deleted comment]
[deleted comment]
Original issue reported on code.google.com by
fors...@google.com
on 30 Sep 2014 at 2:17