The secure_compare() function returns False immediately when the two strings are of unequal length. With the patch, the run time of secure_compare() depends only on the length of the right-hand argument, so it no longer gives away information about the length of the left-hand argument.
The patch should be applied in combination with the patch in issue bpo-14955.
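For illustration, a minimal sketch of the behaviour described above (this is not the attached patch itself, just the idea): the loop is driven entirely by the right-hand operand, so only its length shows up in the timing.

```python
# Illustrative sketch only -- not the committed patch. The loop count
# depends solely on len(right), so timing reveals nothing about left.
def secure_compare(left, right):
    result = len(left) ^ len(right)   # non-zero if the lengths differ
    left = left or b"\x00"            # avoid modulo-by-zero for empty input
    for i, r in enumerate(right):
        result |= r ^ left[i % len(left)]  # wrap instead of stopping early
    return result == 0
```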
secure_compare() always leaks the password. Note that ord() takes a different amount of time to produce its result depending on whether the value is <= 100 or > 100, due to the caching of small numbers. Such functions should be written in C.
We should. Adding secure functions that aren't really secure is something we should rather avoid. :)
Christian, are you willing to do that?
fijal: while I agree with you, the limit for small ints has actually been pushed to 257 in recent CPythons. So it should still theoretically work --- of course, assuming a predictable CPU, which is wrong, and assuming a simple interpreter. (We can probably dig enough to find a timing issue even with CPython, and of course it's clear with any of the other Python interpreters out there.)
I don't see how the function is going to leak this information when both this patch and the patch in bpo-14955 are applied. With http://bugs.python.org/file25801/secure-compare-fix-v2.patch, ord() is no longer used, which avoids the timing difference for integers > 256 (NSMALLPOSINTS is defined as 257, not 100).
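For readers unfamiliar with the cache being discussed: CPython keeps singleton objects for small integers (0 through 256 in the versions discussed here), so producing a cached int avoids an allocation. A quick illustrative check:

```python
# CPython interns small ints; results outside the cache are allocated
# fresh each time. Computing from variables avoids constant folding.
n = 250
a, b = n + 6, n + 6        # 256: inside the small-int cache
c, d = n + 1000, n + 1000  # 1250: outside the cache
print(a is b)  # True  -> both names reference the cached singleton
print(c is d)  # False -> two distinct int objects were allocated
```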
Ah, unicode. Is encode('unicode-internal') independent of the string's characters? I strongly doubt it. You leak at least some information through that function alone.
With PEP 393, unicode objects can have several representations, which makes it unlikely that *really* constant-time functions can be devised.
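To illustrate the PEP 393 point: two strings can look alike yet live in entirely different internal layouts, so even the per-character fetch cost differs. A rough way to see this (the exact sizes are CPython-specific):

```python
# PEP 393: the widest character in a string determines its storage layout.
import sys
ascii_only = "A" * 8              # 1 byte per character ("latin-1" kind)
with_wide  = "A" * 7 + "\u1234"   # forces 2 bytes per character
print(sys.getsizeof(ascii_only), sys.getsizeof(with_wide))  # they differ
```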
Speaking about this particular patch, I don't understand the point. secure_compare() is obviously meant to be used on inputs of some known length, not arbitrary data.
IMHO it's not obvious to all users. Better safe than sorry. ;)
The invariant 'known and equal length' imposes an artificial limitation. Code may need to compare outside data with internal data without exposing too many details about the structure of the internal data.
The other matter should be discussed at bpo-14955.
I don’t want to be the killjoy, but I find it highly questionable to add a function that is advertised as "secure" while we can't fully grok the complexities at play. If we can't produce a provably secure one, we should scrub the function for good, or at least rename it somehow.
Unjustified perceived security is IMHO the worst possible case.
> I don’t want to be the killjoy, but I find it highly questionable to add a function that is advertised as "secure" while we can't fully grok the complexities at play. If we can't produce a provably secure one, we should scrub the function for good, or at least rename it somehow.
The function is probably secure (modulo unseen bugs) in the bytestrings-of-the-same-size case. To make it "provably" secure, we could write a C version (which would be quite easy).
For unicode strings things are a bit trickier though. Again, a C version could provide some guarantees (and could raise an error if the passed unicode strings use a different representation from each other).
Antoine, seriously? You want to export a function that's called "secure" when the only thing you know about it is "probably secure"? This is extremely tricky business, and I think it should be called secure only if you can prove it's secure. Otherwise it's plain insecure and should not be named that.
> Antoine, seriously? You want to export a function that's called "secure" when the only thing you know about it is "probably secure"? This is extremely tricky business, and I think it should be called secure only if you can prove it's secure. Otherwise it's plain insecure and should not be named that.
What's the methodology to "prove" that it's secure?
We could rename "secure" to "safe" to tone it down a bit, but it's still an improvement on the nominal equality comparison.
For unicode, at the very least, it's not an improvement at all. With the mentioned patch that does the encoding, it's also not an improvement at all. Prove as in: reason about the function in C and make sure it does not do any conditionals depending on the input data. This is much easier than doing the same in Python.
We did this exercise for PyPy once, just for the sake of it. We looked at the generated IR and made sure a comparison does not leak any data.
As far as the function goes right now, I don't know. Following the entire code of long_bitwise is a lot of effort; I genuinely can't say that it will be the same for all numbers 0-255. Can you? It's simply easier with a low-level language. (And yes, this is one of the few cases where I would argue it makes sense to implement something in C. :)
I recommend reverting the addition of the function, given that it can't be made secure.
I have two suggestions:
1. Rename the function to 'total_compare'. The name explains what the function actually does in comparison to '==': it takes the total input into account instead of using short-circuit comparison.
2. Restrict the function to bytes. The implementation works well with bytes but not with unicode. That's not a problem in practice, since hash digests are bytes. The docs could explain the issue with unicode and how user code can implement a reasonably safe conversion. (A sketch combining both suggestions follows below.)
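A minimal sketch of what the two suggestions would combine to (name and details illustrative, not the committed implementation):

```python
# Hypothetical total_compare(): bytes-only, never short-circuits on content.
def total_compare(a, b):
    if not (isinstance(a, bytes) and isinstance(b, bytes)):
        raise TypeError("total_compare() requires bytes arguments")
    if len(a) != len(b):     # leaking the length is accepted here
        return False
    result = 0
    for x, y in zip(a, b):   # iterating bytes yields small ints
        result |= x ^ y      # accumulate differences across every byte
    return result == 0       # single data-dependent branch, at the end
```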
Hi Christian. It's either secure or it's not. If it's not, there is no point in introducing it at all, as I don't think it's a good idea to have kind-of-secure-but-I-don't-know functions in the stdlib.
If you restrict input to bytes it looks OK-ish, but I looked at all the code that's invoked on the C side and it's quite a lot of code. Did you or anyone else actually go and review all the C code that's called via the various operations, to check whether it depends on the values of the characters? I can't tell myself; it's too long.
> It's either secure or it's not.
I don't think that's true. By that reasoning, Python is not secure so there's no point in fixing crashes or providing a hashlib module.
That said, I think renaming to "total_compare" isn't really helpful. The point of the function is to be secure (as much as possible), not to do a "total" comparison.
Maciej, please read http://mjg59.dreamwidth.org/13061.html
"Secure" vs "not secure" is not a binary state - it's about making attacks progressively more difficult. Something that is secure against a casual script kiddie scatter gunning attacks on various sites with an automated script won't stand up to a systematic attack from a motivated attacker (also see the reporting on Flame and Stuxnet for what a *really* motivated and well resourced attacker can achieve).
The hash randomisation changes didn't make Python completely secure against hashing DoS attacks - it just made them harder, by requiring attackers to figure out the hashing seed for the currently running process first. It's protecting against scatter gun attacks, not targeted ones.
Performing a timing attack on Python's default short-circuiting comparison operation is *relatively easy* because the timing variations can be so large (send strings of increasing length until the time stops increasing - now you know the target digest length. Then try various initial characters until the time starts increasing again - now you know the first character. Repeat for the last character, then start with the second character and work through the string. Now you have the target hash, which you can try to crack offline at your leisure).
The new comparison function is designed to significantly *reduce* the variance, thus leaking *less* information about the target hash and making the attack *harder* (albeit probably still not impossible).
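As a rough illustration of the variance being described, here is a purely local measurement sketch (names and values are made up; a real remote attacker would need to average far more samples to overcome network jitter):

```python
# Timing '==' on bytes: a matching prefix measurably delays rejection.
import timeit

secret = b"s" * 32

def probe(guess, number=200000):
    return timeit.timeit(lambda: guess == secret, number=number)

t_bad_length = probe(b"x" * 16)              # rejected on length alone
t_bad_first  = probe(b"x" * 32)              # rejected at byte 0
t_half_match = probe(b"s" * 16 + b"x" * 16)  # 16 bytes compared first

# Short-circuiting makes t_half_match > t_bad_first: exactly the
# ordering an attacker exploits, one byte position at a time.
print(t_bad_length, t_bad_first, t_half_match)
```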
I agree with Christian's last two suggestions: change the name to total_compare, and only allow use on byte sequences (where the integer values are certain to be cached).
Nothing should ever be called "safe" or "secure" in the standard library, because the immediate question from anyone that knows what they're talking about is "Secure/safe against what level of threat and what kind of threat"? People that *don't* know what they're doing will think "Secure/safe against everything" and that's dangerously misleading.
Improving protection against remote timing attacks (e.g. by reducing comparison timing variance to the point where it is small relative to network timing variance) is a *lot* easier than protecting against local timing attacks. Protecting against someone with physical access to the machine hosting the target hash is harder still.
Regardless, the target needs to be *improving the status quo*.
Being able to tell people "using hmac.total_compare will make you less vulnerable to timing attacks than using ordinary short circuiting comparisons" is a *good thing*. We just need to be careful not to oversell it as making you *immune* to timing attacks.
"Secure" vs "not secure" is not a binary state - it's about making attacks progressively more difficult. Something that is secure against a casual script kiddie scatter gunning attacks on various sites with an automated script won't stand up to a systematic attack from a motivated attacker (also see the reporting on Flame and Stuxnet for what a *really* motivated and well resourced attacker can achieve).
The problem here is that _if_ you add "secure" to the name of a method, it becomes binary - at least in the minds of the users. I know you address that, but that's the main point here.
> Regardless, the target needs to be *improving the status quo*.
>
> Being able to tell people "using hmac.total_compare will make you less vulnerable to timing attacks than using ordinary short circuiting comparisons" is a *good thing*. We just need to be careful not to oversell it as making you *immune* to timing attacks.
Why not write a C function which can be more secure than Python code? I would argue that would be a general asset for the stdlib, not just for HMAC (therefore, I'd put it elsewhere).
> > It's either secure or it's not.
>
> I don't think that's true. By that reasoning, Python is not secure so there's no point in fixing crashes or providing a hashlib module.
The proper statement is "It's either time-independent or it's not". This *is* a binary state (I agree that being secure is not a binary state).
> Being able to tell people "using hmac.total_compare will make you less vulnerable to timing attacks than using ordinary short circuiting comparisons" is a *good thing*.
No, it's not. It's a *bad thing*. The two issues that have been opened since the function was first submitted indicate that people will keep inspecting the code and finding out that it's not time-independent. If they had been relying on it being time-independent, they will be upset. Since it's inherently impossible to make the function time-independent, people will be constantly annoyed by this function. I can't find anything good in that.
If nobody else does, I'll revert the addition before the beta. Note that there is no *actual* issue that is being resolved by this function; it was added only because of its cuteness value.
> Why not write a C function which can be more secure than Python code?
For Unicode strings, it's impossible to write a time-independent comparison function even in C.
> I would argue that would be a general asset for the stdlib.
I would argue that it's not. No actual use case for this function has been demonstrated so far.
> > Why not write a C function which can be more secure than Python code?
>
> For Unicode strings, it's impossible to write a time-independent comparison function even in C.
Really? Some comments sounded different. That's too bad, but also what I suspected in the first place – it seems too complex.
However, this function seems only useful for bytes anyway, so why not strip it down if it _is_ possible with bytes? Am I missing something?
> > I would argue that would be a general asset for the stdlib.
>
> I would argue that it's not. No actual use case for this function has been demonstrated so far.
Well, one example: https://github.com/mitsuhiko/python-pbkdf2/blob/master/pbkdf2.py and any other place that compares passwords, tokens, …
Can people please stop raising a false dichotomy and using that as an excuse not to do anything?
The decision is not between "leak some information" and "leak no information". It is between "leak more information" and "leak less information".
The timing variations with standard comparison are relatively massive and relatively easy to analyse (if the time taken goes up, you got the previous digit correct).
With this comparison, they're far more subtle and require much greater analysis to figure out the significance of the timing changes. That reduces the pool of attackers to those capable of performing that analysis (or in possession of tools that will perform that analysis for them).
Yes, the docs and name are currently completely unacceptable. But scorched earth is not a good answer, because that just means people will fall back to using "==" which is *even worse* from a security point of view.
> Well, one example: https://github.com/mitsuhiko/python-pbkdf2/blob/master/pbkdf2.py
It says that it needs that, but I fail to understand why. pbkdf2 is used to generate encryption keys from passwords, where you don't need to compare strings at all. Instead, you derive a key from the password, and use the key e.g. for AES encryption.
If you use pbkdf2 for password hashing, then you do need a comparison function, but it's irrelevant whether that is time-independent. If an attacker was able to determine that his hash brings him close to the actual hash, this is no gain in cracking, since similar hashes do not at all mean that the passwords are similar.
> and any other place that compares passwords, tokens, …
No no no. Any sensible place to compare passwords would use some sort of one-way function (password hash) before the comparison, so that someone breaking into the machine will not gain the clear text passwords. As a side effect, timing attacks become futile, since hash functions provide confusion and diffusion, so if a timing attack detects that it found a key that hashes similar to the real key, that doesn't get it any closer to revealing the real key.
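A sketch of the hash-before-compare pattern Martin describes (hashlib.pbkdf2_hmac and os.urandom are real stdlib APIs; the surrounding helper functions are hypothetical):

```python
# Store only a salted one-way hash; compare hashes, never clear text.
import hashlib
import os

def hash_password(password: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100000)

def store_password(password: bytes):
    salt = os.urandom(16)
    return salt, hash_password(password, salt)

def check_password(password: bytes, salt: bytes, stored_hash: bytes) -> bool:
    # Even if '==' leaks timing here, what leaks is the *hash*, and
    # (per Martin's argument) similar hashes say nothing about the password.
    return hash_password(password, salt) == stored_hash
```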
To repeat, the specific feature being proposed for retention is:
- a function called hmac.total_compare() that is clearly documented as being still vulnerable to timing analysis given a sufficiently sophisticated attacker, while still being more resistant to such analysis than the standard comparison operator
- restricting that function to operating on bytes, to eliminate timing variations associated with encoding/decoding of Unicode text and reduce those associated with the calculation of integer values
Leaking less information on each comparison is intended to increase the effectiveness of higher level timing attack countermeasures (such as rate limiting and lockouts). Anyone that would use "hmac.total_compare" and call it done is likely using ordinary comparison today (which is even worse).
> The timing variations with standard comparison are relatively massive and relatively easy to analyse (if the time taken goes up, you got the previous digit correct).
If you have an application that is vulnerable to such an attack, you had better reconsider your entire approach, rather than use a library function that says it will harden your approach (but may actually not).
> Yes, the docs and name are currently completely unacceptable. But scorched earth is not a good answer, because that just means people will fall back to using "==" which is *even worse* from a security point of view.
It's not scorched earth. It's the standard procedure for adding features to the standard library. *At a minimum* there needs to be a use case, which has not been demonstrated yet (the OP of the original report thought he had a use case, but then agreed that he was wrong). Then, the use case should be fairly relevant for inclusion in the standard library. I wish there was an AES implementation in the library before we start worrying about stuff like this. Then, ideally, the new library function has been in wide use for some time.
This was rushed, and it needs to be reverted.
> To repeat, the specific feature being proposed for retention is: [...]
To repeat, no use case has been demonstrated for that function. It has been added because it was fun to write, not because it is useful.
Nick, I fail to understand why you are opposing writing such a function in C. Such a function can be provably time-independent (and, as MvL says, this is a binary state), at least as long as it operates on bytes (I'll refrain from asking about unicode; I think it's possible, but I don't know).
For the same function in Python, it's at the very least much harder to prove (and it has bugs, as we've seen).
Is comparing passwords against a secure one not useful?
> > and any other place that compares passwords, tokens, …
>
> No no no. Any sensible place to compare passwords would use some sort of one-way function (password hash) before the comparison, so that someone breaking into the machine will not gain the clear text passwords.
I agree that this is the right way to do it. However, I disagree that it's the only sensible way to do it in the real world. Sometimes you just _have_ to compare sensitive strings, whether you like it or not.
I see your point that adding such a function could encourage bad security behaviour and thus may be a bad thing. The usefulness of such a function to some(?) people is IMHO not disputable, though.
Note that this does not relieve you from using a time-independent comparison function. You call some hash function (whose running time is known to the attacker), then you compare the result against a stored hashed version. If you use a normal compare, you're leaking the hash. This is indeed not as bad as leaking the password, but it has been demonstrated that one-way functions are still vulnerable to certain attacks, so it's not ideal either.
This point was discussed in bpo-14532 when the new API was added.
From http://bugs.python.org/issue14532#msg158045:
"""Given that this issue has affected a lot of security-sensitive third-party code (keyczar, openid providers, almost every python web service that implements "secure cookies" [1] or other HMAC-based REST API signatures), I do like the idea of adding a warning in the relevant documentation as sbt proposed.
The only reason I'd recommend _not_ putting a time_independent_comparison() function in the hmac module is that it's not really HMAC-specific. In practice, any fixed-length secrets should be compared in a time-independent manner. It just happens that HMAC verification is a pretty common case for this vulnerable construct. :-)"""
For password hashing, the attacker is unlikely to be able to provide the digest directly, but for signature checking it's far more likely to be the case.
The idea is to make it easy for people to reduce the time variance of their digest comparisons as the *default* choice when writing security related code. Deciding whether or not the risk of a timing attack is actually significant requires you to look at the system as a whole and decide "Oh, OK, shortcircuiting comparison doesn't leave us open to timing analysis here, we can use it as a performance enhancement". (Although, in most systems, there are likely to be plenty of other less sensitive places to go after for performance improvements first)
I'm not really opposed to writing it in C - I just don't think rewriting it in C should be a requirement for keeping it. Even in pure Python, it still leaks less information than the standard comparison operator.
> Is comparing passwords against a secure one not useful?
I claim that this use case doesn't occur in practice. Everybody uses hashed passwords. If they do compare against a plain-text password, and they want to change something about it, they should switch to hashed passwords.
> I see your point that adding such a function could encourage bad security behaviour and thus may be a bad thing. The usefulness of such a function to some(?) people is IMHO not disputable, though.
I think this entire issue is out of scale. There are really bigger fish to fry. The two people who really need this should post a package to PyPI.
> [...] If you use a normal compare, you're leaking the hash. This is indeed not as bad as leaking the password, but it has been demonstrated that one-way functions are still vulnerable to certain attacks, so it's not ideal either.
But you don't leak the hash - you leak the first byte of the hash if you make 256 tries, and the first two bytes if you make 65536 tries. To leak the first four bytes of the hash, you need to make 2**32 tries. So this is equivalent to a brute-force attack, which works just as well against a time-independent function. So using a time-independent function does not add any security.
Martin, you fail to understand how this works. You don't do 2**32 tries to leak the 4 characters; you need 4 * 256. That's why this attack is so bad: the time needed for the next character is brute force, but then you can move on to the next one.
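The arithmetic behind fijal's point, assuming each byte's correctness is independently observable through timing (which, as discussed below, holds when the attacker can feed the compared value in directly, as in signature checking):

```python
# Byte-at-a-time guessing vs. blind brute force for a 4-byte value.
n, alphabet = 4, 256
print(n * alphabet)   # 1024 tries when each byte leaks independently
print(alphabet ** n)  # 4294967296 tries when nothing leaks
```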
> For password hashing, the attacker is unlikely to be able to provide the digest directly, but for signature checking it's far more likely to be the case.
Can you elaborate? What is the application, where is the digest checking, and what is the threat?
> Martin, you fail to understand how this works. You don't do 2**32 tries to leak the 4 characters; you need 4 * 256. [...]
How so? Assume we have a hashed password, and assume we have somehow guessed the first three bytes. How can I then find out the fourth byte in only 256 tries?
I would have to generate passwords whose *hash* matches in the first three bytes. This is not feasible, for any hash function that is worth its salt.
That's why the vulnerable cases are far more likely to be related to *signature* checking. In those you can generally provide both the hash input (the message) and the hash target (the purported "signature").
If the signature check uses a time-dependent comparison that exhibits a lot of externally visible variance, then you can use a timing attack to find the signature that corresponds to a particular message (by keeping the message constant and changing the "signature"). Depending on the nature of the message, you're potentially done at that point (since on your final attempt your signed message was accepted), or else you may be after data that you can feed into an analysis aimed at breaking the signing key itself (a much harder prospect, but still possible given a sufficiently large sample, or a signing algorithm that is vulnerable to leaking the key as a result of chosen plaintext attacks).
Yes, system level defences are also important (that's why multiprocessing turned out to not, in fact, be vulnerable to an attack based on time dependent signature comparisons), but minimising information leakage is just a good principle of secure design.
For example, Django uses a time-independent comparison to check the signatures of signed cookies. A signed cookie consists of a plain-text value followed by a signature.
An attacker wants to construct a cookie that has a malformed value and a valid signature for that value. Let's assume that a signature is a string of 16 hex characters.
If a short-cut comparison were used, the attacker would require at most 16 tries to find out the first character. He first tries the signature "000...0", then "100...0", and so on until he notices that Django takes a slightly longer time to respond. Now he knows the first character of the hash; let's assume it's "8". He then tries "8000...0", "8100...0", and so on until he finds the second character. He continues until he has the correct 16 characters. This takes at most 16 * 16 tries.
But because Django uses a constant-time comparison function, the attacker cannot guess one character at a time, and he needs 16 ** 16 tries.
In the real world, 16 * 16 tries is not enough, of course. But by repeating the same requests many times, the timing variations can be used to reveal the correct character at each step.
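A sketch of the per-character strategy Petri describes. Here `measure(candidate)` stands for a hypothetical timing oracle (the averaged server response time for a forged cookie carrying `candidate` as its signature); everything is illustrative rather than working attack code:

```python
import string

def recover_signature(measure, length=16):
    # measure(candidate) is a hypothetical oracle: averaged response
    # time for a forged cookie whose signature field is `candidate`.
    known = ""
    for pos in range(length):
        timings = {}
        for ch in string.hexdigits[:16]:  # '0'-'9' and 'a'-'f'
            # Pad with zeros so only the guessed position varies.
            candidate = known + ch + "0" * (length - pos - 1)
            timings[ch] = measure(candidate)
        known += max(timings, key=timings.get)  # slowest reply = best guess
    return known  # about 16 * 16 oracle queries instead of 16 ** 16
```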
FWIW, Petri's example also explains why leaking the expected length of the string is considered an acceptable optimisation in most reimplementations of this signature check comparison: the attacker is assumed to already know the expected length of the signature, because it's part of a documented protocol or API.
However, I think it's more reasonable for a standard library implementation to omit that optimisation by default.
> That's why the vulnerable cases are far more likely to be related to *signature* checking. In those you can generally provide both the hash input (the message) and the hash target (the purported "signature").
I see. I wonder how feasible this attack is, though, since the signature validation involves a lot of computation (including hashing the original message), of which comparing the signature values is but a tiny step.
In addition, for public-key signatures (RSA and DSA), I fail to see the threat. I can verify the signature offline, so I don't need to rely on timing attacks to find out what the correct signature would be. Plus I still need to brute-force the signatures, whether for offline generation, or in the timing attack, because it's *not* the case that the expected signature gets compared with the provided signature. Instead, something derived from the message gets compared with something derived from the purported signature.
It's really only HMAC which may be vulnerable. In
http://seb.dbzteam.org/crypto/python-oauth-timing-hmac.pdf
the author discusses that the real protection against this attack is to fix the actual bugs in the OAuth implementation (OAuth is designed to prevent replay attacks, which would also prevent this attack). In the end, he claims that using a time-independent comparison would "add even more security", but in the same hand-waving fashion shown in this issue (i.e. without providing any proof that his proposed attack actually works - which I claim it won't, due to the noise caused by the HMAC generation).
Oh dear god, what have I done... I threw a small stone and caused a major landslide. :)
I'm all with Nick on this topic. A correctly named and documented function provides users with a tool that greatly reduces the chance of a side channel attack. It's all about teaching good practice. I also agree that we must neither call it 'secure' nor document it as 'secure'. I believe the correct term is 'hardened against timing analysis and side channel attacks'.
I could wrap up a quick C implementation if you like. The operator module is a better place for a total_compare() function. Do you agree?
I recommend that you read/watch Geremy Condra's PyCon talk "Through the Side Channel: Timing and Implementation Attacks in Python". The slides contain timing analysis diagrams.
> I could wrap up a quick C implementation if you like. The operator module is a better place for a total_compare() function. Do you agree?
I think the function is fine in either hashlib or hmac. Putting it in one of these modules is a hint that it's security-related. On the other hand, linking to it from these modules' documentation is just as fine, if it is put in the operator module.
If you make a C implementation, it could also be interesting to cover the pure-ASCII unicode case.
As a first step, I'm going to change the function's name and documentation so they no longer oversell what it provides.
If a C-implemented operator.total_compare is made available, then hmac.compare_digest could be updated to use it (retaining the length short-circuiting behaviour).
New changeset f36af3766a20 by Nick Coghlan in branch 'default': Issue bpo-15061: Don't oversell the capabilities of the new non-shortcircuiting comparison function in hmac http://hg.python.org/cpython/rev/f36af3766a20
OK, the worst aspects (the misleading name and documentation) have been dealt with, so that leaves the questions of:
- Avoiding leaking the length information (seems unnecessary, since most digests are part of protocols where they have a known, published length, or you can find out the length by looking at public source code)
- Providing a C implementation via the operator module (given the restriction to bytes values, and the assumption of caching for all relevant integers, would a C reimplementation really be buying us much additional security?)
As far as restoring unicode support (even in a C implementation) goes, I actually don't like the idea. For the general unicode case, as suggested in the updated documentation for hexdigest(), I believe the better approach is to try to stay in the bytes domain as much as possible, and avoid having a Unicode->bytes conversion for the expected value anywhere in the comparison timing path.
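For reference, the function that eventually emerged from this discussion ships as hmac.compare_digest() in Python 3.3+. A typical signature check looks like this (the key and message names are made up for the example):

```python
import hashlib
import hmac

key = b"server-side-secret"
message = b"payload to authenticate"
expected = hmac.new(key, message, hashlib.sha256).digest()

def verify(received_digest: bytes) -> bool:
    # Non-short-circuiting comparison: the runtime does not depend on
    # how many leading bytes of the attacker-supplied digest match.
    return hmac.compare_digest(received_digest, expected)
```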
GitHub fields:
```python
assignee = None
closed_at =
created_at =
labels = ['type-security', 'library']
title = 'hmac.secure_compare() leaks information about length of strings'
updated_at =
user = 'https://github.com/tiran'
```
bugs.python.org fields:
```python
activity =
actor = 'christian.heimes'
assignee = 'none'
closed = True
closed_date =
closer = 'christian.heimes'
components = ['Library (Lib)']
creation =
creator = 'christian.heimes'
dependencies = []
files = ['26003', '26068', '26079', '26106', '26112', '26120']
hgrepos = []
issue_num = 15061
keywords = ['patch', 'needs review']
message_count = 96.0
messages = ['162739', '162758', '162759', '162760', '162761', '162762', '162763', '162764', '162765', '162766', '162767', '162768', '162769', '162770', '162773', '162775', '162777', '162778', '162838', '162845', '162846', '162847', '162848', '162850', '162852', '162853', '162855', '162856', '162857', '162858', '162859', '162860', '162861', '162862', '162863', '162864', '162865', '162866', '162867', '162868', '162871', '162872', '162873', '162875', '162877', '162880', '162882', '162885', '162888', '162891', '162892', '162893', '162895', '162899', '162914', '162949', '162950', '163159', '163163', '163168', '163170', '163186', '163188', '163192', '163193', '163196', '163204', '163329', '163333', '163343', '163347', '163365', '163366', '163368', '163371', '163377', '163378', '163385', '163390', '163468', '163469', '163613', '163614', '163615', '163617', '163619', '163623', '163625', '163626', '163627', '163630', '163652', '163671', '163696', '163780', '163784']
nosy_count = 13.0
nosy_names = ['loewis', 'georg.brandl', 'gregory.p.smith', 'ncoghlan', 'pitrou', 'christian.heimes', 'alex', 'fijall', 'python-dev', 'petri.lehtinen', 'hynek', 'serhiy.storchaka', 'Jon.Oberheide']
pr_nums = []
priority = 'normal'
resolution = 'fixed'
stage = 'resolved'
status = 'closed'
superseder = None
type = 'security'
url = 'https://bugs.python.org/issue15061'
versions = ['Python 3.3']
```