Open tiran opened 11 years ago
I'd like to propose a new feature for bytes and perhaps bytearray. None of the bytes types support bitwise operations like &, | and ^. I'd like to add the bitwise protocol between byte objects of equal length:
>>> a, b = b"123", b"abc"
>>> bytes(x ^ y for x, y in zip(a, b))
b'PPP'
>>> a ^ b
b'PPP'
bytearray should certainly get the same functionality as bytes.
I assume you have a use case?
Use ints.
(int.from_bytes(a, 'big') ^ int.from_bytes(b, 'big')).to_bytes(len(a), 'big')
Adding & and | operations to bytes would be confusing because bytes is a sequence, and this would conflict with set operations.
The int-based spelling isn't very pretty though. And sequences are not sets :-) I suppose the use case has something to do with cryptography?
When you work with ints instead of bytes, the spelling is pretty enough.
Sets are sequences. So if one type supports, for example, both iteration and the | operation, we can't be sure what | means.
Please don't turn Python to APL or Perl.
Perhaps a separate mutable bitset (or bitlist?) type would be more useful.
Sets are sequences.
>>> from collections import abc
>>> issubclass(abc.Set, abc.Sequence)
False
>>> issubclass(abc.Sequence, abc.Set)
False
Perhaps a separate mutable bitset (or bitlist?) type would be more useful.
Then I would prefer a view, using e.g. the buffer API, but without any copies.
You got me, Antoine! I'm working on a Python-only implementation of PBKDF2_HMAC. It involves XOR of two bytes in one place.
Serhiy, I'm not going to turn Python into Perl. I merely want to provide a simple and well defined feature with existing syntax. I propose bitwise ops only between bytes and bytearrays of equal size. Everything else (unequal sizes, bitwise ops between integers and bytes, etc.) is out of the question.
+1
'XOR of two bytes in one place' strikes me as a thin excuse for a new feature that abbreviates a simple, short, one-liner. To me, bytes(x ^ y for x, y in zip(a, b)) looks fine. a and b can be any iterables of ints.
Hm... I think you are right.
bytes(x ^ y for x, y in zip(a, b)) is super-slow if you have to do the XOR inside a hot loop tens of thousands of times. int.from_bytes + int.to_bytes is about ten times faster. I expect bitwise ops on bytes to be even faster and more readable.
$ python3.3 -m timeit -n 100000 -s "a = b'a'*64; b = b'b'*64" "bytes(x ^ y for x, y in zip(a, b))"
100000 loops, best of 3: 7.5 usec per loop
$ python3.3 -m timeit -n 100000 -s "a = b'a'*64; b = b'b'*64" "i = int.from_bytes(a, 'little') ^ int.from_bytes(b, 'little'); i.to_bytes(64, 'little')"
100000 loops, best of 3: 0.866 usec per loop
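For code that needs this in a hot loop today, the faster int-based spelling can be hidden behind a small helper (a sketch; the name xor_bytes is illustrative, not an existing API):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings via the int round-trip trick."""
    if len(a) != len(b):
        raise ValueError("arguments must have equal length")
    n = len(a)
    return (int.from_bytes(a, 'little') ^
            int.from_bytes(b, 'little')).to_bytes(n, 'little')
```

This also adds the equal-length check that the bare one-liner lacks.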
Christian, we need multiple motivating use cases to warrant API expansion for fundamental types.
I'm concerned about starting to move numpy vector-op functionality into the core of Python just because it optimizes your one crypto use case.
I see that the feature idea is more controversial than I initially expected. It's probably best to have a long bike-shedding discussion on Python-ideas... :) Right now from_bytes/to_bytes trick is fast enough for my needs anyway. Therefore I'm deferring the proposal for 3.5.
By the way I'd also be happy with a set of vector ops in the operator module, e.g. vector_xor(a, b).
+1, a generic vector_xor function looks like a better idea.
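One possible shape for such a helper (a pure-Python sketch; vector_xor is the name suggested above, and nothing like it exists in the operator module today):

```python
from operator import xor


def vector_xor(a, b):
    # Elementwise XOR of two equal-length byte sequences,
    # returning a new bytes object.
    if len(a) != len(b):
        raise ValueError("vector_xor arguments must have equal length")
    return bytes(map(xor, a, b))
```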
By the way I'd also be happy with a set of vector ops in the operator module, e.g. vector_xor(a, b).
This is one direction. Other direction is adding the bitarray or bitset container and the bitview adapter.
"You got me, Antoine! I'm working on a Python-only implementation of PBKDF2_HMAC. It involves XOR of two bytes in one place."
If you want super-fast code, you should probably reimplement it in C. Python is not designed for performance...
I'm very skeptical of this. I expect it would cause quite a few surprises for people who aren't used to bitmask operations on integers, let alone on (byte) strings.
Two years ago I suggested this:
Then I would prefer a view, using e.g. the buffer API, but without any copies.
... and of course Numpy already provides such an API:
>>> a, b = b"123", bytearray(b"abc")
>>> p = np.frombuffer(a, dtype=np.int8)
>>> q = np.frombuffer(b, dtype=np.int8)
>>> p ^ q
array([80, 80, 80], dtype=int8)
>>> b[0] = 64
>>> p ^ q
array([113, 80, 80], dtype=int8)
Another possibility if you don't mind a copy is to use int.from_bytes():
>>> p = int.from_bytes(a, 'little')
>>> q = int.from_bytes(b, 'little')
>>> (p ^ q).to_bytes(len(a), 'little')
b'qPP'
For what it's worth, I looked at some old code (a clean re-implementation of a crypto algorithm in Python, used as a unit test for production code in C++) and found this:
class BytesBits(bytes):
    # from a quick test, 1.12us in NumPy, 1.39us in C, 2.55us this way,
    # 46.1us with bytes(genexpr), so we don't need numpy
    def _bitwise(self, other, op):
        iself = int.from_bytes(self, 'little')
        iother = int.from_bytes(other, 'little')
        b = op(iself, iother)
        return b.to_bytes(len(self), 'little')
    def __or__(self, other):
        return self._bitwise(other, int.__or__)
    __ror__ = __or__
    def __and__(self, other):
        return self._bitwise(other, int.__and__)
    __rand__ = __and__
    def __xor__(self, other):
        return self._bitwise(other, int.__xor__)
    __rxor__ = __xor__
It doesn't do as much error checking as you want, but it was good enough for my purposes.
At any rate, the fact that it's trivial to wrap this up yourself (and even more so if you just write functions called band/bor/bxor instead of wrapping them up in a subclass) implies to me that, if it's not used all that often, it doesn't need to be on the builtin types.
I'm very skeptical of this. I expect it would cause quite a few surprises for people who aren't used to bitmask operations on integers, let alone on (byte) strings.
Maybe it makes more sense to implement such operation on the array.array type? But only for integer types (ex: not for 'd', float)?
FWIW a long time ago I wanted fast XORing of 512-byte “sectors” of Rar files. Initially I think I used array.array with the largest word size available, or numpy if available. Later when I learnt more Python I discovered the int.from_bytes() trick, and used int(hexlify(...), 16) in Python 2. So I guess the array module was a relatively obvious place to look, and the long integer trick was unexpected (I’d never programmed with unlimited size integers before).
Serhiy’s bit array idea also seems interesting. I bet somebody has already written a package. Maybe it could also be useful for things like Huffman encoding, where you string various bit strings together into a sequence of bytes. But I do wonder if all these things are too specialized for Python’s standard library.
Serhiy’s bitarray idea has another benefit. Bit masking is only half of the story. Since Python's int type has no length information, it is not possible to handle shifting with overflow, or rotation. A bit array can provide bit shifting ops like rshift, lshift, rotr and rotl.
There are a number of existing libraries on PyPI for bit arrays and bit sets. There are a lot more API choices than you'd think of in advance, and between them they explore most of the useful space. And I don't think any of them need to be in the stdlib.
Just mention it here :)
I've attached a diff that adds ^, |, and & to bytes and bytearray objects on the master branch (err, the analogous hg thing).
It also includes a test file which definitely is in the wrong place, but demonstrates what is working.
Personally this came up while I was playing with toy crypto problems. I expected it to already be part of the language, but it wasn't. I think this is a natural expectation. I don't think it is obvious to newer Python users that they need to cast bytes to ints to do bitwise operations on them.
And I do not understand how bitwise operations work on arbitrary precision integers. So perhaps it is not as simple of a concept as "bytes xor bytes".
Some folks have suggested using NumPy, but that is a very heavy dependency, and not useful outside of cpython.
I searched for code this would clean up in the wild. It was easy to find.
This would also catch bugs where bytes objects have different lengths but there is no check, like this code I found:
# XOR each byte of the roundKey with the state table
def addRoundKey(state, roundKey):
    for i in range(len(state)):
        state[i] = state[i] ^ roundKey[i]
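For comparison, here is a version with the explicit length check that bytes ^ bytes would enforce implicitly (a sketch for illustration, not the original project's code):

```python
def add_round_key_checked(state, round_key):
    # Same XOR loop, but failing loudly on mismatched lengths instead
    # of raising IndexError mid-loop (shorter key) or silently ignoring
    # trailing key bytes (longer key).
    if len(state) != len(round_key):
        raise ValueError("state and roundKey must have equal length")
    for i, k in enumerate(round_key):
        state[i] ^= k
```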
p.s. this is my first cpython issue/patch please let me know if I'm doing something wrong.
On Jan 10, 2016, at 06:48, cowlicks \report@bugs.python.org\ wrote:
Personally this came up while I was playing with toy crypto problems. I expected to already be part of the language, but it wasn't. I think this is a natural expectation.
Maybe if you're coming to Python from APL or R or something, but C, C#, Scala, Ruby, Haskell--almost any other language you pick, there's no bitwise operations on (byte) strings. (And if you _are_ coming to Python from something like R, you definitely should be using NumPy.)
And I do not understand how bitwise operations work on arbitrary precision integers.
It's obvious once you think of them as infinitely-wide fixed-width ints: 0x27 is the same number as 0x0027, so 0x27 & 0x0134 is 0x0024. (Of course bitwise NOT and negation aren't quite as obvious, but once you think about it, there's only one sensible thing 2's complement could do, and only two sensible things 1's complement could do, and Python is sensible, so it's not surprising once you try it out.)
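The "infinitely wide two's complement" rule is easy to check interactively (a quick illustration, not part of the original message):

```python
# The same number in different widths masks identically:
assert 0x27 == 0x0027
assert 0x27 & 0x0134 == 0x0024

# Negative ints behave like two's complement with infinitely many
# leading 1-bits, so masking recovers the low bits you expect:
assert ~0 == -1
assert -1 & 0xFF == 0xFF
assert -2 & 0xFF == 0xFE
```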
Some folks have suggested using NumPy, but that is a very heavy dependency, and not useful outside of cpython.
It's useful outside of CPython. While NumPyPy isn't 100% yet, it's usable enough for many projects.
More to the point, if you're looking for arrays that have really fast and highly readable elementwise operations, that's exactly what NumPy is all about. Sure, you can get bits and pieces of similar functionality without it, but if you're thinking about your code in NumPy's terms (elementwise operations), you really do want to think about NumPy.
Meanwhile, have you looked around PyPI at the various bitarray, bitstring, etc. libraries? Are they too slow, too heavyweight, or too inconveniently-API'd? I know whenever I want to play with things like Huffman coding or the underlying bit representation of IEEE floats or anything else bitwise besides basic C integer stuff, I reach for one of those libraries, rather than trying to hack things up around bytes strings. (Except for that example I posted above--clearly for some reason I _did_ hack things up around bytes strings that time--but it only took me that simple class to make things convenient and efficient.)
@Andrew Barnert
Maybe if you're coming to Python from... I'm not sure if you're trying to argue that my expectations are unusual? Python is my first programming language. To reiterate: I expected CPython to support bitwise operations on binary data. I don't think that is so strange.
No, I have not looked at PyPI. What I did was have an idea to do this, and there happened to be an open bug on it that needed a patch. So I wrote one.
And yes, I realize NumPy can do this, but it is still a very large dependency.
Anyway, here are some random projects which would look a lot nicer with this:
An implementation of the blake2 hash function in pure python. Consider this line: https://github.com/buggywhip/blake2_py/blob/master/blake2.py#L234
self.h = [self.h[i] ^ v[i] ^ v[i+8] for i in range(8)]
Which would become something like:
self.h ^= v[:8] ^ v[8:]
Which is much easier to read and much faster.
Or consider this function from this aes implementation: https://github.com/bozhu/AES-Python/blob/master/aes.py#L194-L201
def __mix_single_column(self, a):
    # please see Sec 4.1.2 in The Design of Rijndael
    t = a[0] ^ a[1] ^ a[2] ^ a[3]
    u = a[0]
    a[0] ^= t ^ xtime(a[0] ^ a[1])
    a[1] ^= t ^ xtime(a[1] ^ a[2])
    a[2] ^= t ^ xtime(a[2] ^ a[3])
    a[3] ^= t ^ xtime(a[3] ^ u)
This would become something like:
def __mix_single_column(self, a):
    a ^= a ^ xtime(a ^ (a[1:] + a[0:1]))
Clearer and faster.
Another piece of code this would improve: https://github.com/mgoffin/keccak-python/blob/master/Keccak.py#L196-L209
These were easy to find so I'm sure there are more. I think these demonstrate that despite what people *should* be doing, they are doing things in a way that could be substantially improved with this patch.
This does resemble NumPy's vectorized functions, but it is much more limited in scope since there is no broadcasting involved.
Here is a quick benchmark:
$ ./python -m timeit -n 100000 -s "a=b'a'*64; b=b'b'*64" "(int.from_bytes(a, 'little') ^ int.from_bytes(b, 'little')).to_bytes(64, 'little')"
100000 loops, best of 3: 0.942 usec per loop
$ ./python -m timeit -n 100000 -s "a=b'a'*64; b=b'b'*64" "a ^ b"
100000 loops, best of 3: 0.041 usec per loop
NumPy is the slowest, but I'm probably doing something wrong, and it's in IPython so I'm not timing the import:
In [13]: %timeit bytes(np.frombuffer(b'b'*64, dtype=np.int8) ^ np.frombuffer(b'a'*64, dtype=np.int8))
100000 loops, best of 3: 3.69 µs per loop
About 20 times faster.
In order to increase performance even more, use block operations on bytes. I.e. XOR by 8 bytes first (on a 64-bit system) while the remaining size is greater than or equal to 8, then by 4 bytes using the same loop, and then XOR the remaining bytes one byte at a time. This will increase performance roughly 8 times on 64-bit systems and 4 times on 32-bit systems.
See my PR https://github.com/KeepSafe/aiohttp/pull/687/files for details
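The word-at-a-time idea can even be sketched in pure Python with memoryview casts, though a real implementation would live in C (xor_chunked is a hypothetical name):

```python
def xor_chunked(a: bytes, b: bytes) -> bytes:
    # XOR 8 bytes at a time through a uint64 ('Q') view, then finish
    # the tail one byte at a time. memoryview.cast requires the buffer
    # length to be a multiple of the item size, hence the split.
    if len(a) != len(b):
        raise ValueError("arguments must have equal length")
    out = bytearray(a)
    n8 = len(a) - len(a) % 8
    if n8:
        ov = memoryview(out)[:n8].cast("Q")
        bv = memoryview(b)[:n8].cast("Q")
        for i in range(len(ov)):
            ov[i] ^= bv[i]
    for i in range(n8, len(a)):
        out[i] ^= b[i]
    return bytes(out)
```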
To reiterate, this issue would make more readable, secure, and speed up a lot of code.
The concerns about this being a numpy-like vector operation are confusing to me. The current implementation is already vector-like, but lacks size checking. Isn't "int ^ int" really just the xor of two arbitrarily long arrays of binary data? At least with "bytes ^ bytes" we can enforce the arrays be the same size.
I'd like to add the bitwise protocol between byte objects of equal length
Would it make sense to add such operators in a new module like the existing (but deprecated) audioop module?
If yes, maybe we should start with a module on PyPI. Is there a volunteer to try this option?
Can you point to some examples of existing code that would become more readable and faster when this feature exists? Separately, how is it more secure?
I have wanted bytes/bytearray bit operations (be sure to include the in place operators for bytearray) when using micropython where it is normal to be operating on binary data.
that said, i'd need someone from micropython to chime in as to whether they can magically detect
# Equivalent of: c = b ^ a
c = bytes(x ^ y for x, y in zip(a, b))
and make it run fast.
what is a similar expression for an in place bytearray modification?
# Equivalent of: a ^= b
assert len(a) == len(b)
for i, b_i in enumerate(b):
    a[i] ^= b_i
?
Why both of the above are slow is obvious: tons of work looping within python itself, creating and destroying small integers and/or tuples the entire time instead of deferring to the optimal loop in C.
Neither of the above "look as nice" as a simple operator would. But they are at least both understandable and frankly about the same as what you would naively write in C for the task.
Security claims? Nonsense. This has nothing to do with security. It is *fundamentally impossible* to write constant time side channel attack resistant algorithms in a high level garbage collected language. Do not attempt it. Leave that stuff to assembler or _very_ carefully crafted C/C++ that the compiler cannot undo constant time enforcing tricks in. Where it belongs. Python will never make such guarantees.
NumPy? No. That is a huge bloated dependency. It is not relevant to this as we are not doing scientific computing. It is not available everywhere.
The int.from_bytes(...) variant optimizations? Those are hacks that might be useful to people in CPython today, but they are much less readable. Do not write that in polite code, hide it behind a function with a comment explaining why it's so ugly to anyone who dares look inside please.
So as much as I'd love this feature to exist on bytes & bytearray, it is not a big deal that it does not.
Starting with a PyPI module for fast bit operations on bytes & bytearray objects makes more sense (include both pure python and a C extension implementation). Use of that will give an idea of how often anyone actually wants to do this.
If yes, maybe we should start with a module on PyPI. Is there a volunteer to try this option?
bitsets – ordered subsets over a predefined domain
bitarray – efficient boolean array implemented as C extension
bitstring – pure-Python bit string based on bytearray
BitVector – pure-Python bit array based on unsigned short array
Bitsets – Cython interface to fast bitsets in Sage
bitfield – Cython positive integer sets
intbitset – integer bit sets as C extension
Is it enough? Ah, and NumPy.
@gvanrossum in this previous comment: https://bugs.python.org/issue19251#msg257964
I pointed out code from the wild which would be more readable, and posted preliminary benchmarks. But there is a typo, I should have written:
def __mix_single_column(self, a):
    t = len(a) * bytes([reduce(xor, a)])
    a ^= t ^ xtime(a ^ (a[1:] + a[0:1]))
As @gregory.p.smith points out, my claim about security isn't very clear. This would be "more secure" for two reasons. Code would be easier to read and therefore verify, but this is the same as readability. The other reason, doing some binary bitwise op on two bytes objects enforces that the objects be the same length, so unexpected bugs in these code samples would be avoided.
bytes(x ^ y for x, y in zip(a, b))
(int.from_bytes(a, 'big') ^ int.from_bytes(b, 'big')).to_bytes(len(a), 'big')
# XOR each byte of the roundKey with the state table
def addRoundKey(state, roundKey):
    for i in range(len(state)):
        state[i] = state[i] ^ roundKey[i]
I'll look through the list @serhiy.storchaka posted and make sure this still seems sane to me.
> If yes, maybe we should start with a module on PyPI. Is there a volunteer to try this option?
bitsets – ordered subsets over a predefined domain
This is an array of *bits*, not an array of bytes (or integers).
bitarray – efficient boolean array implemented as C extension
This one is also an array of *bits*, but I see .frombytes() and .tobytes() methods.
bitstring – pure-Python bit string based on bytearray
Array of *bits*. I don't see how to use an array of integers (or a byte string) with it.
BitVector – pure-Python bit array based on unsigned short array
Bitsets – Cython interface to fast bitsets in Sage
bitfield – Cython positive integer sets
intbitset – integer bit sets as C extension
I'm too lazy to check these ones.
I didn't check whether these modules support operations like x^y.
Is it enough? Ah, and NumPy.
I'm quite sure that NumPy supports operations like x^y ;-) And NumPy supports a wide choices of arrays.
Victor
I'd like to speak in support of bitwise ops on bytes and bytearray (agreed, on equal lengths only).
I've got 2 arguments here:
readability: a ^ b, a | b and so forth are clear and direct
all the various incantations presented must be _understood_, not to mention invented anew by anyone wanting to do the same, with the same burden of getting it correct; of course one can say the same of any feature not already present in a language, but that is trite; there are several ways to say this and all have varying degrees of speed, obtuseness and verbosity. And they're all SLOW.
Regarding some of the counter arguments in the discussion:
Maybe cowlicks should have said "reliable", though to my naive eye a normal implementation would be constant time for a given size. I would argue that the clarity and directness of just writing "a^b" immediately makes for trivially correct code, which itself is a necessary prerequisite for secure code.
This is not an argument against the feature. That one had to perform similar activities in Python as in C merely reflects the present lack of these operators, not a preexisting gleaming sufficiency of operator richness.
Anyway, I am for this feature, for the record.
Amendment: I wrote above: "Just because a lot of things can be written/constructed as one liners doesn't mean they should be operators". Of course I meant to write "doesn't mean they should _not_ be operators".
I'm back in the embedded software world now, and hence working with the combination of:
It's the kind of environment where having the struct module in the standard library is incredibly valuable, and the main things that better support for direct manipulation of binary data could potentially offer us is avoiding some "memory -> struct.unpack -> process -> struct.pack -> memory" round trips, as well as potentially reducing the overall amount of code we have to maintain.
So I'll keep an eye out for potential opportunities for code simplification - while crypto algorithms, file formats, network protocols, and hardware interfaces can all call for this kind of thing, I'm less sure how often we're encountering it in situations where having it available would have let us avoid invoking struct entirely.
@ncoghlan
Could you please create a pull request on GitHub?
This issue isn't at the stage where a PR would help - the core question is still "Should we add better native support for multi-byte bitwise operations?", not the specifics of what the API might look like or how we would implement it.
Nick, for your tasks you may be interested in PEP-3118 which still is not completely implemented (bpo-3132).
Thanks for the link Serhiy (I'd forgotten about the struct changes proposed in PEP-3118), but the existing struct formatting codes are fine for my purposes.
The question is whether we might be able to avoid some bytes->Python-objects->bytes cycles if there were a few more contiguous-binary-data-centric operations on bytes and/or memoryview (similar to the way the ASCII-centric operations on bytes and bytearray help to avoid bytes->text->bytes cycles).
On 17/05/2018 at 18:20, Nick Coghlan wrote:
The question is whether we might be able to avoid some bytes->Python-objects->bytes cycles if there were a few more contiguous-binary-data-centric operations on bytes and/or memoryview (similar to the way the ASCII-centric operations on bytes and bytearray help to avoid bytes->text->bytes cycles).
Can you elaborate on your question?
I'd second the proposal of considering the "array.array" type for this, instead of the bytes/bytearray types. Why? Because it is somewhat of a niche case after all, and the bytes type seems too exposed for it. Also, array.array supports more diverse base item types than just plain bytes, which could potentially cover more use cases or at least simplify certain data processing needs.
Alternatively, implement a lightweight SIMD-like memoryview type that operates on arbitrary buffers (which covers bytes, bytearray and array.array). But that's either NumPy then, or something new that would best spend its early days on PyPI.
Please, let's have this API discussion outside of the bug tracker. This deserves a PEP. Also because I have an alternative API to suggest :-)
I think Antoine's right that another venue (such as python-ideas) might be a better venue for this discussion, but I'll still try to explain the potential analogy I see to bytes.upper()/.lower()/etc: those operations let you treat ASCII segments in otherwise binary data as ASCII text, *without* needing to convert them to str first. While doing the str conversion is more formally correct, being able to stay in the raw binary domain frequently offers significant practical benefits by reducing both the runtime performance overhead and the amount of code needed.
Offering bitwise operations for bytes segments of equal length (perhaps via memoryview, or a memoryview subclass that only supports C-contiguous views) *might* turn out to offer a similar benefit when it comes to manipulating sections of a data buffer that represent integers (or anything else with a well-defined binary representation). With the right buffer exporter, you could even use it for direct bit-bashing of memory-mapped registers (which then gets quite interesting in the context of MicroPython applications).
The last three posts have convinced me that 'efficient bit operations', not tied to the int type, are worth exploring, without immediate restriction to a particular API. I can see that micropython is a significant new use case.
Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.
GitHub fields:
```python
assignee = None
closed_at = None
created_at =
labels = ['interpreter-core', 'type-feature', '3.8']
title = 'bitwise ops for bytes of equal length'
updated_at =
user = 'https://github.com/tiran'
```
bugs.python.org fields:
```python
activity =
actor = 'MGilch'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Interpreter Core']
creation =
creator = 'christian.heimes'
dependencies = []
files = ['41569']
hgrepos = []
issue_num = 19251
keywords = ['patch']
message_count = 52.0
messages = ['199785', '199786', '199787', '199792', '199794', '199801', '199802', '199805', '199835', '200330', '200359', '200401', '200696', '200735', '200739', '200740', '202120', '257727', '257728', '257729', '257730', '257748', '257754', '257755', '257761', '257765', '257911', '257914', '257964', '260470', '264184', '264187', '264188', '264190', '264192', '264323', '264324', '264325', '265560', '265561', '316938', '316939', '316940', '316941', '316956', '316957', '317038', '317040', '317095', '317107', '317109', '317116']
nosy_count = 18.0
nosy_names = ['georg.brandl', 'rhettinger', 'terry.reedy', 'gregory.p.smith', 'ncoghlan', 'pitrou', 'scoder', 'vstinner', 'christian.heimes', 'cameron', 'socketpair', 'Ramchandra Apte', 'martin.panter', 'serhiy.storchaka', 'abarnert', 'josh.r', 'cowlicks', 'MGilch']
pr_nums = []
priority = 'normal'
resolution = None
stage = None
status = 'open'
superseder = None
type = 'enhancement'
url = 'https://bugs.python.org/issue19251'
versions = ['Python 3.8']
```