Java's default UTF-16 string encoding uses two bytes per character, which is
wasteful for strings that could otherwise be encoded with a single byte per
character. It should be possible to represent characters in the trees using
only a single byte per character when working with compatible strings, which
could reduce the memory overhead of those strings by up to 50%.
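A minimal sketch of the idea, assuming a hypothetical wrapper type (not the library's actual API): characters are stored in a byte[] when every one fits in a single byte (the ISO-8859-1 range), falling back to the regular String representation otherwise.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Illustrative sketch only; class and method names are assumptions,
// not part of the library.
public final class CompactCharSequence implements CharSequence {
    private final byte[] bytes;

    private CompactCharSequence(byte[] bytes) {
        this.bytes = bytes;
    }

    /**
     * Returns a single-byte-per-character wrapper, or null if the
     * string contains characters that need two bytes.
     */
    public static CompactCharSequence tryCompact(String s) {
        byte[] encoded = new byte[s.length()];
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c > 0xFF) {
                return null; // not representable in one byte
            }
            encoded[i] = (byte) c;
        }
        return new CompactCharSequence(encoded);
    }

    @Override public int length() { return bytes.length; }

    @Override public char charAt(int index) {
        return (char) (bytes[index] & 0xFF); // widen unsigned byte back to char
    }

    @Override public CharSequence subSequence(int start, int end) {
        return new CompactCharSequence(Arrays.copyOfRange(bytes, start, end));
    }

    @Override public String toString() {
        return new String(bytes, StandardCharsets.ISO_8859_1);
    }
}
```

Tree nodes could then hold a CharSequence and choose the compact form at insertion time, paying the two-bytes-per-character cost only for strings that actually need it.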
Original issue reported on code.google.com by ni...@npgall.com on 20 Oct 2013 at 10:20