Currently using a bare `char` as the alphabet type, and assuming it is signed — itself a problem on some architectures, since the signedness of plain `char` is implementation-defined. We should switch to `unsigned char`, so characters with the high bit set can be referenced the way UTF-8 and Unicode discussions usually do: as hex values. Negative values are awkward and do not lend themselves to specifying ranges.
More information in #97 and #81.