In the binary representation of signed integers, the least significant bit stores the sign and the subsequent bits store the absolute number value. So for example, the number 7 is represented as 1110 and -7 as 1111. After this transformation, the normal unsigned integer encoding algorithm can be used.
That claim does not match the reference implementations and produces incorrect results when decoding polylines returned by HERE. The actual transformation is zigzag encoding: a non-negative value n maps to 2n, and a negative value n maps to -2n - 1, so the unsigned values interleave 0, -1, 1, -2, 2, and so on:
signed 7 = unsigned 14 = binary 1110
signed 6 = unsigned 12 = binary 1100
signed 5 = unsigned 10 = binary 1010
signed 4 = unsigned 8 = binary 1000
signed 3 = unsigned 6 = binary 0110
signed 2 = unsigned 4 = binary 0100
signed 1 = unsigned 2 = binary 0010
signed 0 = unsigned 0 = binary 0000
signed -1 = unsigned 1 = binary 0001 (not 0011)
signed -2 = unsigned 3 = binary 0011 (not 0101)
signed -3 = unsigned 5 = binary 0101 (not 0111)
signed -4 = unsigned 7 = binary 0111 (not 1001)
signed -5 = unsigned 9 = binary 1001 (not 1011)
signed -6 = unsigned 11 = binary 1011 (not 1101)
signed -7 = unsigned 13 = binary 1101 (not 1111, which is what the quoted paragraph claims)
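The table above can be reproduced with a minimal zigzag encode/decode pair. This is a sketch of the standard zigzag transformation, not code taken from any HERE library; the function names are mine:

```python
def zigzag_encode(n: int) -> int:
    # Non-negative n maps to 2n; negative n maps to -2n - 1,
    # interleaving 0, -1, 1, -2, 2, ... onto 0, 1, 2, 3, 4, ...
    return (n << 1) if n >= 0 else -(n << 1) - 1

def zigzag_decode(u: int) -> int:
    # Invert: drop the low bit, and bitwise-negate when it was set.
    return (u >> 1) ^ -(u & 1)

# Print the same table as above, from signed 7 down to signed -7.
for n in range(7, -8, -1):
    u = zigzag_encode(n)
    print(f"signed {n} = unsigned {u} = binary {u:04b}")
    assert zigzag_decode(u) == n
```

After this transformation the unsigned value is fed into the usual varint-style chunking, which is why getting the signed step right matters for round-tripping.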