Open shangjiaxuan opened 3 years ago
A corrected possible implementation with the current API:
```rust
trait MyUnsigned:
    bitstream_io::Numeric
    + std::ops::Add<Output = Self>
    + std::ops::BitAnd<Output = Self>
    + num::Zero
    + num::ToPrimitive
    + std::ops::Shl<u32, Output = Self>
{
    type S: MySigned;

    // Bit-count helper; primitives provide this inherently, but there is
    // no generic bound for it short of num::PrimInt (which conflicts).
    fn leading_zeros(self) -> u32;

    // Reinterpret the bits as the same-width signed type (native endian).
    fn cast_signed(self) -> Self::S {
        assert_eq!(std::mem::size_of::<Self>(), std::mem::size_of::<Self::S>());
        unsafe { *((&self as *const Self) as *const Self::S) }
    }

    fn plus_one(self) -> Self {
        self + Self::one()
    }
}

trait MySigned:
    bitstream_io::SignedNumeric
    + std::ops::Add<Output = Self>
    + num::Zero
    + std::ops::Neg<Output = Self>
    + num::ToPrimitive
{
    type U: MyUnsigned;

    // Reinterpret the bits as the same-width unsigned type (native endian).
    fn cast_unsigned(self) -> Self::U {
        assert_eq!(std::mem::size_of::<Self>(), std::mem::size_of::<Self::U>());
        unsafe { *((&self as *const Self) as *const Self::U) }
    }
}

impl MyUnsigned for u8   { type S = i8;   fn leading_zeros(self) -> u32 { u8::leading_zeros(self) } }
impl MyUnsigned for u16  { type S = i16;  fn leading_zeros(self) -> u32 { u16::leading_zeros(self) } }
impl MyUnsigned for u32  { type S = i32;  fn leading_zeros(self) -> u32 { u32::leading_zeros(self) } }
impl MyUnsigned for u64  { type S = i64;  fn leading_zeros(self) -> u32 { u64::leading_zeros(self) } }
impl MyUnsigned for u128 { type S = i128; fn leading_zeros(self) -> u32 { u128::leading_zeros(self) } }
impl MySigned for i8   { type U = u8; }
impl MySigned for i16  { type U = u16; }
impl MySigned for i32  { type U = u32; }
impl MySigned for i64  { type U = u64; }
impl MySigned for i128 { type U = u128; }
```
```rust
fn write_golumb_unsigned<W, U>(mut writer: W, num: U) -> io::Result<()>
where
    W: BitWrite,
    U: MyUnsigned,
{
    // Exp-Golomb encodes num + 1 so the encoded value is never zero.
    let num = num.plus_one();
    // Bit length of num = type width minus leading zeros.
    let num_bits = U::zero().leading_zeros() - num.leading_zeros();
    // Prefix: num_bits - 1 zero bits...
    if num_bits > 1 {
        writer.write(num_bits - 1, U::zero())?;
    }
    // ...then num itself in num_bits bits; its leading 1 terminates the prefix.
    writer.write(num_bits, num)
}
```
```rust
fn write_golumb_signed<W, S>(mut writer: W, num: S) -> io::Result<()>
where
    W: BitWrite,
    S: MySigned,
{
    let coded = if num > S::zero() {
        // Positive x maps to the odd code 2x + 1.
        // Overflow only happens downstream, at the maximum value.
        (num.cast_unsigned() << 1).plus_one()
    } else {
        // Non-positive x maps to the even code -2x.
        // Negation overflows only at the minimum value.
        num.neg().cast_unsigned() << 1
    };
    write_golumb_unsigned(writer, coded)
}
```
I was trying to implement a write operation for exponential-Golomb-coded integers on top of bitstream-io (the coding used in H.264 and other formats). While implementing one function for unsigned values and one for signed values, it turned out that the declaration of the Numeric trait conflicts with the Unsigned trait from num-traits. Keeping the original Numeric and re-implementing the missing pieces by hand, as in the code above, is not really elegant.
Bounding by both Numeric and PrimInt makes the compiler complain about conflicting one and zero definitions. num-traits seems to be well maintained, and it seems changing Numeric from the current implementation to one built on num-traits will not hurt any current users and will reduce the amount of code needed to maintain (at the cost of one additional dependency).