birdal46 / grafx2

Automatically exported from code.google.com/p/grafx2

Cleanup variable types #140

Closed: GoogleCodeExporter closed this issue 8 years ago

GoogleCodeExporter commented 8 years ago
Currently there are three schemes used for variables:
byte/word/dword from the original grafx2 source code,
char/short/int,
uint8_t, uint16_t, uint32_t.

The first one should be removed and the last one should be used instead. This will allow for the same level of portability. char should be used only for characters and strings, and int should be used for calculations where we want "whatever the size of native integers is for the CPU", so we get fast calculations. Maybe we could use uint_least32_t to ensure they are at least 32 bits wide, but I don't think we'll have problems with that :)
Avoiding custom types seems a good idea.
What do you think about it?
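
For illustration, a minimal sketch of what the proposed convention could look like; the function and its parameters are hypothetical, not taken from the grafx2 sources:

    #include <stdint.h>

    /* Hypothetical example: pixel data uses fixed-width unsigned types,
       while the loop counter is a plain int for fast native arithmetic. */
    void fill_row(uint8_t *pixels, int width, uint8_t color)
    {
        for (int x = 0; x < width; x++)
            pixels[x] = color;
    }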

Original issue reported on code.google.com by pulkoma...@gmail.com on 21 Mar 2009 at 3:44

GoogleCodeExporter commented 8 years ago
It's a good idea, but in grafx2 I'd prefer we didn't. The byte type in particular is used almost exclusively for pixel colors and palette components, and they should always be unsigned. I find it reassuring that there doesn't even exist a similarly-named type for signed 8-bit. If it all became uint8_t, who would detect the horrible bug caused by an innocent int8_t in a function prototype?
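
To illustrate the kind of bug described here, a hypothetical sketch (not actual grafx2 code), assuming a typical two's-complement platform:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical: the parameter should be uint8_t; an accidental int8_t
       silently turns colors 128..255 into negative values. */
    static int palette_offset(int8_t color)
    {
        return color * 3;               /* meant to index an RGB palette */
    }

    int main(void)
    {
        printf("%d\n", palette_offset(200));   /* prints -168 instead of 600 */
        return 0;
    }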

Visually, it's subjective of course, but I observe we already have a lot of underscores (word separators in all symbols), so I'm reluctant to have an additional one in each type.

About strings and individual characters (and pointers): just use "char", it's short and helpful to understand the data type. Normally, with our Windows-1252 encoding, it would be much more code-friendly to have unsigned chars for this everywhere; but I fixed the parts that needed it (string printing), so now it no longer matters.
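
For context, a hypothetical sketch (not the actual grafx2 printing code) of the sign problem that makes a cast necessary:

    /* Hypothetical: with Windows-1252 text, characters >= 0x80 come out
       negative when plain char is signed, so a cast is needed before
       using the value as a font-table index. */
    int glyph_index(const char *text, int i)
    {
        return (unsigned char)text[i];   /* always 0..255, safe as an array index */
    }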

I looked, and we apparently ALSO have Uint32, Sint32, etc. in part of the code. They are from SDL, so they work everywhere for us. On the other hand, uint16_t is from stdint.h, which seems to come from C99 standardization. I'm pretty sure some compilers don't have it.
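
One possible way to reconcile the two concerns, sketched under the assumption that the SDL headers are available wherever these types are needed (the typedef names are the existing grafx2 ones, the definitions are hypothetical):

    /* Hypothetical compatibility header: keep the short grafx2 names but
       define them from SDL's fixed-size types, which are available even
       on pre-C99 compilers. */
    #include <SDL.h>

    typedef Uint8  byte;
    typedef Uint16 word;
    typedef Uint32 dword;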

Original comment by yrizoud on 21 Mar 2009 at 11:44

GoogleCodeExporter commented 8 years ago
Actually, I've no problem using byte/word/dword as "low level" vars; however, there are many places in the code where we use "short". This one should be replaced by "word", or we could decide to have "word" for unsigned and "short" for signed 16 bits.
The uintN_t notation is C99 and a bit painful to use, but I think a #define byte uint8_t should be enough.
I have noticed (using splint) that in many places we are mixing up the types, and it feels a bit dangerous: if someone ever compiles on a compiler where short is not 16 bits, mixing it with word (particularly if there is a pointer involved) sounds quite bad.
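
A hypothetical sketch of the pointer problem being described, assuming a compiler where short is wider than 16 bits:

    #include <stdint.h>

    /* Hypothetical: reading a buffer of 16-bit values through a short*
       only works when short happens to be 16 bits; on a compiler where
       it is wider, each access reads the wrong number of bytes. */
    uint16_t first_value(const uint16_t *buffer)
    {
        const short *p = (const short *)buffer;   /* fragile assumption */
        return (uint16_t)p[0];
    }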

So, the better solution seems to be (a sketch follows after this list):
byte, word, dword, qword: unsigned 8, 16, 32 or 64 bits.
char, short, int, long: should be avoided where possible. Use "char" for strings, int for calculations.
Sint8, Sint16, Sint32, Sint64 as signed equivalents to byte, word, dword, qword. Maybe find other names for them.
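
A rough sketch of how this scheme could be written down; the definitions below are hypothetical and lean on stdint.h as proposed in the opening comment:

    #include <stdint.h>

    /* Unsigned types, as proposed above */
    typedef uint8_t  byte;
    typedef uint16_t word;
    typedef uint32_t dword;
    typedef uint64_t qword;

    /* Signed equivalents; note these particular names collide with SDL's
       own Sint8/Sint16/... typedefs if SDL.h is included, which is one
       reason to look for other names. */
    typedef int8_t  Sint8;
    typedef int16_t Sint16;
    typedef int32_t Sint32;
    typedef int64_t Sint64;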

In any case, I can't see any use for the "short" type. I think I introduced all the occurrences of it myself when I began the porting from DOS, and they should probably be replaced again with "word".

Original comment by pulkoma...@gmail.com on 22 Mar 2009 at 10:01

GoogleCodeExporter commented 8 years ago
Replace short by word?? Words are unsigned and shorts are signed, that's the reason for the two types, and if you mistake one for the other you can run into some very difficult bugs. In the current code, shorts (signed: -32768 to 32767) are mainly used for screen or image coordinates. The program is limited to 9999x9999 images anyway, and the negative part is *very* helpful to handle clipping in the top and left parts.
To allow the user to move the viewport, it would be best if everything was signed, actually.
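
A hypothetical sketch (not actual grafx2 code) of why the signed range makes left/top clipping straightforward:

    /* Hypothetical: with signed coordinates, a brush pasted partly off
       the left/top edge simply starts at a negative x or y, and the
       visible part is found with a comparison.  With unsigned types,
       x would wrap around to a huge value instead. */
    void paste_brush(short x, short y, short width, short height)
    {
        short first_col = (x < 0) ? (short)-x : 0;   /* columns to skip on the left */
        short first_row = (y < 0) ? (short)-y : 0;   /* rows to skip at the top */
        /* ... copy pixels from (first_col, first_row) of the brush onward ... */
        (void)width; (void)height;
    }
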
Computers where short is not 16-bit? After some searching, I find there are only two standards like that: ILP64 (Cray supercomputers) and SILP64 (no implementation exists).

Original comment by yrizoud on 22 Mar 2009 at 3:58

GoogleCodeExporter commented 8 years ago

Original comment by pulkoma...@gmail.com on 22 Jul 2009 at 4:00