Hello, I am developing a ZPU simulator on a very old machine (don't ask why), and came across a strange issue.
When the ZPU has been running for a (very) long time, _readCycles() returns unexpected negative values. They are unexpected because the timer is 64 bits, so they should appear much later. After investigation, it turns out the returned values have a specific pattern. In binary they look like this:
0b11...11xy...zt
more precisely, exactly 33 bits set to 1, followed by 31 arbitrary 0/1 bits. In other words, the upper part of the 64-bit value is trashed with ones.

I have checked the (emulated) timer, and its upper 32 bits are correctly read as a positive integer. What seems to happen is that whenever the lower "int" of the timer is negative, the high "int" gets replaced with 0xFFFFFFFF (i.e. -1), resulting in a negative "long long" (64-bit value) where one expects a mix of 0/1 bits.
After checking the source code for _readCycles(), I think I have spotted the root cause for this.
Look at the declaration of TIMER[] in crt_io.c: https://github.com/zylin/zpugcc/blob/6714b053f6c701ab3fe75f87f7daa16f345780c3/toolchain/gcc/libgloss/zpu/crt_io.c#L13 As you can see, it is declared as a signed "int" pointer. This means that whenever TIMER[0] has bit 31 (its sign bit) set, the following code in _readCycles(): https://github.com/zylin/zpugcc/blob/6714b053f6c701ab3fe75f87f7daa16f345780c3/toolchain/gcc/libgloss/zpu/crt_io.c#L88 will cast the negative int into a negative long long. At this point "clock" has all of its upper 32 bits set to 1, masking the bits contributed by the subsequent "or" with TIMER[1]. This results in a returned value with the specific pattern 0xFFFFFFFF<some negative 32-bit value>.

The fix is fairly easy: declare TIMER as "unsigned int" to prevent the propagation of TIMER[0]'s sign bit during the cast:
static volatile unsigned int *TIMER;
I have tested this simple change and I no longer get these unexpected negative cycle counts. The problem seems to be fixed (at least for me). I decided to share this with you as I thought you might be interested in feedback on this old project.
PS: By the way, one can also declare the function as "unsigned long long _readCycles()" so that everything is consistently unsigned, but this isn't as important as the upper bits being trashed as soon as the ZPU reaches its 2^31th cycle.