Closed serisman closed 4 years ago
This is confirmed working, but still needs PADIER fix for proper ILRC and BG calibration.
I created a feature branch for this so we can work together on this. I also moved some fixes/changes not related to PMS152 and merged them to development directly.
I created a feature branch for this so we can work together on this. I also moved some fixes/changes not related to PMS152 and merged them to development directly.
Thanks!
How do you intend to handle joint development on this new branch? Do I have access to write to that branch, or should I create a new fork and submit a PR for that branch?
Thanks for merging those unrelated changes as well.
I noticed that you pulled out the INTEN/INTRQ changes:
#define INTEN_PA0_BIT 0
#define INTEN_PB5_BIT 0
#define INTEN_PB0_BIT 1
#define INTEN_PA4_BIT 1
#define INTEN_T16_BIT 2
#define INTEN_COMP_BIT 4
#define INTEN_PWMG_BIT 5
#define INTEN_TM2_BIT 6
#define INTRQ_PA0_BIT 0
#define INTRQ_PB5_BIT 0
#define INTRQ_PB0_BIT 1
#define INTRQ_PA4_BIT 1
#define INTRQ_T16_BIT 2
#define INTRQ_COMP_BIT 4
#define INTRQ_PWMG_BIT 5
#define INTRQ_TM2_BIT 6
I had added those in to make it easier to use macros like this (https://github.com/serisman/Padauk/blob/master/include/util.h) :
setBit(INTEN, INTEN_T16_BIT);
...
if (isBitSet(INTRQ, INTRQ_T16_BIT))
clearBit(INTRQ, INTRQ_T16_BIT);
Which I think is cleaner and easier to read than directly using the masks.
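For context, a minimal sketch of what such helper macros might look like (the real definitions live in the linked util.h; this is an assumed implementation, with a plain uint8_t standing in for the SFR):

```c
#include <stdint.h>

/* Assumed implementations of the util.h bit helpers (sketch only) */
#define setBit(reg, bit)    ((reg) |=  (uint8_t)(1 << (bit)))
#define clearBit(reg, bit)  ((reg) &= (uint8_t)~(1 << (bit)))
#define isBitSet(reg, bit)  (((reg) >> (bit)) & 1)

#define INTEN_T16_BIT 2     /* bit number from the defines above */

uint8_t INTEN = 0;          /* stand-in for the SFR, for illustration */
```

With these, setBit(INTEN, INTEN_T16_BIT) sets bit 2, i.e. it is equivalent to INTEN |= 0x04.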
Is there a reason we can't have both? We already have both for the flag definitions:
//flag definitions
#define FLAG_ZF 0x01
#define FLAG_CF 0x02
#define FLAG_AC 0x04
#define FLAG_OV 0x08
#define FLAG_ZF_BIT 0
#define FLAG_CF_BIT 1
#define FLAG_AC_BIT 2
#define FLAG_OV_BIT 3
Do you really need these complicated macros?
e.g.
setBit(INTEN, INTEN_T16_BIT);
is same as
INTEN |= INTEN_T16;
if( isBitSet(INTRQ, INTRQ_T16_BIT) )
is same as
if( INTRQ & INTRQ_T16 )
clearBit(INTRQ, INTRQ_T16_BIT);
is same as
INTRQ &= ~INTRQ_T16;
So code is much cleaner and compiler can optimize it better.
I forgot to clean up "FLAG_BIT"; those are also not needed.
Well, NEED is a strong word. I agree they probably aren't NEEDED, but I do find them easier to read.
What about inline assembly?
Right now, I am doing something like this:
__asm
t1sn _REG(INTRQ), #INTRQ_T16_BIT
goto 00001$
...
set0 _REG(INTRQ), #INTRQ_T16_BIT
00001$:
__endasm;
Without the BIT values, wouldn't we have to resort to less efficient opcodes to do the comparison of the mask?
EDIT:
I noticed that SDCC does not currently create the most efficient code for if( INTRQ & INTRQ_T16 ).
I don't think it does any better with my macros either, so it might be something to look into.
If I remember right, I think avr-gcc always uses the BIT values and has a _BV macro to convert to mask when needed.
So, something like INTEN |= INTEN_T16;
would be INTEN |= _BV(INTEN_T16);
or just INTEN |= (1<<INTEN_T16);
, and multiple defines (one for mask, another for bit) wouldn't be needed.
That also has the benefit of slightly easier/cleaner definitions with less chance for bugs in header files. (i.e. bit positions are easier to read from the datasheet than calculating masks).
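The avr-libc convention being described boils down to a one-line macro; the header only stores bit numbers, and the mask is derived where needed (sketch, with INTEN_T16 standing in as a bit-number define):

```c
/* avr-libc style: headers define only the bit number,
 * and _BV() derives the mask on demand. */
#define _BV(bit) (1 << (bit))

#define INTEN_T16 2   /* bit position, read straight off the datasheet */

/* INTEN |= _BV(INTEN_T16);  expands to  INTEN |= (1 << 2), i.e. 0x04 */
```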
The compiler is smart enough to do exactly what you outlined in assembly. It will use bit tests / sets even when you write (REG & 0x80).
But I understand your request.
In case we change to bit definitions then the masks should be calculated automatically in the header file.
Personally I don't like the _BV() macro since a simple combination like this
INTEN = INTEN_TM2 | INTEN_T16;
becomes much more unreadable :
INTEN = _BV(INTEN_TM2_BIT) | _BV(INTEN_T16_BIT);
or
INTEN = (1<<INTEN_TM2_BIT) | (1<<INTEN_T16_BIT);
The compiler is smart enough to do exactly what you outlined in assembly. It will use bit tests / sets even when you write (REG & 0x80).
I'm not sure I understand this.
The compiler definitely doesn't know that t1sn _somereg, #0x80 should really be t1sn _somereg, #7. So, I still don't know how to write proper inline assembly to make use of the bit instructions with masks. Do you have an example of what you are thinking of?
In case we change to bit definitions then the masks should be calculated automatically in the header file.
Are you suggesting that we have both BIT and MASK values in the header files, but have the MASK values defined as (1 << BIT_VALUE)?
Personally I don't like the _BV() macro since a simple combination like this
INTEN = INTEN_TM2 | INTEN_T16;
becomes much more unreadable:
INTEN = _BV(INTEN_TM2_BIT) | _BV(INTEN_T16_BIT);
or
INTEN = (1<<INTEN_TM2_BIT) | (1<<INTEN_T16_BIT);
To each their own, I guess. That is part of the reason I created other macros, e.g. INTEN = BV2(INTEN_TM2, INTEN_T16)
I think the cleanliness is even more clear when you are trying to clear bits:
clearBit(INTEN, INTEN_T16)
vs.
INTEN &= ~INTEN_T16
clearBits(INTEN, BV2(INTEN_TM2, INTEN_T16))
or
clear2Bits(INTEN, INTEN_TM2, INTEN_T16)
vs.
INTEN &= ~(INTEN_TM2 | INTEN_T16)
I get tripped up sometimes on the &= ~ | logic combinations
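A guess at how the multi-bit helpers named above could be implemented (these are hypothetical definitions in the spirit of the BV2 / clearBits / clear2Bits names, not the real util.h code; INTEN is a plain variable standing in for the SFR):

```c
#include <stdint.h>

/* Hypothetical multi-bit helpers (sketch only) */
#define BV1(b)                  (1 << (b))
#define BV2(b1, b2)             (BV1(b1) | BV1(b2))
#define setBits(reg, mask)      ((reg) |=  (mask))
#define clearBits(reg, mask)    ((reg) &= (uint8_t)~(mask))
#define clear2Bits(reg, b1, b2) clearBits(reg, BV2(b1, b2))

#define INTEN_T16 2
#define INTEN_TM2 6

uint8_t INTEN = 0xFF;           /* stand-in for the SFR */
```

So clear2Bits(INTEN, INTEN_TM2, INTEN_T16) expands to INTEN &= ~(0x40 | 0x04), hiding the "&= ~ |" dance behind one name.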
The compiler definitely doesn't know that t1sn _somereg, #0x80 should really be t1sn _somereg, #7.
Indeed, compilers do this optimization. When they see code like if( REG & 0x80 ), they optimize it to T0SN REG, #7
Are you suggesting that we have both BIT and MASK values in the header files, but have the MASK values defined as (1 << BIT_VALUE)?
Yes this was my idea.
I get tripped up sometimes on the &= ~ | logic combinations
So we should do both then :-)
By the way, a lot of the other macros in my util.h file help making code more configurable/portable.
I can #define PIN_BTN PA,5
and then do things like setPinInput(PIN_BTN), setPinPullup(PIN_BTN), isPinLow(PIN_BTN), etc. without having to remember exactly which registers to use for everything. It is closer to the Arduino way of doing things without all the bloat. The big benefit is that PIN_BTN could technically be overridden (i.e. passed in from the Makefile) without any code changes (well, the define would have to be wrapped in a #ifndef PIN_BTN).
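The two-argument pin trick can be sketched with token pasting; this is an assumed implementation (register names follow the Padauk convention of PAC for port A control, PAPH for port A pull-high, PA for port A data; here they are plain variables so the sketch is self-contained):

```c
#include <stdint.h>

uint8_t PAC, PAPH, PA;   /* stand-ins for the SFRs, for illustration */

#define PIN_BTN PA,5

/* Inner macros paste the port token onto the register suffix... */
#define _setPinInput(port, bit)  (port##C  &= (uint8_t)~(1 << (bit)))
#define _setPinPullup(port, bit) (port##PH |=  (1 << (bit)))
#define _isPinLow(port, bit)     (!((port) & (1 << (bit))))

/* ...and an indirection layer lets PIN_BTN expand to "PA,5" first. */
#define setPinInput(pin)  _setPinInput(pin)
#define setPinPullup(pin) _setPinPullup(pin)
#define isPinLow(pin)     _isPinLow(pin)
```

With this, setPinInput(PIN_BTN) expands to PAC &= ~(1 << 5), so the pin assignment lives in exactly one define.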
Indeed, compilers do this optimization. When they see code like if( REG & 0x80 ), they optimize it to T0SN REG, #7
That hasn't been my experience so far. I'll have to test further, but I believe SDCC was generating less optimal instructions for that exact line of code.
And, that doesn't answer how to create optimal bit code when writing inline assembly. But, it looks like you are ok with having both BIT and MASK values, so the point is moot.
Are you suggesting that we have both BIT and MASK values in the header files, but have the MASK values defined as (1 << BIT_VALUE)?
Yes this was my idea.
Ok, that works for me. I guess the only decision left would be to finalize naming conventions. I wasn't really happy with just appending _BIT on the end (especially if the _BIT version is the source of truth that is then used to generate the MASK version). But, that was a short/easy solution to get past the problem I was experiencing (with inline assembly) and move on.
I get tripped up sometimes on the &= ~ | logic combinations
So we should do both then :-)
:+1:
I think this is not related to the IC itself anymore and should be part of your own "util.h"
To keep complexity low I think the programmer software and header files should be minimal, just having the definitions for the IC.
If we make this nice and simple enough the IC header files might be added to SDCC.
I think this is not related to the IC itself anymore and should be part of your own "util.h"
Yes, I agree. I was intending to keep my util.h separate.
To keep complexity low I think the programmer software and header files should be minimal, just having the definitions for the IC.
If we make this nice and simple enough the IC header files might be added to SDCC.
:+1:
Technically, this is very similar to the avr-gcc 'avr/io.h' and supporting files. Might be worth a deeper investigation and try to stay a bit more consistent with what they have already solved (and what they choose to keep in IC IO specific files vs. other places).
https://github.com/vancegroup-mirrors/avr-libc/blob/master/avr-libc/include/avr/io.h https://github.com/vancegroup-mirrors/avr-libc/blob/master/avr-libc/include/avr/iom8.h
I had a look in SDCC (sdcc/devices/include/...) and it looks like mostly mask values are used.
Maybe we can find an example there for BIT definitions?
In "compiler.h" I found that sbit and sfrbit are used in some places, but I'm not sure if the PDK SDCC compiler supports this.
I had a look in SDCC (sdcc/devices/include/...) and it looks like mostly mask values are used.
Maybe we can find an example there for BIT definitions?
In "compiler.h" I found that sbit and sfrbit are used in some places, but I'm not sure if the PDK SDCC compiler supports this.
Yeah, I know MCS51 uses sbit and sfrbit (and also allows a __bit type).
I'm not sure that PDK through SDCC currently supports that. I know the compiler didn't like bit when I tried to use it; not sure about sbit and __sfrbit though.
Well,
then we might go with your initial idea to add a prefix/suffix to the defines to make clear that they are bit values.
//flag definitions
#define FLAG_ZF 0x01
#define FLAG_CF 0x02
#define FLAG_AC 0x04
#define FLAG_OV 0x08
#define FLAG_ZF_BIT 0
#define FLAG_CF_BIT 1
#define FLAG_AC_BIT 2
#define FLAG_OV_BIT 3
==>
//flag definitions
#define FLAG_ZF_BIT 0
#define FLAG_CF_BIT 1
#define FLAG_AC_BIT 2
#define FLAG_OV_BIT 3
#define FLAG_ZF (1<<FLAG_ZF_BIT)
#define FLAG_CF (1<<FLAG_CF_BIT)
#define FLAG_AC (1<<FLAG_AC_BIT)
#define FLAG_OV (1<<FLAG_OV_BIT)
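As a quick sanity check of the proposal, the derived masks come out identical to the hand-written 0x01..0x08 values from the earlier defines:

```c
/* Proposed scheme: bit numbers are the source of truth,
 * masks are derived from them. */
#define FLAG_ZF_BIT 0
#define FLAG_CF_BIT 1
#define FLAG_AC_BIT 2
#define FLAG_OV_BIT 3
#define FLAG_ZF (1<<FLAG_ZF_BIT)   /* == 0x01 */
#define FLAG_CF (1<<FLAG_CF_BIT)   /* == 0x02 */
#define FLAG_AC (1<<FLAG_AC_BIT)   /* == 0x04 */
#define FLAG_OV (1<<FLAG_OV_BIT)   /* == 0x08 */
```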
What do you think ?
Well,
then we might go with your initial idea to add a prefix/suffix to the defines to make clear that they are bit values.
//flag definitions
#define FLAG_ZF 0x01
#define FLAG_CF 0x02
#define FLAG_AC 0x04
#define FLAG_OV 0x08
#define FLAG_ZF_BIT 0
#define FLAG_CF_BIT 1
#define FLAG_AC_BIT 2
#define FLAG_OV_BIT 3
==>
//flag definitions
#define FLAG_ZF_BIT 0
#define FLAG_CF_BIT 1
#define FLAG_AC_BIT 2
#define FLAG_OV_BIT 3
#define FLAG_ZF (1<<FLAG_ZF_BIT)
#define FLAG_CF (1<<FLAG_CF_BIT)
#define FLAG_AC (1<<FLAG_AC_BIT)
#define FLAG_OV (1<<FLAG_OV_BIT)
What do you think ?
That would certainly work, although it is a bit verbose. Once we have BIT values for everything, I would probably use them far more often than the MASK values (the BIT value is the bare minimum; the MASK is just a helper so you don't have to expand it elsewhere). So, my preference would be to reverse it, so FLAG_ZF is the BIT value and something like FLAG_ZF_MASK is the expanded value. But, I understand that might not be better for you or others. I feel like there is probably a better way to describe these, but it hasn't come to mind yet.
I had a look in SDCC (sdcc/devices/include/...) and it looks like mostly mask values are used.
Which file(s) were you looking at that used mostly mask values?
I am looking at the 8051 file (https://sourceforge.net/p/sdcc/code/HEAD/tree/trunk/sdcc/device/include/mcs51/8051.h)
And it looks like it uses __sbit for everything that it can, and only resorts to mask values for things that aren't bit accessible.
I probably read the reason at some time and just forgot it, but why are there two different definitions for each register?
__sfr __at(0x00) _flag;
...
#define FLAG _flag
I know with 8051 (and I assume other devices), there is usually only one definition, i.e:
__sfr __at(0x00) FLAG;
Is there a reason for defining it both ways that I am missing?
Bottom of the file you mentioned:
/* TMOD bits */
some more examples: https://github.com/darconeous/sdcc/blob/master/device/include/mcs51/at89S8252.h https://github.com/darconeous/sdcc/blob/master/device/include/mcs51/regc515c.h
The extra register definitions (in #defines) are somehow required to keep VSCode's syntax parser happy and get proper highlighting.
Bottom of the file you mentioned:
Don't miss this comment from a few lines above: /* BIT definitions for bits that are not directly accessible */. This seems to imply that __sbit is the preference, with a fallback to MASK values for things that aren't bit accessible. I'm not sure if that last part applies to PDK; most (if not all) IO registers are bit accessible, if I recall correctly.
The extra register definitions (in #defines) are somehow required for VSCode to get syntax parser happy and have colorful highlighting.
If I understand correctly, VSCode just prefers the uppercase syntax, so just naming the __sfr with uppercase directly (like is done in other device files) may be sufficient.
I just tested it, and SDCC does not like the __sbit syntax for PDK unfortunately.
I'll probably start a new feature branch for header file optimization (I have some ideas to try out), because at this point it really has nothing to do with adding PMS152 support.
VSCode struggles with the __sfr() syntax; that's why the separate define is required.
A new feature branch is the perfect place to try this.
This PR adds (compile time) support for the PMS152 OTP IC. (This is currently the cheapest 16-pin MCU on LCSC). It also fixes a few issues in the other IC support files (and has some other minor cleanup/standardization).
This is untested (so far), as the easypdkprog programmer doesn't yet have write support for this IC. But everything compiles fine, and as far as I can tell the new support file matches the datasheet and the PMS152.INC file from the original Padauk IDE.
I am not sure how to know which BG calibration routine to use, so that is commented out for now. Also, I just assumed that the IHRC/ILRC calibration routines should use _H8/_L8 like most of the other ICs, but haven't proven it yet.