Closed BrendanSimon closed 5 years ago
The following might be a fix?

```c
static void stm32f4_gpio_bsrr_post_write_callback(Object *reg, Object *periph,
        uint32_t addr, uint32_t offset, unsigned size,
        peripheral_register_t value, peripheral_register_t full_value)
{
    STM32GPIOState *state = STM32_GPIO_STATE(periph);
    Object *odr = state->reg.odr;
    assert(odr);

    // 'value' may have any size, so use full_value.
    uint32_t bits_to_set = (full_value & 0x0000FFFF);
    uint32_t bits_to_reset = ((full_value >> 16) & 0x0000FFFF);

    // Clear the BR bits and set the BS bits.
    uint32_t new_value = (peripheral_register_get_raw_value(odr)
            & (~bits_to_reset)) | bits_to_set;
    stm32_gpio_update_odr_and_idr(state, odr, state->reg.idr, new_value);

#if 1 // FIXME:BJS: clear BR and BS bits in the BSRR register after ODR is set
    peripheral_register_write_value(state->reg.bsrr, 0);
#endif
}
```
Guess you would have to do a similar thing for all the families (F0, F1, etc.). Might it be possible to consolidate that in one place, and clear the BRR register as well, to cover all families?
```c
static void stm32_gpio_update_odr_and_idr(STM32GPIOState *state, Object *odr,
        Object *idr, uint16_t new_value)
{
    assert(odr);

    // Preserve the old value, to compute the changed bits.
    uint16_t old_value = peripheral_register_get_raw_value(odr);

    // Update the register value. Per the documentation, the upper 16 bits
    // always read as 0, so write is used, to apply the mask.
    peripheral_register_write_value(odr, new_value);

#if 1 // FIXME:BJS: clear BSRR and BRR registers after ODR is set
    peripheral_register_write_value(state->reg.bsrr, 0);
    peripheral_register_write_value(state->reg.brr, 0);
#endif

    stm32_gpio_set_odr_irqs(state, old_value, new_value);
    stm32_gpio_update_idr(state, idr, new_value);
}
```
I think `peripheral_register_write_value()` would work ok; it only sets the state value of a register object. An alternative is `peripheral_register_set_raw_value()`, which does the same thing but without masking out non-writable bits. It's a little more efficient and probably ok for this use case (i.e. clearing the BSRR register after the bits have been acted on).
As an alternative to the above, still using the STM32F4 HAL CMSIS libraries, BSRRL and BSRRH can be mimicked with these macros in `BlinkLed.cpp`:

```c
#define BLINK_GPIOx(_N)       ((GPIO_TypeDef *)(GPIOA_BASE + (GPIOB_BASE - GPIOA_BASE) * (_N)))
#define BLINK_GPIOx_BSRRL(_N) (*((volatile uint16_t *)&(BLINK_GPIOx(_N)->BSRR)))
#define BLINK_GPIOx_BSRRH(_N) (*((volatile uint16_t *)(&BLINK_GPIOx_BSRRL(_N) + 1)))
```
and
```cpp
void
BlinkLed::turnOn ()
{
  if (fIsActiveLow)
    {
      BLINK_GPIOx_BSRRH(fPortNumber) = (uint16_t)fBitMask;
    }
  else
    {
      BLINK_GPIOx_BSRRL(fPortNumber) = (uint16_t)fBitMask;
    }
}

void
BlinkLed::turnOff ()
{
  if (fIsActiveLow)
    {
      BLINK_GPIOx_BSRRL(fPortNumber) = (uint16_t)fBitMask;
    }
  else
    {
      BLINK_GPIOx_BSRRH(fPortNumber) = (uint16_t)fBitMask;
    }
}
```
For my test, I modified `main.cpp` to contain this:
```cpp
// Perform all necessary initialisations for the LEDs.
for (size_t i = 0; i < (sizeof(blinkLeds) / sizeof(blinkLeds[0])); ++i)
  {
    blinkLeds[i].powerUp ();
  }

while (1)
  {
    for (size_t i = 0; i < (sizeof(blinkLeds) / sizeof(blinkLeds[0])); ++i)
      {
        blinkLeds[i].turnOn ();
      }
    timer.sleep (Timer::FREQUENCY_HZ);

    for (size_t i = 0; i < (sizeof(blinkLeds) / sizeof(blinkLeds[0])); ++i)
      {
        blinkLeds[i].turnOff ();
      }
    timer.sleep (Timer::FREQUENCY_HZ);
  }
```
What I observed was that the four LEDs turned on, then all of them turned off except for the blue LED.

I expected none of the LEDs to turn off, so maybe there is something funky with the last one to turn off?

Actually, it's the last one to turn on that can't be turned off. I reversed the order of the loops, and now the green LED is the one that won't turn off.
```cpp
while (1)
  {
    for (size_t i = (sizeof(blinkLeds) / sizeof(blinkLeds[0])); i > 0; --i)
      {
        blinkLeds[i - 1].turnOn ();
      }
    timer.sleep (Timer::FREQUENCY_HZ);

    for (size_t i = (sizeof(blinkLeds) / sizeof(blinkLeds[0])); i > 0; --i)
      {
        blinkLeds[i - 1].turnOff ();
      }
    timer.sleep (Timer::FREQUENCY_HZ);
  }
```
I confirm the problem.
It is caused by an erroneous persistence of the BRR/BSRR registers. Writes to these registers should have only side effects (changing bits in ODR or triggering interrupts) and should not leave bits set, since those bits interfere with subsequent writes, as reported.

The proposed solution is to add implementations for the `pre_write` methods, which are intended to process the value to be written and store only the bits that need to be persistent. For the BRR/BSRR registers none of the bits are persistent, thus 0 can be safely returned.
@BrendanSimon, please review the commit https://github.com/gnu-mcu-eclipse/qemu/commit/298c20ab9319592589e744dc3cb279d03044b24e and let me know if you think it fixes your problem.
It would be great to take a look at the other registers too, possibly other peripherals too, and if there are bits that need not be persistent, to open separate issues to fix them.
Looks ok. A little more work than my suggested kludge, but architecturally cleaner to use the existing pre_write callback mechanism.
A second iteration, with the new "persistent-bits" field (https://github.com/gnu-mcu-eclipse/qemu/commit/b92779f4af8d53bad2b761828cc1692a73dc213e, https://github.com/gnu-mcu-eclipse/qemu/commit/086a505a4af4823859b864a40d18e9532a9291f8).
A pre-release mac binary is available from https://www.dropbox.com/s/k1mj6105aljt9mu/gnu-mcu-eclipse-arm-none-eabi-gcc-8.2.1-1.4-20190208-2254-macos.tgz
I'll test it later today, but now I need to leave for a few hours...
ok. I've been using Win10 recently (as I'm trying to get it into a state where I can get work to consider using GME for some projects). I'll try on macOS too.

The Dropbox link is for the GCC tools. Is it supposed to be QEMU, to test BSRR?
I added a new test to the QEMU tests:
https://github.com/gnu-mcu-eclipse/qemu-eclipse-test-projects/tree/master/f407-disc-gpio-brr-16bit
The pre-release binary seems ok on my mac; if you confirm that your tests also pass, I'll proceed and build the binaries for all platforms.
I tried debugging my app and QEMU does not start up. I had installed Eclipse 2018-12 (not a fan of the numbering system!!). It does not run with QEMU 2.8.0-3 or 2.8.0-4 (pre-release).

Finally I went back to Eclipse 2018-09 (did I mention the numbering system?). I can debug with QEMU 2.8.0-3, but launching with 2.8.0-4 does not work. The graphic flashes up for a fraction of a second and then dies. Error message below.

I installed QEMU 2.8.0-4 by unpacking the archive and placing it in ~/opt/gnu-mcu-eclipse/qemu/2.8.0-4-20190210-1051/.
I then changed my global MCU settings for QEMU to point to the bin
directory. I made sure there are no workspace or project MCU settings.
```
Error in final launch sequence
Failed to execute MI command:
-target-select remote localhost:1234
Error message from debugger back end:
Remote replied unexpectedly to 'vMustReplyEmpty': PacketSize=1000;qXfer:features:read+
Failed to execute MI command:
-target-select remote localhost:1234
Error message from debugger back end:
Remote replied unexpectedly to 'vMustReplyEmpty': PacketSize=1000;qXfer:features:read+
Remote replied unexpectedly to 'vMustReplyEmpty': PacketSize=1000;qXfer:features:read+
```
I can confirm the new build works on mac.
Every time I change the QEMU path I need to quit Eclipse and restart it for QEMU to work, otherwise I get the errors in the previous post. It doesn't matter if I'm going up to 2.8.0-4 or down to 2.8.0-3.

It might only be needed if I've already run QEMU, i.e. a restart might be required when changing the setting after QEMU has run?

Runs ok in 2018-09 and 2018-12.
> I can confirm the new build works on mac.
ok, then I'll proceed and build all binaries.
> Every time I change the QEMU path I need to quit Eclipse and restart it for QEMU to work
This is very unusual; I never encountered these messages.

Are you sure the previous debug session was really closed before starting a new one?

What toolchain did you use?
While working on the new release, one question came to mind: what is expected to happen if, in a single 32-bit access, both the set and reset bits are 1? Will the pin go high or low? And, more importantly, does QEMU behave the same?
I don't have time to investigate now, but for the next release perhaps it would be good to know.
The new release is out, please xpm install it on windows, since I tried it only on mac.
I read a spec/reference that said if both bits are set, then setting takes priority, which is what QEMU does :)

http://www.eng.auburn.edu/~nelson/courses/elec2220/slides/ARM%20Parallel%20IO.pdf

Page 24 states:

> "set" has precedence if bit=1 in both BSSRL and BSSRH
The examples show writes with BSRRL and BSRRH, which doesn't illustrate that statement. The only way BSRRL and BSRRH can be written to simultaneously is with a 32-bit write to BSRR.
> The new release is out, please xpm install it on windows, since I tried it only on mac.
```shell
xpm install --global @gnu-mcu-eclipse/qemu
```

installed 2.8.0-4.1 on macOS ok.

However, it didn't install on Windows; nothing happened, as if everything was already up to date.

```shell
xpm install --global @gnu-mcu-eclipse/qemu@2.8.0-4.1
```

did install on Windows ok. Maybe I just tried too early?

I've confirmed the fix in 2.8.0-4.1 also works on Windows :)
> it didn't install on Windows
hmmm... something weird also happened to me: installing on Windows without a version brought -3.1, but explicitly asking for @2.8.0-4.1 brought it.

Checking, the latest available version is correct, both on the web page and in the console:

```shell
$ npm dist-tag ls @gnu-mcu-eclipse/qemu
latest: 2.8.0-4.1
```

I don't know what to say; a small problem with the npm servers?
> a small problem with the npm servers?
No, I did some further tests and it looks like the problem is in the module used by xpm to get the latest version. :-(

Entering 'bug fix' mode...
... xpm ...
I could not reproduce the problem; either the servers had a transient problem, or the module used by xpm to check the versions was too old for the current servers.

Anyway, I updated the xpm dependencies to the latest versions; it should be fine now.
Your work is awesome ...
Thank you, Brendan, I appreciate your kind words.
Your help in improving the project is also highly appreciated.
Description
I have a function that sets a pin state to high or low, and code that calls that function to set the red LED on the STM32F4 Discovery board to high and low (it pulses).
The code works on a real Disco board, but in QEMU the LED turns on, but never turns off.
The code uses the `BSRRH` register to set the pin low, and the `BSRRL` register to set the pin high. These are 16-bit registers, not the 32-bit `BSRR` register which combines `BSRRH` and `BSRRL` into one 32-bit register.
See the forum discussion here:
Example functions are:
Steps to Reproduce
Replace the `turnOn()` and `turnOff()` methods in `BlinkLed.cpp` with the following.

Note: you might need to define the `BSRRL` and `BSRRH` macros? I'm using the older STD_PERIPHERAL library, which has these defined instead of `BSRR`. Seems the blinky code uses the STM32F4 HAL with CMSIS v2.5.0 (April 2016) and I'm using CMSIS v1.0.0 (September 2011). Either way, QEMU should do the "right thing (TM)" (assuming "the right thing" is what the Disco board does).
Expected behaviour:
Red LED to turn on when setting a bit with BSRRL, and to turn off when setting a bit with BSRRH.
Actual behaviour:
Red LED turns on, but never turns off.
Versions