Open kpg141260 opened 3 weeks ago
I have observed the same behaviour - thread hangs - when trying to access an external memory location passed to the thread as a pointer - in my case, a pointer to a dictionary. I need this functionality, as I am running a permanent loop on core 1 of the Pico that handles communication via UART to a second Pico. The file access is required to log core 1 events. The workaround I have developed is to push the events into a string list buffer that is defined within the scope of the thread. I then created an asyncio wrapper function, also within the scope of the thread, which I start from the main process. That asyncio function checks whether there are any entries in the buffer and, if so, writes them to the logfile. That approach actually works, but uses a lot of unnecessary resources.
Writing multi-core code is difficult because the protection offered by the Python GIL is absent. This unofficial doc attempts to identify some of the issues. Hardware access from core 1 can be problematic.
In general it is much simpler to use asyncio on one core to achieve concurrency.
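To illustrate the single-core alternative, here is a minimal sketch in which the producer is a coroutine rather than a thread, so no lock is needed. It uses CPython's `asyncio.Queue`; MicroPython v1.23 has no built-in asyncio `Queue`, but a compatible primitive is available in the micropython-async (peterhinch) repository - that substitution is an assumption, not something stated in this thread.

```python
import asyncio

async def producer(q):
    # Feed items into the queue; the await points hand control to other tasks.
    for i in range(5):
        await q.put(i)

async def consumer(q, out):
    # Drain exactly five items; only one task runs at a time, so no lock.
    for _ in range(5):
        out.append(await q.get())

async def main():
    q = asyncio.Queue()
    out = []
    await asyncio.gather(producer(q), consumer(q, out))
    return out

print(asyncio.run(main()))  # [0, 1, 2, 3, 4]
```

Because the queue is only ever touched between `await` points, the hazards of true multi-core sharing never arise.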
@peterhinch
Hi Peter, I agree, it is difficult and so far has cost me quite some time to make it work.
However, in this particular project I need the UART communication to run on the second core, as I have another important task running on core 0 which cannot be interrupted, as it is controlling water inflow to a tank. The second Pico W is acting as a web server so I can remotely access the tank control. So it is a server/client configuration in which the two devices communicate via UART. The client sends its current status to the server, and through the web page on the second Pico I can control the client.
I could use a Raspberry Pi for this, but I like a challenge. :-)
As MicroPython is still in development, I wanted to highlight this issue, as even after much research I could not find any mention of this behaviour elsewhere. And using the string buffer and an async task to access it solves my problem for now; however, my expectation would be that over time these issues could be solved in MicroPython.
With asyncio you can run bidirectional UART communication concurrently with other activities by using the stream mechanism. The second core is a valuable resource, but I only actually use it when absolutely necessary.
Well, my UART comms are working on core 1 as a thread on both client and server, so I will use the async task on core 0 to perform the file I/O for logging the events. It seems that as long as you define the shared storage within the class object that is the thread, core 0 has no issues accessing it. It's working fine the way I designed it now. Just a pity it's not completely working within the thread. Good learning exercise for me, though. Still, it would be nice to have the file I/O working on core 1 moving forward. So in all I now have 8 async tasks running on core 0 and 1 thread on core 1. Actually, I am quite amazed by the power this little RP2040 has - looking back at my early days with Zilog Z80 development, donkey's years ago. Btw, I'm also a Peter - regards.
> I have observed the same behaviour - thread hangs - when trying to access an external memory location that is being passed as a pointer to the thread - a pointer to a dictionary. [...]
Could you share some short example code of what you mean? I think I am having a similar issue to the one described. I tried to pass a circular buffer (implemented as an object with a list used as the buffer) and it hangs when I run it (although, strangely enough, it seems to work when I run it from the REPL).
main.py

```python
import _thread
import time

from queue import Queue


def producer(queue: Queue):
    for i in range(1000):
        while not queue.push(i):
            pass


def consumer(queue: Queue):
    last_time = time.time()
    while True:
        success = False
        item = None
        while not success:
            success, item = queue.pop()
            if time.time() - last_time > 1:
                return
        last_time = time.time()
        print(f"Got item: {item}")


def main():
    time.sleep(5)
    queue = Queue()
    _thread.start_new_thread(producer, (queue,))
    consumer(queue)


if __name__ == "__main__":
    main()
```
queue.py

```python
class Queue:
    """A thread-safe queue implementation based on a circular buffer"""

    def __init__(self, size: int = 10) -> None:
        self._size = size
        self._buffer = [None] * self._size
        self._in = 0
        self._out = 0

    def push(self, item) -> bool:
        """
        Add an item to the queue.

        Returns:
            True if the item was successfully added to the queue or False otherwise.
        """
        if self._out == (self._in + 1) % self._size:
            # The queue is full!
            return False
        self._buffer[self._in] = item
        self._in = (self._in + 1) % self._size
        return True

    def pop(self) -> tuple:
        """
        Pop an item from the queue.

        Returns:
            success, item: success is a boolean that is True if an item was removed
            from the list and item is the item removed from the list. If the list
            was empty then the item will be returned as None.
        """
        if self._in == self._out:
            # queue is empty
            return False, None
        item = self._buffer[self._out]
        self._out = (self._out + 1) % self._size
        return True, item
```
I know the queue does not use a lock, but I did a proof at uni that the logic makes it thread-safe, so it should not be the problem. It also runs correctly in Python on my laptop.
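For what it's worth, the same single-producer/single-consumer ring-buffer logic can be exercised off-device under CPython's `threading` module. This is just a standalone restatement of the `Queue` above (renamed `RingQueue` here to keep it self-contained), not a fix for the on-device hang:

```python
import threading

class RingQueue:
    # Single-producer/single-consumer circular buffer, same logic as above.
    def __init__(self, size=10):
        self._size = size
        self._buffer = [None] * size
        self._in = 0   # written only by the producer
        self._out = 0  # written only by the consumer

    def push(self, item):
        if self._out == (self._in + 1) % self._size:
            return False  # full
        self._buffer[self._in] = item
        self._in = (self._in + 1) % self._size
        return True

    def pop(self):
        if self._in == self._out:
            return False, None  # empty
        item = self._buffer[self._out]
        self._out = (self._out + 1) % self._size
        return True, item

q = RingQueue()
received = []

def producer():
    for i in range(100):
        while not q.push(i):
            pass  # spin until there is room

t = threading.Thread(target=producer)
t.start()
while len(received) < 100:
    ok, item = q.pop()
    if ok:
        received.append(item)
t.join()
print(received == list(range(100)))  # True
```

Under CPython's GIL this passes reliably, which supports the claim that the queue logic itself is sound; the RP2040 hang is therefore more likely in the surrounding runtime (file I/O or printing from core 1) than in the buffer.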
I can run it in the REPL with rshell by importing the main function and then calling it directly.
Hi,
Here is a stripped-down example of what I am doing.

```python
import _thread
import time
from collections import deque

import uasyncio

# Create a lock and a deque to serve as a queue.
# Note: MicroPython's deque requires an iterable and a maxlen argument
# (deque() with no arguments raises a TypeError); 32 is an arbitrary cap.
queue = deque((), 32)
queue_lock = _thread.allocate_lock()


# Thread function to read from UART (simulated)
def uart_reader():
    while True:
        # Simulate reading data from UART
        uart_data = "Data from UART"
        # Add data to the queue
        with queue_lock:
            queue.append(uart_data)
            print("Data added to queue")
        # Simulate time between UART reads
        time.sleep(1)


# Async task to process the data from the queue
async def process_queue():
    while True:
        # Check if there is data in the queue
        with queue_lock:
            if queue:
                data = queue.popleft()
                print(f"Processing: {data}")
        # Yield to other tasks
        await uasyncio.sleep(0.1)


async def main():
    # Run the async processing task
    await uasyncio.gather(process_queue())


# Start the UART reading thread
_thread.start_new_thread(uart_reader, ())
# Run the asyncio loop on core 0
uasyncio.run(main())
```
Regards, Peter
Hi Tariq, to clarify further: I am using 2 Raspberry Pi Picos, one a normal Pico, the second a Pico W, and I am using MicroPython 1.23. As such your code won't work on the Pico, as the Queue lib is not available in MicroPython. Kind regards, Peter
Thanks for sharing your code.
I posted the code for the Queue implementation I wrote in my comment as well. I also just came across https://github.com/micropython/micropython/issues/15192#issuecomment-2144749042 which seems to suggest a workaround for this hanging behaviour. I haven't tried it yet, and I am not sure it will fix my bug if it is only related to writing files, but I'll give it a go.
Alternatively, they suggest downgrading to 1.21.0. Perhaps one of these workarounds would work for you?
Hi Tariq,
Thanks for the update. I'll try 1.21 and #15192 too.
For me downgrading to 1.21.0 has seemed to work!
UPDATE: On further investigation, the Pico appears to hang sometimes when I try to write to a file on core 1. I do not know if this is specifically a core 1 issue (although I thought I read that somewhere). I was using the logging module from micropython-lib, which I have also now realised is not thread-safe (even though it is in regular Python). I may try playing around with this more if I have time, but for now I am just not logging on core 1 at all.
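For completeness, here is a minimal sketch of the buffer-and-drain workaround described earlier in the thread: code on core 1 only appends strings to a plain list, and an asyncio task on core 0 performs the actual file write. The names `core1_log` and `drain_log` are my own, and `test.log` is a placeholder path; this is not the exact code anyone in the thread is running.

```python
import asyncio

log_buffer = []  # appended to by the core-1 thread, drained on core 0


def core1_log(msg):
    # Call this from the core-1 thread instead of touching the filesystem.
    log_buffer.append(msg)


async def drain_log(path, stop):
    # Core-0 task: flush buffered lines to the log file until told to stop,
    # then keep going until the buffer is empty.
    while not stop.is_set() or log_buffer:
        while log_buffer:
            line = log_buffer.pop(0)
            with open(path, "a") as f:
                f.write(line + "\n")
        await asyncio.sleep(0.1)


async def demo():
    stop = asyncio.Event()
    task = asyncio.create_task(drain_log("test.log", stop))
    core1_log("some text to log")  # would normally happen on core 1
    await asyncio.sleep(0.3)
    stop.set()
    await task


asyncio.run(demo())
```

On the Pico the `demo` scaffolding would be replaced by the real core-1 thread calling `core1_log`; the point is simply that all `open`/`write` calls stay on core 0.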
Port, board and/or hardware
Raspberry Pi Pico
MicroPython version
MicroPython v1.23.0 on 2024-06-02; Raspberry Pi Pico with RP2040
Reproduction
Expected behaviour
Expected to see 'some text to log' in file test.log on the Pico.
Observed behaviour
After starting the thread it hangs after printing 'entering core1_thread'.
Additional Information
No, I've provided everything above.