niciBume / Cat_Prey_Analyzer

Cat Prey Image-Classification with deeplearning
MIT License

Problem with threads #11

Closed larsenglund closed 3 years ago

larsenglund commented 3 years ago

Hi, thanks a bunch for your effort with this project! I have modified camera_class.py to read frames from a mjpeg stream instead of a raspi cam. Everything seems to be working just fine (detecting cats etc.) up until a thread is supposed to be started to send a message to the telegram bot, then I get this:

NO CAT FOUND!
CLEARED QUEQUE BECAUSE EVENT OVER WITHOUT CONCLUSION...
CatCamPy: C:\Users\englund\Documents\GitHub
Traceback (most recent call last):
  File ".\cascade.py", line 743, in <module>
    sq_cascade.queque_handler()
  File ".\cascade.py", line 341, in queque_handler
    self.queque_worker()
  File ".\cascade.py", line 303, in queque_worker
    p.start()
  File "C:\Users\englund\AppData\Local\Programs\Python\Python38\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\englund\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\englund\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\englund\AppData\Local\Programs\Python\Python38\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\englund\AppData\Local\Programs\Python\Python38\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle '_thread.RLock' object

The spawned child process prints a second traceback at the same time, interleaved with the one above:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\englund\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\englund\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Any thoughts on why this is happening and how to fix it would be very appreciated!
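On Windows, multiprocessing uses the "spawn" start method, which pickles the Process target and its arguments to send them to the child; if any of them (for example a bot object, or the `self` of a bound method) holds a `_thread.RLock`, pickling fails with exactly this TypeError. A quick way to check which object is the culprit (a sketch; `is_picklable` and `BotHolder` are my names, not part of the repo):

```python
import pickle
import threading

def is_picklable(obj):
    """Return True if obj survives pickling, which 'spawn' requires of
    everything handed to multiprocessing.Process."""
    try:
        pickle.dumps(obj)
        return True
    except (TypeError, AttributeError, pickle.PicklingError):
        return False

class BotHolder:
    """Stand-in for an object that drags a lock into the child process."""
    def __init__(self):
        self.lock = threading.RLock()

print(is_picklable("a plain string"))  # picklable
print(is_picklable(BotHolder()))       # not picklable: _thread.RLock
```

A common fix is to pass only plain data (chat id, message text) to the child process and recreate the bot object inside it.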

The new camera class looks like this:

import gc
import sys
import time
import urllib.request
from datetime import datetime

import cv2
import numpy as np
import pytz


class Camera:
    def __init__(self):
        time.sleep(2)  # give the stream source time to come up

    def fill_queue(self, deque):
        while True:
            gc.collect()
            i = 0
            stream = urllib.request.urlopen('http://localhost:8000/camera/mjpeg')
            buf = b''  # renamed from 'bytes' to avoid shadowing the builtin
            while True:
                buf += stream.read(1024)
                a = buf.find(b'\xff\xd8')  # JPEG start (SOI marker)
                b = buf.find(b'\xff\xd9')  # JPEG end (EOI marker)
                if a != -1 and b != -1:
                    jpg = buf[a:b + 2]  # the actual image
                    buf = buf[b + 2:]   # whatever follows the frame
                    # np.frombuffer replaces the deprecated np.fromstring
                    image = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
                    deque.append(
                        (datetime.now(pytz.timezone('Europe/Zurich')).strftime("%Y_%m_%d_%H-%M-%S.%f"), image))
                    print("Quelength: " + str(len(deque)) + "\tStreamsize: " + str(sys.getsizeof(stream)))
                    i += 1

                if i == 60:
                    print("Loop ended, starting over.")
                    stream.close()
                    break
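The JPEG-boundary scanning inside fill_queue can be factored into a small standalone helper, which makes it easy to test without a live stream (a sketch; `extract_jpegs` is my name, not part of the repo):

```python
def extract_jpegs(buf):
    """Split complete JPEG frames (SOI 0xFFD8 .. EOI 0xFFD9) out of a
    byte buffer. Returns (frames, leftover) where leftover is the
    unconsumed tail, to be prepended to the next chunk read."""
    frames = []
    while True:
        a = buf.find(b'\xff\xd8')  # start of image
        b = buf.find(b'\xff\xd9')  # end of image
        if a == -1 or b == -1:
            break
        frames.append(buf[a:b + 2])
        buf = buf[b + 2:]
    return frames, buf

# Two complete frames plus a partial third in one chunk:
chunk = b'\xff\xd8AA\xff\xd9\xff\xd8BB\xff\xd9\xff\xd8CC'
frames, rest = extract_jpegs(chunk)
print(len(frames), rest)  # 2 b'\xff\xd8CC'
```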
niciBume commented 3 years ago

Hey Lars! Nice to see that the repo is being used! I can see that you're running it on Windows, so I assume you already corrected all the paths (no idea how the Python os package behaves on Windows 😬).

You quote:

File ".\cascade.py", line 303, in queque_worker
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Have you changed the cascade.py file? I can't recall that line, so I'm not quite sure whether the problem lies in the multiprocessing calls. The EOFError suggests the unpickler got no input at all: https://stackoverflow.com/questions/24791987/why-do-i-get-pickle-eoferror-ran-out-of-input-reading-an-empty-file

However, I don't know how Windows handles multiprocessing: on Linux, Process.start() performs an os.fork(), which isn't available on Windows. Since Windows lacks os.fork(), multiprocessing has a few extra restrictions there, as described in https://docs.python.org/2/library/multiprocessing.html

So I'm not quite sure if you can run this on Windows without rewriting the telegram multiprocessing part 😬😬
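The platform difference can be observed directly; a minimal sketch of checking which start method the current platform uses:

```python
import multiprocessing as mp

# "fork": the child inherits the parent's memory, so nothing needs to be
#         pickled (the default on Linux).
# "spawn": a fresh interpreter starts and the Process target plus its
#          arguments are pickled across (the default on Windows, which is
#          why the RLock pickling error only appears there).
method = mp.get_start_method()
print(method)
```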

larsenglund commented 3 years ago

Thanks for the quick reply! I'll start by spinning up a Linux virtual machine then, to eliminate that possible error source!

larsenglund commented 3 years ago

Tried setting it up in Lubuntu, but I get the following error when starting cascade.py:

CNN is ready to go!
Traceback (most recent call last):
  File "cascade.py", line 724, in <module>
    sq_cascade = Sequential_Cascade_Feeder()
  File "cascade.py", line 75, in __init__
    self.base_cascade = Cascade()
  File "cascade.py", line 417, in __init__
    self.pc_stage = PC_Stage()
  File "/home/osboxes/CatPreyAnalyzer/model_stages.py", line 269, in __init__
    self.pc_model = tf.keras.models.load_model(os.path.join(PC_models_dir, self.pc_model_name))
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py", line 146, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 212, in load_model_from_hdf5
    custom_objects=custom_objects)
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/saving/model_config.py", line 55, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/layers/serialization.py", line 89, in deserialize
    printable_module_name='layer')
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/sequential.py", line 352, in from_config
    custom_objects=custom_objects)
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/layers/serialization.py", line 89, in deserialize
    printable_module_name='layer')
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py", line 1121, in from_config
    process_layer(layer_data)
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py", line 1105, in process_layer
    layer = deserialize_layer(layer_data, custom_objects=custom_objects)
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/layers/serialization.py", line 89, in deserialize
    printable_module_name='layer')
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 194, in deserialize_keras_object
    return cls.from_config(cls_config)
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 446, in from_config
    return cls(**config)
  File "/home/osboxes/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/input_layer.py", line 80, in __init__
    raise ValueError('Unrecognized keyword arguments:', kwargs.keys())
ValueError: ('Unrecognized keyword arguments:', dict_keys(['ragged']))

Any thoughts on what's happening? Cheers!

niciBume commented 3 years ago

Oh wow. It seems to have problems with the Keras dependency, as it fails to correctly load the prey classifier's .h5 model file. Are you using a prebuilt TensorFlow wheel? In that case, go for the regular pip install. The code should run as long as you use TensorFlow 2.0 or above.
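A fail-fast check at startup would make this kind of version mismatch obvious instead of surfacing as a cryptic deserialization error. A minimal sketch (`tf_version_ok` is my name; with TensorFlow installed you would pass it `tf.__version__`):

```python
def tf_version_ok(version, minimum_major=2):
    """Check a TensorFlow version string against a minimum major version.
    Returns True for e.g. '2.3.1', False for '1.14.0'."""
    return int(version.split('.')[0]) >= minimum_major

print(tf_version_ok('2.3.1'))   # True
print(tf_version_ok('1.14.0'))  # False
```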

larsenglund commented 3 years ago

I followed the guide you linked to (https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi) and used pip3 install tensorflow, same as I did on Windows. But python3 -c 'import tensorflow as tf; print(tf.__version__)' returns 1.14.0 there; on Windows it returns 2.3.1.

larsenglund commented 3 years ago

Upgrading to TensorFlow 2.3.0 with pip3 install tensorflow==2.3.0 did the trick, thanks a lot!

Now I have a problem with the queue, but that's maybe related to me grabbing frames from the MJPEG stream at 5 fps instead of whatever rate you got from the Pi cam? It always deletes the queue after detecting a cat (I'm holding my phone with a cat image in front of the webcam). If no cat is detected, it never deletes the queue.

Quelength: 40   Streamsize: 64
Quelength: 41   Streamsize: 64
Prey Prediction: True
Pred_Val:  0.81
Total Runtime: 1.1894433498382568
Runtime: 1.1894965171813965
Timestamp at Done Runtime: 2020_10_28_22-29-39.825807
Overhead: 1.310322
CUMULUS: -31
Quelength: 1    Streamsize: 64
DELETING QUEQUE BECAUSE OVERLOADED!
Quelength: 2    Streamsize: 64
Quelength: 3    Streamsize: 64

Could I just remove the oldest images from the queue and keep it under QUEQUE_MAX_THRESHOLD in length?
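A `collections.deque` with a `maxlen` does exactly that: once full, each append silently discards the oldest entry. A minimal sketch (the threshold value of 40 is assumed here; use the actual QUEQUE_MAX_THRESHOLD constant from cascade.py):

```python
from collections import deque

QUEQUE_MAX_THRESHOLD = 40  # value assumed for illustration

frames = deque(maxlen=QUEQUE_MAX_THRESHOLD)
for i in range(100):
    frames.append(i)  # once full, the oldest entry is dropped automatically

print(len(frames), frames[0])  # 40 60
```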

larsenglund commented 3 years ago

Removing old images from the queue made everything work! Thanks for your help! Now I can proceed with the actual hardware installation; looking forward to not being woken up at 3 a.m. by the cats playing with prey under the bed >_<

niciBume commented 3 years ago

Awesome! Hope you get some rest! 😜