Closed bertelschmitt closed 2 years ago
Recent "support for Weights and Biases" changes have been backported to multistreamYOLO
Hi @bertelschmitt
Great job - it looks really good. I will link it in the Readme for now.
Thanks for your work!
Anton
Thank YOU, Anton.
B
Linked in Readme -> closing as an issue.
@johnjaiharjose, @shahzaibraza37, @NiklasWilson, @AntonMu et al: With great trepidation, I am finally releasing the multi-stream-multi-model-multi-GPU version of TrainYourOwnYOLO into the wild. It is at https://github.com/bertelschmitt/multistreamYOLO/. It consists of a modified YOLO object, and a monster of an application that tries to use the object.
The YOLO object could become the universal interface between AI and the uninitiated like me: Give it a model (you can make one yourself with TrainYourOwnYOLO) and feed it images. The YOLO object will tell you if and where it found something you are looking for. It doesn’t matter whether it’s a single image or a video; after all, a video is simply a series of images.
Sometimes you want to process a few videos (series of images) at a time. And you’ll have a problem.
When you try to initialize more than one old-style YOLO object at the same time, you get nothing, because TensorFlow/Keras monopolizes the GPU after the first init_YOLO. Even if you init_YOLO in a completely separate process, it won't work.
The modified YOLO object allows you to partition the GPU and make it accessible to a number of totally independent Python processes.
Now, each process can run its own video source through its own YOLO model, display the result, and optionally save it as a video and data file. I manage to run 10 processes on one 11 GB 1080 Ti; once the new 24 GB RTX 3090 can finally be purchased, it should be good for more than 20 processes. You can achieve the same with two 1080 Tis, because the modified YOLO object also allows you to address individual GPUs and run them in parallel.
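For illustration, here is a minimal sketch of how that per-process GPU partitioning can be done in TensorFlow 2.x. The function names, parameters, and sizing helper are mine, not the ones multistreamYOLO actually uses:

```python
def partition_gpu(gpu_index=0, memory_limit_mb=1024):
    """Restrict this process to one physical GPU and cap its memory
    slice, so several independent Python processes can share a card."""
    import tensorflow as tf  # imported lazily; only needed at init time
    gpus = tf.config.list_physical_devices("GPU")
    if not gpus:
        raise RuntimeError("No GPU visible to TensorFlow")
    # Pin this process to one card...
    tf.config.set_visible_devices(gpus[gpu_index], "GPU")
    # ...and carve out a fixed slice instead of letting TF grab it all.
    tf.config.set_logical_device_configuration(
        gpus[gpu_index],
        [tf.config.LogicalDeviceConfiguration(memory_limit=memory_limit_mb)],
    )

def per_process_limit_mb(total_gb, n_processes, reserve_gb=1.0):
    """Rough sizing: divide the card, minus a safety reserve, evenly."""
    return int((total_gb - reserve_gb) * 1024 / n_processes)

# Ten processes on an 11 GB 1080 Ti leaves roughly 1 GB each:
# partition_gpu(gpu_index=0, memory_limit_mb=per_process_limit_mb(11, 10))
```

The key point is that each process must call this before any Keras/TensorFlow work, because the device configuration is fixed once the runtime touches the GPU.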
The changes to the YOLO object were made rather quickly and months ago. The modded YOLO object has been running for months 24/7 on two machines without a hitch.
What took time was MultiDetect.py, the app that actually USES the modified object. It takes in multiple video sources of any kind: MPEG file, stream, IP camera, or webcam. It alerts when objects are found, and it can start recording automatically. It was a tough lesson in bringing YOLO to the real world.
A lot of things need to be right before YOLO should even be initialized, and MultiDetect.py tries to make sure that there is a CUDA installation, a GPU, a valid video source, and so forth. When running multiple streams, you quickly run into frame drift, and MultiDetect.py does its best to mitigate this ever-present problem. When you record video after running it through YOLO, you need to do it at the proper frame rate. But what if the video stream doesn’t expose its frame rate, or worse, reports a totally bogus one, as many Chinese IP cameras are wont to do? MultiDetect.py tries to work around that, too.
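The frame-rate defense can be sketched in a few lines. This is my illustration of the idea, not MultiDetect.py's actual code; the defaults and bounds are assumptions:

```python
def sanitize_fps(reported, default=25.0, lo=1.0, hi=120.0):
    """Clamp an FPS value reported by a capture device to something sane.
    Many cheap IP cameras report 0, NaN, or FPS in the thousands."""
    if reported is None or reported != reported:  # None, or NaN (NaN != NaN)
        return default
    if not lo <= reported <= hi:
        return default
    return float(reported)

# Typical use with OpenCV (cv2 assumed available):
#   fps = sanitize_fps(cap.get(cv2.CAP_PROP_FPS))
#   writer = cv2.VideoWriter(path, fourcc, fps, (width, height))
```

A fallback of 25 fps is arbitrary; measuring the actual inter-frame interval for a second or two gives a better default when the source lies.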
MultiDetect.py also revealed a big problem with YOLO: the total lack of information when something goes wrong. If YOLO develops a problem, the call unceremoniously dies, and it never comes back. The calling application has no idea whether there was just a little hiccup or a big problem. I’m not ready to rip apart the YOLO object to make it return proper status and error information to the application layer, and frankly, I know way too little to do that. Right now, MultiDetect.py juggles a bunch of timers that let it declare a process defunct when it is tired of waiting, but that’s about it.
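The timer-based "declare it defunct" approach boils down to a heartbeat watchdog. A minimal sketch of the pattern (my naming, not MultiDetect.py's; the injectable clock is just there to make it testable):

```python
import time

class Watchdog:
    """Declare a worker defunct if it has not reported progress within
    `timeout` seconds. The worker calls kick() after each processed
    frame; a monitor loop polls is_defunct() and restarts the process."""

    def __init__(self, timeout, clock=time.monotonic):
        self.timeout = timeout
        self._clock = clock          # injectable for testing
        self._last = clock()

    def kick(self):
        """Worker heartbeat: call this after every successful frame."""
        self._last = self._clock()

    def is_defunct(self):
        return self._clock() - self._last > self.timeout
```

This cannot tell a hiccup from a crash, which is exactly the complaint: without real status codes from the YOLO object, a timeout is the only signal the application layer gets.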
Please be warned that MultiDetect.py is an atrocious monster of spaghetti code. It started as a little monitor for the modified YOLO object and quickly got out of hand. I am far from a professional programmer. I got into computers half a century ago, when there were no objects, and it shows. It is also the first time I have used tkinter for the GUI, and that shows too. Please don’t be too brutal with me.
No changes were made to the training part, except for a few small tweaks that allow changing the hardcoded repo name “TrainYourOwnYOLO” to “multistreamYOLO,” which was necessary to keep the two apart until they are merged. Apart from that, “multistreamYOLO” is the same as the current “TrainYourOwnYOLO.” I incorporated the recent move to TensorFlow 2.x, and I will backport any future “TrainYourOwnYOLO” improvements for as long as “multistreamYOLO” lives in a separate repo. Support for Weights and Biases was added to TrainYourOwnYOLO while multistreamYOLO was being readied for publishing; it will be added to multistreamYOLO forthwith. (I hope the separation won’t last too long.)
Speaking of separate, I suggest keeping “multistreamYOLO” just that for a little while, and beating the crap out of it before it is submitted as a PR. It has not been tested on Windows or Mac. I can do Windows, but I don’t have a Mac, and I need help with that.
The recognition and the glory belong entirely to @AntonMu and company, and I really don’t want to take anything of that away from him/them. Maybe we’ll just incorporate the modded object into mainstream TrainYourOwnYOLO and keep MultiDetect.py in its own cave until it has matured.
P.S.: I don’t think detect_video(), and especially not detect_webcam(), should be in the object. Both should be application-level. At its core, detect_video() is the same as detect_image(), 24 (or whatever) times a second. detect_webcam() will break if the cam is not at USB(0), and often it is not; its address might even change when it is re-plugged. Apart from that, detect_webcam() is largely a repetition of detect_video().
detect_video() will give alarming results if vid.get(cv2.CAP_PROP_FPS) produces nothing, or worse, produces FPS in the thousands, as is the case with many Chinese IP cams (see above). IMHO, the place for detect_webcam() and detect_video() is in code snippets that show the best use of detect_image().
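To make the point concrete, an application-level detect_video() can be little more than a loop over detect_image(). A minimal sketch, where detect_image stands in for the YOLO object's single-image call, and the capture is duck-typed so anything with a read() method works:

```python
def frames(cap):
    """Yield frames from an OpenCV-style capture until it runs dry.
    `cap` only needs a read() -> (ok, frame) method, so a real
    cv2.VideoCapture works, and so does any stand-in."""
    while True:
        ok, frame = cap.read()
        if not ok:
            return
        yield frame

def detect_video(cap, detect_image):
    """detect_video is just detect_image, frame after frame."""
    for frame in frames(cap):
        yield detect_image(frame)

# Real use would look roughly like:
#   for annotated in detect_video(cv2.VideoCapture("in.mp4"), yolo.detect_image):
#       writer.write(annotated)
```

Keeping this loop in application code is the whole argument: the application decides the frame rate, the output path, and what to do when a webcam is not at USB(0), while the object only ever sees one image at a time.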