ShootingStarDragon closed this 1 year ago
mediapipe works but is laggy as hell, how to fix? very unsure tbh, just do some basic timing to flesh it out: print FRAMEINT, TIME, TIMEDELTA
timedeltaB! 87628 0.7135674953460693 0.016666666666666666
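The basic timing above can be sketched like this (a stand-in sketch, not the actual app code; `FrameTimer` and the loop body are made up, and `time.sleep` fakes the `holistic.process` call):

```python
import time

class FrameTimer:
    """Print per-frame frame count, timestamp, and delta to find what lags."""

    def __init__(self):
        self.frame_count = 0
        self.last = time.perf_counter()

    def tick(self, label=""):
        now = time.perf_counter()
        delta = now - self.last
        self.last = now
        self.frame_count += 1
        print(f"{label} {self.frame_count} {now:.6f} {delta:.6f}")
        return delta

timer = FrameTimer()
for _ in range(3):      # stand-in for the camera/Kivy frame loop
    time.sleep(0.01)    # stand-in for holistic.process(image)
    timer.tick("timedeltaB!")
```

Wrap each suspect call like this and the deltas point straight at the slow one.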
confirmed: `results = holistic.process(image)` is the call that's lagging, so this is basically 1 fps... how to drop frames then? unsure... NEED MORE TESTING
possibly add the option to drop frames??
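One way to drop frames (a generic sketch, not the app code): a queue of size 1 where the producer always overwrites with the newest frame, so the slow consumer only ever sees the latest one.

```python
import queue

def put_latest(q, item):
    """Push item, discarding the stale frame if the consumer hasn't caught up."""
    try:
        q.put_nowait(item)
    except queue.Full:
        try:
            q.get_nowait()   # drop the old frame
        except queue.Empty:
            pass
        q.put_nowait(item)

frames = queue.Queue(maxsize=1)
for i in range(5):           # producer runs faster than the consumer
    put_latest(frames, i)
print(frames.get())          # → 4 (only the newest frame survives)
```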
Running on GPU
Graph: [mediapipe/graphs/holistic_tracking/holistic_tracking_gpu.pbtxt](https://github.com/google/mediapipe/tree/master/mediapipe/graphs/holistic_tracking/holistic_tracking_gpu.pbtxt)
Target: [mediapipe/examples/desktop/holistic_tracking:holistic_tracking_gpu](https://github.com/google/mediapipe/tree/master/mediapipe/examples/desktop/holistic_tracking/BUILD)
only works on linux.... depression https://google.github.io/mediapipe/getting_started/cpp.html#option-2-running-on-gpu
https://stackoverflow.com/questions/68745309/how-to-make-mediapipe-pose-estimation-faster-python https://github.com/google/mediapipe/issues/3303 https://github.com/google/mediapipe/issues/2840 https://github.com/google/mediapipe/issues/460 https://github.com/google/mediapipe/issues/1740 https://github.com/google/mediapipe/issues/3158
FOUND A GUY THAT HAS A FAST ONE https://www.youtube.com/watch?v=oRrRl5VDCWM https://github.com/niconielsen32/ComputerVision/blob/master/poseDetection.py https://github.com/niconielsen32/ComputerVision/blob/master/mediapipe/faceDetectorYT.py
so `.pose` is faster than `.holistic`, and ~20 fps is super usable... way better than the 1 fps I was getting with `.holistic`
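A minimal `.pose` loop in the style of niconielsen32's script (a sketch: camera index, confidence values, and frame cap are assumptions, and everything is inside a function so it degrades gracefully if mediapipe/cv2 aren't installed):

```python
def run_pose(max_frames=100, camera_index=0):
    """Run the lighter Pose solution instead of Holistic (assumed setup)."""
    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose
    cap = cv2.VideoCapture(camera_index)
    with mp_pose.Pose(min_detection_confidence=0.5,
                      min_tracking_confidence=0.5) as pose:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            # Pose expects RGB; OpenCV captures BGR
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                print("got", len(results.pose_landmarks.landmark), "landmarks")
    cap.release()
```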
https://ai.googleblog.com/2020/12/mediapipe-holistic-simultaneous-face.html
UUUUUUUUUUUHHHHHHHHHHH
dill said mp_holistic IS PICKLEABLE, WHAT
so you can just pass it to a subprocess? (maybe/maybe not, since it might be too big to serialize/deserialize on EVERY run)
potentially open it once in the subprocess then manually close it?? unsure https://stackoverflow.com/questions/865115/how-do-i-correctly-clean-up-a-python-object
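"Open it once in the subprocess" is basically the pool-initializer pattern: build the heavy object once per worker instead of pickling it per task. A sketch with a fake stand-in for `mp_holistic.Holistic()` (all names here are made up, and the fork context is a POSIX-only assumption):

```python
import multiprocessing as mp

_model = None  # one per worker process, set by the initializer

def init_worker():
    """Runs once when each worker starts; stand-in for loading Holistic."""
    global _model
    _model = {"name": "fake-holistic"}   # hypothetical heavy object

def process_frame(frame_id):
    """Each task reuses the worker's already-loaded model."""
    return (frame_id, _model["name"])

if __name__ == "__main__":
    ctx = mp.get_context("fork")  # POSIX; avoids re-importing this module
    with ctx.Pool(processes=2, initializer=init_worker) as pool:
        print(pool.map(process_frame, range(4)))
```

Cleanup happens for free when the pool exits, which sidesteps the manual-close question.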
it's time to ping tshirtman, I have the two files (no kivy/no multiprocessing and kivy/multiprocessing), time to cry
it was actually tito, not tshirtman https://github.com/tito/experiment-tensorflow-lite
how to reset mediapipe: https://github.com/google/mediapipe/issues/3881#issuecomment-1326215146

```python
import cv2
import mediapipe as mp

pose = mp.solutions.pose

with pose.Pose(static_image_mode=True) as pose_obj:
    # Pose expects RGB; OpenCV gives BGR
    results = pose_obj.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    pose_obj.reset()  # drop internal graph state between unrelated images
```
good reference repo by robert flatt https://github.com/Android-for-Python/c4k_tflite_example#overview
new strategy: maybe I don't have enough ram? close a bunch of windows then try again
https://stackoverflow.com/questions/56967553/building-kivy-android-app-with-tensorflow https://github.com/azizovrafael/Kivy_Tensorflow_Android https://github.com/tito/experiment-tensorflow-lite https://github.com/google/mediapipe/issues?page=9&q=is%3Aissue+is%3Aclosed+label%3Aplatform%3Apython https://github.com/google/mediapipe/issues/2983#issuecomment-1011682889 https://github.com/google/mediapipe/issues/3482 closing/resetting mediapipe https://github.com/google/mediapipe/issues/3881#issuecomment-1326215146
model cards https://google.github.io/mediapipe/solutions/models#pose https://google.github.io/mediapipe/getting_started/python https://google.github.io/mediapipe/solutions/pose#python-solution-api https://google.github.io/mediapipe/getting_started/python_framework.html
https://ai.googleblog.com/2020/12/mediapipe-holistic-simultaneous-face.html https://github.com/sign-language-processing/3d-hands-benchmark https://github.com/sign-language-processing/3d-hands-benchmark/blob/master/benchmark/systems/mediapipe/main.py https://github.com/AI4Bharat/OpenHands/blob/main/scripts/mediapipe_extract.py
filipe got it to work by just having the fast one write to a file, then kivy just blit_buffers the latest file....
another strategy: have the subprocess do all the mediapipe work and just read results from it... is it possible to share holistic between subprocesses?
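The "subprocess does all the mediapipe work" idea, sketched with a Manager dict as the shared channel. The worker here fakes the mediapipe call with a dummy dict (stand-in names throughout; fork context is a POSIX-only assumption):

```python
import multiprocessing as mp

def worker(shared):
    """Stand-in for: open camera -> read frame -> holistic.process -> publish."""
    for frame_id in range(5):
        landmarks = {"frame": frame_id, "points": [0.1, 0.2]}  # fake result
        shared["latest"] = landmarks   # main process only ever reads "latest"
    shared["done"] = True

if __name__ == "__main__":
    ctx = mp.get_context("fork")           # POSIX; avoids re-import issues
    with ctx.Manager() as mgr:
        shared = mgr.dict()
        p = ctx.Process(target=worker, args=(shared,))
        p.start()
        p.join()
        print(shared["latest"])            # the Kivy side would poll this each tick
```

The main process never shares the Holistic object itself, only its output, which dodges the pickling question entirely.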
https://github.com/google/mediapipe/issues/1160#issuecomment-872706753 MediaPipe graphs will queue/drop frames when the input is faster than the system can handle. The more powerful the device (or more simple the graph), the less frames dropped/queued, the higher fps can be achieved.
https://stackoverflow.com/questions/69722401/mediapipe-process-first-self-argument
# alternatively you could do: results = mp_hands.Hands().process(imgRGB)
https://github.com/ShootingStarDragon/Kivy-mediapipe-multiprocessing THIS LETS MEDIAPIPE RUN AS FAST AS POSSIBLE, but the kivy window isn't even updating: mediapipe/kivy are blocking each other. But how is blocking even possible when using multiprocessing? the new strategy is to:
[x] run kivy with trio
[x] have ONE subprocess to run: open camera -> read a frame -> execute mediapipe -> add to shared dictionary. (Because of https://github.com/google/mediapipe/issues/1160#issuecomment-872706753 I suspect a lot of the speed comes from predicting off previous frames; when I give it one frame at a time it gets laggy. This is still slow...)
[x] don't think I can run kivy with trio and spawn a subprocess...
[x] ~https://stackoverflow.com/questions/51171145/spawn-processes-and-communicate-between-processes-in-a-trio-based-python-applica Unfortunately, as of today (July 2018), Trio doesn't yet have support for spawning and communicating with subprocesses, or any kind of high-level wrappers for MPI or other high-level inter-process coordination protocols.
This is definitely something we want to get to eventually, and if you want to talk in more detail about what would need to be implemented, then you can hop in our chat, or this issue has an overview of what's needed for core subprocess support. But if your goal is to have something working within a few months for your internship, honestly you might want to consider more mature HPC tools like dask.~
[x] NOT TRUE: I just needed a literal trailing comma to make the args a one-element tuple, i.e. `(x,)`
[x] kivy, no trio, 1 subprocess that does everything: 3fps
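The trailing-comma bug from the checklist above, in miniature (a generic example, not the actual app code): without the comma the parentheses are just grouping, so `Process` gets a non-tuple for `args` and chokes.

```python
import multiprocessing as mp

def work(q):
    q.put("hello from subprocess")

if __name__ == "__main__":
    assert ("x") == "x"          # parentheses alone do NOT make a tuple
    assert type(("x",)) is tuple  # the trailing comma does

    ctx = mp.get_context("fork")  # POSIX-only assumption
    q = ctx.Queue()
    # args=(q) would pass the bare Queue, which Process can't turn into a tuple
    p = ctx.Process(target=work, args=(q,))
    p.start()
    p.join()
    print(q.get())
```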
THERE IS ANOTHER OPTION: RUN MEDIAPIPE IN MAIN AND KIVY AS A SUBPROCESS
do as much as you can from the 2 good tutorials you watched: