cgtinker / BlendArMocap

realtime motion tracking in blender using mediapipe and rigify
GNU General Public License v3.0

Error when trying to automate detection and transfer #159

Open · NikKaem opened this issue 1 year ago

NikKaem commented 1 year ago

I'm currently trying to build a script that automates the whole end-to-end process with BlendArMocap.

The minimal code looks like this:

import bpy
import os

video_path = "xxx"
smoothing_coefficient = 10

# add the human metarig and generate the rigify rig
bpy.ops.object.armature_human_metarig_add()
bpy.ops.pose.rigify_generate()

# point the add-on at the movie file and set the key frame step
bpy.data.scenes["Scene"].cgtinker_mediapipe.mov_data_path = video_path
bpy.data.scenes["Scene"].cgtinker_mediapipe.key_frame_step = smoothing_coefficient

# run detection
bpy.ops.wm.cgt_feature_detection_operator()

# transfer the captured drivers onto the generated rig
bpy.data.scenes["Scene"].cgtinker_transfer.selected_driver_collection = bpy.data.collections["cgt_DRIVERS"]
bpy.data.scenes["Scene"].cgtinker_transfer.selected_rig = bpy.data.objects["rig"]

bpy.ops.button.cgt_object_apply_properties()

This works if I execute the terminal command blender -P Script.py. However, as soon as I try to keep Blender in the background with blender -b -P Script.py, the detection part fails with the error RuntimeError: expected class WM_OT_cgt_feature_detection_operator, function cancel to return None, not set. I initially thought it was a problem with opening the detection window, but even if I disable the drawing of the window the error persists. My ultimate goal is to run it all in a Docker container, which would be difficult if I need to show a frontend for it to work. Is there any workaround for that?

cgtinker commented 1 year ago

I guess the issue is that bpy.ops.wm.cgt_feature_detection_operator is bound to Blender's window manager (mainly to show realtime updates), and running Blender headless therefore results in an error.
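
Roughly, the pattern looks like this. Just a sketch, not the add-on's actual code (the class and ids below are made up): the operator adds an event timer and a modal handler on the window manager, in background mode there is no window to attach them to, so it bails out through cancel and you end up with the RuntimeError about cancel's return value.

import bpy

# sketch of a window-manager-bound modal operator (hypothetical ids, not BlendArMocap's code)
class WM_OT_modal_sketch(bpy.types.Operator):
    bl_idname = "wm.modal_sketch"
    bl_label = "Modal Sketch"

    def execute(self, context):
        wm = context.window_manager
        # both calls need a real window; they break when Blender runs with -b
        self._timer = wm.event_timer_add(0.1, window=context.window)
        wm.modal_handler_add(self)
        return {'RUNNING_MODAL'}

    def modal(self, context, event):
        if event.type in {'Q', 'ESC', 'RIGHTMOUSE'}:
            self.cancel(context)
            return {'CANCELLED'}
        if event.type == 'TIMER':
            pass  # per-frame detection update happens on timer events
        return {'PASS_THROUGH'}

    def cancel(self, context):
        # Blender expects cancel() to return None, which is where the
        # "function cancel to return None, not set" error comes from
        context.window_manager.event_timer_remove(self._timer)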

If you check the operator in BlendArMocap/src/cgt_mediapipe/cgt_mp_detection_operator.py, you'll see it uses the window manager. You can use it as a baseline to create your own operator: remove the modal and window manager parts and just run a while loop in the execute method.

If you have an issue with the OpenCV image display (I don't know if that's a problem in Docker, as there is no window manager there), you'll find a get stream method in the operator. The stream lives in BlendArMocap/src/cgt_mediapipe/cgt_mp_core/cv_stream.py; you can probably just overwrite the stream's draw method so it simply passes, which should be enough in this case. A sketch follows below.
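
Something like this should do. It's only a sketch: I'm assuming here that the stream class in cv_stream.py is called Stream, has a draw method, and is importable under the path below, so double-check the actual names in the file.

# headless/docker sketch: disable the OpenCV preview by making draw a no-op.
# The module path and the class name "Stream" are assumptions, adjust them to
# whatever cv_stream.py actually defines.
from src.cgt_mediapipe.cgt_mp_core import cv_stream

class HeadlessStream(cv_stream.Stream):
    def draw(self, *args, **kwargs):
        # skip cv2.imshow / cv2.waitKey entirely; detection keeps running
        pass

# or monkey-patch the existing class in place:
# cv_stream.Stream.draw = lambda self, *args, **kwargs: None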

So either overwrite things in the add-on's operator directly or create your own operator :)

Quick outline:

class DOCKER_CGT_MP_modal_detection_operator(bpy.types.Operator):
    # register the new operator with a custom id
    ...

    def get_chain(self, stream) -> cgt_nodes.NodeChain:
        ...
        return chain

    def get_stream(self):
        # overwrite the stream if necessary
        return stream

    def execute(self, context):
        # here are some changes
        self.user = context.scene.cgtinker_mediapipe  # noqa

        # init stream and chain
        stream = self.get_stream()
        self.node_chain = self.get_chain(stream)
        if self.node_chain is None:
            self.user.modal_active = False
            return {'FINISHED'}

        # frame counter and key step used for sub-sampling (key_frame_step is set by the user)
        self.frame = 0
        self.key_step = self.user.key_frame_step

        # memo skipped frames
        self.memo = []
        while self.overwritten_modal(context) == {'PASS_THROUGH'}:
            pass
        return {'FINISHED'}

    @staticmethod
    def simple_smoothing(memo, cur):
        ...
        return memo

    def overwritten_modal(self, context):
        """ Former modal step: runs one detection update per call, finishes once the movie delivers no more data. """
        if self.user.detection_input_type == 'movie':
            # get data
            data, _frame = self.node_chain.nodes[0].update([], self.frame)
            if data is None:
                return self.cancel(context)
            self.simple_smoothing(self.memo, data)
            if self.frame % self.key_step == 0:
                for node in self.node_chain.nodes[1:]:
                    node.update(self.memo, self.frame)
                self.memo.clear()

            self.frame += 1
        return {'PASS_THROUGH'}

    def cancel(self, context):
        """ Upon finishing detection clear the handlers. """
        del self.node_chain
        return {'FINISHED'}
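
The outline leaves registration out. To call the operator from a headless script you'd still register the class and invoke it by its custom id; a rough sketch of that glue, where the id wm.docker_cgt_detection is just a placeholder for whatever you set as bl_idname in the class above:

import bpy

# hypothetical registration/invocation glue for the operator sketched above;
# assumes the class sets bl_idname = "wm.docker_cgt_detection" (pick your own id)
def register():
    bpy.utils.register_class(DOCKER_CGT_MP_modal_detection_operator)


def unregister():
    bpy.utils.unregister_class(DOCKER_CGT_MP_modal_detection_operator)


if __name__ == "__main__":
    register()
    # in the automation script, replace bpy.ops.wm.cgt_feature_detection_operator()
    # with the custom operator so the whole thing also runs with blender -b -P Script.py
    bpy.ops.wm.docker_cgt_detection()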

NikKaem commented 1 year ago

I just found the time to adapt the operator and it works like a charm. I haven't built a Docker container out of it yet, but I can fully run the end-to-end process from the terminal now. Thank you so much! Is that something you would be interested in having as well?

cgtinker commented 1 year ago

You're welcome, glad it works! Feel free to create a separate folder like src/docker with a short readme so others who might need to run detection in a docker container can use your implementation as well :)

Niko-shvets commented 1 year ago

@NikKaem It would be great if you could share your solution. Thank you!