Closed: vivi90 closed this issue 2 years ago.
```python
import numpy as np

def convert_cam_to_3d_trans(cams, weight=2.):
    (s, tx, ty) = cams[:, 0], cams[:, 1], cams[:, 2]
    depth, dx, dy = 1. / s, tx / s, ty / s
    trans3d = np.stack([dx, dy, depth], 1) * weight
    return trans3d
```
```python
def run_romp(receiver, sender, run, gpu, pid):
    settings = argparse.Namespace(
        GPU=gpu, center_thresh=0.2, input=None, mode='webcam', onnx=False,
        model_path=os.path.join(model_path, 'ROMP.pkl'),
        smpl_path=os.path.join(model_path, 'smpl_packed_info.pth'),
        smooth_coeff=3.0, temporal_optimize=True,
        show_largest=False, calc_smpl=False, render_mesh=False, show=False,
        show_items='mesh,tracking', renderer='sim3dr')
    romp = ROMP(settings)
    while run:  # per-frame loop; `image` comes from the webcam receiver
        outputs_all = romp(image, signal_ID=pid)
        if outputs_all is None:
            continue
        poses = getAxisAngle(outputs_all['smpl_thetas'])
        if 'cam_trans' not in outputs_all:
            trans = convert_cam_to_3d_trans(outputs_all['cam']).tolist()
        else:
            trans = outputs_all['cam_trans'].tolist()
        track_ids = outputs_all['track_ids']
```
I use simple-romp 1.0.2, so it might be different in other versions. You can print the attributes of the output to get what you need. The `cam_trans` is the position and the `smpl_thetas` are quaternions (maybe you need to convert them from the axis-angle representation, as my Blender addon does). `track_ids` is the person ID in the webcam.
The above code may have some variables or parameters that are not quite right, but the main logic should be understandable. However, I'll be writing an interface for simple_romp that will open the local camera and drive the character in Blender, for you to check out.
As for VMC, I don't know much about it. However, I recently used WebSockets and WebRTC to transfer camera images and the ROMP-generated parameters, so that I could use a webcam and ROMP to drive 3D characters in a web page (rendering the characters with three.js). Open source soon.
@yanch2116 Oh wow, very detailed answer!
> [..] The `cam_trans` is position and the `smpl_thetas` is quaternions [..]
Ah, I see.
But maybe I didn't describe it clearly enough:
By 'position' I mean the positions of the bones, not the camera.
Or is that what `convert_cam_to_3d_trans()` does for me?
> [..] I'll be writing an interface for simple_romp that will call up the local camera and drive it in Blender for you to check out.

That would be really great, thank you!
By testing a little bit with the code, I get the error `ModuleNotFoundError: No module named 'romp'` in Blender:

```
Read blend: E:\Projekte\py\cdba\blender\Beta.blend
Traceback (most recent call last):
  File "E:\Projekte\py\cdba\blender\Beta.blend\characterDrivenLocal.py", line 5, in <module>
ModuleNotFoundError: No module named 'romp'
Error: Python script failed, check the message in the system console
```
Even after installing it via pip:

```
(base) PS C:\Program Files\Blender Foundation\Blender 3.0\3.0\python\bin> python.exe -m pip freeze
brotlipy==0.7.0
certifi==2021.10.8
cffi @ file:///C:/ci/cffi_1636523946399/work
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
conda==4.12.0
conda-package-handling @ file:///C:/ci/conda-package-handling_1618262410900/work
cryptography @ file:///C:/ci/cryptography_1639414720302/work
DateTime==4.4
idna @ file:///tmp/build/80754af9/idna_1637925883363/work
menuinst==1.4.16
numpy==1.22.3
osc4py3==1.0.8
Pillow==9.1.0
pycosat==0.6.3
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1635333100036/work
PySocks @ file:///C:/ci/pysocks_1605307512533/work
pytz==2022.1
pywin32==302
requests @ file:///opt/conda/conda-bld/requests_1641824580448/work
ruamel-yaml-conda @ file:///C:/ci/ruamel_yaml_1616016898638/work
six @ file:///tmp/build/80754af9/six_1623709665295/work
tqdm @ file:///tmp/build/80754af9/tqdm_1635330843403/work
urllib3==1.26.7
win-inet-pton @ file:///C:/ci/win_inet_pton_1605306162074/work
wincertstore==0.2
wxPython==4.1.1
youtube-dl @ file:///home/conda/feedstock_root/build_artifacts/youtube-dl_1640026230460/work
zope.interface==5.4.0
```

```
(base) PS C:\Program Files\Blender Foundation\Blender 3.0\3.0\python\bin> python.exe -m pip install simple-romp
Collecting simple-romp
  Using cached simple_romp-1.0.5-cp39-cp39-win_amd64.whl
Requirement already satisfied: setuptools>=18.0.0 in c:\tools\miniconda3\lib\site-packages (from simple-romp) (58.0.4)
Collecting torch
  Using cached torch-1.11.0-cp39-cp39-win_amd64.whl (157.9 MB)
Collecting cython
  Using cached Cython-0.29.30-py2.py3-none-any.whl (985 kB)
Collecting lap
  Using cached lap-0.4.0-cp39-cp39-win_amd64.whl
Collecting opencv-python
  Using cached opencv_python-4.6.0.66-cp36-abi3-win_amd64.whl (35.6 MB)
Collecting typing-extensions>=4.1
  Using cached typing_extensions-4.2.0-py3-none-any.whl (24 kB)
Requirement already satisfied: numpy>=1.21.0 in c:\tools\miniconda3\lib\site-packages (from simple-romp) (1.22.3)
Installing collected packages: lap, typing-extensions, opencv-python, cython, torch, simple-romp
Successfully installed cython-0.29.30 lap-0.4.0 opencv-python-4.6.0.66 simple-romp-1.0.5 torch-1.11.0 typing-extensions-4.2.0
```
Outside of Blender it works:

```python
#!/usr/bin/env python3
import romp, cv2

# ROMP
settings = romp.main.default_settings
romp_model = romp.ROMP(settings)
outputs = romp_model(cv2.imread('test.jpg'))
print(outputs["body_pose"])
```
So it seems Blender does not run Python inside the conda environment.
How do I install the `simple-romp` module for Blender?
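Blender 3.x bundles its own Python interpreter, and packages must land in that interpreter's site-packages rather than conda's. A minimal sketch, assuming you run it from inside Blender's Python console (or a script in Blender), which invokes pip through the bundled interpreter via `sys.executable`; only the package name is real, everything else is an assumption about your setup:

```python
import subprocess
import sys

# Inside Blender, sys.executable points at Blender's bundled Python
# (e.g. ...\Blender 3.0\3.0\python\bin\python.exe), so pip installs
# into Blender's own site-packages instead of the conda environment.
cmd = [sys.executable, "-m", "pip", "install", "simple-romp"]
print(cmd)
# subprocess.check_call(cmd)  # uncomment to actually run the install
```

Alternatively, run the bundled interpreter explicitly from PowerShell (`.\python.exe -m pip install simple-romp`): in the log above, plain `python.exe` was likely resolved through PATH to the conda interpreter, which would explain why the packages ended up in `c:\tools\miniconda3`.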
Instead of running ROMP in Blender, just use TCP to send the results of ROMP over, as I did in my repository.
Why do you need the location of the bones? As far as I know, forward kinematics only requires quaternions to drive the character.
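The TCP handoff described here can be sketched in a few lines: the ROMP process serializes poses, translation and track IDs per frame, and the Blender side consumes one JSON line per frame. All names and the loopback demo below are illustrative, not taken from the actual repository:

```python
import json
import socket
import threading

def send_frame(sock, poses, trans, track_ids):
    """Serialize one frame of ROMP results as a single JSON line."""
    msg = {"poses": poses, "trans": trans, "ids": track_ids}
    sock.sendall(json.dumps(msg).encode("utf-8") + b"\n")

# Loopback demo: a receiver thread (standing in for the Blender side)
# reads exactly one frame sent by the capture side.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
received = []

def receiver():
    conn, _ = server.accept()
    received.append(json.loads(conn.makefile().readline()))
    conn.close()

t = threading.Thread(target=receiver)
t.start()
client = socket.create_connection(("127.0.0.1", port))
send_frame(client, poses=[[0.1, 0.2, 0.3]], trans=[0.0, 0.0, 2.0], track_ids=[1])
client.close()
t.join()
print(received[0]["ids"])  # [1]
```

One JSON object per line keeps framing trivial on the receiving end (`readline()` per frame); a binary format would be faster but harder to debug.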
> For VMC, I don't know much about it. [..]

To keep it short: the VMC protocol is used to send bone positions and quaternions from one application to another, like you currently do between ROMP and Blender, but in a more standardized way. There are a lot of applications supporting it, and that's why I want to implement it for ROMP. There is also an existing VMC protocol addon for Blender: VMC4B.
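Since VMC is just OSC messages over UDP, a single bone update can even be encoded by hand. The sketch below builds a raw `/VMC/Ext/Bone/Pos` packet (bone name, three position floats, four quaternion floats) with the stdlib only; in practice an OSC library such as osc4py3 (already visible in the pip freeze above) or python-osc would do this encoding, and the commonly used Marionette port 39539 is an assumption you should check against your receiver:

```python
import struct

def _osc_string(s: str) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("utf-8") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def vmc_bone_pos(name, pos, quat) -> bytes:
    """OSC message: /VMC/Ext/Bone/Pos  name  px py pz  qx qy qz qw"""
    args = list(pos) + list(quat)
    packet = _osc_string("/VMC/Ext/Bone/Pos")
    packet += _osc_string(",s" + "f" * 7)  # typetags: one string + 7 floats
    packet += _osc_string(name)
    packet += struct.pack(">7f", *args)    # OSC floats are big-endian
    return packet

pkt = vmc_bone_pos("Hips", (0.0, 1.0, 0.0), (0.0, 0.0, 0.0, 1.0))
print(len(pkt))
# Send with: socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(pkt, (host, 39539))
```

A real sender would also emit the protocol's status and timing messages per the VMC spec; this only shows the bone message layout.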
> Instead of running ROMP in Blender, just use TCP to send the results of ROMP over as I did in my repository.

Yes, that's right. I can do it this way:
This way might be the best one (if ROMP implements the VMC protocol directly):
Or maybe a good alternative way:
I am familiar with the VMC protocol, but not sure which solution is the best one.
> Why do you need the location of the bones? As far as I know, forward kinematics only requires quaternions to drive.

Hm, yes, you're right. At the moment I only require the quaternions, because the VMC-protocol-speaking application I want to use with ROMP supports inverse kinematics. But also having the bone locations might be a useful extra for further development, maybe.
@yanch2116
```python
def convert_cam_to_3d_trans(cams, weight=2.):
    (s, tx, ty) = cams[:, 0], cams[:, 1], cams[:, 2]
    depth, dx, dy = 1. / s, tx / s, ty / s
    trans3d = np.stack([dx, dy, depth], 1) * weight
    return trans3d

def run_romp(receiver, sender, run, gpu, pid):
    settings = argparse.Namespace(
        GPU=gpu, center_thresh=0.2, input=None, mode='webcam', onnx=False,
        model_path=os.path.join(model_path, 'ROMP.pkl'),
        smpl_path=os.path.join(model_path, 'smpl_packed_info.pth'),
        smooth_coeff=3.0, temporal_optimize=True,
        show_largest=False, calc_smpl=False, render_mesh=False, show=False,
        show_items='mesh,tracking', renderer='sim3dr')
    romp = ROMP(settings)
    while run:  # per-frame loop; `image` comes from the webcam receiver
        outputs_all = romp(image, signal_ID=pid)
        if outputs_all is None:
            continue
        poses = getAxisAngle(outputs_all['smpl_thetas'])
        if 'cam_trans' not in outputs_all:
            trans = convert_cam_to_3d_trans(outputs_all['cam']).tolist()
        else:
            trans = outputs_all['cam_trans'].tolist()
        track_ids = outputs_all['track_ids']
```
> [..] the `smpl_thetas` is quaternions (maybe you need convert it from the axis-angle representation as my blender addon does). [..]
How is `getAxisAngle()` defined in your example code?
The part about converting the ROMP results to quaternions is not fully understandable to me.
If I look in characterDrivenLocal.py, it only uses `poses` instead of `smpl_thetas`?
https://github.com/yanch2116/CharacterDriven-BlenderAddon/blob/6c288acf8540f900202de10adc2463ab2ca75f04/src/characterDrivenLocal.py#L19
It also does some conversion with it that I don't completely understand either:
https://github.com/yanch2116/CharacterDriven-BlenderAddon/blob/6c288acf8540f900202de10adc2463ab2ca75f04/src/characterDrivenLocal.py#L74
Sorry for having so many questions.
@yanch2116 Okay, here is my short testing code:
```python
#!/usr/bin/env python3
import romp, cv2

# ROMP
settings = romp.main.default_settings
romp_model = romp.ROMP(settings)
outputs = romp_model(cv2.imread('test.jpg'))
print(outputs["smpl_thetas"][0].reshape(24, 3))
```
And my result:

```
[[ 2.94803524e+00 -1.42663017e-01 -7.88208991e-02]
 [-9.88707423e-01  2.52666175e-02  4.28940564e-01]
 [-1.14964235e+00  2.46785283e-02 -3.80045295e-01]
 [ 7.81143069e-01 -2.53749732e-02  4.88573825e-03]
 [ 1.20549548e+00  5.43232150e-02 -1.45304590e-01]
 [ 1.43561351e+00 -1.05927981e-01  1.64200649e-01]
 [ 2.08218731e-02 -1.65508091e-02  1.05584832e-02]
 [-1.09371156e-01  9.65581164e-02 -1.12654939e-01]
 [-1.14813052e-01 -9.41924900e-02  1.42180398e-01]
 [ 4.55470709e-03 -2.39811037e-02  8.59344937e-03]
 [-2.75181592e-01  2.03453958e-01  2.49218971e-01]
 [-2.56546527e-01 -1.35578856e-01 -2.47987211e-01]
 [-1.42983630e-01  8.89663398e-03 -8.55871662e-03]
 [ 1.12185907e-02 -4.22098696e-01 -2.92974502e-01]
 [ 8.57210543e-04  4.21224028e-01  2.51460761e-01]
 [-2.34054159e-02 -3.74585614e-02  3.04763876e-02]
 [-1.22376801e-02 -5.23299575e-01 -8.54798675e-01]
 [ 3.60559858e-02  4.96336907e-01  8.63629162e-01]
 [ 2.78691691e-03 -8.45407069e-01  2.70991951e-01]
 [ 6.24685101e-02  1.06210124e+00 -3.48785102e-01]
 [ 5.90867512e-02 -3.81427072e-02  2.39067242e-01]
 [ 5.54307438e-02  5.38437404e-02 -2.21134484e-01]
 [ 0.00000000e+00  0.00000000e+00  0.00000000e+00]
 [ 0.00000000e+00  0.00000000e+00  0.00000000e+00]]
```
The quaternions that I need have 4 components (x, y, z and w). But there are only 3 components for each of the 24 SMPL joints?
@yanch2116 according to this at page 32, what type of rotation is it? Euler angles?
> @yanch2116 according to this at page 32, what type of rotation is it? Euler angles?
It's the first type, axis-angle. But it's the product of θ and w. You can use this function to convert it back to θ and w.
```python
def getAxisAngle(poses_list):
    AxisAngles = []
    for poses in poses_list:
        AxisAngle = []
        for i in range(0, len(poses), 3):
            theta = np.sqrt(poses[i]**2 + poses[i+1]**2 + poses[i+2]**2)
            w = poses[i:i+3] / (theta + 1e-6)
            for key in w:
                AxisAngle.append(key)
            AxisAngle.append(theta)
        AxisAngles.append(AxisAngle)
    return AxisAngles
```
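The loop above can be cross-checked with a vectorized NumPy version. `axis_angle_split` below is an illustrative name, but it computes the same quantities per 3-vector: θ as the vector norm and w as the normalized axis:

```python
import numpy as np

def axis_angle_split(poses):
    """Split a flat axis-angle pose vector (length divisible by 3)
    into unit axes w and angles theta, like getAxisAngle above."""
    v = np.asarray(poses, dtype=float).reshape(-1, 3)
    theta = np.linalg.norm(v, axis=1)          # rotation angle per joint
    w = v / (theta[:, None] + 1e-6)            # unit rotation axis per joint
    return w, theta

# One joint rotated 90 degrees about X: the axis-angle vector is [pi/2, 0, 0]
w, theta = axis_angle_split([np.pi / 2, 0.0, 0.0])
print(np.round(w[0], 3), round(float(theta[0]), 3))
```

The small epsilon mirrors the original code's guard against a zero-length rotation (θ = 0 leaves the axis undefined).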
`SMPL_theta` and `poses` are the same thing, just with different names; it depends on the simple_romp version.
The position of each bone can be obtained using forward kinematics, with the quaternions and the global position of the root bone (the `cam_trans`).
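That statement can be sketched concretely: walking the kinematic tree from root to leaf, each joint's world position is its parent's position plus the parent's world rotation applied to the joint's rest-pose offset. The toy two-joint chain and all names below are illustrative, not the SMPL skeleton:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def forward_kinematics(parents, offsets, rotvecs, root_trans):
    """Toy FK: world positions of joints from per-joint axis-angle
    rotations, rest-pose offsets, and a root translation (cam_trans)."""
    n = len(parents)
    world_rot = [None] * n
    world_pos = np.zeros((n, 3))
    for i in range(n):  # parents must come before children
        local = R.from_rotvec(rotvecs[i])
        if parents[i] < 0:                 # root joint
            world_rot[i] = local
            world_pos[i] = root_trans
        else:
            p = parents[i]
            world_rot[i] = world_rot[p] * local
            world_pos[i] = world_pos[p] + world_rot[p].apply(offsets[i])
    return world_pos

# 2-joint chain: child sits 1 unit along Y, root rotated 90 degrees about Z
parents = [-1, 0]
offsets = np.array([[0., 0., 0.], [0., 1., 0.]])
rotvecs = np.array([[0., 0., np.pi / 2], [0., 0., 0.]])
pos = forward_kinematics(parents, offsets, rotvecs, np.zeros(3))
print(np.round(pos, 3))  # child lands at (-1, 0, 0)
```

With SMPL you would use its 24-joint parent array and rest-pose joint offsets, and add `cam_trans` as `root_trans`.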
If you are not in a hurry to solve these problems in the next two days, I will update my repo next week, and you can refer to my approach then.
For VMC, I can't give you advice immediately. After I finish updating my repo, I will look into VMC.
@yanch2116

> [..] from the axis-angle [..]

Oh no, it was my fault.
In combination with `getAxisAngle(outputs_all['smpl_thetas'])` it was a little bit misleading to me, sorry.
```python
#!/usr/bin/env python3
import romp, cv2
from scipy.spatial.transform import Rotation as r

# ROMP
settings = romp.main.default_settings
romp_model = romp.ROMP(settings)
outputs = romp_model(cv2.imread('pose.png'))
rotation_by_axis = outputs["smpl_thetas"][0].reshape(24, 3)[0]
rotation_by_quaternion = r.from_rotvec(rotation_by_axis).as_quat()
print("Rotation by axis: " + str(rotation_by_axis))
print("Rotation by quaternion: " + str(rotation_by_quaternion))
```
Seems to return the quaternions now:

```
Rotation by axis: [-2.3814533  0.2776083 -1.8047591]
Rotation by quaternion: [-0.7916114  0.0922789 -0.59991432  0.07027662]
```
> The position of each bones can be obtained using forward kinematics with quaternion and the global of root bone(the cam_trans).

Okay. Any suggestions for independent libraries to use for this?
> If you are not in a hurry to solve these problems in these two days, I will update my repo next week, and you can refer to my writing method then.

To be honest, I am a little bit in a hurry. But that's no problem; I will just try my best and look forward to your code.
> [..] After I finish updating my repo, I will learn about VMC.

If you have any questions, don't hesitate to ask me.
@yanch2116 Some progress now: https://github.com/Arthur151/ROMP/issues/193#issuecomment-1152999499
You can refer to this file for why your posture is wrong; I had the same problem too.
The reason is that the initial orientation of the bones differs from that of SMPL.
```python
for bone in bones:
    object.data.edit_bones[bone].tail[0] = object.data.edit_bones[bone].head[0]
    object.data.edit_bones[bone].tail[1] = object.data.edit_bones[bone].head[1] + 2
    object.data.edit_bones[bone].tail[2] = object.data.edit_bones[bone].head[2]
    object.data.edit_bones[bone].roll = 0
```
The solution is to make the bones point along the Y-axis. (This is right in Blender, but may not be true in other software.)
@yanch2116 I see, you are rotating the bones according to these differences in the armatures: https://github.com/Arthur151/ROMP/issues/193#issuecomment-1150462895 Unfortunately there is no option to just change the bone's tail and head, because only values are transmitted. It seems I am making some rotation mistakes: https://github.com/Arthur151/ROMP/issues/193#issuecomment-1154465269 Any ideas?
Can you modify the character skeleton in Blender and then import it into the software you need?
@yanch2116

> Can you modify character skeletons in Blender? Then import it into the software you need.

Not sure if it works with every VMC protocol receiver if I load a converted, non-standard model into it. So changing the user's model is not an option.
As an alternative, my script could use Blender for the calculations you proposed, but this leads to other issues:
a) using `romp` inside Blender (did not work out of the box for me) or
b) using `bpy` outside of Blender (did not work out of the box for me).
So this idea is even hard to test for me (I have not been able to solve issues a) and b) yet).
In the meantime I tested some other ideas here: https://github.com/Arthur151/ROMP/issues/193#issuecomment-1155171390
Also, I have invited you for full access to my draft: https://github.com/vivi90/python-vmc/invitations
@yanch2116 Current progress: https://github.com/Arthur151/ROMP/issues/193#issuecomment-1156649194
@yanch2116 It seems that I simply fail to change the rotation of the quaternions: https://github.com/Arthur151/ROMP/issues/193#issuecomment-1156960708
Project closed: Arthur151/ROMP#193 (comment)
Sorry, I was so busy these two days that I forgot to reply.
The skeleton of your model is not uniform; it can't be unified into the SMPL skeleton and can't be manipulated in Blender.
The only way seems to be to recalculate the poses from the skeleton information. I haven't studied kinematics systematically, so I don't know exactly how to recalculate.
I have looked up the relevant information before, but gave up halfway; you can refer to it, maybe it is of some help to you.
I also had a skeleton mismatch, but when I imported the BVH directly from ROMP into Blender, the pose was completely correct. I checked Motion Capture File Formats Explained and learned that Blender recalculates the poses before importing a BVH. (The appendix of that PDF has C++ code for parsing BVH, which you may also want to refer to.)
So next I looked up import_bvh.py in the Blender source code; lines 521~588 (and maybe other lines) may show how to recalculate, but I didn't follow up on that.
I'm sorry I couldn't help you!
> Sorry, I was so busy these two days that I forgot to reply.

It's not your fault.
> The skeleton of your model is not uniform, can't be unified into the SMPL skeleton, and can't be manipulated in Blender.

It can be manipulated and unified, but only mathematically. Here is an image to show you why: every user uses their own model that conforms to the VRM standard. So changing my model to conform to the SMPL standard might be nice, but it's not a solution. Maybe my English is just not good enough to explain it, I don't know.
> The only way seems to be to recalculate the poses from the skeleton information. I haven't studied kinematics systematically, so I don't know exactly how to recalculate.

I even fail at just calculating some mathematical stuff: https://stackoverflow.com/questions/72637394/how-to-rotate-an-rotation-by-an-euler-rotation-in-python/72641895 So I don't think the issue is complex.
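The linked Stack Overflow question boils down to composing two rotations. A minimal SciPy sketch (the 90-degree offset is an arbitrary example value, not the actual retargeting correction):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

q = R.from_quat([0.0, 0.0, 0.0, 1.0])                # existing bone rotation (identity here)
fix = R.from_euler("xyz", [90, 0, 0], degrees=True)  # extra Euler rotation to apply
result = fix * q  # compose: apply `fix` after `q`; order matters for rotations
print(np.round(result.as_quat(), 3))  # x y z w, about [0.707, 0, 0, 0.707]
```

`fix * q` applies the correction in the fixed (world) frame; `q * fix` would apply it in the bone's local frame, which is a common source of exactly the kind of rotation mistakes discussed above.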
> I have queried the relevant information before, but gave up halfway, you can refer to it, maybe there is some help for you.

> I also had a skeleton mismatch, but when I imported the BVH directly from ROMP into Blender, the pose was completely correct. I checked Motion Capture File Formats Explained and learned that Blender recalculated the poses before importing BVH. (The appendix in the PDF has C++ code for parsing BVH, which you may also want to refer to.)

> So the next thing I looked up import_bvh.py in Blender source code, lines 521~588 (maybe other lines) may said how to recalculate, but I didn't follow up on that.
What is `BVH`?
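For context: BVH (Biovision Hierarchy) is the plain-text motion-capture format mentioned above. It stores a skeleton (joint offsets define the rest pose, which is why Blender can recalculate poses on import) followed by per-frame channel values. A minimal hand-written illustration of the layout, not a file produced by ROMP:

```
HIERARCHY
ROOT Hips
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Spine
    {
        OFFSET 0.0 0.1 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0.0 0.1 0.0
        }
    }
}
MOTION
Frames: 1
Frame Time: 0.033333
0.0 1.0 0.0  0.0 0.0 0.0  0.0 0.0 0.0
```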
@yanch2116 Project reopened due to deadline extension until 1st of August 2022.
@yanch2116 Bones fixed! 🥳 https://github.com/Arthur151/ROMP/issues/193#issuecomment-1166276307
@yanch2116 How do I get the real `root` rotation?
My issue: https://github.com/Arthur151/ROMP/issues/193#issuecomment-1166676235
Any ideas?
@yanch2116 Now I want to test the 'webcam' mode in my script, so I come back to your code snippet:

```python
def run_romp(receiver, sender, run, gpu, pid):
    settings = argparse.Namespace(
        GPU=gpu, center_thresh=0.2, input=None, mode='webcam', onnx=False,
        model_path=os.path.join(model_path, 'ROMP.pkl'),
        smpl_path=os.path.join(model_path, 'smpl_packed_info.pth'),
        smooth_coeff=3.0, temporal_optimize=True,
        show_largest=False, calc_smpl=False, render_mesh=False, show=False,
        show_items='mesh,tracking', renderer='sim3dr')
    romp = ROMP(settings)
    outputs_all = romp(image, signal_ID=pid)
    if outputs_all is None:
        continue
    poses = getAxisAngle(outputs_all['smpl_thetas'])
    if 'cam_trans' not in outputs_all:
        trans = convert_cam_to_3d_trans(outputs_all['cam']).tolist()
    else:
        trans = outputs_all['cam_trans'].tolist()
    track_ids = outputs_all['track_ids']
```
> [..]

> [..] the body logic should be understandable. [..]

To be honest, I don't understand the logic. How do I get the results while romp is still running?
```python
#!/usr/bin/env python3
import numpy as np
import romp, cv2

subject_id = 0
settings = romp.main.default_settings
settings.mode = 'webcam'
romp_model = romp.ROMP(settings)
outputs_all = romp_model(image, signal_ID=subject_id)  # but where does `image` come from?
```
Or do I need to use threads, read the webcam frames in my own script, and pass them frame by frame to romp?
Yes, you only need to put the image obtained by the webcam into romp. Use `romp_model(image)` to get the results.
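Putting that answer into code: a sketch of a frame loop that reads the webcam yourself and feeds each frame to the model, skipping frames where nothing is detected. The helper name and the stand-in objects at the bottom are made up for illustration; only the `romp.ROMP` / `cv2.VideoCapture` usage in the comment follows the simple-romp snippets above:

```python
def run_capture_loop(model, capture, max_frames=None):
    """Feed frames from a cv2.VideoCapture-like object into a ROMP-like
    model and yield per-frame outputs, skipping frames with no detection."""
    n = 0
    while max_frames is None or n < max_frames:
        ok, frame = capture.read()
        if not ok:           # camera closed or stream ended
            break
        n += 1
        outputs = model(frame)
        if outputs is None:  # nobody detected in this frame
            continue
        yield outputs

# Real usage (assumes simple-romp and OpenCV are installed):
#   import romp, cv2
#   settings = romp.main.default_settings
#   settings.mode = 'webcam'
#   model = romp.ROMP(settings)
#   for outputs in run_capture_loop(model, cv2.VideoCapture(0)):
#       print(outputs["smpl_thetas"].shape)

# Quick self-check with stand-ins (no camera or model needed):
class _FakeCapture:
    def __init__(self, frames):
        self._frames = list(frames)
    def read(self):
        return (True, self._frames.pop(0)) if self._frames else (False, None)

fake_model = lambda f: None if f == 0 else {"frame": f}  # frame 0: no detection
results = list(run_capture_loop(fake_model, _FakeCapture([0, 1, 2])))
print(len(results))  # 2
```

Because the loop is a generator, the VMC sender (or any other consumer) can process each frame's outputs as soon as they are available, without threads.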
I tried to use it with simple-romp, but it did not work. I already created an issue in the ROMP repository and described my problem here and here in detail.
The author of the ROMP repository answered there:
and closed the issue.
@yanch2116 So how do I solve it?
My goal is to send the positions and quaternions from ROMP (or any other SMPL-based solution) over the VMC protocol to other applications (not Blender). Any ideas what the best way to do it is?