foocker opened this issue 1 year ago
@foocker great work! 2.4s also includes face enhancing (gfpgan) time, right?
No, the result is fine without it, so I removed it.
Cool!
@foocker Do you still have the repo?
Yes, but it's closed as private.
Is there any chance we can get in touch in private and talk about it?
It's open now, but only for a few days.
Great, thank you! Is it still SadTalkerTriton?
yes
Thanks @foocker 👌
@foocker Sorry, when you mention you exported all submodels to ONNX (esp. generator, kp_detector, etc.), which models from SadTalker (https://github.com/OpenTalker/SadTalker/blob/main/scripts/download_models.sh) are you referring to?
The URL in the readme is all you need.
Hey @foocker, I saw the URL and downloaded the content, but generator.onnx is missing. Do you have it?
It's the `g.onnx` in the readme.
Alright, thanks!
@foocker I can't run it out of the box: the `*_reference.txt` you left does not work with any python3 version I tried. Could you help me once more with the Python version you were using, and the exact 22.11 Triton image? Thanks!!
@foocker Hi, while reproducing your code, Triton threw an error that seems to say the weights have no output shape specified.
I was following step 4 of your readme: 4. In server container: `tritonserver --model-repo ./`
and the error was thrown on startup.
Could you check whether this is the problem, or whether I went wrong somewhere else?
If replying on GitHub is inconvenient, my WeChat is EST_Tracer. I work on autonomous driving in North America; happy to chat more when you have time!
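An "output shape not specified" error from Triton usually means it could not auto-complete the model configuration, so each model directory needs a `config.pbtxt` that declares the output dims explicitly. A minimal sketch, assuming a submodel named `generator` with made-up tensor names and shapes:

```protobuf
# Hypothetical model-repository layout:
#   model_repo/generator/1/model.onnx
#   model_repo/generator/config.pbtxt
name: "generator"
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "source_image"
    data_type: TYPE_FP32
    dims: [ 3, 256, 256 ]
  }
]
output [
  {
    name: "generated"
    data_type: TYPE_FP32
    dims: [ 3, 256, 256 ]   # declare output dims explicitly; use -1 for variable axes
  }
]
```

The real tensor names and shapes must match what the exported ONNX model reports (e.g. inspect it with Netron).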
Why are you running this for autonomous driving work? This version is a bit old and I haven't touched it in a long time. It looks like you need to re-export the ONNX models; I recall there is a conversion script in the repo? If that fails, follow my approach and redo it on the latest version.
Work reasons: there is a new project to test. It runs now, so I'll leave some instructions for whoever comes next. The error I hit above, and most problems you are likely to hit running this repo, probably come from misaligned versions of Triton, ONNX Runtime, TensorRT, CUDA, or cuDNN. Look up the correctly aligned versions from these two links:
https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
https://docs.nvidia.com/deeplearning/triton-inference-server/release-notes/rel_20-03.html
When aligning versions, treat the Triton version in the requirements as the anchor. Once the versions are sorted out, everything else is minor.
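As a starting point for that version check, a small stdlib-only helper can print what is currently installed. The package names below are the usual PyPI ones and may differ in your environment:

```python
# Print installed versions of the packages whose alignment matters here
# (Triton client, ONNX Runtime, TensorRT Python bindings).
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str) -> str:
    """Return the installed version of `package`, or 'not installed'."""
    try:
        return version(package)
    except PackageNotFoundError:
        return "not installed"

for pkg in ["tritonclient", "onnxruntime-gpu", "onnxruntime", "tensorrt"]:
    print(f"{pkg}: {installed_version(pkg)}")
```

Compare the printed versions against the compatibility tables linked above, and against the component versions listed in the release notes of the Triton container you run.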
SadTalkerTriton: 25 fps, 84 frames, cost 2.4 s (RTX 3090 Ti).
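For context, those numbers can be sanity-checked: 84 frames at 25 fps is a 3.36 s clip produced in 2.4 s, i.e. about 1.4x faster than real time:

```python
# Sanity check on the reported benchmark: 84 frames of 25 fps video
# generated in 2.4 s on a 3090 Ti.
frames = 84
playback_fps = 25
gen_seconds = 2.4

video_seconds = frames / playback_fps          # duration of the output clip
realtime_factor = video_seconds / gen_seconds  # > 1 means faster than real time
gen_fps = frames / gen_seconds                 # effective generation rate

print(f"clip length:      {video_seconds:.2f} s")   # 3.36 s
print(f"real-time factor: {realtime_factor:.2f}x")  # 1.40x
print(f"generation rate:  {gen_fps:.1f} fps")       # 35.0 fps
```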