-
If I want to run inference on my own image using demo.py, how can I get the JSON file with the 2D pose results?
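For reference, here is a minimal sketch of what I mean, assuming the demo returns a `(num_joints, 2)` keypoint array; `run_demo_on_image` is a placeholder name, not the repo's actual API:

```python
import json
import numpy as np

# Hypothetical helper: suppose the demo returns 2D keypoints as an array of
# shape (num_joints, 2) with (x, y) in pixel coordinates and -1 for joints
# that were not detected. This dumps them to a JSON file.
def save_pose_json(keypoints: np.ndarray, out_path: str) -> None:
    pose = [[float(x), float(y)] for x, y in keypoints]
    with open(out_path, "w") as f:
        json.dump({"keypoints": pose}, f, indent=2)

# Usage sketch (placeholder names, not the repo's actual API):
# keypoints = run_demo_on_image("my_image.jpg")
# save_pose_json(keypoints, "my_image_pose.json")
```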
-
Hi, my group is trying to plug another CoreML model into this app (a GAN model that generates new targets with the same poses; paper: Everybody Dance Now). However, we are all newbies with Swift, and we s…
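In case it helps anyone in the same situation, below is a rough sketch of how a PyTorch generator could be converted to CoreML with coremltools before being dropped into the Xcode project. The toy model, the 3x256x256 input size, and the output file name are assumptions for illustration, not this app's actual pipeline:

```python
import torch
import coremltools as ct

# Toy stand-in for the "Everybody Dance Now"-style generator; the real model,
# its 3x256x256 input, and the output file name are assumptions for this sketch.
class ToyGenerator(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(torch.nn.functional.conv2d(x, torch.ones(3, 3, 1, 1)))

traced = torch.jit.trace(ToyGenerator().eval(), torch.rand(1, 3, 256, 256))

# Convert the traced PyTorch model into a CoreML package the Swift app can load.
mlmodel = ct.convert(
    traced,
    inputs=[ct.ImageType(name="pose_image", shape=(1, 3, 256, 256))],
    convert_to="mlprogram",
)
mlmodel.save("PoseToTarget.mlpackage")  # then add the package to the Xcode project
```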
-
I use the command `python compute_coordinates.py` to detect the poses of the target images. Everything runs fine, but I get all -1 results. Is there any requirement on the input person image (e.g. resolution)…
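Here is a rough debugging sketch I used to rule out image-size issues. The 176x256 resolution and the `detect_pose` call are placeholders, not the script's actual interface; the resolution the model was trained on should be checked in the repo:

```python
import numpy as np
from PIL import Image

# Debugging sketch: resize the input to a fixed resolution before detection and
# count how many joints come back as -1. The 176x256 size and the detect_pose()
# call are placeholders; check the repo for the resolution it was trained on.
def prepare_image(path: str, size=(176, 256)) -> np.ndarray:
    img = Image.open(path).convert("RGB").resize(size, Image.BILINEAR)
    return np.asarray(img)

def count_missing(keypoints: np.ndarray) -> int:
    # keypoints: (num_joints, 2) array where -1 marks an undetected joint
    return int(np.sum(np.any(keypoints == -1, axis=1)))

# img = prepare_image("target.jpg")
# keypoints = detect_pose(img)   # placeholder for the repo's pose detector
# print(count_missing(keypoints), "joints not detected")
```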
-
### What problem are we trying to solve?
This project aims to make drivers aware of nearby cyclists in real time, in order to reduce collisions between drivers and cyclists. Our…
-
## Motivation
The most natural expectation is to have a demo script that runs the method on your own data, which is the main "selling point" of the paper. To make it more convenient for users, it wou…
-
I have set everything up, but it gives a pybind11 error:
`Starting OpenPose Python Wrapper...
Auto-detecting all available GPUs... Detected 1 GPU(s), using 1 of them starting at GPU 0.
(540, 96…
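For context, this is the minimal usage I am attempting, adapted from OpenPose's bundled Python examples; the build and model paths are placeholders for my local setup:

```python
import sys
import cv2

# Minimal usage adapted from OpenPose's bundled Python examples. The two paths
# below are placeholders for the local build and model folders.
sys.path.append("/path/to/openpose/build/python")
from openpose import pyopenpose as op

params = {"model_folder": "/path/to/openpose/models/"}
opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("person.jpg")
opWrapper.emplaceAndPop(op.VectorDatum([datum]))  # older builds accept a plain list here
print(datum.poseKeypoints.shape if datum.poseKeypoints is not None else "no person detected")
```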
-
Is it possible to use the ControlNet architecture to create a model that behaves like one produced by LoRA training?
For example:
In Stable Diffusion, I have two inputs: a text prompt and an image of s…
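To make the two-input setup concrete, here is a rough sketch using the diffusers library; the canny ControlNet and SD 1.5 checkpoints are just example choices, and this is not a claim that it reproduces what LoRA training would give:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Two-input setup (text prompt + conditioning image) with the diffusers library.
# The canny ControlNet and SD 1.5 checkpoints are just examples; this is not a
# claim that it reproduces what LoRA training would give.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

cond = load_image("conditioning_map.png")  # e.g. an edge or pose map of the source image
out = pipe("a person dancing on a beach", image=cond, num_inference_steps=30)
out.images[0].save("result.png")
```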