hunterstats closed this issue 8 months ago
Swapping VR videos isn't trivial, as the face detection models can't cope with deformed faces. I once found a repo dedicated to VR swapping here: https://github.com/g0kuvonlange/vrswap
If you're a coder, I would be happy to have this integrated by PR.
I'd do it if I had the proper coding skills 😆 Maybe I'll take it as a challenge if nobody beats me to it 😉
For now, regarding VR (didn't want to open a new post just for this question): is there a way to make "selected face" work for the same face twice (once for the left side and once for the right)? It seems to identify and swap the first face (left) and then move on. Maybe a simple way would be to add a "VR checkbox" that makes it take a second pass at the frame? "All faces", "all male" and "all female" work fine if there's only a single person, but not with multiple.
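The two-pass idea could look roughly like this: split the side-by-side stereo frame into its left and right halves, run the single-face swap on each half independently, then stitch the halves back together. Note this is only a sketch; `swap_face` below is a hypothetical stand-in callable, not roop's actual API.

```python
import numpy as np

def swap_sbs_frame(frame: np.ndarray, swap_face) -> np.ndarray:
    """Swap faces in a side-by-side (SBS) stereo frame, one eye at a time.

    `swap_face` is a placeholder callable (image -> image); the real
    roop pipeline call will differ.
    """
    h, w = frame.shape[:2]
    half = w // 2
    # Run the swap separately on each eye so "selected face" fires twice.
    left = swap_face(frame[:, :half].copy())
    right = swap_face(frame[:, half:].copy())
    # Rejoin the halves into one SBS frame.
    return np.concatenate([left, right], axis=1)
```

This only addresses the "swaps the first face and moves on" issue; it does nothing about the lens distortion discussed below.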
> Maybe a simple way would be to add a "VR checkbox" that makes it take a second pass at the frame? "All faces", "all male" and "all female" work fine if there's only a single person, but not with multiple.
Yes that would be simple and easy but what about the distortion problem? It still wouldn't work with close-ups, would it?
Yes, you're right, that doesn't solve the distortion problem. Like I said, maybe I'll have a go at implementing "vrswap" via PR. But to be honest, the complexity goes a bit over my head and coding skills since you switched to Gradio. It would be a nice challenge for me, but first I'll have to find the time and overcome my "ADHD downward-spiral paralysis time" 🤣
> but first I'll have to find the time and overcome my "ADHD downward-spiral paralysis time" 🤣
Hahaha, then I just have to get you into panic mode. How about: "NOBODY EVER WILL DO THIS FOR VR PORN AND IT IS ONLY YOU WHO CAN ACHIEVE THIS!" 😠 Ahem, sorry. I'm a long-time VR user too, but it still is such a small userbase... I need to look into the vrswap code to see if it even cares about the distortion part. I added the quick-and-easy VR mode yesterday, but it isn't public yet.
You forgot to set the "9 PM, TODAY!!!" deadline 😆 Uhm yeah, I still have to even try its functionality, too... Damn, I need a second rig 😄 And thanks for adding the quick 'n' dirty version, will test as soon as it's available ;)
I've noticed almost no distortion in the old 360-degree fisheye movies (swapping works as it should) but a lot of it in the newer 180-degree (equirectangular) ones. Unfortunately, it seems newer movies are almost always in the latter format 😞
Any movement on this? I recently got a Quest 3 and I'm super keen to be able to face-swap VR180 videos. If no one gets to it, it'll go on my backlog of things to try to implement, I'm sure.
> Any movement on this? I recently got a Quest 3 and I'm super keen to be able to face-swap VR180 videos. If no one gets to it, it'll go on my backlog of things to try to implement, I'm sure.
Simple stereo swapping is included in the latest update, and I've started extracting the relevant bits from the vrswap project mentioned above -> https://github.com/C0untFloyd/roop-unleashed/blob/main/roop/vr_util.py It doesn't seem to be that easy though; I currently lack the time to take a deep dive into it and restructure the workflow.
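For context, the core trick in vrswap-style pipelines is to reproject a region of the equirectangular frame into a normal rectilinear (perspective) view, where faces look undistorted enough for the detector, and later map the swapped result back. A minimal NumPy-only sketch of the forward reprojection, under the assumption of standard equirectangular mapping and nearest-neighbour sampling (the actual vr_util.py code will differ):

```python
import numpy as np

def equirect_to_perspective(equi: np.ndarray, fov_deg: float = 90.0,
                            yaw_deg: float = 0.0, pitch_deg: float = 0.0,
                            size: int = 512) -> np.ndarray:
    """Render an undistorted perspective view from an equirectangular image.

    Nearest-neighbour sampling for brevity; a real pipeline would
    interpolate and also needs the inverse map to paste the face back.
    """
    h, w = equi.shape[:2]
    # Focal length in pixels for the requested field of view.
    f = 0.5 * size / np.tan(np.radians(fov_deg) / 2.0)
    xs, ys = np.meshgrid(np.arange(size) - size / 2.0,
                         np.arange(size) - size / 2.0)
    # Unit view direction for every output pixel.
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate the view by yaw (around y) then pitch (around x).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    dirs = dirs @ (ry @ rx).T
    # Direction -> longitude/latitude -> source pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
    col = np.clip(((lon / (2 * np.pi)) + 0.5) * (w - 1), 0, w - 1)
    row = np.clip(((lat / np.pi) + 0.5) * (h - 1), 0, h - 1)
    return equi[row.round().astype(int), col.round().astype(int)]
```

Detection would then run on the flat view, per eye and per yaw/pitch region of interest, instead of on the warped full frame.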
This guy did a decent job, I think. It's not really roop here though; he used VAM (a 3D NSFW program) as the base video. Here is the original video https://www.youtube.com/watch?v=9e0V3gKdi0I&t and the ComfyUI-updated version https://www.youtube.com/watch?v=HCEeMbbWpfA
I watched it on the Quest 3 in the Skybox VR player (clearer than the YouTube VR app), and both the left and right eyes were very consistent. You could do this same process on a VR180 video for face swapping by Dreambooth-training a checkpoint first, then adding LoRAs that were also trained on the person at low strength (like 0.25) to make it even more accurate. Then run the video through ComfyUI using what he says below. This would also make the proportions of the face and body more accurate than face swapping alone, if you had body images in the training dataset. Seems like a lot of work though.
The creator said this:
> The trick is to first make something in Virt-a-Mate (or Unity or Blender) and then export all the frames in VR format. For Virt-a-Mate I use the plugin EOSIN Video Renderer for 3D VR180. Then process all frames to lineart or canny, then batch-process all frames with AnimateDiff (including an IPAdapter for consistency and a strong ControlNet on the lineart to preserve the left/right stereoscopic effect), then upscale all images. I use 2560×1280 resolution. A short clip can take multiple days to render. Lower resolutions can also look decent. I use a first pass and then a latent upscale with 40% denoise. I basically use the standard AnimateDiff Evolved ComfyUI workflow. I also use some nodes for automatic batch processing.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.
**Describe the bug**
By default, when viewing VR videos on a flat screen, the fisheye effect is quite noticeable: the center displays a clear and distinct image, while the outer areas appear distorted and stretched. Consequently, when a face enters the fisheye zone, the model fails to detect it.

**To Reproduce**
Steps to reproduce the behavior:

**Details**
What OS are you using?
Are you using a GPU?
Which version of roop unleashed are you using? Latest

**Screenshots**
*Random image from the internet, so the eyes in the example are censored.