Rakly3 opened this issue 2 years ago
First of all, this is too much work. You're not the first person to ask for this or have this idea.
I've experimented with this in the past and came to the conclusion that it doesn't work. I don't know where people get the idea that they are able to see where a stroke begins or ends when they see the frames side by side. I refuse to believe they do.
We're talking about images which are at best 41 ms apart from one another.
If you look at two pictures that are 16 ms apart you'll think they are the same picture. The only way you can actually see movement in those two pictures is by flipping between them. Otherwise you're just staring at two pictures trying to spot the difference, and you'll just have a major headache after a while.
It might just be me who lacks the ability to spot these things in a sequence of pictures, but that's reason enough for me not to pursue this.
It's not about toggling between two images; you're seeing 10+ images. Instead of having to step frame by frame through an unknown number of images to reach the next stroke, seeing what's coming lets you jump to the next stroke point much faster.
Have you ever used video editing software? It's pretty obvious on a reel view where you need to be next.
Anticipating the next dot when scripting 'on the fly' is also more accurate.
BTW, anyone reading this: I'm looking to hire someone to make custom scripting software.
Again, 10 frames at 60 fps is only ~166 ms of video on screen (most strokes are longer). And those 10 frames are not going to be very visually distinct. You probably don't want more than 10 frames on screen (even on an ultrawide). I still don't think this is useful.
But if your goal isn't to be frame-perfect, then you could start omitting frames. The individual frames would be more distinct and you could fit a lot more "time" on screen. So for 60 FPS you could display every 6th frame, so the 10 frames on screen would span a whole second. This I can see being useful. In this case you would want to align the frames with the script timeline, at which point it really would start to look like video editing.
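For reference, the arithmetic behind such a decimated reel is simple; a minimal sketch in Python, with the function name and defaults purely illustrative:

```python
def reel_frame_indices(current_frame: int, fps: float, tiles: int = 10,
                       window_seconds: float = 1.0) -> list[int]:
    """Pick which frame indices to show in a decimated film-reel view.

    With fps=60, tiles=10 and window_seconds=1.0 this samples every
    6th frame, so the 10 tiles together span a full second of video.
    """
    stride = max(1, round(fps * window_seconds / tiles))
    return [current_frame + i * stride for i in range(tiles)]

# Example: reel starting at frame 1200 of a 60 fps video
print(reel_frame_indices(1200, 60.0))  # [1200, 1206, 1212, ..., 1254]
```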
> BTW, anyone reading this: I'm looking to hire someone to make custom scripting software.
Unless you want to hire me, this isn't the correct place to post something like this. I assure you these issues are not viewed by many people.
I don't think you understand what I'm trying to do :) I go through a whole scene frame by frame. If I can see the next x frames, I can see the upcoming frame where to put a dot, or where a jump cut is. Right now I have to press the next-frame key until I go past the frame I am looking for. I could cut the time I spend scripting almost in half if I could see the next couple of frames in advance, especially in the PMVs.
Or, depending on the action, I play the scene at 0.2x speed and add dots. Like I said, it would be pretty obvious to see the frame coming if I could see multiple frames at the same time, especially when the change in direction falls between two frames. Hell, sometimes I'm playing at 0.05x speed and I still have to pause it and step backwards to the correct frame.
This is my workflow: get the original video file, extract all frames into raw images and crop them to just the left eye, add a timecode to each frame, then go through the scene frame by frame and add dots.
It takes me a week to script a 10-minute VR PMV; halving that would really help a lot.
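The extraction step of that workflow can be scripted. A rough sketch using ffmpeg's crop and drawtext filters from Python; it assumes a side-by-side VR layout with the left eye in the left half, requires an ffmpeg build with libfreetype for drawtext, and writes PNGs rather than truly raw frames (paths are made up):

```python
import subprocess
from pathlib import Path

def extract_left_eye_frames(video: str, out_dir: str) -> None:
    """Dump every frame as a PNG of the left eye only,
    with the timestamp burned into the top-left corner."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    vf = (
        "crop=iw/2:ih:0:0,"                 # keep the left half (left eye)
        r"drawtext=text='%{pts\:hms}':"     # burn in an hh:mm:ss.mmm timecode
        "x=10:y=10:fontsize=48:fontcolor=white:box=1:boxcolor=black@0.5"
    )
    subprocess.run(
        ["ffmpeg", "-i", video, "-vf", vf, f"{out_dir}/%06d.png"],
        check=True,
    )

extract_left_eye_frames("scene.mp4", "frames")
```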
Also, I didn't want to make a new topic for this, but is there a way I can easily detect double-placed dots? I have them all the time and it takes a lot of work to find all of them. I recorded what I mean, it's safe for work. https://drive.google.com/file/d/1olwEgBLUqgvuV7gITkT2GuVN1oP4DL2C/view?usp=sharing
> So for 60 FPS you could display every 6th frame, so the 10 frames on screen would span a whole second.
That's not enough. In one second you often have three to five strokes, sometimes even 10. I don't get how you don't see the difference between two frames. How do you decide which of the 5 frames you keep going back and forth over to place the dot if you don't see any difference? Whereas if I could see all 5 frames at the same time, I could pick the correct one out immediately.
The whole "Film Reel view" is a technical challenge. Also a UI/UX nightmare but mainly a technical challenge. It's noted but I don't see this any time soon (or ever???).
The main problem is "getting frame n". Which sounds stupid, but I can't just "get" the frames I want and display them. It's a video player underneath, and "getting frame n" is not a feature. That means I have to implement "getting frame n" on top of a video player, which isn't pretty, and I haven't gotten it to work reliably.
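For context, one blunt way to "get frame n" outside of a player is ffmpeg's select filter; this is only a sketch (not how the tool works internally), and it's slow because ffmpeg decodes everything up to that frame:

```python
import subprocess

def grab_frame(video: str, n: int, out_png: str) -> None:
    """Extract the n-th decoded frame (0-based) by filtering on the
    frame counter. Accurate but slow for frames deep into the video."""
    subprocess.run(
        ["ffmpeg", "-i", video,
         "-vf", f"select=eq(n\\,{n})",  # keep only frame number n
         "-frames:v", "1", out_png],
        check=True,
    )

grab_frame("scene.mp4", 1234, "frame_001234.png")
```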
> I recorded what I mean, it's safe for work. https://drive.google.com/file/d/1olwEgBLUqgvuV7gITkT2GuVN1oP4DL2C/view?usp=sharing
How do you end up with two points in the same place? (I'm aware this is possible)
Why do you need to find them?
When saving to .funscript, these should be automatically filtered.
A separate issue would've been appropriate.
Edit:
So a quick workaround would be to save to .funscript and import that into a new project. However, this only works if the two points are in the "same" millisecond.
It happens sometimes that they are on the same millisecond, but usually they are not. It usually happens when I'm working on the PMVs on SLR (by Mutiny VR) (I work for SLR but the scripts are 'extra' :) )
I do a lot of revisions, which involves a lot of replacing, copy/pasting, undo/redo, and inverting. If the dots are not on the same millisecond, it messes with some of the toys, causing stutters.
It's not unlikely that there's a bug somewhere. Without knowing the exact steps it's hard for me to track down, but you saying that you do a lot of copy/pasting gives me a good idea where to look.
I'd rather prevent this from happening entirely than provide some kind of function which iterates over all actions and removes them if they happen to be too close to one another.
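For reference, that kind of cleanup pass is only a few lines. A sketch assuming the usual .funscript layout (a JSON object with an `actions` array of `{"at": milliseconds, "pos": percent}` entries); the tolerance value is an arbitrary choice:

```python
import json

def remove_near_duplicate_actions(path: str, out_path: str,
                                  tolerance_ms: int = 3) -> int:
    """Drop actions that sit within tolerance_ms of the previous one,
    keeping the first of each cluster. Returns how many were removed."""
    with open(path, encoding="utf-8") as f:
        script = json.load(f)

    actions = sorted(script["actions"], key=lambda a: a["at"])
    kept = []
    for action in actions:
        if kept and action["at"] - kept[-1]["at"] <= tolerance_ms:
            continue  # too close to the previous dot: treat as a duplicate
        kept.append(action)

    script["actions"] = kept
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(script, f)
    return len(actions) - len(kept)

print(remove_near_duplicate_actions("scene.funscript", "scene.cleaned.funscript"))
```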
Hi again. I recorded a video that will explain the film reel thing and why it does matter to view the difference between two frames. It's NSFW, but censored!
https://drive.google.com/file/d/1EiGUod7iOdTcW2RuCqG16YdqpqkOK2Yt/view?usp=sharing
I'm even willing to pay for this feature, but in that case I don't want to make it publicly available.
I wasn't lying when I said that this is a technical challenge. If it were easy I would just do it, but it's not.
The most feasible implementation of this I can imagine would be to preprocess the entire video.
I doubt that would be satisfying. There's probably an ffmpeg command sequence which could achieve this.
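For what it's worth, ffmpeg's tile filter gets close to the preprocessing idea: it packs consecutive frames into one wide frame, so the resulting video shows a strip that advances in pages of N frames rather than sliding smoothly. A rough sketch; the scale and tile count are arbitrary:

```python
import subprocess

def build_filmstrip(video: str, out_video: str, tiles: int = 10) -> None:
    """Re-encode a video so each output frame is a horizontal strip of
    `tiles` consecutive input frames, downscaled to keep the strip narrow.
    The effective frame rate drops by a factor of `tiles`."""
    vf = f"scale=320:-2,tile={tiles}x1"
    subprocess.run(
        ["ffmpeg", "-i", video, "-vf", vf, "-an", out_video],
        check=True,
    )

build_filmstrip("scene.mp4", "scene_filmstrip.mp4")
```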
I understand that it's a challenge. :) I just felt the need to explain it some more, as you mentioned several times that it has no use, while, at least for me, it really would be useful.
The solution of stitching frames doesn't really seem feasible, unless someone knows how to do it easily. Maybe DAINapp can do it, but I don't remember it having some sort of stitching function. Thanks for the suggestion though!
I've posted something over here https://discuss.eroscripts.com/t/film-reel-preprocessing-video-for-scripting/71517 which does generate a video as described above. However, it's not very user-friendly.
This would speed up scripting soooo much, by being able to see on which frame you want to start or stop a stroke instead of stepping back and forth one frame at a time.
Added a 'drawing' as an illustration.
You could even expand on it, for example: clicking on a frame = set a dot at xx%.