HoboVR-Labs / issue_tracker

General issues to fix or features to be implemented

Add Eye Tracking Functionality and API #2

Open SimLeek opened 2 years ago

SimLeek commented 2 years ago

Is your feature request related to a problem? Please describe.

VRChat needs to see my emotions.

Describe the solution you'd like

This is a bit more than just tracking pupils. In order to get emotions passing into games, you need:

Then, later on:

Additional context

Already got pupil tracking working. Need to work on the hardware and some API now:

https://user-images.githubusercontent.com/5911683/143302942-348a09d9-953b-41f7-a626-808b2678ff3a.mp4

okawo80085 commented 2 years ago

I'll start experimenting on the driver side of things, hopefully OpenVR driver APIs will be enough

SimLeek commented 2 years ago

sweet

SimLeek commented 2 years ago

Pinned because it's an epic level feature and maybe one of the most important ones.

okawo80085 commented 2 years ago

Thought i'd drop an update here: using events for passing data to the application works; will try using it for some test apps soon.

okawo80085 commented 2 years ago

Well, using events to pass the full eye tracking state didn't work, so we switched to using events to signal whether eye tracking is active or not, and shared memory to actually pass the gaze state. The new driver device is almost done (still needs some cosmetic tweaks).
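To make the handoff concrete, here is a minimal Python sketch of the events-plus-shared-memory split described above: the event only signals that tracking is active, while the gaze state travels through a small shared-memory block. The field layout (two gaze floats plus an openness float) is an assumption for illustration, not the driver's actual format.

```python
# Hypothetical shared-memory gaze handoff. The struct layout below
# (gaze_x, gaze_y, eye_openness as three little-endian floats) is an
# assumed example, not the real driver's wire format.
import mmap
import struct

GAZE_FMT = "<3f"  # gaze_x, gaze_y, eye_openness
GAZE_SIZE = struct.calcsize(GAZE_FMT)

def write_gaze(shm: mmap.mmap, x: float, y: float, openness: float) -> None:
    """Driver side: overwrite the block with the latest gaze state."""
    shm.seek(0)
    shm.write(struct.pack(GAZE_FMT, x, y, openness))

def read_gaze(shm: mmap.mmap) -> tuple:
    """Application side: read the current gaze state."""
    shm.seek(0)
    return struct.unpack(GAZE_FMT, shm.read(GAZE_SIZE))

# An anonymous mapping stands in for a named shared-memory segment here.
shm = mmap.mmap(-1, GAZE_SIZE)
write_gaze(shm, 0.25, -0.5, 1.0)
print(read_gaze(shm))  # (0.25, -0.5, 1.0)
```

In the real driver the "tracking active" flag would arrive through the OpenVR event mechanism, and only then would the application start polling the block.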

Hardware-wise, showmewebcam running on an RPi Zero v1.3 with a NoIR camera module works great; now we just need a better mount for it.

okawo80085 commented 2 years ago

Idea for the AI model for this addon (@SimLeek plz tell me if this is even reasonable or not): during initial setup, make the user go through manual calibration where they need to follow a dot on the screen; we record their eye movement and use that to train a tiny model locally, just for them. (Also maybe automatically retrain the model at some time interval to stay accurate, again if that's even possible xd)

SimLeek commented 2 years ago

I definitely think that's reasonable, however idk about retraining a whole model. More like setting up a simple matrix transform to multiply the model output by to get the exact screen position, or maybe a single- or double-layer model, idk.

Not 100% sure what to put for the calibration, but I feel like it should be something simple, since retraining the whole neural net could take a few days on a standard pc, depending on the model we choose.

okawo80085 commented 2 years ago

Yeah, i mean very few layers for the whole model, so that even a low-spec VR PC could train it relatively fast.

okawo80085 commented 2 years ago

Also we could pre train it a bit

okawo80085 commented 2 years ago

@SimLeek Can you experiment with model designs while i make the calibration app?

The model only needs 2 outputs in the (-1, 1) range for gaze direction, and maybe one output in the (0, 1) range for how closed the eye is.
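That output spec maps naturally onto a tanh head for gaze and a sigmoid head for openness. A minimal numpy sketch, assuming an arbitrary 64-value input feature vector and one hidden layer (the real architecture is exactly what's still being decided here):

```python
# Tiny hypothetical model matching the spec above: two gaze outputs
# squashed to (-1, 1) with tanh, one eye-openness output in (0, 1)
# with a sigmoid. Sizes are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(0, 0.1, size=(64, 16))  # 64 input features -> 16 hidden
b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, size=(16, 3))   # 3 raw outputs
b2 = np.zeros(3)

def forward(features: np.ndarray) -> tuple:
    h = np.tanh(features @ W1 + b1)
    out = h @ W2 + b2
    gaze = np.tanh(out[:2])                   # each component in (-1, 1)
    openness = 1.0 / (1.0 + np.exp(-out[2]))  # in (0, 1)
    return gaze, openness

gaze, openness = forward(rng.normal(size=64))
print(gaze, openness)
```

The squashing activations guarantee the ranges by construction, so the driver side never has to clamp.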

SimLeek commented 2 years ago

Hmm... I could translate the facial landmarks into an eye closed/open state. Gaze direction should be doable with just the blob tracking, ideally, but that gets messy with other stuff in the view.

Will experiment.
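For the landmarks-to-openness part, one common approach (not necessarily what ends up being used here) is the eye aspect ratio: vertical landmark distances over the horizontal one. The six-point eye ordering below follows the usual dlib-style convention and is an assumption about the landmark model.

```python
# Eye aspect ratio (EAR) from six eye landmarks: high when the eye is
# open, near zero when closed. Landmark ordering is assumed dlib-style:
# outer corner, top-left, top-right, inner corner, bottom-right, bottom-left.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmark coordinates."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
shut_eye = np.array([[0, 0], [1, .1], [2, .1], [3, 0], [2, -.1], [1, -.1]], float)
print(eye_aspect_ratio(open_eye) > eye_aspect_ratio(shut_eye))  # True
```

Thresholding or normalizing the EAR per user would then give the (0, 1) openness value the driver expects.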

SimLeek commented 2 years ago

Btw, even with very few layers, some models can take a very long time to train. This is especially the case with transformers, which can be trained on massive GPUs and then possibly run on the Pi.

okawo80085 commented 2 years ago

To not be completely useless today, i'll at least post some photos of the setup i have right now :P

[two photos of the prototype setup]

ghost commented 2 years ago

this is quite old, assuming this project is dead atm?

SimLeek commented 2 years ago

Oh yea, this one's much closer to being done now. However, I think I'll need to edit the todo list, because now we need to figure out the right hardware.

I think the plan of adding it to any headset won't work well because of the power cord, but an addon for DIY or Valve headsets would still be really good.

ghost commented 2 years ago

Since there's some space to place any ol' microcontroller inside where the USB is, I would think of a PCB design that connects to the USB port and interfaces with the Raspberry Pi Zero, with a ribbon cable running to cameras below the headset; it just needs a 3D-printed guard, more or less, to protect the ribbon cable. As for the gasket, it would definitely need to be modded, or at least retrofitted in some way that clips onto the gasket instead of 3D printing the entire gasket. Or something like that.

What I'm more curious about is the optimal placement of the camera, assuming someone is playing with the highest FOV range on the Index (das me).

okawo80085 commented 2 years ago

this is quite old, assuming this project is dead atm?

I'd say it's more stalled than dead, we need someone else to continue hardware design, i can't do it anymore because of the war xD

would think of a pcb design that connects to the USB port that basically is able to interface the raspberry pi zero and have a ribbon cable that connects to cameras below the headset it just needs a 3d printed guard more or less to protect the ribbon cable.

Oh for sure, i had mine in that orientation because i didn't have longer ribbon cables for the camera :/

As for the gasket, it would definitively need to be modded or at least retrofit in some way that clips onto the gasket instead of 3d printing the entire gasket. Or something like that.

I wish that was an option, but the camera and LED cables need to go through the gasket, and making holes in the OG gasket is not something i wanted to do with my Index. Also, that camera module needs to sink into the gasket quite a lot (not sure if it's visible in the pictures), so the holes would need to be quite big.

What I’m more curious, is the optimal placement of the camera, assuming someone is playing with the highest fov range on the index (das me)

In the pics it was at the highest FOV range, pretty usable; the spot the camera rests in just has a lot of empty space even when the lenses are closest to your face. But without the foam the lenses were too close to the user's eyes, so i usually had to either dial em back a bit or get something to replace the foam xD

okawo80085 commented 2 years ago

Also, the driver side of this thing is almost done, so i'd say the only things missing are the hardware and the firmware for it, the latter of which i had some progress on... before the war started... now i'm not sure if even my hardware prototypes survived :/

ghost commented 2 years ago

oof, sorry about the war :/

okawo80085 commented 2 years ago

Eh don't worry about it, I'm safe from military actions, just stuck in the west of the country without most of my equipment :/

ghost commented 2 years ago

would it be fine if you shared some hardware specifics you had in mind for eye tracking? gonna attempt to revamp the look of the hardware based on the photos you posted, to make it more feasible to attach to the headset without the prototype attachments, as well as create an alternative version for people who want fans on the headset

okawo80085 commented 2 years ago

Oh yeah, for sure. My prototype used a single Raspberry Pi Camera Module 2 NoIR with an IR LED, all connected to a Raspberry Pi Zero (originally i planned for 2 camera modules, one for each eye, but i couldn't get the expansion module for that in time, so i decided to go with a one-eyed version to prototype the software first). The camera module was oriented to see the eye of the user when wearing the headset (more or less; it was a pain to get a good angle on the user's eye).

The RPi was running showmewebcam, so it appeared as a normal webcam to the user's PC when connected to the headset's USB port. That feed was then used in our prototype tracking software (i say prototype because it was just a Python script running a single, really undertrained neural network), which was supposed to send the data to our driver, and the driver would handle the rest. Except i couldn't finish that device yet; i need to test it more, make some mock apps to see if it behaves as intended, etc. (thankfully i can still do that even now, just slower, because i don't have my VR PC :L )

Here are some useful links:

Discord server me and other guys working on this usually hang out on (check out the #software and #face-tracking channels): https://discord.gg/rMsV5YBwQ9

Progress on the dev version of the driver that incorporates this device type (github CI is setup, so you can just download the latest built version and try it out if you want, the poser provided with it is very basic though): https://github.com/HoboVR-Labs/hobo_vr/pull/1

The hell is a poser, and other misc things about hobo_vr (this does not include the documentation for the new dev version of the driver though): https://www.hobovrlabs.org/docs/html/getting_started.html#what-is-a-poser

And lastly, good luck and have fun!