ToucanTrack

ToucanTrack is an app which achieves Full Body Tracking for VRChat using 2 PS3 Eye Cameras and AI.

Note: ToucanTrack is still in beta! Expect instability and issues.

Demo: https://streamable.com/c7zstj

Installation

Clone the GitHub repository and install the required Python packages:

git clone https://github.com/noahcoolboy/toucan-track.git
cd toucan-track
pip install python-osc numpy opencv-contrib-python scipy onnxruntime pyjson5 pysimplegui matplotlib

Follow these instructions for downloading the PS3 Eye Camera drivers: https://github.com/opentrack/opentrack/wiki/PS3-Eye-open-driver-instructions
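If you want a quick sanity check that a camera is visible after installing the drivers, the short OpenCV sketch below can help. Treat it only as a rough check: depending on which PS3 Eye driver you installed, the camera may not show up as a regular capture device, in which case the calibration tool's own preview is the real test. The camera indices 0 and 1 are examples and may differ on your system.

# quick check that OpenCV can open the cameras (indices are examples)
import cv2

for index in (0, 1):
    cap = cv2.VideoCapture(index)
    ok, frame = cap.read()
    if ok:
        print(f"Camera {index}: OK, frame size {frame.shape[1]}x{frame.shape[0]}")
    else:
        print(f"Camera {index}: no frame received")
    cap.release()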

Calibration

For ToucanTrack to function properly, you will have to calibrate both cameras.
Print out a checkerboard pattern as big as possible, for example on A2 paper. (Eg. 9x12 Checkerboard (20mm))
Run calibtool.py and open the settings. If you're using a different checkerboard size, modify Checkerboard Columns and Checkerboard Rows. Measure the size of a single checkerboard square and set Checkerboard Box Size. Make sure all settings correctly match the pattern you printed out.

Note: Because of how OpenCV works, Checkerboard Columns and Checkerboard Rows do not count the number of squares, but the number of inner corners. This means the value should be one less than the number of squares in each direction. For the example pattern listed above, this would be 11 columns and 8 rows.
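If you want to double-check the corner convention against your printed pattern, a minimal OpenCV sketch follows; the photo filename and the (11, 8) pattern size are just the example values from above, not part of ToucanTrack.

# verify the printed checkerboard is detected with the inner-corner convention
import cv2

pattern_size = (11, 8)                 # (columns - 1, rows - 1) inner corners for a 12x9 board
img = cv2.imread("board_photo.jpg")    # any photo of the printed pattern (example filename)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, pattern_size)
print("Checkerboard found:", found)
if found:
    cv2.drawChessboardCorners(img, pattern_size, corners, found)
    cv2.imwrite("board_corners.jpg", img)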

Once you're done configuring the tool, click Save. Press Camera + to add a camera, then pick the camera type and the camera ID. If you have more than two cameras connected, you might have to guess the ID. Now select the camera in the list on the left and press Calibrate Intrinsics. Place the calibration pattern as close as possible to the camera (while keeping it fully visible); frames will be collected automatically. You can track your progress by looking at the text underneath the camera preview. A good calibration image should look like the following:

Once the frames have been collected, repeat this process with the other camera. After that, the distortion for each camera has been calculated, and the previews should show the undistorted image. You can now move on to the extrinsics calibration.
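Under the hood, this kind of intrinsic calibration boils down to OpenCV's calibrateCamera, which estimates a camera matrix and distortion coefficients from the collected checkerboard views. The sketch below is not ToucanTrack's actual code, just an illustration of the idea; the function name is made up, and the default pattern size and 20 mm square size are the example values from above.

# rough illustration of intrinsic calibration from collected checkerboard frames
import numpy as np
import cv2

def calibrate_intrinsics(frames, pattern_size=(11, 8), square_size=20.0):
    # 3D positions of the inner corners on the flat board, in millimetres
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

    obj_points, img_points, image_size = [], [], None
    for frame in frames:                       # grayscale captures of the board
        found, corners = cv2.findChessboardCorners(frame, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            image_size = frame.shape[::-1]     # (width, height)

    # camera_matrix and dist_coeffs are what make the undistorted preview possible
    ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    return camera_matrix, dist_coeffs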

Print out an aruco marker (Eg. ID 0 (18x18cm)) and place it in the middle of the room on a flat surface. Measure the size of the aruco marker in centimeters and modify the Aruco Size setting. Press Calibrate Extrinsics; a window will appear showing all the cameras. Cameras which can see the aruco marker will be tinted green. Make sure all cameras show green before pressing Calibrate. Also make sure the blue line on the first camera points in the direction you would normally face while playing VRChat. It does not matter where the aruco marker is, but it does matter how it is oriented. An example of this calibration:
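Conceptually, the extrinsics step detects the aruco marker in each camera image and solves for that camera's pose relative to the marker, so every camera ends up sharing the marker's coordinate frame; orientation matters because the marker defines the axes of that frame. A rough sketch of one camera's pose estimate follows; the dictionary (DICT_4X4_50), the 180 mm marker size and the function name are assumptions rather than ToucanTrack's actual code, and the ArucoDetector API requires OpenCV 4.7 or newer (older versions use cv2.aruco.detectMarkers).

# rough illustration of estimating one camera's pose from the aruco marker
import numpy as np
import cv2

def estimate_camera_pose(gray_frame, camera_matrix, dist_coeffs, marker_size=180.0):
    # 3D corners of the marker lying flat on the floor, centred at the origin (mm)
    half = marker_size / 2.0
    marker_corners_3d = np.array([[-half, half, 0], [half, half, 0],
                                  [half, -half, 0], [-half, -half, 0]], np.float32)

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.ArucoDetector(dictionary).detectMarkers(gray_frame)
    if ids is None:
        return None                            # marker not visible (camera not tinted green)

    # pose of the marker in this camera's frame; inverting it places the
    # camera in the marker's shared world frame
    ok, rvec, tvec = cv2.solvePnP(marker_corners_3d, corners[0][0],
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    return rvec, tvec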

You can now press Save and exit the calibration tool.

Usage

Now that both cameras are calibrated and set up, you can configure the main app. Open settings.json in any text editor. The first option you'll have to change is the IP. This should be the IP address of your Quest 2 on your WiFi network. Debug mode is on by default; once everything is working well, you can turn it off. Scrolling down, you'll find the filter settings, which trade smoothness against responsiveness: stronger filtering is smoother but laggier, weaker filtering is more responsive but jittery.
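For reference, ToucanTrack sends the tracking data to the headset over OSC (python-osc is one of the installed packages). If you want to sanity-check that the IP you entered is reachable, a tiny sketch like the one below sends a single tracker position using VRChat's OSC trackers address format; the IP address is an example, and port 9000 is VRChat's default OSC port.

# send one test tracker position to VRChat over OSC (IP is an example)
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 9000)                            # Quest IP from settings.json
client.send_message("/tracking/trackers/1/position", [0.0, 1.0, 0.0])     # x, y, z in metres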

When you're done fine-tuning the settings, run python main.py to start ToucanTrack. You can now hop into VRChat, where you should see a Calibrate FBT button in your menu. Stand up straight, look forward, and press both triggers. You're done!

Tips