cntools / libsurvive

Open Source Lighthouse Tracking System
MIT License

Determine locations of lighthouses. #3

Closed cnlohr closed 6 years ago

cnlohr commented 7 years ago

Using data from the HMD, determine location of lighthouses relative to it.

cnlohr commented 7 years ago

This is some initial data. Its columns are: L, HED, [Actual Time (1/48 millionths of a second)], sensor ID, pulse code, delta from frame start. initial_hmd_data.txt.zip
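For reference, the timestamps are in 48 MHz ticks (1/48 millionths of a second). A minimal conversion helper in C, as a sketch assuming that clock rate:

#include <stdint.h>

#define TICKS_PER_SECOND 48000000.0  /* 1 tick = 1/48 millionths of a second */

/* Convert a raw timestamp or delta from the capture into seconds. */
static double ticks_to_seconds(uint32_t ticks)
{
    return (double)ticks / TICKS_PER_SECOND;
}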

Ardustorm commented 7 years ago

Here is some code to visualize the points. viveStuff.zip

UPDATE: This one now shows vectors (kind of). viveStuffWithNormals.zip

It looks like the normals are swirled. (image attached)

cnlohr commented 7 years ago

Here's the data from my HMD data points. hmd_points.csv.zip

cnlohr commented 7 years ago

New tracking data - from one of the lighthouses, in mode 'A' pointed directly at the Vive. newtrackdata.csv.zip

cnlohr commented 7 years ago

REMOVED

cnlohr commented 7 years ago

Here's the actual locations of all of the HMD points on the headset, in 3D space.

hmd_points.csv.zip

Ardustorm commented 7 years ago

Here is the data with some rudimentary animation so it is easier to see where everything is. (animated GIF attached)

cnlohr commented 7 years ago

Yeah, those normals don't look right :(

ghost commented 7 years ago

Was the scad script edited and re-uploaded? I just opened it to see if I could figure out why it looked strange, and the swirling seems much less pronounced, and in the reverse direction.

This is a top view with orthogonal projection so it will look a little odd just due to the lack of perspective, but it gives a better sense of actual sensor orientation and they all look very nearly front-facing. My intuition is that they shouldn't be and they should be more aligned with the curve of the surface, but it does make more sense than a swirl pattern.

(screenshot attached: top view)

Ardustorm commented 7 years ago

Yeah, I've been trying to mess with it to show the normals. They don't look right because I just treated the vector as a rotation, which is why the swirls appeared. I'm working on fixing that now. ErrJordan: that might be because they are being rotated by less than one because of what I said above, so it isn't noticeable.

Ardustorm commented 7 years ago

Here is the corrected version, much better than last time. (image attached)

And the corrected code here:

   for (i = [0:31]) {
      color("red") translate(points[i]*100) {
         x = normals[i][0];
         y = normals[i][1];
         z = normals[i][2];
         rotate([0, 1*atan2(sqrt(x*x + y*y), z), 1*atan2(y, x)]) {
            cylinder(h=1, r1=.25, r2=.1);
         }
      }
   }

viveHeadsetVectors.zip

UPDATE: I tried to comment the code more and re-added the code for just displaying the points (without normal vectors). commentedVersion.zip

Ardustorm commented 7 years ago

What do the last two numbers on each line represent? I made a spreadsheet of the data and found the average value for X and Y for each sensor in the dataset. I find it interesting that sensor 10 does not have any Y data but does have X. (screenshot attached) Here's the Google Spreadsheet if anyone wants to look at all of the data.

Here is another animation with each sensor labelled with its number. The blue vectors represent the sensors with data from the newtrackdata.csv file. Point 10 is green because it had information only for the X axis and I wanted to see where it was located. (animated GIF attached) viveStuff.scad.zip -Luke Thompson

cnlohr commented 7 years ago

The 0-3 represent the IR data code; see https://github.com/nairol/LighthouseRedox/blob/master/docs/Light%20Emissions.md

The last column is the time, in 1/48 millionths of a second (aka ticks), between the last sync pulse and the time that particular laser swept the point.

From what I can tell, 200,000 is dead center. It looks to be about 0.0009 degrees per tick, as I think one revolution is 400,000 ticks. Someone should check my math on that, though.

EDIT It is entirely possible for a given point to only get X or Y sweeps if it is on the edge. One laser scanner may be visible while the other may be occluded.

(Directed to @Ardustorm )

Ardustorm commented 7 years ago

OK, thanks, so it looks like I don't need to worry about the IR data code since you already calculated X and Y from it. Here is the table I got from subtracting 200,000 from the number to center it and then scaling by .0009 to get degrees (360/400,000 = .0009, so it looks like you had an extra zero in there).

| Sensor | X (degrees from center) | Y (degrees from center) |
|-------:|------------------------:|------------------------:|
|      0 |               -8.416500 |              -37.606250 |
|      4 |               -8.602045 |              -35.786689 |
|      6 |               -4.584530 |              -36.499129 |
|      7 |               -6.250571 |              -36.077187 |
|      8 |               -5.348712 |              -39.878528 |
|      9 |               -6.385500 |              -38.191597 |
|     10 |               -3.933958 |                         |
|     15 |               -7.979943 |              -39.510636 |
|     16 |               -0.641270 |              -34.350727 |
|     17 |               -1.718814 |              -35.612807 |
|     23 |                1.573604 |              -34.513354 |
|     24 |                2.023397 |              -36.402052 |
|     29 |               -0.798653 |              -39.376234 |
|     30 |                0.434340 |              -36.078383 |
|     31 |                0.306598 |              -38.124265 |

cnlohr commented 7 years ago

That totally looks believable; now comes figuring out the harder problem... Now I'm curious if Alan Yates' Twitter comment was in reference to us: "Ugh this parametric surface interception is a multivariate transcendental equation, so much for a closed-form solution..."

I had given some thought to trying to linearize it and turn it into a matrix to solve, but I don't know how well that would work. I was planning on doing an iterative approach. If anyone can find a better one, that would be super swanky.

I think I'm going to record more data, with IMUs, too.

cnlohr commented 7 years ago

Ben came through with the email to HTC! They had our config! Here are the correct points. correct_hmd_points.csv.zip

cnlohr commented 7 years ago

Now, I have more info. This is with two lighthouses pointed at the HMD on the floor. I have also included IMU data for the HMD. two_lighthouses_test.csv.zip

Ardustorm commented 7 years ago

Cool, I'll have to add the correct points in and compare, to see how different they are from the ones we've been using.

Trying to solve the problem of finding the location of the lighthouses, I think I found something that might be useful. Since we know two points and the angle between them, I was going to try to graph all possibilities (starting in 2D). I thought I had gotten something wrong because I kept getting circles from the few points I put in; it turns out that is a property of circles (the inscribed-angle theorem). My thought now is to try to come up with equations for 'spheres' for a few sets of points and see where they intersect. (I'll try to work on that tomorrow if I have time.)
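For what it's worth, the circles described above are the inscribed-angle theorem at work: the set of observer positions that see two known points under a fixed angle is a circular arc, and revolving it about the line between the points gives the torus constructed further down. A small C sketch of that radius calculation (names are illustrative, not libsurvive code):

#include <math.h>

typedef struct { double x, y, z; } vec3;

static double dist3(vec3 a, vec3 b)
{
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return sqrt(dx * dx + dy * dy + dz * dz);
}

/* Radius of the circle of possible observer positions, given two sensor
 * positions and the angle (radians) between them as seen by the observer. */
static double observer_circle_radius(vec3 p1, vec3 p2, double angle_rad)
{
    return dist3(p1, p2) / (2.0 * sin(angle_rad));
}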

cnlohr commented 7 years ago

vk2zay says we'll "have" to use iterative methods. They're always my go-to and I was going to try to do one, myself... but now, I wanna try to find some other way just to be weird. :-D

mwturvey commented 7 years ago

Have you looked at the Perspective-n-Point problem? If you consider each lighthouse to be a camera, and the angular positions of each tracked point to be pixel locations of "points of interest" in an image generated by that "camera," you can start looking at the whole system as a more traditional computer vision problem. At least that was a big "ah-ha" moment for me. I'm probably repeating stuff you already know, but the problem you run into if you try to calculate the position information from a static set of data (i.e. a full X and Y sweep) is that a moving tracked object will move to a different location between the time you get the X and Y sweeps. So, unlike a traditional camera, you never get a simultaneous X and Y position. I suspect that the way it works is that an initial fix is acquired using one of the Perspective-n-Point solutions; then a Kalman filter is used to iteratively integrate sensor and IMU readings to maintain the position.
I remember seeing a video at one point where Yates stated that they could maintain tracking for an already locked-on tracked object with a single sensor and IMU readings, as long as there was sufficient movement of the object.
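A sketch of the mapping described above, treating a lighthouse as an ideal pinhole camera: the two sweep angles become normalized image-plane coordinates (focal length 1, principal point at the origin), which is the input form a Perspective-n-Point solver expects. This is illustrative only, not libsurvive code:

#include <math.h>

typedef struct { double x, y; } img_pt;

/* az_rad / el_rad: horizontal and vertical sweep angles measured from the
 * lighthouse's optical axis (0 = dead center). */
static img_pt sweep_angles_to_normalized(double az_rad, double el_rad)
{
    img_pt p = { tan(az_rad), tan(el_rad) };
    return p;
}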

cnlohr commented 7 years ago

The principle (err, my perspective of it) all along has been to use two different algorithms: one to solve for camera position, then, during runtime, use the camera positions to solve for positions and orientations of the objects. These are different steps. Right now, I'm only on the first step. I strongly feel we should only focus on camera positions now. I have my own ideas about the runtime locations too, but that will have to wait until phase 3.

Re: the PnP problem, that does look like the "right" way to solve it. I had never heard of it, but I did once see that Hugin uses RANSAC to get camera positions and it is AMAZING at it. The only thing is, in Hugin's case it doesn't consider depth with its features, so maybe this problem won't work out quite as AMAZINGly.

LeeCookGHA commented 7 years ago

Following on from Mike's PnP comment, you may want to look at: https://en.m.wikipedia.org/wiki/Epipolar_geometry In theory, with enough samples, you should be able to recover the base positions.
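For readers unfamiliar with the term: the epipolar constraint says that if E is the essential matrix relating the two lighthouses, and x1/x2 are the same sensor's normalized ray directions as seen from each lighthouse, then x2^T * E * x1 is (near) zero; solving for E from enough correspondences is one way to recover the relative base pose. A minimal residual check, as a sketch:

/* Returns x2^T * E * x1; near zero when x1 and x2 satisfy the epipolar
 * constraint for essential matrix E. */
static double epipolar_residual(const double E[3][3],
                                const double x1[3], const double x2[3])
{
    double Ex1[3];
    for (int i = 0; i < 3; i++)
        Ex1[i] = E[i][0] * x1[0] + E[i][1] * x1[1] + E[i][2] * x1[2];
    return x2[0] * Ex1[0] + x2[1] * Ex1[1] + x2[2] * Ex1[2];
}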

cnlohr commented 7 years ago

I don't know how you would invert that problem... I would be interested to see if you have any specific ideas. Re: the PnP route... someone has an EPnP solver in C++ -- but it is licensed under the FreeBSD license. Since it's not part of the core algorithm, I might consider giving it a whirl: https://github.com/artivis/epnp

LeeCookGHA commented 7 years ago

Sorry, the Vive also has a sensor in the base, which will give relative angles between the stations...

mwturvey commented 7 years ago

I believe that code is from the authors of the EPnP paper. It requires OpenCV. The problem I've run into, and haven't had time to get past yet, is figuring out the intrinsic and extrinsic parameters of the Lighthouse "camera." OpenCV has routines to calculate those for you, given a bunch of camera frames of one of two calibration patterns: one is a circle grid, and the other is a checkerboard.
The math got a little hairy for me, so I was hoping to use the OpenCV routines, at least to start with, and get a proof-of-concept. I think you could simulate a checkerboard pattern by constructing a custom sensor that is a grid of sensors on a flat board (similar to this, but in a planar grid). I think that calculating these parameters should be a one-person, one-time kind of thing. Then you should be able to use the EPnP algorithm to compute position. Also, someone sufficiently knowledgeable in computer vision should be able to calculate those parameters more or less "perfectly" without any custom hardware.

LeeCookGHA commented 7 years ago

The Vive is a "perfect camera" system, without distortions or scaling issues. You should be able to just implement it without the checkerboard calibration - I'm at work at the minute; I'll post a paper with a unity camera matrix later from home. Using the epipolar technique with the known angles between the stations, a single set of EPnP distance outputs should give you enough to "fix" the base positions relative to each other. You would then need to either use the IMU or "describe" the room like Vive does in order to fix the bases with respect to the ground plane.
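The "unity camera matrix" point amounts to this: with an ideal, distortion-free lighthouse, the intrinsic matrix is just the identity, so the tan-projected sweep angles can be handed to a PnP routine directly, with no checkerboard step. As a sketch:

/* Intrinsics of the idealized lighthouse "camera": fx = fy = 1, no skew,
 * principal point at the origin of the normalized image plane. */
static const double lighthouse_intrinsics[3][3] = {
    { 1.0, 0.0, 0.0 },
    { 0.0, 1.0, 0.0 },
    { 0.0, 0.0, 1.0 },
};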

cnlohr commented 7 years ago

@Galastorm: Any idea where that information could come from and/or how? I didn't see any data coming back from the lighthouses. EDIT Specifically relative angle data.

@mwturvey: A few people have been worried about parameterizing the "cameras" in the Vive system, but part of what makes it so magic is that they really are /perfect/. It would take a herculean amount of effort to get regular cameras calibrated anywhere near as well as the lighthouses are naturally. Also, you are correct, it does rely on OpenCV, which is not an acceptable dependency for libsurvive. I wonder how hard it would be to write that out and keep the FreeBSD license vs. brewing our own.

LeeCookGHA commented 7 years ago

Sorry, no. I've just seen the sensor in a picture of a base teardown. I assumed (I know, ass-u and all that) that the information would be available somewhere in the system. If you don't have direct access to the base angles, then they'd need to be worked out using simultaneous equations and sets of PnP angle readings.

mwturvey commented 7 years ago

I've not worked much with computer vision before, and couldn't find any info on what it would consider to be an ideal camera. It sounds like it does mean that the angular distance between any two adjacent pixels is the same: that's awesome.
I also would really like to have a small standalone library that can do this instead of a dependency on OpenCV. My goal is to get it to run on a microcontroller -- basically to get high-quality tracking in a small package. There seemed to be a lot of tentacles once you get into the OpenCV code, but I think most of those tentacles come from generic abstractions that OpenCV reasonably makes but that aren't needed in this algorithm. The approach I've been considering is starting with a small test app that could be re-run with each small change to see if anything broke, and surgically removing pieces of OpenCV to get to a minimal implementation.

cnlohr commented 7 years ago

@mwturvey Conveniently, the "where are my lighthouses" problem is something that would probably not be needed on the microcontroller -- only the localization code. That said, even for the PC, OpenCV is unacceptable for this project.

I was just gonna take a whack at implementing the OpenCV functions they use, myself tonight.

EDIT Aww man... just a quick look through. They're using SVD and matrix inversion features.

revirescam commented 7 years ago

removed - rubbish

Zmathue commented 7 years ago

So the tracking data is missing the length of the pulse. This is needed to recover a first-pass estimate of the z coordinate of each sensor relative to the lighthouse: because the pulse length is roughly proportional to the inverse of z, we can calculate it.

This z value is needed first to resolve the ambiguity of building a 3D point out of a 2D projection (how do you know which way you are viewing the object from? viewed from the front, it appears the same as viewed from the back), and from there it can be used to match against the config data.
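A rough sketch of that first-pass depth estimate. The constant k below is a placeholder that would have to be fit from data, not a known lighthouse parameter:

/* Pulse length falls off roughly as 1/z, so z ~ k / pulse_ticks. */
static double estimate_z_from_pulse(double pulse_ticks, double k)
{
    if (pulse_ticks <= 0.0)
        return -1.0;          /* no valid pulse */
    return k / pulse_ticks;   /* coarse depth in the lighthouse frame */
}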

cnlohr commented 7 years ago

Hmm... I had no idea that was used at all. You can tell which way you are oriented because of physical limitations, i.e. lighthouses above headset in Z... and because the curvature of the headset will give away any such failures. ALSO Don't forget I can get you the normals, so unexposed photodiodes will not react...

All that said, sounds like I need to expose pulse length!
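A sketch of the visibility test implied by the normals: a photodiode can only react if it roughly faces the lighthouse, i.e. the dot product of its normal with the direction toward the lighthouse is positive. The 0.05 grazing-angle cutoff below is an arbitrary illustrative value:

#include <math.h>

typedef struct { double x, y, z; } vec3;

static int sensor_can_see_lighthouse(vec3 sensor_pos, vec3 sensor_normal,
                                     vec3 lighthouse_pos)
{
    vec3 d = { lighthouse_pos.x - sensor_pos.x,
               lighthouse_pos.y - sensor_pos.y,
               lighthouse_pos.z - sensor_pos.z };
    double len = sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    double dot = (sensor_normal.x * d.x + sensor_normal.y * d.y +
                  sensor_normal.z * d.z) / len;
    return dot > 0.05;  /* sensor faces the lighthouse (beyond grazing) */
}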

cnlohr commented 7 years ago

I will take more data soon, but something looks wrong with one of the timings; I think I'm syncing to the wrong pulse.

cnlohr commented 7 years ago

Ok, here's a new one with lengths. Also I corrected some more correlation errors. third_test_with_time_lengths.csv.zip

It was taken with the HMD roughly in the center of the field, lying on its back, camera pointed toward camera 1. It was about 3.450 m from LH1 to the HMD, and 3.110 m from LH2 to the HMD.

'L' LH1 is codes 0-3, 'R' LH2 is codes 4-7.

Format is: [R|L] [X|Y] HED [timestamp] [light ID] [jk code] [time from sync, 200000 is center] [length of pulse in ticks]
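A sketch of a parser for that line format; the field names are guesses based on the description above, and it assumes whitespace-separated fields (adjust the format string if the CSV is comma-delimited):

#include <stdio.h>

typedef struct {
    char     lighthouse;   /* 'L' (codes 0-3) or 'R' (codes 4-7) */
    char     axis;         /* 'X' or 'Y' sweep */
    char     device[8];    /* "HED" */
    unsigned timestamp;    /* 48 MHz ticks */
    int      sensor_id;
    int      jk_code;
    int      sweep_time;   /* ticks from sync pulse; 200000 is center */
    int      pulse_len;    /* length of the sweep pulse in ticks */
} light_sample;

static int parse_light_line(const char *line, light_sample *s)
{
    return sscanf(line, " %c %c %7s %u %d %d %d %d",
                  &s->lighthouse, &s->axis, s->device, &s->timestamp,
                  &s->sensor_id, &s->jk_code, &s->sweep_time,
                  &s->pulse_len) == 8;
}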

cnlohr commented 7 years ago

One other note... My math just isn't working out if I assume 400,000 ticks is the time of a circle.

float angle = (hmd_point_angles[k] - 200000) / 200000 * 3.1415926535/2;  //XXX XXX WRONG??? OR SOMETHING??? WHY DIV2 MAKE GOOD?

I am just confused. Anyone else got a chance to look at the data?

mwturvey commented 7 years ago

From what I've seen, the laser rotors sweep at 60 rotations per second. Assuming a 48 MHz clock, that's 800,000 ticks for the time of a full circle.
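That makes the conversion (ticks - 200000) * 2π / 800000 radians, which is algebraically the same as the "div 2" formula above and explains why it happened to work. As a sketch:

#define TICKS_PER_REV   800000.0   /* 60 Hz rotor, 48 MHz clock */
#define TICKS_AT_CENTER 200000.0
#define TAU             6.283185307179586

static double sweep_ticks_to_radians(double ticks)
{
    return (ticks - TICKS_AT_CENTER) * TAU / TICKS_PER_REV;
}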

cnlohr commented 7 years ago

Splendid, looks like I can go to sleep! I still have a fair bit of work that needs to be done, like righting things so down is down, but this awful awful method might actually work! (P.S. Located in 'tools' under 'planetest')

Ardustorm commented 7 years ago

Here is what I have so far. With what I said above, I was able to construct a surface that represents all possible points where an observer can be, given two points and the angle (from the observer) between them. This results in a torus shape. I'm not too sure how useful this will be in practice, since the points are so close together it is practically a sphere, but who knows. I was originally thinking that you could find the intersection point between a few of the tori, but since they would be so close together, I'm not sure that is feasible. It still might be helpful for other uses though. torus.scad.zip

// Helper: Euclidean distance between two points (OpenSCAD has norm(), but
// no two-argument distance function).
function dist(a, b) = norm(a - b);

// Surface of all observer positions that see points p1 and p2 separated by
// 'angle' (degrees): a circle through p1 and p2 (inscribed-angle theorem)
// revolved about the p1-p2 axis, i.e. a (spindle) torus.
module torus(p1, p2, angle) {
   z = dist(p1, p2) / 2;          // half the chord length
   r = z / sin(angle);            // radius of the circle through p1 and p2
   R = r * cos(angle);            // distance from the chord to the circle center
   e = (p2 - p1) / dist(p1, p2);  // unit vector along p1->p2

   translate(p1)
      // align the local Z axis with the p1->p2 direction
      rotate([0, 1*atan2(sqrt(e[0]*e[0] + e[1]*e[1]), e[2]), 1*atan2(e[1], e[0])])
      translate([0, 0, z])        // move to the chord midpoint
   for (th = [-180:10:179]) { for (phi = [-180:10:179]) {
      // sample the torus surface; this z shadows the outer one on purpose
      x = (R + r*cos(th)) * cos(phi);
      y = (R + r*cos(th)) * sin(phi);
      z = r*sin(th);
      color("grey", .8) translate([x, y, z]) rotate([0, 0, phi])
         cube(5.2, center=true);
   }}
}
jpicht commented 7 years ago

Can you post the full & correct JSON-Config file? I think for a neat solution we will need the model points and the model normals.

cnlohr commented 7 years ago

Here's mine. I've marked, with XXX's, the sections that would need to be customized for someone else's setup. LHR-B4ABXXXX-Charles.zip

cnlohr commented 7 years ago

Here's the normals in CSV form for anyone who wants it... hmd_normals.csv.zip

LeeCookGHA commented 7 years ago

@cnlohr I think I follow what you're doing in planetest: you're working on a single LH's set of figures and checking x/y/z positions in a sphere around the cluster points to converge on the best sphere position (and thus angles) that fits the data? But I can't figure out if you're taking the relative 6DoF of the LH emitters into account.

For example, each LH will have a different pitch/roll figure, which would alter the X/Y angle ratios, and therefore the XYZ positions worked out from those angles would not be using the same coordinate frame. Imagine the worst case of a LH on its side...

(Please add a few more comments for those of us that need a little hand-holding! :-) )

cnlohr commented 7 years ago

I will, I will! So, the way I was doing it was going pitch, roll, yaw, one at a time, trying to find the best orientation. This simply will not work at the macro scale... at least not reliably. It's also way slower, because I have to try a ?thousand? orientations before I figure out where it should be pointed, and I have to do that once for every position I try the lighthouse at.

It does happen to get the right answer most of the time for me, but this is bad. I am going to try a more analytical solution: purposefully point the lighthouse at the HMD from every possible position. I will call this "planetest2".

cnlohr commented 7 years ago

Basically I was just trying every possible combination of position and pitch yaw roll I could think of. This is bad. (Though seems to usually work)

LeeCookGHA commented 7 years ago

Just to pitch in my two cents, this would be my starting point as to how I’d approach it…

Do the calibration dynamically rather than offline; the user will expect to have to spend some time doing it, and you can use more CPU-intensive approaches that don't have to appear in the standard mode.

Use the HMD or controller to determine and then eliminate differences in the roll aspects of the LH bases first (standing in sight of both bases, PRY the headset whilst trying to keep the HMD in the same physical location). Once you know the declinations are plumbed you should, I think, be able to start looking at the X and Y declinations as entities in the same plane.

Then move the controller up/down and forward/back, all the time in view of both LHs, such that there is always at least one sensor that gets readings from both LH bases. This should give you a whole load of points that you can run simultaneous equations on to work out the relative pitch and yaw of each LH. You should also be able to work out the relative position differentials by looking at the relative changes in angles (sin/cos curves), though I'm not sure how accurate this would be...

Finally, move the controller in PRY and forward/back/up/down. Take readings for pairs of sensors to look at the angular separations vs. known maximum physical separations – you should be able to get a very good indication of the actual distance (and, with defined bearings to single sensors, the XYZ positions) by scaling against the defined HMD separation. Unfortunately, perspective will be the downfall of accuracy here because you're never going to precisely match the bearings and the ratios.

A much better solution for the final step would be the dreaded PnP to get both at once and make it a simultaneous equation…
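A sketch of the angular-separation scaling described above: if two sensors with known physical separation s appear Δθ apart from a lighthouse, and the pair is roughly perpendicular to the line of sight, the range is about s / (2·tan(Δθ/2)); foreshortening biases the estimate long when that assumption fails, which is the perspective problem noted above.

#include <math.h>

/* separation_m: known physical distance between the two sensors.
 * delta_theta_rad: angular separation of the pair as seen from the LH. */
static double range_from_angular_separation(double separation_m,
                                            double delta_theta_rad)
{
    return separation_m / (2.0 * tan(delta_theta_rad / 2.0));
}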

LeeCookGHA commented 7 years ago

Interestingly, the VIVE system also has the ability to refine the relative sensor positions from the default drawing positions on an HMD and controller when in use. This removes the manufacturing tolerances from the devices and makes the overall system much more accurate in depth.

cnlohr commented 7 years ago

What you've described would be very cool but exceedingly difficult, and I would worry that the absoluteness of the lighthouse system would be compromised. I guess I don't fully understand what it would be solving, absolutely. It would be nice to know the lighthouse locations better, and I think you could resolve them better over time, but I think it would be a very, very long process of realizing the data consistently pulls one way or another.

octavio2895 commented 7 years ago

I think I've found something that might help; sorry in advance if it doesn't. I would try to program it, but I'm a total noob.

OpenCV has algorithms that can very quickly approximate the pose of a model given the 3D points of the model (the position of each sensor relative to some reference point, say the center of the HMD) and the 2D points from a still image. This returns the translation vector (x, y, z) of the camera from the center of the HMD and the rotation vector (roll, pitch, yaw) from the center of the HMD.

http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html

cnlohr commented 7 years ago

@octavio2895: There is a lot of detail there for dealing with cameras, etc., which thankfully we don't need; the lighthouses are all already pretty solid. The other problem is that it's all so tightly coupled to OpenCV itself, which I would like to avoid including.

everyone else: I now have a tool able to determine the location of the lighthouses pretty? accurately. I verify the position by pretending to shoot rays out from the lighthouse toward where I expect the photodiodes on the HMD to be and seeing if they line up. It currently works for two lighthouses at the same time. I really need to get issue #1 resolved. I am sure it can be done better and faster than the way I am doing it. See it in the tools/planetest2 folder.
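A sketch of the kind of reprojection check described above (not the actual planetest2 code): given an estimated lighthouse pose and a known sensor position, predict the two sweep angles and compare them with the measured ones.

#include <math.h>

typedef struct { double x, y, z; } vec3;

/* R: lighthouse rotation (world -> lighthouse frame), t: lighthouse position. */
static void predict_sweep_angles(const double R[3][3], vec3 t, vec3 sensor,
                                 double *az_rad, double *el_rad)
{
    vec3 d = { sensor.x - t.x, sensor.y - t.y, sensor.z - t.z };
    /* rotate the lighthouse-to-sensor vector into the lighthouse frame */
    double lx = R[0][0]*d.x + R[0][1]*d.y + R[0][2]*d.z;
    double ly = R[1][0]*d.x + R[1][1]*d.y + R[1][2]*d.z;
    double lz = R[2][0]*d.x + R[2][1]*d.y + R[2][2]*d.z;
    *az_rad = atan2(lx, lz);   /* horizontal sweep angle */
    *el_rad = atan2(ly, lz);   /* vertical sweep angle */
}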