DanijelHuis / HDAugmentedReality

Augmented Reality component for iOS, written in Swift.
MIT License

Azimuth calculation for objects in space for ex: flights #18

Closed sanjeevghimire closed 7 years ago

sanjeevghimire commented 7 years ago

This is a great project. I am working on rendering flights in an AR view; I have latitude, longitude, and altitude for each flight.

From what I can tell, this code doesn't take altitude into account. Is there a way it can be modified to also display objects such as flights in the sky in AR mode?

DanijelHuis commented 7 years ago

As written in the description, this library is intended for a different purpose.

However, I am working on a new branch that separates the code for positioning annotation views, so it will be easy to create a custom class and provide a few methods to specify custom positioning logic. I currently don't have the time to finish it, but it should be out in a couple of months.

sanjeevghimire commented 7 years ago

Is there a way you can give some hints for handling objects that involve altitude?

jacogasp commented 7 years ago

Hello, I've tried to modify this awesome library to also handle altitude. I'm still testing it, but it seems to be working. The method I used is the following (it might not be very elegant):

In code: find how tall an object appears when displayed on the screen (in pixels):


import UIKit

// Physical sensor sizes and focal lengths in millimetres.
struct DeviceSensor {
    let iPhone5 = Sensor(height: 3.42, width: 4.54, focalLength: 4.10)
    let iPhone5S = Sensor(height: 20, width: 10, focalLength: 10) // Not real
    let iPhone6 = Sensor(height: 20, width: 10, focalLength: 10) // Not real
    let iPhone6S = Sensor(height: 20, width: 10, focalLength: 10) // Not real
    let iPhoneSE = Sensor(height: 3.6, width: 4.8, focalLength: 4.15)

    struct Sensor {
        var height = 0.0
        var width = 0.0
        var focalLength = 0.0
    }
}
let sensor = DeviceSensor().iPhoneSE

// Projects an object of the given height at the given distance onto the screen
// and returns its apparent height in points (simple pinhole camera model).
func calcObjectVerticalDimensionOnScreen(height: Double, distance: Double) -> CGFloat {
    // Conversion factor from millimetres on the sensor to points on the screen.
    let sensorToScreenRatio = sqrt(Double(UIScreen.main.bounds.width * UIScreen.main.bounds.height) / (sensor.width * sensor.height))
    let relativeHeightInMetres = height * 1000
    let distanceInMetres = distance * 1000

    // Size of the image on the sensor: focal length * object height / distance.
    let dimensionOnScreen = sensor.focalLength * relativeHeightInMetres / distanceInMetres

    return CGFloat(dimensionOnScreen * sensorToScreenRatio)
}
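
As a quick sanity check, a hypothetical call could look like this (the numbers are made up purely for illustration; height and distance only need to be in the same unit, since the two × 1000 factors cancel out):

// Hypothetical example: an object roughly 9,700 m above the observer,
// about 15,000 m away.
let apparentHeight = calcObjectVerticalDimensionOnScreen(height: 9700, distance: 15000)
print("Apparent height on screen: \(apparentHeight) points")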

Now we calculate the y position of the object's top edge:

fileprivate func yPosTopEdge(_ annotationView: ARAnnotationView) -> CGFloat {
    guard let annotation = annotationView.annotation else { return 0 }

    var relativeHeight = 0.0
    let altitude = lastLocation?.altitude
    let distance = annotation.distanceFromUser
    let offset: CGFloat = 110 // Offset (in points) from the top of the object
    let elevation = annotation.elevation
    if elevation != 0 {
        // Height of the object above the observer
        relativeHeight = Double(elevation) - altitude!
    }
    var yPosTopEdge = calcObjectVerticalDimensionOnScreen(height: relativeHeight, distance: distance)
    // Translate the top edge to the correct altitude relative to the overlayView horizon
    yPosTopEdge = self.view.bounds.height / 2 - yPosTopEdge - offset
    return yPosTopEdge
}
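
Here elevation is the object's altitude above sea level, in the same units as lastLocation?.altitude. If your annotations don't already carry such a property, a minimal sketch of a subclass that adds it could look like this (the class name is just a placeholder, not part of HDAugmentedReality):

// Hypothetical annotation subclass carrying the object's altitude
// above sea level (metres), matching lastLocation?.altitude.
class FlightAnnotation: ARAnnotation {
    var elevation: Double = 0
}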

Finally, in the positionAnnotationViews() function, set

fileprivate func positionAnnotationViews() {
   ...
   let y = self.yPosTopEdge(annotationView) - annotationView.frame.height
   ...
}

WARNING! The library, as provided, manages the vertical translation just by multiplying the pitch angle by the VERTICAL_SENS constant. This is fine when you do not consider altitude, but it is very imprecise in the latter case, so we have to calculate a proper vertical pixels-per-degree density constant. Fortunately, iOS provides a property called videoFieldOfView that retrieves the horizontal field of view of the phone camera, but we are looking for the vertical FOV.

In the createCaptureSession() function, once you have the backVideoDevice property, you can do this:

if let retrievedDevice = backVideoDevice {
    // WARNING: This is valid only in portrait mode. In landscape mode VFOV and HFOV must be flipped.
    let VFOV = Double(retrievedDevice.activeFormat.videoFieldOfView)
    let HFOV = radiansToDegrees(2 * atan(tan(degreesToRadians(VFOV / 2)) * Double(UIScreen.main.bounds.width / UIScreen.main.bounds.height)))

    H_PIXELS_PER_DEGREE = UIScreen.main.bounds.width / CGFloat(HFOV)
    V_PIXELS_PER_DEGREE = UIScreen.main.bounds.height / CGFloat(VFOV)

    OVERLAY_VIEW_WIDTH = 360.0 * H_PIXELS_PER_DEGREE

    print("HFOV: \(HFOV), VFOV: \(VFOV), HPixelsPerDeg: \(H_PIXELS_PER_DEGREE), VPixelsPerDeg: \(V_PIXELS_PER_DEGREE), Overlay view width: \(OVERLAY_VIEW_WIDTH)")
}

Finally, in the fileprivate func overlayFrame(), change

let y: CGFloat = (CGFloat(self.trackingManager.pitch) * VERTICAL_SENS) + 60.0

with

let y: CGFloat = (CGFloat(self.trackingManager.pitch) * V_PIXELS_PER_DEGREE)

Keep in mind four important things:

I'm sorry if I've made some mistakes: I translated some function/constant names from Italian to English and I might have missed some. I hope this can be useful for you!

DanijelHuis commented 7 years ago

Hi JacoGasp and thanks for this, looks really great!

Could you please explain the part about the FOV calculation?

As I see it, videoFieldOfView is the horizontal FOV of the camera. Horizontal here is relative to the camera, and the camera in the device is oriented in landscape (at least I think). So when you rotate the device to landscape, videoFieldOfView is the actual horizontal FOV.

Or to put it in different words, videoFieldOfView is the FOV of the wider side of the picture.

Running your code on an iPhone 6 in landscape gives me this: HFOV: 89.2353639956682, VFOV: 58.0400009155273, but it should be more like this: HFOV: 58.0400009155273, VFOV: 34.6453616550478.

I might be wrong, I am not an expert in this. Thanks!

https://developer.apple.com/library/content/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/Cameras/Cameras.html

jacogasp commented 7 years ago

Hey DanijelHuis, you're perfectly right! In the comment about the FOV I totally inverted what I wanted to say; I've corrected it now.

That formula calculates the FOV in portrait mode, not in landscape! The correct horizontal FOV should be the one retrieved by videoFieldOfView, that is, the FOV of the wider side of the picture captured by the camera.

Now, I started developing my project this summer and at the moment I can't remember everything very well. However, you can use this tool to check whether the FOV is correct.

The formula for HFOV in my example is simply the inverse of the one written on that page.
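
To double-check the conversion outside the library, here is a small standalone sketch using the iPhone 6 value quoted above (the function name is local to this snippet, not part of HDAugmentedReality, and the 16:9 aspect is an assumption about the video format):

import Foundation

// fovWide is the field of view along the longer side of the picture
// (what videoFieldOfView reports); aspect is shortSide / longSide.
func narrowAxisFOV(fovWide: Double, aspect: Double) -> Double {
    let halfAngle = atan(tan(fovWide * .pi / 360.0) * aspect)
    return halfAngle * 360.0 / .pi
}

// iPhone 6: videoFieldOfView ≈ 58.04°, assuming a 16:9 video frame.
print(narrowAxisFOV(fovWide: 58.0400009155273, aspect: 9.0 / 16.0))
// ≈ 34.6°, close to the expected VFOV mentioned earlier in this thread.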

Let me know if it works because I'm still not totally sure about my procedure!

sanjeevghimire commented 7 years ago

Can this solution be merged into the code?

DanijelHuis commented 7 years ago

Thx JacoGasp, that clears it up!

sanjeevghimire: I can't merge this into the current version of the library since it would not mix well with the current positioning by distance (vertical levels). It wouldn't make much sense for the user if the y position were based on both altitude and distance/vertical level.

As I said earlier, I am working on version 2.0 of the library, which will have the option to choose between types of positioning (altitude, distance, or custom). I don't know when it will be ready for release, but I will push what I have in a couple of weeks.

harshchitrla commented 7 years ago

Hi DanijelHuis, is it possible to explain the concept of how this works?

DanijelHuis commented 7 years ago

Vertical/horizontal FOV calculations are implemented in v2.0.0, thanks to JacoGasp.

Regarding altitude, I've developed a custom ARPresenter that handles altitude, but I didn't release it because there are a number of problems. For example, rotating the device more than 45° (top edge toward you) makes the compass value change by 180°; this can be observed in the native Compass app. Other than that, there are some problems with precision. I plan to support altitude, but it is currently not my priority.