What do you mean by "preview camera image"?
If you want the raw camera pixels on the CPU, there's an API for that and a sample scene.
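For context, a minimal sketch of that CPU-image API (assuming ARFoundation 4.x and an `ARCameraManager` assigned in the Inspector):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class CpuImageSample : MonoBehaviour
{
    // Assumed to be assigned in the Inspector.
    public ARCameraManager cameraManager;

    void Update()
    {
        // Acquire the latest camera image on the CPU. The XRCpuImage wraps a
        // native resource and must be disposed when you are done with it.
        if (!cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
            return;

        using (image)
        {
            Debug.Log($"CPU image: {image.width}x{image.height}, format {image.format}, t={image.timestamp}");
        }
    }
}
```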
Thanks for your reply. We have an algorithm and application that uses the frame data for localization: we fuse the pose localized from that frame with the tracker pose from ARCore, and call the result the fusion pose. We then drive an AR content camera with the fusion pose, but the rendered model is not fixed to the real scene; it drifts relative to it when I move the phone. The ARContentCamera (which renders the objects) and the AR camera (which renders the background via ARFoundation) have the same projection matrix, so we suspect that the frame data acquired by TryAcquireLatestCpuImage is not synchronized with the camera pose.
So we want the preview camera image in real time. Or did I do something wrong?
Sorry, I don't really understand what you are asking. However, since you mentioned "not synchronized with the camera pose", you might be accessing the camera's transform too early. The camera's transform is not updated until just before rendering, so if you access it in, e.g., an Update method, it will be the previous frame's transform. You can subscribe to the Application.onBeforeRender event and perform your logic there; the camera's transform should be up to date in this callback.
Following your suggestion, I have moved my logic into the Application.onBeforeRender callback, like this:

```csharp
void OnBeforeRender() => UpdateCameraImage();

public void OnEnable()
{
#if !UNITY_EDITOR
    Application.onBeforeRender += OnBeforeRender;
#endif
}

public void OnDisable()
{
#if !UNITY_EDITOR
    Application.onBeforeRender -= OnBeforeRender;
#endif
}
```
In the UpdateCameraImage() method, I acquire a frame on the CPU with TryAcquireLatestCpuImage() and get the camera's transform from ARCore, like this:

```csharp
Matrix4x4 currentPose = Matrix4x4.TRS(Camera.main.transform.position, Camera.main.transform.rotation, Vector3.one);
```

but the result still doesn't look right.
That looks more like the projection matrix is off to me. Are you using the ARCameraBackground component? It sets the projection matrix, but you can also set it yourself; it is one of the properties supplied by the frameReceived event.
I don't use ARCameraBackground; I set the projection matrix myself. But I don't think the projection matrix is off: if I move slowly, the result looks right.
By the way, I have also tried ARCameraBackground with the projection matrix set to match the ARCore camera's projection matrix; unfortunately, the result still doesn't look right.
Hi Tim, I've pasted my full code below and look forward to your guidance:

```csharp
using System;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class ARFoundationController1 : MonoBehaviour
{
    public Camera ARContentCamera;
    public Text DebugTextInfo;

    private bool hasBeenSet = false;
    private bool getIntrinsics = false;
    private IntPtr yPlane = IntPtr.Zero;

    // An orthographic camera that renders the background plane.
    public Camera backgroundCamera;
    public GameObject backgroundPlane;

    private ARCameraManager cameraManager;
    private Texture2D texture;

    // Build a projection matrix from the camera intrinsics.
    private Matrix4x4 GetProjectMatrix(float fx, float fy, float cx, float cy, float far, float near)
    {
        Debug.Log("Intrinsics = " + fx + ":" + fy + ":" + cx + ":" + cy);
        Matrix4x4 projectionMatrix;
        projectionMatrix.m00 = fx / cx;
        projectionMatrix.m01 = 0.0f;
        projectionMatrix.m02 = 0.0f;
        projectionMatrix.m03 = 0.0f;
        projectionMatrix.m10 = 0.0f;
        projectionMatrix.m11 = fy / cy;
        projectionMatrix.m12 = 0.0f;
        projectionMatrix.m13 = 0.0f;
        projectionMatrix.m20 = 0.0f;
        projectionMatrix.m21 = 0.0f;
        projectionMatrix.m22 = (far + near) / (near - far);
        projectionMatrix.m23 = -1.0f;
        projectionMatrix.m30 = 0.0f;
        projectionMatrix.m31 = 0.0f;
        projectionMatrix.m32 = 2.0f * far * near / (near - far);
        projectionMatrix.m33 = 0.0f;
        return projectionMatrix.transpose;
    }

    public bool isTracking { get; protected set; }

    private void ARSessionStateChanged(ARSessionStateChangedEventArgs args)
    {
        CheckTrackingState(args.state);
    }

    private void CheckTrackingState(ARSessionState newState)
    {
        isTracking = newState == ARSessionState.SessionTracking;
    }

    public void OnEnable()
    {
        CheckTrackingState(ARSession.state);
        ARSession.stateChanged += ARSessionStateChanged;
        Application.onBeforeRender += UpdateCameraImage;
    }

    public void OnDisable()
    {
        ARSession.stateChanged -= ARSessionStateChanged;
        Application.onBeforeRender -= UpdateCameraImage;
    }

    void Awake()
    {
        cameraManager = UnityEngine.Object.FindObjectOfType<ARCameraManager>();
    }

    // Start is called before the first frame update
    void Start()
    {
        Screen.sleepTimeout = SleepTimeout.NeverSleep;
        // The frame size is 1440 x 1080 on our Huawei phone.
        backgroundCamera.aspect = 1080.0f / 1440.0f;
    }

    // Update is called once per frame
    void Update()
    {
        // UpdateCameraImage();
        if (Input.GetKeyUp(KeyCode.Escape))
        {
            Application.Quit();
        }
    }

    unsafe void UpdateCameraImage()
    {
        if (!isTracking)
        {
            Debug.LogError("ARSession is not ready");
            return;
        }

        if (cameraManager == null || cameraManager.subsystem == null || !cameraManager.subsystem.running)
        {
            Debug.Log("cameraManager is not ready!");
            return;
        }
        var cameraSubsystem = cameraManager.subsystem;

        if (!hasBeenSet)
        {
            using (var configurations = cameraManager.GetConfigurations(Allocator.Temp))
            {
                for (int i = 0; i < configurations.Length; i++)
                {
                    Debug.Log("configuration " + i + ":" + configurations[i].width + ":" + configurations[i].height);
                }
                // Make 720p the active configuration.
                cameraManager.currentConfiguration = configurations[2];
            }
            hasBeenSet = true;
            return;
        }

        if (!getIntrinsics)
        {
            cameraManager.TryGetIntrinsics(out XRCameraIntrinsics intrinsics);
            Debug.Log("try get intrinsics = " + intrinsics.focalLength + ":" + intrinsics.principalPoint);
            ARContentCamera.projectionMatrix = GetProjectMatrix(
                intrinsics.focalLength.x, intrinsics.focalLength.y,
                intrinsics.principalPoint.y, intrinsics.principalPoint.x,
                1000f, 0.3f);
            getIntrinsics = true;
        }

        // Attempt to get the latest camera image. If this method succeeds,
        // it acquires a native resource that must be disposed (see below).
        if (!cameraSubsystem.TryAcquireLatestCpuImage(out XRCpuImage image))
        {
            Debug.Log("cameraSubsystem.TryAcquireLatestCpuImage failed!!!");
            return;
        }
        if (!image.valid)
        {
            return;
        }

        var conversionParams = new XRCpuImage.ConversionParams
        {
            // Get the entire image.
            inputRect = new RectInt(0, 0, image.width, image.height),
            // Keep the full resolution (no downsampling).
            outputDimensions = new Vector2Int(image.width, image.height),
            // Choose RGBA format.
            outputFormat = TextureFormat.RGBA32,
            // Flip across the vertical axis (mirror image).
            transformation = XRCpuImage.Transformation.MirrorY
        };

        // See how many bytes you need to store the final image.
        int size = image.GetConvertedDataSize(conversionParams);
        // Allocate a buffer to store the image.
        var buffer = new NativeArray<byte>(size, Allocator.Temp);
        // Extract the image data.
        image.Convert(conversionParams, new IntPtr(buffer.GetUnsafePtr()), buffer.Length);

        if (texture == null)
        {
            texture = new Texture2D(conversionParams.outputDimensions.x,
                                    conversionParams.outputDimensions.y,
                                    conversionParams.outputFormat,
                                    false);
        }

        // "Apply" the new pixel data to the Texture2D.
        texture.LoadRawTextureData(buffer);
        texture.Apply();
        Debug.Log("texture size == " + texture.width + ":" + texture.height);
        backgroundPlane.GetComponent<Renderer>().material.mainTexture = texture;

        // Display some information about the camera image.
        DebugTextInfo.text = string.Format(
            "Image info:\n\twidth: {0}\n\theight: {1}\n\tplaneCount: {2}\n\ttimestamp: {3}\n\tformat: {4}",
            image.width, image.height, image.planeCount, image.timestamp, image.format);

        // ARHelper and DoSomeThingAbout are our own helpers (not shown).
        ARHelper.GetPlaneDataFast(ref yPlane, image);

        var camera = cameraManager.GetComponent<Camera>();
        // Get the current pose.
        Matrix4x4 currentPose = Matrix4x4.TRS(camera.transform.position, camera.transform.rotation, Vector3.one);

        // Process the pose and the frame data (Y plane only).
        DoSomeThingAbout(yPlane, currentPose);

        image.Dispose();
        buffer.Dispose();
    }
}
```
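For reference, a common pinhole-to-projection mapping looks like the sketch below (assumptions: intrinsics in pixels, an image of size width x height, Unity's row/column matrix field naming). Note that `2 * fx / width` reduces to `fx / cx` only when the principal point sits exactly at the image center, so the `fx / cx` form above is an approximation:

```csharp
// Sketch: build a perspective projection from pinhole intrinsics.
static Matrix4x4 ProjectionFromIntrinsics(
    float fx, float fy, float cx, float cy,
    float width, float height, float near, float far)
{
    var p = Matrix4x4.zero;
    p.m00 = 2f * fx / width;
    p.m11 = 2f * fy / height;
    // Off-center terms; the signs depend on the image-origin convention
    // (top-left vs. bottom-left), so they may need flipping.
    p.m02 = 1f - 2f * cx / width;
    p.m12 = 2f * cy / height - 1f;
    p.m22 = (far + near) / (near - far);
    p.m23 = 2f * far * near / (near - far);
    p.m32 = -1f;
    return p;
}
```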
You are computing your own projection matrix; I suggest using the one supplied by the frameReceived event. Note that ARFoundation does not compute this matrix; we get it directly from the underlying AR framework, e.g., ARCore or ARKit.
I have used the projection matrix supplied by the frameReceived event, but it's not getting better:

```csharp
cameraManager.frameReceived += OnCameraFrameReceived;

void OnCameraFrameReceived(ARCameraFrameEventArgs eventArgs)
{
    Matrix4x4? projectionMatrix = eventArgs.projectionMatrix;
    if (projectionMatrix.HasValue)
    {
        ARContentCamera.projectionMatrix = projectionMatrix.Value;
        Debug.Log("ARContentCamera projectionMatrix = " + ARContentCamera.projectionMatrix);
    }
}
```
Do you have a plan to provide an interface that acquires synchronized frame data and the camera's transform? It is important for computer vision applications.
> Do you have a plan to provide an interface that acquires synchronized frame data and the camera's transform?
They are synchronized. I think your problem lies elsewhere.
You mentioned previously that this happens even when using the ARCameraBackground component. Let's back up a bit: does the problem exist when you run a simple scene without any custom code, for example something like the SimpleAR scene in this repo?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@arvrschool I encountered the same problem: the rendered model is not fixed to the real scene. How did you solve it? Thanks.
How can I get the preview camera image in an efficient way?
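For later readers, one efficient pattern is the asynchronous conversion API, `XRCpuImage.ConvertAsync`, which avoids blocking on the pixel conversion. A sketch, assuming ARFoundation 4.x (in practice you would throttle how often you kick off conversions):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class AsyncCpuImage : MonoBehaviour
{
    // Assumed to be assigned in the Inspector.
    public ARCameraManager cameraManager;
    Texture2D texture;

    void Update()
    {
        if (!cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
            return;

        var conversionParams = new XRCpuImage.ConversionParams(image, TextureFormat.RGBA32);

        // ConvertAsync performs the conversion without blocking here and
        // invokes the callback when the data is ready.
        image.ConvertAsync(conversionParams, (status, parameters, data) =>
        {
            if (status != XRCpuImage.AsyncConversionStatus.Ready)
            {
                Debug.LogError($"Conversion failed: {status}");
                return;
            }

            if (texture == null)
                texture = new Texture2D(parameters.outputDimensions.x,
                                        parameters.outputDimensions.y,
                                        parameters.outputFormat, false);

            texture.LoadRawTextureData(data);
            texture.Apply();
        });

        // Disposing the image does not cancel an in-flight async conversion.
        image.Dispose();
    }
}
```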