ArUco Pose Estimation: Calibration Parameters?

Hello!

Can I use the RGB camera intrinsic matrix

    NativeMat3f rgb_in_mat = NRFrame.GetRGBCameraIntrinsicMatrix();

or the RGB camera distortion parameters

    NRDistortionParams rgb_dist_params = NRFrame.GetRGBCameraDistortion();

as calibration parameters for OpenCV ArUco pose detection? Has someone already done that? It would help me a LOT!

Greetings,
Jan

Yes, you could use them.

How? What exactly do the parameters mean?

If you want to use those parameters with the OpenCV for Unity asset from Enox, I converted them like this so they can be passed to Aruco.estimatePoseSingleMarkers(), which gives you the rvec and tvec, i.e. the rotation vector and the translation vector (the latter is what you use for positioning).

    // Convert the NRSDK 3x3 intrinsic matrix into an OpenCV camera matrix Mat.
    private Mat convertToMat(NativeMat3f input)
    {
        Mat output = Mat.zeros(3, 3, CvType.CV_64FC1);

        // NativeMat3f stores the matrix as three column vectors.
        Vector3[] columnVectors = new Vector3[3];
        columnVectors[0] = input.column0.ToUnityVector3();
        columnVectors[1] = input.column1.ToUnityVector3();
        columnVectors[2] = input.column2.ToUnityVector3();

        // Write component j of column i into Mat cell (row j, col i).
        for (int i = 0; i < 3; i++)
        {
            for (int j = 0; j < 3; j++)
            {
                output.put(j, i, columnVectors[i][j]);
            }
        }
        return output;
    }

    // Pack the first four distortion coefficients into a 1x4 OpenCV Mat.
    private Mat convertToMat(NRDistortionParams input)
    {
        Mat output = new Mat(1, 4, CvType.CV_64FC1);
        output.put(0, 0, input.distortParams1, input.distortParams2, input.distortParams3, input.distortParams4);
        return output;
    }
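
For context, here is a rough sketch of how the converted Mats plug into the ArUco calls (a sketch only, assuming the OpenCV for Unity Aruco module; rgbMat, dictionary and markerLengthInMeters are placeholders set up elsewhere):

    Mat cameraMatrix = convertToMat(NRFrame.GetRGBCameraIntrinsicMatrix());
    Mat distCoeffs = convertToMat(NRFrame.GetRGBCameraDistortion());

    // Detect markers in the current RGB frame.
    List<Mat> corners = new List<Mat>();
    Mat ids = new Mat();
    Aruco.detectMarkers(rgbMat, dictionary, corners, ids);

    // One pose per detected marker; rvecs/tvecs receive the results.
    Mat rvecs = new Mat();
    Mat tvecs = new Mat();
    Aruco.estimatePoseSingleMarkers(corners, markerLengthInMeters, cameraMatrix, distCoeffs, rvecs, tvecs);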

To get a Quaternion, I used the code from this answer to translate the rvec.

I added the tvec to my CenterCamera.position to get the position in front of me, but it is still not that accurate - can someone give me a hint?
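
For reference, a rough sketch of that conversion (not the exact code from the linked answer; it assumes the OpenCV for Unity Calib3d module, a centerCamera Transform for the centre camera, and the rvecs/tvecs Mats returned by estimatePoseSingleMarkers above, with i as the marker index):

    // Per-marker vectors from estimatePoseSingleMarkers.
    double[] rvecArr = rvecs.get(i, 0);   // axis-angle rotation, 3 doubles
    double[] tvecArr = tvecs.get(i, 0);   // translation in metres, 3 doubles

    // rvec -> 3x3 rotation matrix -> Unity Quaternion.
    Mat rvec = new Mat(3, 1, CvType.CV_64FC1);
    rvec.put(0, 0, rvecArr);
    Mat rotMat = new Mat(3, 3, CvType.CV_64FC1);
    Calib3d.Rodrigues(rvec, rotMat);

    Vector3 forward = new Vector3(
        (float)rotMat.get(0, 2)[0], (float)rotMat.get(1, 2)[0], (float)rotMat.get(2, 2)[0]);
    Vector3 up = new Vector3(
        (float)rotMat.get(0, 1)[0], (float)rotMat.get(1, 1)[0], (float)rotMat.get(2, 1)[0]);
    Quaternion markerRotation = Quaternion.LookRotation(forward, up);

    // Offset the marker from the camera by the translation vector.
    // (OpenCV and Unity use different handedness, so a sign flip on one
    // axis may still be needed in practice.)
    Vector3 markerPosition = centerCamera.position +
        new Vector3((float)tvecArr[0], (float)tvecArr[1], (float)tvecArr[2]);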

Jan, did you get anywhere with this? I am also trying to accurately place objects detected in RGB frames, using the intrinsic and distortion parameters along with the head pose and the RGB-camera-from-head-pose transform… Positions are always a little off…

Yeah, I can't get it more accurate than "always a little off" either.

I am also finding it varies a lot between sets of glasses, and the values from the intrinsic and extrinsic APIs differ per device… Can you share your calculation for positioning things?

My code is really dirty, I couldn't really explain it. Later, maybe :smiley:

No worries - I worked on this some today, and now have pretty good positioning. Ignoring the distortion parameters was part of it, but I also wasn't quite handling the values in the projection matrix correctly - some reading up helped me out!

If you managed it, could you share yours then? :sweat_smile:

Hey - I tried to compress it into one method - not tested, but these are all the steps to go from a 2D pixel position to a 3D world position. I have used the Vector and Matrix types from the System.Numerics namespace rather than Unity's own…

    bool TryGetWorldPosition(Vector2 pixelPosition, float distance, out Vector3 worldPosition)
    {
        var glassesPose = UnityEngine.Pose.identity;
        ulong timestamp = 0;

        // Assume you are using the very latest frame so the image and 
        // glasses position are not too different! :)
        if (NRFrame.GetFramePresentHeadPose(ref glassesPose, ref timestamp))
        {            
            // Get the projection transform.
            var projectionTransform = NRFrame.GetEyeProjectMatrix(out var _, 0.3f, 100f).RGBEyeMatrix.ToNumerics();

            // Get the projection intrinsic values from the projection transform
            var focalLengthX = projectionTransform.M11;
            var focalLengthY = projectionTransform.M22;
            var centerX = projectionTransform.M13;
            var centerY = projectionTransform.M23;
            var normalFactor = projectionTransform.M33;

            // Normalize the center.
            centerX = centerX / normalFactor;
            centerY = centerY / normalFactor;

            // Get the pixel coords on a scale between -1 and 1.
            var pixelCoordinates = (new Vector2(pixelPosition.X / 1280f, 1 - (pixelPosition.Y / 720f)) * 2f) - new Vector2(1, 1);            

            // Create a directional ray using the principal point and the focal length.
            var dirRay = new Vector3(
                (pixelCoordinates.X - centerX) / focalLengthX,
                (pixelCoordinates.Y - centerY) / focalLengthY,
                1.0f);

            // Multiply the ray by the distance you want.
            var position = dirRay * distance;

            // Get the RGB camera transform relative to the glasses.
            var cameraToGlassesTransform =
                Matrix4x4.CreateFromQuaternion(NRFrame.EyePoseFromHead.RGBEyePose.rotation.ToNumerics()) *
                Matrix4x4.CreateTranslation(NRFrame.EyePoseFromHead.RGBEyePose.position.ToNumerics());

            // Get the glasses transform relative to the world.
            var glassesToWorldTransform =
                Matrix4x4.CreateFromQuaternion(glassesPose.rotation.ToNumerics()) *
                Matrix4x4.CreateTranslation(glassesPose.position.ToNumerics());

            // Combine these transforms to create the full camera-to-world transform.
            var cameraViewTransform = cameraToGlassesTransform * glassesToWorldTransform;

            // Transform the position we have relative to the camera to make it relative to the world.
            worldPosition = Vector3.Transform(position, cameraViewTransform);
            return true;
        }

        worldPosition = Vector3.Zero;
        return false;
    }
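
A hypothetical usage example (markerObject is a placeholder GameObject, and 1.5f a guessed distance to the marker):

    // Project the marker's centre pixel 1.5 metres out from the RGB camera.
    if (TryGetWorldPosition(new Vector2(640f, 360f), 1.5f, out var worldPos))
    {
        markerObject.transform.position =
            new UnityEngine.Vector3(worldPos.X, worldPos.Y, worldPos.Z);
    }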

Let me know if this works for you, or if it doesn't, and we can look again.

Wow, big thanks! I will try it in the next few days and give feedback :slight_smile:

No worries @jan - I did also use the distortion parameters, based on some math I read on Wikipedia, but it seemed less accurate; this approach overall looks really tight.
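
For reference, the standard distortion model described on Wikipedia is the Brown-Conrady one. A rough sketch of applying it to normalised camera coordinates x, y is below, assuming k1/k2 radial and p1/p2 tangential coefficients; whether this is exactly the math meant here, or how the NRSDK's distortParams map onto these coefficients, is an assumption.

    // Brown-Conrady distortion applied to normalised coordinates (x, y).
    // k1, k2: radial coefficients; p1, p2: tangential coefficients.
    float r2 = x * x + y * y;
    float radial = 1f + k1 * r2 + k2 * r2 * r2;
    float xDistorted = x * radial + 2f * p1 * x * y + p2 * (r2 + 2f * x * x);
    float yDistorted = y * radial + p1 * (r2 + 2f * y * y) + 2f * p2 * x * y;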

Where you see ToNumerics(), those are extension methods I have - here they are if it helps:

    public static System.Numerics.Vector3 ToNumerics(this UnityEngine.Vector3 vector)
    {
        return new System.Numerics.Vector3(vector.x, vector.y, vector.z);
    }

    public static System.Numerics.Quaternion ToNumerics(this UnityEngine.Quaternion quaternion)
    {
        return new System.Numerics.Quaternion(quaternion.x, quaternion.y, quaternion.z, quaternion.w);
    }

@Elisabeth - tagging you here, as I recall you had a similar question a couple of weeks ago…

Hi!
I'm currently trying to instantiate some Unity prefabs at points in Unity 3D space that should correspond to markers detected by using the Nreal RGB camera as input for an OpenCV SLAM pipeline.

But the points seem to have an offset, as their coordinates appear to be relative to the RGB camera rather than to the Unity left-eye or right-eye camera.

I'm checking this and other threads in case it helps, because the problem seems quite similar.

You mention you did some reading that helped you out. Could you please share those resources with me? I would like to understand better what I'm doing.

Cheers.

Hi, please check whether this post is helpful to you.
