Screen to World Point from Centre Cam

We have two clients: an Nreal client and a Windows client. What we basically need is this: the Windows client clicks on its screen, we pass those ratio points to the Nreal client, and we generate an object in the real world using the Camera.ScreenPointToRay() function. The ratio points are the Windows client's screen coordinates divided by its screen width and height, respectively.

As of now, whenever we try this using the Nreal centre cam, i.e. calling ScreenPointToRay on the NR camera rig's centre camera with the ratio points (multiplied by Nreal's screen width and height for the conversion), we get a clearly visible offset. We were doing the same on an AR Foundation Android client with a single AR camera and getting the best results, but on Nreal we get an offset due to the multiple cameras in the camera rig. Can anyone point me in the right direction on how to go further with this?
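
For reference, here is roughly what we do on the Nreal side (a simplified sketch; RemoteClickPlacer, centerCam, and markerPrefab are our own placeholder names, and the raycast placement is illustrative):

using UnityEngine;

public class RemoteClickPlacer : MonoBehaviour
{
    [SerializeField] private Camera centerCam;        // the NR camera rig's centre camera
    [SerializeField] private GameObject markerPrefab;

    // ratio = the Windows client's click position divided by its screen width/height
    public void PlaceFromRatio(Vector2 ratio)
    {
        // Scale the normalized point up to this device's screen resolution
        Vector2 screenPoint = new Vector2(ratio.x * Screen.width, ratio.y * Screen.height);

        Ray ray = centerCam.ScreenPointToRay(screenPoint);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            Instantiate(markerPrefab, hit.point, Quaternion.identity);
        }
    }
}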

Hi, NRSDK uses leftCamera and rightCamera in the CameraRig to render the texture for each eye separately. The centerCamera is only used for in-editor rendering (play mode), so that is probably why you observe the offset.

So, in case I require a screen-point-to-ray-like feature, how should I proceed, and which camera should I use for converting from screen space to world space? I also observed that Camera.pixelRect reports different resolutions for the three cameras, which also differ from the standard 1280 x 720 Nreal screen resolution.
BTW, thanks a lot for replying; I had lost all hope that anyone would reply.
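
The mismatch shows up with a quick check like this (CameraRectLogger is just a throwaway name; the camera references are wired up in the Inspector):

using UnityEngine;

public class CameraRectLogger : MonoBehaviour
{
    // Drag the rig's left, right, and centre cameras in via the Inspector
    [SerializeField] private Camera[] rigCameras;

    void Start()
    {
        foreach (Camera cam in rigCameras)
        {
            // Each camera reports its own pixelRect, none matching 1280 x 720
            Debug.Log($"{cam.name}: pixelRect = {cam.pixelRect}");
        }
    }
}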

Sounds like you need to explore the Gaze interaction supported in the NRInput module.

I tried to explore it but have had no luck yet. Also, Gaze always points from the centre of the screen, whereas in our case we have variable points in screen-space coordinates. What are you suggesting?

Hi Dhami, after examining your issue internally, we would like to propose a solution that has a high chance of solving your problem.

The offset comes from the pose difference between the RGB camera and the virtual camera (the leftCamera, rightCamera, or centerCamera you call ScreenPointToRay on). Since you get the texture from the RGB camera, you also need to virtualize an RGB camera whose pose corresponds to its physical position on the glasses, and call ScreenPointToRay on that camera instead.

Please inspect NRKernal.Record.NRCameraInitializer, where the pose of the "virtual RGBCamera" is initialized:

// Align the virtual RGB camera with the physical RGB camera's pose relative to the head
transform.localPosition = eyeposFromHead.RGBEyePose.position;
transform.localRotation = eyeposFromHead.RGBEyePose.rotation;

You could create a new camera (named RGBCamera, for example) under NRCameraRig, then add "NRCameraInitializer.cs" as a component and choose RGB_Camera as the device type:

(screenshot: the NRCameraInitializer component in the Inspector with Device Type set to RGB_Camera)

Call ScreenPointToRay on this camera when needed.
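
A minimal sketch of what that could look like (rgbCam references the camera created above; the pixel-space scaling and raycast placement are illustrative, not part of NRSDK):

using UnityEngine;

public class RgbCameraClickPlacer : MonoBehaviour
{
    [SerializeField] private Camera rgbCam;           // the virtual RGB camera created above
    [SerializeField] private GameObject markerPrefab;

    // ratio = the Windows client's click divided by its screen width/height
    public void PlaceFromRatio(Vector2 ratio)
    {
        // Scale into the RGB camera's own pixel space rather than Screen.width/height,
        // since the rig cameras report different resolutions
        Vector2 screenPoint = new Vector2(ratio.x * rgbCam.pixelWidth, ratio.y * rgbCam.pixelHeight);

        Ray ray = rgbCam.ScreenPointToRay(screenPoint);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            Instantiate(markerPrefab, hit.point, Quaternion.identity);
        }
    }
}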

Hope this will help! Thank you.

Thanks, this has solved the issue for us.