I have been experimenting with the CloudXR client on Nebula 2.3.4 and have been able to get some good results so far. However, I can’t figure out how to access localisation features such as plane detection to properly anchor models in the room. I am building the server side (PC) app as a standalone desktop app targeting the SteamVR runtime. To access the NRSDK localisation functionality I would need to target an Android build. Is this correct? Is there a way to send localisation data back to the server side app through the Nebula client? Or does it simply set up a coordinate system centred on where the glasses are located at app launch without any knowledge of the surrounding room?
Any insight would be appreciated!
Hi developer, the Nreal Light can perceive its surroundings and build anchors. To access that data, you would need to build a 3D app with Unity. However, the currently available versions of NRSDK (Developer Ver and Experimental Ver) do not expose this kind of raw data.
Thanks for your reply! I should have specified above that it is indeed the Nreal Light that I am enquiring about. Just to clarify: there is currently no way, using the Nebula CloudXR client, to send SLAM data to the Unity app running on the PC?
At present, there is no way to do this with the current Developer Ver of NRSDK. SLAM data is not accessible, so you cannot retrieve it that way. Sorry about that.
OK, thanks for the info. Are there any plans to make SLAM data accessible in this way in the future?
The plan is not yet clear, but we are working to integrate our SDK with OpenXR; once that is done, the raw data should be accessible when you build your own Unity app.
OK, great! Is there any indication of when the integration with OpenXR would be expected?
The integration should be finished before the end of this year.
Great, thanks very much for the info!
The CloudXR SDK provided by Nvidia does not currently provide an interface for features like plane detection. Although the client can detect planes locally if the developer chooses to implement that, plane data cannot be passed to the server runtime or to server-side applications through CloudXR's interface.
So, for now, you can only set up a coordinate system centered on the glasses, and approximate the floor position using the initial HMD height at launch.
Hi @Pepe, many thanks for your reply. OK, so the functionality is not actually there within CloudXR itself? This is interesting, because I asked a similar question on the Nvidia forum and got no replies; maybe this is the reason. The CloudXR SDK comes with a sample Android client that incorporates some ARCore features. Is this what you mean when you say the client can detect planes locally if the developer wills it?
Yes, I suspected that was the case regarding the coordinate system. Thanks for the confirmation
"The CloudXR SDK comes with a sample Android client that incorporates some ARCore features. Is this what you mean when you say the client can detect planes locally if the developer wills it?"
The CloudXR SDK comes with a variety of client implementations, such as those for Oculus, Vive, and ARCore. You can refer to those to see the APIs available from the CloudXR SDK.
Or, simply refer to the header files in CloudXR-SDK/Client/Include.
From those it should be clear that customizing transmission between the client and the server is quite limited, and there is no standard API for transmitting plane information.
So, it is possible for the client to detect planes and track hands itself, but such information is difficult to pass to the CloudXR server or to the content running on the server side.
I recall that you also mentioned NRSDK, and I assume you were referring to the Unity SDK on the Nreal website. That SDK is specifically for developing content, and it is unlikely that one could alter the CloudXR client's behavior with it. NRSDK is not for building SteamVR applications either; those require the OpenVR or OpenXR APIs.
Hope this helps.