I would like to know the timeframe in which we could expect NVIDIA CloudXR to be supported on NReal Light.
Is this something NReal are working on? Or is NReal expecting a third party to provide it?
I came across this on Twitter during the run-up to the USA release of NReal Light, which created an expectation. It was quickly taken down.
I want to concurrently stream high poly models and volumetric video from a PC using CloudXR to multiple NReal Light glasses.
May I ask why you want this level of streaming?
Also, given that the current version of CloudXR is bound to SteamVR, and given the limits of GPU power, it doesn't seem likely to be capable of concurrent streaming to multiple clients, at least not streaming multiple MR views based on each client's own tracking.
Sorry not to have responded sooner; I have only just picked up your message.
We have installed CloudXR on a local PC (not in the cloud). It currently works with Oculus Quest 2, so why not with NReal Light?
I am a PhD Researcher in Residence at the University of the Arts London Creative Computing Institute, and am testing streaming applications for high-poly 3D models (500k–50M poly) and volumetric video.
With some coding we have streamed to NReal Light a life-size 6DoF 50M-poly model of the Space Shuttle (Smithsonian, Washington), converted to a 3.1-billion-point cloud, from New Zealand to Dublin. This used Euclideon udStream, which does not currently accept volumetric video.
My use case is the 55,000 museums worldwide where 95–99% of their collections are in storage and rarely displayed, and also making 3D digital loans that can be displayed life size with 6DoF in location-based exhibitions alongside physical exhibits.
So my question remains: NReal having tweeted an official slide showing NReal Light supporting NVIDIA CloudXR and OpenXR, is this still your plan, and when is the likely release? Without a plan I will need to switch my research away from NReal Light.
Sorry that it took me some time to pick up your message too.
Your research sounds fantastic and I understand your point.
I’m personally expecting to see something within a month or two.
Also, based on my understanding, SteamVR has limited support for AR, but Nreal is fundamentally an AR company. I think this is partly why it is taking time to adapt CloudXR to Nreal.
I hope these address your questions and concerns to some extent at least.
I look forward to seeing what hopefully comes in a few months' time.
We were in the middle of writing our own code to allow streaming volumetric video in 6DoF using CloudXR to NReal Light when the NReal pre-announcement I referred to appeared, so I put our efforts in abeyance. I recently saw that BT has developed this too and was able to demo it at Saracens rugby club in the UK. It is their own code, and it does not look like it will be publicly released.
See 5G Edge-XR - The Grid Factory
Another issue that must be resolved is persistent anchors. These must be streamed from the server (CloudXR), so that all viewers of a 3D model see it anchored in AR in the same physical place.
It is not sufficient for anchors to belong to only one person at a time, with the anchor obtained and set by the tethered Android phone itself. In that case the 3D model stored on the phone can be seen again in the same place only by NReal glasses using that phone. This is fine for single users, but not for multiple users sharing the same experience.
This could, however, be a way of setting the anchor itself using a low-poly model (untethered headsets have limits; HoloLens 2, for example, is around 100k poly) before the anchor is stored on the server alongside the high-poly model to be streamed to all viewers. This is the approach I am taking in my research.
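To make the idea concrete, here is a minimal sketch in Python of what that server-side shared-anchor store might look like. All names here (`Anchor`, `AnchorStore`, `publish`, `resolve`, the model URI) are hypothetical illustrations, not an NReal, CloudXR, or udStream API: the first viewer sets the anchor locally (e.g. via the tethered phone and a low-poly proxy) and publishes it; every later viewer resolves the same record, so all glasses place the streamed model at the same physical spot.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """A shared spatial anchor: a pose in a common world frame,
    plus a reference to the model streamed from the server."""
    anchor_id: str
    position: tuple        # (x, y, z) in metres
    rotation: tuple        # quaternion (x, y, z, w)
    model_uri: str         # hypothetical URI of the high-poly model
    created_at: float = field(default_factory=time.time)

class AnchorStore:
    """In-memory server-side store. In practice this would live next to
    the CloudXR server so every client resolves the same anchor."""
    def __init__(self):
        self._anchors = {}

    def publish(self, anchor: Anchor):
        # First viewer sets the anchor (e.g. using a low-poly proxy
        # model on the tethered phone), then uploads it here.
        self._anchors[anchor.anchor_id] = anchor

    def resolve(self, anchor_id: str) -> Anchor:
        # Subsequent viewers fetch the same pose and model reference.
        return self._anchors[anchor_id]

store = AnchorStore()
store.publish(Anchor("shuttle", (0.0, 0.0, 2.0), (0, 0, 0, 1),
                     "https://example.org/models/shuttle_50M.uds"))
shared = store.resolve("shuttle")
```

The key design point is simply that the anchor lives with the high-poly model on the server rather than on any one phone, so the single-user limitation described above disappears.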
I look forward to seeing NReal future developments.
Yep, it also appears to me that sharing anchors could be a problem.
For now, I think that might be resolved with image anchors. But given that streaming a single instance to different MR clients requires considerable rendering power, it most likely cannot be done by one common GPU, so you would still need multiple GPUs and thus multiple computers. In addition, no other existing CloudXR client appears to support anchor sharing in a meaningful way. Given all that, I doubt Nreal will prioritize this in the short term, at least.
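The image-anchor idea can be sketched as follows. This is a simplified Python illustration, not any vendor's API: each client detects the same printed image marker in its own local coordinate frame, and the model's pose is authored once relative to the marker. Composing the marker pose with that offset gives each client its own local coordinates for the model, but all of them coincide at the same physical spot. Poses here are reduced to a position plus a yaw angle as a stand-in for full quaternion math.

```python
import math

def compose(parent_pos, parent_yaw, local_pos, local_yaw):
    """Place a child pose (given relative to a parent) into the
    parent's frame. Yaw is rotation about the vertical (Y) axis."""
    c, s = math.cos(parent_yaw), math.sin(parent_yaw)
    x, y, z = local_pos
    world = (parent_pos[0] + c * x - s * z,
             parent_pos[1] + y,
             parent_pos[2] + s * x + c * z)
    return world, parent_yaw + local_yaw

# Each client sees the same printed marker at a different place in
# its own local frame (positions and yaws are made-up examples):
marker_client_a = ((1.0, 0.0, 3.0), 0.0)
marker_client_b = ((-2.0, 0.0, 1.0), math.pi / 2)

# The model's pose is authored once, relative to the marker:
model_offset = ((0.0, 0.0, 0.5), 0.0)

pos_a, yaw_a = compose(*marker_client_a, *model_offset)
pos_b, yaw_b = compose(*marker_client_b, *model_offset)
# pos_a and pos_b differ numerically because each client has its own
# origin, yet both describe the same physical point half a metre in
# front of the marker.
```

This avoids a shared server-side anchor store entirely, at the cost of needing the physical marker in view; it says nothing about the GPU-per-client rendering cost discussed above.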
Did you check out the client in developer options?
I saw that NReal Nebula supports CloudXR in its developer options; you can refer to the link below.
CloudXR - NRSDK (gitbook.io)
Wow! Thanks. Will try and report back.
I am also researching the CloudXR support on the NReal device. The CloudXR client in Nebula is a beta version only, and it is unstable.
In my view, if NReal can support CloudXR or other cloud rendering solutions, we can reduce the weight of the AR glasses and better meet consumer demand.