Nebula for Windows V0.7.0 Released!

Does this also work with desktops whose GPU has no USB-C port? Can I use the app with an HDMI-to-USB-C adapter, or with a regular USB-to-USB-C cable?

Keep up the good work!

I don’t see how it could: Xreal needs a USB channel to read the orientation sensor data, and it also needs power to operate the glasses. My understanding is that the power available on HDMI and DP isn’t enough for the glasses, but since there are no published specs to be found, who knows.

The bigger problem is sensor data. DP does have some USB support at USB 2 levels, but the sensors might need more. Again, the lack of reliable and detailed technical info from Xreal is a big issue, because people are buying these glasses with no hope of ever getting them to work on their PCs.

If you have a beefy workstation with a discrete dGPU, your only hope would be a Thunderbolt plug-in card, which takes DP inputs and adds USB3, power and PCIe lanes, of which Xreal would only need the first three (DP, USB3 and power).

I’ve ordered one, just in case Xreal doesn’t accept my late return (I made the mistake of waiting for this software).

Nebula seems to be a Unity application, so it really shouldn’t matter.

But my impression is that Xreal only has a single(?) Nvidia RTX 3060 based notebook for all development and QA, so the fact that they invite our input simply implies that they expect us to do their QA for them. They perhaps need a few lessons on product liability; marketing alone won’t do.

I’ve tried with a Ryzen 5800U based notebook that offers USB-C Alt-DP.

It “works”, and after forcing vertical sync at least the tearing is no longer horrible.

But the performance is far below acceptable levels (an estimated ~10Hz), and unfortunately it’s the same with every other device I’ve tried (Xe, ARC A770, RTX 2080ti).

From what I can tell, Xreal employs a rather fat software stack: a Unity engine application plus the terminal server or RDP APIs in Windows to deliver the functionality.

And while that doesn’t even seem to stress the hardware, as far as my monitoring tools (HWinfo & GPU-Z) can tell, the display is very far from being as smooth as it is on the Beam, whose hardware seems (again, there are no published specs…) much more modest, somewhere in the Snapdragon 835 to 855 range, I’d guess.

It’s hard to say if they’ll ever be able to deliver that on the more modest Windows platforms that have the proper port. So far it’s pretty bad on my RTX 2080ti (my only big GPU that has a USB-C “VirtualLink” port), but that GPU may be just below one feature level that they use on the RTX 3060 based notebook, which their (only?) principal developer seems to use for coding and QA. E.g. it won’t support better than 60Hz on any secondary display, even when I run my 4k@144 screen at a modest THD@60.

The RTX 2080ti is hardly taxed at all, yet the refresh rate is as terrible as it is on the notebooks, so I don’t really see the GCN5 iGPU as the real culprit, especially since it is still quite a bit faster than the SoC in the Beam can be.


I’d have needed more words to say it, but that’s exactly how I see it too, and in my case on an older Cezanne based 5800U with GCN5.

They require a USB-C port with Alt-DP support, and that pretty much excludes all discrete GPUs, except some left over from the VirtualLink era.

Unfortunately they also do not seem to work with hybrid graphics, where the mobile dGPU isn’t actually physically connected to the display port output but uses the iGPU to copy from the dGPU’s frame buffer into its output. That would explain why this version of Nebula fails to deliver any output on the two NUCs I’ve tried, one NUC11 with an RTX 2060m and one NUC12 with an ARC 770: both connect their USB-C Alt-DP ports physically to the Xe iGPU, while the dGPUs only have DP and HDMI ports, which won’t work with the Air².

I checked for the Nebula release on the Xreal home page regularly: “coming soon” until after the return window for the headset (which includes the supporting software as a package) expires is “not soon enough”.

I only came across this release pretty much by accident: not OK.

I had high hopes, but I’ve requested a return after testing this software with five of the machines in my home lab: I don’t expect it to reach satisfactory performance any time soon.

Satisfactory in my book would mean better than what the Beam box can offer, with hardware that pretty much by definition must be less powerful than “current” iGPUs or dGPUs.

Yes, modern mobile SoCs from Qualcomm may actually beat Intel integrated HD graphics up to Ice Lake. But a 96EU Xe iGPU at 5 Watts of dedicated power should at least equal the Beam or offer a smooth single virtual screen at 72Hz, because that’s an entire mobile SoC’s maximum power budget spent on the iGPU part alone, and even Intel isn’t that bad. Actually, AMD’s GCN5 in Cezanne is pretty near the same league, and 3DMark seems to bear me out: these iGPUs should be good enough, you shouldn’t need an M1 Ultra or a >100 Watt dGPU for the job.

Since the Air²/Nebula require both a USB-C port with Alt-DP and a powerful dGPU, preferably from Nvidia, I could not test with my most powerful systems, because those dropped VirtualLink support and only offer DP and HDMI ports. AFAIK a cable simply can’t do the job of joining USB and DP into USB-C/Alt-DP, so I’ve ordered a TB PCIe extension card that I’ll combine with an RTX 4090 on a Ryzen 9 5950X for the most powerful base platform I can use for testing. It’s an extra €80 to prove a point, because judging by virtual desktops elsewhere, or in fact by the Beam, I don’t think hardware performance is the real issue.

I was limited to the following systems, mostly Windows 11, some Windows 10, with the latest CUDA 12.3.2 drivers for Nvidia, OS updates etc. I had bought an Air² Pro with Beam bundle.

  1. RTX 2080ti (in a Broadwell 22-core Xeon), which has a VirtualLink USB-C port and quite a bit of graphics power, 60% more than the mobile RTX 3060 in the laptop the Xreal developer seems to use
  2. Lenovo laptop with Ryzen 5800U and GCN5 iGPU, two USB-C with Alt-DP ports
  3. Asus laptop with Alder-Lake i7-12700H, 96EU Xe iGPU, one USB-C/TB/Alt-DP port
  4. Intel Enthusiast NUC11 Phantom Canyon, Tiger Lake i7-1165G7 + 96EU Xe iGPU + RTX 2060m dGPU, dual USB-C/TB3 is connected to Xe (hybrid mode operations required), dGPU is connected to DP and HDMI
  5. Intel Enthusiast NUC12 Serpent Canyon, Alder-Lake i7-12700H + 96EU Xe iGPU + ARC770m dGPU, iGPU connected to USB-C/TB4 (hybrid mode operations required), dGPU is connected to DP and HDMI ports

The first thing the Beta Nebula software will do is overwrite the Air² firmware. If you then connect it to the Beam, the Beam will overwrite the Air’s firmware with its own older variant. And so it goes on, which is a bit scary, since firmware updates are often a high-risk operation and I simply don’t know if and when the two will agree on a single version…

Most of my systems have been used with a plethora of monitors before: the RTX 2080ti runs two 4k screens, one at 144Hz, and occasionally an Oculus Rift CV1 or a wireless Lenovo Android VR headset. In other words, Windows remembers plenty of monitor settings for each of the systems, and that can be a problem for Nebula, which creates extra screens both for the logical screen content and for the simulated monitors: Windows can get confused about where these are logically located and which of them are actually active. And switching from 72 to 90Hz seems to create new layouts, causing additional challenges. I had to make sure that all screens were used in extension mode, and sometimes the time you get to make these changes was too short to confirm their validity, so Windows would reset them…
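In case anyone wants to untangle the same mess: below is a minimal Win32 sketch (my own hypothetical diagnostic, nothing official from Xreal) that dumps every display device Windows knows about, including Nebula’s extra screens, with activity state, mode and position.

```cpp
// list_displays.cpp - dump Windows' view of all display devices.
// Build (MSVC): cl /EHsc list_displays.cpp user32.lib
#include <windows.h>
#include <cstdio>

int main() {
    DISPLAY_DEVICEW dev{};
    dev.cb = sizeof(dev);
    for (DWORD i = 0; EnumDisplayDevicesW(nullptr, i, &dev, 0); ++i) {
        const wchar_t* state =
            (dev.StateFlags & DISPLAY_DEVICE_ACTIVE) ? L"active" : L"inactive";
        DEVMODEW mode{};
        mode.dmSize = sizeof(mode);
        // ENUM_CURRENT_SETTINGS asks for the mode that is in use right now.
        if (EnumDisplaySettingsW(dev.DeviceName, ENUM_CURRENT_SETTINGS, &mode)) {
            wprintf(L"%s (%s): %s, %lux%lu @ %luHz, position (%ld,%ld)\n",
                    dev.DeviceName, dev.DeviceString, state,
                    mode.dmPelsWidth, mode.dmPelsHeight, mode.dmDisplayFrequency,
                    mode.dmPosition.x, mode.dmPosition.y);
        } else {
            wprintf(L"%s (%s): %s, no current mode\n",
                    dev.DeviceName, dev.DeviceString, state);
        }
        dev = {};                 // reset the struct for the next device
        dev.cb = sizeof(dev);
    }
    return 0;
}
```

Running it before and after starting Nebula shows which extra devices appear and where Windows decided to place them.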

Long story short: I started with the less powerful devices, #3, the notebook with Xe graphics, first. When that device delivered around 10Hz refresh on head movements even with a single 72Hz virtual screen, I feared that more graphics power was required.

But after quite a bit of wrangling with the RTX 2080ti system, which doesn’t want to use better than 60Hz refresh on any but the primary screen, and after testing all the other devices, I’ve come to the conclusion that the 10Hz refresh isn’t related to a lack of GPU power. I used GPU-Z and HWinfo to observe CPU and GPU loads, and the Nebula app doesn’t stress any GPU; why the refresh rates are so abysmal, I have no idea.

I validated that enabling a fixed vertical sync on the GPUs was required to stop terrible visual artifacts on the headset, but it didn’t deliver anywhere near the smoothness the Beam can (which has its own challenges with the text on the virtual screens).

I was really disappointed to find that the NUCs with hybrid graphics wouldn’t work at all, because they represent a rather large class of notebooks, which are designed in a very similar manner.

Some years ago, notebook designs with optional dGPUs had to physically switch output ports between the iGPU for low-power desktop work and the dGPU for gaming or graphics. Then GPU vendors developed a hybrid approach where the dGPU’s output ports weren’t actually physically connected; much like a data center GPGPU, it would only render into its frame buffer. The iGPU would then read from that frame buffer, which was mapped into the SoC’s virtual address space, and copy its content at screen refresh rates into its own frame buffer, which feeds the serial display outputs. It sounds computationally expensive and like “double work”, but it turns out to be such a light workload that vendors felt it was worth saving the money and integration testing on the switch. The actual performance difference or lag is minor, but measurable, so higher-end gaming designs tend to also offer direct DP or HDMI ports.
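This wiring is visible from software, by the way. Here is a small DXGI sketch (again my own hypothetical diagnostic, assuming nothing about Nebula’s internals) that lists each GPU and how many display outputs are physically attached to it:

```cpp
// gpu_outputs.cpp - which display outputs hang off which GPU?
// Build (MSVC): cl /EHsc gpu_outputs.cpp dxgi.lib
#include <dxgi.h>
#include <cstdio>

int main() {
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;

    IDXGIAdapter* adapter = nullptr;
    for (UINT a = 0; factory->EnumAdapters(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a) {
        DXGI_ADAPTER_DESC desc{};
        adapter->GetDesc(&desc);

        // Count the outputs (connected display connectors) on this adapter.
        UINT outputs = 0;
        IDXGIOutput* out = nullptr;
        while (adapter->EnumOutputs(outputs, &out) != DXGI_ERROR_NOT_FOUND) {
            out->Release();
            ++outputs;
        }
        wprintf(L"%s: %u output(s) attached\n", desc.Description, outputs);
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```

On a typical slim hybrid-graphics laptop the dGPU line reads zero outputs; on my NUCs the dGPUs own only the DP/HDMI connectors while the iGPU owns the USB-C ones.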

In the case of USB-C/TB/Alt-DP ports this approach seems to save even more hardware validation and other overhead, which probably explains why most such universal USB-C/TB/DP ports only physically connect to the iGPU.

So even with some of the high-end notebooks and NUCs, the only port that matches the Air’s requirements may not be physically connected to the big GPU. And in the case of Nebula, that means it does not currently work. The virtual screen stays dark, and the only sign of life is when I use “identify” in the Windows settings, where the monitor number is visible in the headset. AFAIK that’s more of a software bug than an app design error, but without details I can’t exclude the (worst case) possibility that Xreal has not done their research on what the Windows platform can support at the required fluidity.

The software is much more a Unity app than a driver. In theory that should provide device independence and work with any hardware.

In practice the app seems to use terminal service facilities to create screens and then use RDP API calls to do what the hybrid graphics drivers are doing, too. It creates one or more “in-game” monitors, which the applications write to, then scrapes these “in-game” monitors via RDP shadowing APIs or similar, and projects and transforms them onto one to three virtual monitors, or two variants of bendable monitor, in a space you can view using the Air in “stereo mode”.
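Whether it’s literally RDP shadowing is my speculation; the documented route for this kind of screen scraping on modern Windows is DXGI Desktop Duplication, and a compositor like Nebula could sit on top of a capture loop roughly like this (a hypothetical sketch, not Xreal’s actual code):

```cpp
// dup_sketch.cpp - minimal DXGI Desktop Duplication capture loop.
// Build (MSVC): cl /EHsc dup_sketch.cpp d3d11.lib dxgi.lib
#include <d3d11.h>
#include <dxgi1_2.h>

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* ctx = nullptr;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, &ctx)))
        return 1;

    // Find the first output of the adapter our device lives on.
    IDXGIDevice* dxgiDevice = nullptr;
    device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
    IDXGIAdapter* adapter = nullptr;
    dxgiDevice->GetAdapter(&adapter);
    IDXGIOutput* output = nullptr;
    if (adapter->EnumOutputs(0, &output) != S_OK)
        return 1;
    IDXGIOutput1* output1 = nullptr;
    output->QueryInterface(__uuidof(IDXGIOutput1), (void**)&output1);

    IDXGIOutputDuplication* dup = nullptr;
    if (FAILED(output1->DuplicateOutput(device, &dup)))
        return 1;  // can fail, e.g. for cross-adapter setups: shades of the hybrid issue

    for (int frame = 0; frame < 600; ++frame) {          // ~10s worth of frames
        DXGI_OUTDUPL_FRAME_INFO info{};
        IDXGIResource* desktopImage = nullptr;
        if (SUCCEEDED(dup->AcquireNextFrame(16, &info, &desktopImage))) {
            // desktopImage is a GPU texture holding the latest desktop content;
            // a compositor would sample it onto the AR projection surface here.
            desktopImage->Release();
            dup->ReleaseFrame();
        }
    }
    dup->Release();
    return 0;
}
```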

Compared to your typical game, this is a relatively modest workload, but it involves quite a bit of OS shenanigans and overheads, in an environment that is rather unforgiving in terms of real-time demands. The hybrid drivers I mentioned above work in a privileged OS kernel context and do not require user-mode services or foreign APIs like RDP: it’s a somewhat unfair competition.

Yet other VR headsets manage virtual desktops without being any more intelligent: I’ve got VR desktops bent and shaped in many ways operating quite easily on everything that runs Oculus or SteamVR, but only mirroring the single primary screen.

So far none of them has tried extra screens, and perhaps that is an area where Microsoft doesn’t offer a fast path that the VR desktops could use… I am pretty much wild guessing here, as you might be able to tell.

But it seems to indicate that Xreal is simply not putting in the resources for writing and testing the app that are required to get the job done. Of course that’s difficult for a startup, but at the current rate of progress, I’d see the Air² breaking from firmware reflashes or old age before it becomes usable, which is why I am trying to return mine after this first glimpse of software… and quite a few hours of testing.


I’ve spent another couple of hours testing this release with my big Ryzen workstation:

  • AMD Ryzen 9 5950X on Gigabyte X570 Aorus Ultra, 128GB ECC RAM, Nvidia RTX 4090 with latest CUDA 12.3.2 drivers, Windows 10 22H2 fresh with January patches on WD PCIe v4 NVMe

to which I’ve added

  • Intel Maple Ridge dual Thunderbolt 4 controller from Gigabyte, which turns one of the RTX 4090 DP outputs into the USB-C Alt-DP port that the Air² Pro needs to operate

And while I’ve managed to get Nebula to run in all configurations, triple screens and 32:9 @ “90Hz” included, the display is still very laggy and practically unusable.

While the invisible source screens for the virtual monitor(s) reflect 72Hz and 90Hz refresh rates, there actually isn’t a noticeable difference, because the real issue is the repaints of the projection surface in AR space: they are way too slow.

When you move your head very slowly, things are relatively smooth (text still washes out, so when you try to move between text passages in documents displayed on different screens, you have to wait until it settles and you can refocus), but when you move your head normally, the actual refresh is still somewhere around 10Hz, too tiresome for actual work, no different from a TigerLake or AlderLake onboard Xe 96EU iGPU or all the other systems I’ve tried.
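To separate the link rate from the repaint rate, you can time vertical blanks on the output directly. A hypothetical DXGI sketch (my ~10Hz figure above remains an eyeball estimate; this only measures what the display link itself is doing):

```cpp
// vblank_rate.cpp - measure the actual refresh rate of a display output.
// Build (MSVC): cl /EHsc vblank_rate.cpp dxgi.lib
#include <dxgi.h>
#include <chrono>
#include <cstdio>

int main() {
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;
    IDXGIAdapter* adapter = nullptr;
    if (factory->EnumAdapters(0, &adapter) != S_OK)      // first GPU
        return 1;
    IDXGIOutput* output = nullptr;
    if (adapter->EnumOutputs(0, &output) != S_OK)        // pick the glasses' output index
        return 1;

    const int frames = 300;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < frames; ++i)
        output->WaitForVBlank();                         // blocks until the next vblank
    std::chrono::duration<double> secs =
        std::chrono::steady_clock::now() - start;
    printf("link refresh: %.1f Hz\n", frames / secs.count());
    return 0;
}
```

On the virtual screens this should report the nominal 72 or 90Hz, which is exactly the point: the link runs at full rate while the AR-space repaints crawl.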

Currently there are very few systems more powerful than this combination, and it just doesn’t do the job.

My primary goal is enhanced productivity on a laptop, but since this is a workstation I also use for gaming, I tried to see if that would work.

I have not been able to get a single game running on that virtual 32:9 screen. Most of the time Nebula would simply crash and terminate with the glasses going back into passive mode. Sometimes the entire system became unresponsive and had to be reset.

My personal alpha-to-product completion meter for Nebula hovers at the 10% mark: it’s quite unusable for any work or fun.

I’ve demanded that Xreal take their product back, because it simply does not work.

They have refused, because I let them try to fix things beyond the 30-day return window…
I recommend you do not make the same mistake.

I’ve opened a VISA chargeback case and will let you know if it succeeds.

DO NOT BUY THIS PRODUCT unless you are very sure that your use case does not depend on Windows software currently working.

Hi

I just got sent this link by an XReal dev when I asked about iOS Nebula (still not here).

I’m trying this with a pair of Light glasses on a big MS Surface Studio 2 and it does not work. Sadly I just noticed the other comments regarding no support for the Light… wow.

Leigh

As Abufrejoval correctly noted, Nebula/Windows is a Unity app, and IMO Nreal/Xreal got invested in Unity and Android because it allowed very rapid application development based on OpenCV modules for Unity. Basically, getting the Nreal Light out and making it work well enough.

Unity is stumbling and most phones today don’t support Alt-DP - I can only hope Xreal will bite the bullet and release a pure driver (OpenXR/Win/Android/iOS) for its otherwise excellent hardware!

There are thousands of applications for OpenXR, and many devs would be only too happy to build them against the Xreal ecosystem.

Following up on reports of cables that merge HDMI/DP+USB3 into a USB-C “Alt-DP” connector suitable for the Air², I ordered one as the far easier and lighter alternative to the Thunderbolt adapter. For a pure cable solution these are not cheap at around €30, as they are evidently a niche product. It’s also nowhere near as light and pliant as the cable Xreal delivers, quite uncomfortable actually, but a bit longer, to accommodate the longer distance to a workstation’s tower chassis.

It arrived today and I just tried it on my Serpent Canyon NUC12, which combines an Alder-Lake i7 (with its 96EU Xe iGPU) with an ARC 770m dGPU. Both TB/USB-C ports are physically connected to the Xe iGPU; two DP ports and one HDMI port are physically connected to the ARC A770m.

In normal screen operation, hybrid graphics has games executing on the ARC dGPU and displaying on my 4k screen via USB-C without noticeable performance degradation.

Nebula doesn’t seem to support hybrid graphics, where the dGPU renders into the frame buffer of the iGPU, which then connects to the monitor with a tiny overhead. This is by far the most typical setup in slim laptops, because it allows easy and seamless switching to iGPU-only mode, gaining quite a bit of power efficiency for desktop work.

Only high-end notebooks (and NUCs) tend to offer discrete ports for the dGPU, but then only as HDMI or DP ports.

Anyway, Nebula failed on that NUC when using the USB-C ports, even though those support DP output, but with the cable I was able to use the ARC A770m with Nebula…

But the results are nearly the same as they are on an Xe iGPU: not usable.

  • Both passive modes are OK. Mirrored THD @ 120 Hz works and so does the 32:9 “per-eye” split at 60Hz. But that should work with any DP connector on any OS.
  • Nebula at 90Hz terminates after a moment or two: the Air screen switches from passive to blank and then to passive again, and Nebula hangs
  • Nebula at 72Hz works in all modes, with up to three monitors or the 32:9 curved screen. But I cannot get rid of terrible tearing: I toggled all “sync” related options without any visible benefit, and I cannot find any specific “V-Sync” setting I can set globally or just for Nebula. Refresh rates are also terrible; it’s the same ~10Hz refresh when I turn my head at anything more than millimeters per second. And even if I turn slowly enough for “smooth”, text washes out and becomes unreadable at normal full-width document sizes.

This can’t be an issue of insufficient GPU power: apart from 90Hz working on the RTX 4090, the refresh is just as bad there. At least with an Nvidia dGPU I can eliminate the tearing with V-sync, which I believe games/apps can demand and activate themselves (see the sketch below), so there should be no need to change a global setting on a system that isn’t exclusively used with Xreal glasses.
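For reference, this is all it takes for an application to demand tear-free presentation itself under DXGI: the sync interval passed to Present() each frame decides it, no global driver setting needed. A minimal self-contained sketch (hypothetical, obviously not Nebula’s code):

```cpp
// vsync_demo.cpp - the presenting app chooses v-sync per frame under DXGI.
// Build (MSVC): cl /EHsc vsync_demo.cpp d3d11.lib user32.lib
#include <windows.h>
#include <d3d11.h>

int main() {
    HINSTANCE inst = GetModuleHandleW(nullptr);
    WNDCLASSW wc{};
    wc.lpfnWndProc   = DefWindowProcW;
    wc.hInstance     = inst;
    wc.lpszClassName = L"vsync_demo";
    RegisterClassW(&wc);
    HWND hwnd = CreateWindowW(L"vsync_demo", L"vsync demo",
                              WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                              100, 100, 640, 480, nullptr, nullptr, inst, nullptr);

    DXGI_SWAP_CHAIN_DESC sd{};
    sd.BufferCount       = 2;
    sd.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    sd.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    sd.OutputWindow      = hwnd;
    sd.SampleDesc.Count  = 1;
    sd.Windowed          = TRUE;
    sd.SwapEffect        = DXGI_SWAP_EFFECT_FLIP_DISCARD;

    IDXGISwapChain* swap = nullptr;
    ID3D11Device* dev = nullptr;
    ID3D11DeviceContext* ctx = nullptr;
    if (FAILED(D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE,
                                             nullptr, 0, nullptr, 0,
                                             D3D11_SDK_VERSION, &sd,
                                             &swap, &dev, nullptr, &ctx)))
        return 1;

    for (int i = 0; i < 600; ++i)
        swap->Present(1, 0);   // sync interval 1: wait for vblank, no tearing;
                               // Present(0, 0) would present immediately and may tear
    return 0;
}
```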

If you need Nebula on Windows working as advertised, do not buy, or quickly return, your glasses, because Xreal doesn’t seem to accept returns beyond 30 days after their partial, hardware-only delivery… at least not voluntarily.

I’m currently exercising my EU consumer protection options to get my money back.

It kind of works, but yes, Nebula keeps rolling back the firmware after connecting to the Beam, which is something I’m not happy about.
Can we just get the existing Nebula devs to release the code so that the community can edit and extend the software as open source? Maybe that way we can create a Nebula-Beam Windows app that doesn’t rely on the GPU but can be processed by the APU and the Beam; that way the Beam becomes more useful.

I have to resort to wild guessing here, because Xreal doesn’t provide the information required to do better…

The Beam has rather seamless refreshes. It doesn’t fix the washed-out text, but when it comes to just moving the head around in gimbal mode, it showed adequate performance.

The hardware inside the Beam may just be the Nreal prototype hardware, something like a Snapdragon 845-865, a high-end mobile SoC from a couple of years ago with a maximum of perhaps 5 Watts of total sustained power budget. Or it may be something cheaper, a MediaTek or similar, given the price it sells at.

That’s not a lot of graphics power and quite within the range of what “established” iGPUs like Cezanne APUs or the 80-96EU Xe in TigerLake U and onwards support, which typically get 5 Watts for exclusive iGPU use. Even if Qualcomm was able to work ‘magic’ in their mobile GPUs, that’s quite a bit more GPU power than the Beam likely has available.

If Xreal demands an RTX3060 or better for Nebula to work, I believe they are just raising the stakes somewhat blindly or misleadingly to cover a glaring defect. And from my testing I can tell that there is simply no difference between an Xe, ARC A700m, RTX2080ti and an RTX4090, which are perhaps 50:1 apart in GPU power.

The Beam beats the RTX4090 in smoothness? It’s not a hardware issue.

It is a problem that distributing the workload between two different GPU devices over a low-bandwidth link will not solve, even if that could realistically be done without rearchitecting the Unity engine.

Where exactly the problems lie, they are not telling. Whether it’s because they do not want to lose face, or whether it’s the typical secrecy of a Chinese company that knows just how many fellow engineers stand ready to clone and rip off their product, which is, after all, based on freely available commodity components, is again hard to tell.

But the lack of consumer trust this attitude will create may hurt them badly going forward.

Open source has become more of a weapon, one most easily wielded by Internet giants, who have other means of assuring exclusivity than a startup like Xreal has.

It isn’t that simple. There are five kinds of compute: CPU, GPU, DSP, VPU, and TPU (though DSP and TPU are sometimes lumped together). CPU and GPU are very familiar to PC enthusiasts.
DSP/VPU/TPU not so much. These are either designed into SoCs directly or provided as separate ASIC chips.

All modern XR platforms are absolutely reliant on DSPs and TPUs to achieve low-latency, low-power tracking performance. The difference between running an algorithm on the DSPs of a Snapdragon Gen 1, or even of the Xreal Beam (with its bottom-barrel 5-year-old Rockchip SoC), and running the same algorithm on a desktop GPU is staggering.

Desktop PC hardware does not have the dedicated compute for these algorithms today. A reasonable analogy is video encoding on a CPU versus a VPU like Nvidia NVENC or AMD VCE: with VPUs you are looking at a 10-30x performance difference and, more importantly, 70-90% power savings on top of it. With DSPs/TPUs the disparity is even larger.

This is why every XR platform seems to build only for mobile - those SoCs have the onboard hardware needed. I’m still surprised Xreal has released desktop capabilities at all - and not at all surprised at how janky and limited they are.

This isn’t going to get any better until AMD/Intel start putting this stuff on die, or Nvidia puts it into their GPUs, or someone launches a dongle for XR like the Coral TPU does for computer vision acceleration.


Thanks for the insight! I sure did overlook that aspect!

I was actually quite impressed by the job the Beam does in terms of sensor fusion: given that there are no inside-out cameras involved and it has to rely entirely on inertial and magnetic sensors, the stability and speed at sensing the headset’s orientation was impressive.

I remember how Google Cardboard would struggle, and how even ‘extended’ Cardboard setups like the LeEco LeMax2 with its LeVR adapter, which had optimized gyros, would show noticeable drift.
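To illustrate why pure gyro tracking drifts and what fusion buys you, here’s a toy complementary filter, entirely my own sketch with hypothetical names (pitch/roll only; holding yaw steady is what the magnetic sensor is for, and real headsets use far more elaborate filters): integrate the gyro for responsiveness, then keep nudging the result toward the gravity direction reported by the accelerometer, which is noisy but drift-free.

```cpp
// fuse.cpp - toy complementary filter: gyro for speed, gravity for stability.
#include <cmath>
#include <cstdio>

struct Attitude { float pitch = 0.f, roll = 0.f; };   // radians

// gx, gy: gyro rates (rad/s); ax, ay, az: accelerometer; dt: time step (s)
Attitude fuse(Attitude a, float gx, float gy,
              float ax, float ay, float az, float dt) {
    // 1) Dead-reckon with the gyro: smooth and fast, but drifts over time.
    a.pitch += gx * dt;
    a.roll  += gy * dt;

    // 2) Absolute (drift-free, but noisy) estimate from the gravity direction.
    float accPitch = std::atan2(ay, std::sqrt(ax * ax + az * az));
    float accRoll  = std::atan2(-ax, az);

    // 3) Complementary blend: trust the gyro short-term, gravity long-term.
    const float k = 0.98f;
    a.pitch = k * a.pitch + (1.f - k) * accPitch;
    a.roll  = k * a.roll  + (1.f - k) * accRoll;
    return a;
}

int main() {
    Attitude a{};
    // 200 fake samples at 100Hz: a biased gyro, device actually lying flat.
    for (int i = 0; i < 200; ++i)
        a = fuse(a, 0.01f /* bias, rad/s */, 0.f, 0.f, 0.f, 9.81f, 0.01f);
    // Pure integration would have drifted to 0.02 rad by now; the blend
    // keeps pitch near the accelerometer's 0 rad instead.
    printf("pitch after 2s: %.4f rad\n", a.pitch);
    return 0;
}
```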

Actually, even x86 APUs and Atom SoCs tend to have DSP IP blocks on them, which might be able to handle such workloads, but they are so diverse and proprietary, and lack any common API, that they might as well not be there at all.

And I guess that all of my VR headsets, even the ones that aren’t autonomous, like the early Oculus or the HP Reverb, do that on the headset…

Again, the main problem is that Xreal doesn’t say anything about the design of the hardware and what each component is capable of doing. The glasses have firmware, and they certainly have the sensors.

Whether sensor fusion, with all that DSP processing, is already done on the headset itself, or whether large chunks of that workload are pushed to the host, customers can only guess. The fact that their API doesn’t seem to support reading out individual sensors might suggest that an abstraction layer is running on the glasses themselves.

If they need a Hexagon-style DSP monster on the host to make Nebula “AR” work, they should have validated what can be done on a PC before selling a product they cannot deliver.

If the facilities inside the glasses aren’t enough for these tight real-time workloads, and even giant x86 CPUs and Nvidia GPUs can’t deliver in time, then Xreal evidently just didn’t do a proper design and product analysis and oversold hardware that is incapable of delivering what they sold.

And with that, they need to accept full returns because the product is fundamentally defective.

It’s the cost and responsibility of doing business.

I tested version 0.7.0 with my AMD RX 6600, but without success. I use an HDMI to Type-C adapter for this. But it works with GingerXR. Has anyone had any success getting the Nebula software to work using this method?

Will it be possible to start the program later without admin rights? I don’t have admin rights on my office notebook.

It requires admin permissions. And you can try this link for troubleshooting: XReal Air — Multi-Monitor AR Setup on Windows using Nebula & Troubleshooting | by Han N | Medium

I had some initial trouble setting up the wide screen (it was the normal resolution stretched to wide; I had to set the resolution manually), some screen tearing, and starting with the fast refresh rate freezes the Nebula app on one of my laptops, but otherwise it works great. I wish I could add more virtual screens.

Any chance of releasing Nebula for Windows as open source? At least in part, so that the community can help develop it? I am also a developer; I am sure I couldn’t write Nebula for Windows myself from scratch, but I could surely extend it, and if not just me, the community is much greater and eager to help. I think it would be a great benefit and help to your product and should save you at least some time in development.

EDIT: Also, how about some higher resolutions on virtual screens? If I can set the size and distance so that I only see a part of a screen through the “viewport”, I don’t see any reason why the resolution should be limited to 1080p. I am running 2x4K + 1xFHD on hardware monitors, so the rendering capability must exist: each 4K panel is four FHD screens’ worth of pixels, so that’s 9 FHD screens of pixel real estate.

In the end Xreal has accepted their responsibility and they did reimburse me.

I recommend that everyone not satisfied with their “beta approach” do the same.
Ultimately it should result in a better company making better products.

I am currently using Nebula for Windows v0.8.0. The reason I am writing this is that I am trying to find out where I got it, since I can’t find the link either here or on Reddit. Is this currently the latest version? The last version I see mentioned here is 0.7.0.

The glasses keep updating up and down as I move them from one Windows PC to another and back. Is this because of different versions of Nebula? Does upgrading the glasses from the new Nebula and then using them with the old Nebula restore the older firmware? In that case I would always have to update all of my machines whenever I update one.

How about releasing all updates in the same place? Is there a specific place where I can always count on finding the latest version?

Found it. You can get Nebula for Windows v0.8.0 from here: https://www.reddit.com/r/Xreal/comments/1bst2rk/big_software_update/

Hi there,

Is the Xreal Air 2 Pro compatible with this notebook specification for using Nebula for Windows?

Notebook: LENOVO IDEAPAD PRO 5 14IMH9
OS: Windows 11 Pro X64
Processor: Intel® Core™ Ultra 7 155H, 16C (6P + 8E + 2LPE) / 22T, Max Turbo up to 4.8GHz, 24MB
Graphics: Integrated Intel® Arc™ Graphics
Chipset: Intel® SoC Platform
Memory: 32GB Soldered LPDDR5x-7467
Display: 14" 2.8K (2880x1800) OLED 400nits Glossy, 100% DCI-P3, 120Hz, Eyesafe®, DisplayHDR™ True Black 500
Connectivity:

  • 1x USB-C® (USB 10Gbps / USB 3.2 Gen 2), with USB PD 3.0 and DisplayPort™ 1.4
  • 1x USB-C® (Thunderbolt™ 4 / USB4® 40Gbps), with USB PD 3.0 and DisplayPort™ 1.4
  • 1x HDMI® 2.1, up to 4K/60Hz

Thank you!