External mini FPGA board for 6DOF computation - OS and Host Hardware Independence for OpenXR

To become independent of the host hardware and host OS, there needs to be a way to compute 6DOF on external hardware.
One of the better ways to achieve this is to use an external mini FPGA board.
It should pull the tracking camera feed from the glasses and produce 6DOF data as output.
This board has to be transparent to the host, i.e. it must require no board-specific drivers; in other words, with the board enabled, the host should still identify the glasses as a standard HMD.
Glasses > USB-C > FPGA board > USB-C > Host
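
As a concrete illustration, here is a minimal sketch of what the board's 6DOF output could look like if it were streamed to the host as a small fixed-size packet alongside the standard HMD interface. The field layout, names and units are assumptions for illustration only, not an existing Nreal/XREAL format.

```c
/* Hypothetical fixed-size 6DOF pose packet the FPGA board could stream to the
 * host next to the standard HMD interface. Field layout, names and units are
 * assumptions for illustration; this is not an existing Nreal/XREAL format. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t timestamp_ns;   /* capture time of the camera frame, nanoseconds       */
    float    position[3];    /* x, y, z in metres, relative to the tracking origin  */
    float    orientation[4]; /* unit quaternion (x, y, z, w)                        */
    uint32_t tracking_state; /* e.g. 0 = lost, 1 = orientation only, 2 = full 6DOF  */
} pose_packet_t;

int main(void) {
    pose_packet_t p = {
        .timestamp_ns   = 123456789ULL,
        .position       = {0.0f, 1.6f, 0.0f},       /* roughly standing eye height  */
        .orientation    = {0.0f, 0.0f, 0.0f, 1.0f}, /* identity rotation            */
        .tracking_state = 2
    };
    printf("pose_packet_t is %zu bytes; y = %.2f m\n", sizeof p, (double)p.position[1]);
    return 0;
}
```

A host-side OpenXR runtime could map a packet like this straight onto an XrPosef (orientation plus position) without any board-specific driver, which is what keeps the setup OS and hardware independent.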

This external compute board may also be utilized for hand tracking and other AI/ML tasks.
The only constraint is that it needs to remain low cost.
It can be an optional product for people who want to use OpenXR across a variety of host hardware.
It would be a good idea to integrate this board into the HDMI adapter.

@XREAL-dev
What are the chances that Nreal will come up with its own mini FPGA board to compute 6DOF, or support existing AI/ML mini boards for this purpose?
Does Nreal’s current license with Qualcomm allow doing this?

Cheers

Hi, we don’t plan to make our own mini board for this purpose. Instead, we will promote other units.

Thanks @XREAL-dev

How long would it take Nreal to write 6DOF compute software for the other external FPGA units?
How do you plan to connect the glasses to the board if there is no USB-C passthrough capability?
Will you have a splitter board that separates the camera feed into a separate USB connection?
Are there boards with USB-C passthrough?
Otherwise, the transparency of the 6DOF processing to the host is lost.

Feeding the cameras through the PC, as opposed to transparently through the board, will be taxing on the host CPU; we would have to see workload performance results to assess the feasibility of connecting both the board and the glasses directly to the PC.
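
To put a rough number on that, here is a back-of-the-envelope sketch of the raw USB bandwidth needed to stream the tracking frames to the host instead of terminating them on the board. The camera resolution, frame rate and pixel depth are assumed values for illustration; the actual camera specifications may differ.

```c
/* Back-of-the-envelope bandwidth for streaming raw tracking frames to the host.
 * Resolution, frame rate and pixel depth are assumptions, not XREAL specs. */
#include <stdio.h>

int main(void) {
    const double width    = 640.0;  /* assumed pixels per row       */
    const double height   = 480.0;  /* assumed rows per frame       */
    const double bytes_px = 1.0;    /* assumed 8-bit greyscale      */
    const double fps      = 60.0;   /* assumed frame rate           */
    const double cameras  = 2.0;    /* assumed stereo tracking pair */

    const double bytes_per_sec = width * height * bytes_px * fps * cameras;
    printf("Raw camera stream: %.1f MB/s (%.1f Mbit/s)\n",
           bytes_per_sec / 1e6, bytes_per_sec * 8.0 / 1e6);
    /* Prints ~36.9 MB/s (~294.9 Mbit/s) that the host must receive and copy
     * before any feature extraction or SLAM work even starts.               */
    return 0;
}
```

Even under these modest assumptions, the host has to ingest tens of megabytes per second before doing any actual tracking work, which is exactly the load a pass-through board would absorb.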

Intel will have VPUs built into their CPUs next year.
AMD will do something on their end.
Nvidia has their own take on this.
And there are a few AI accelerator boards floating around.

Is it possible to write 6DOF compute code that works across all of the above? I very much doubt it.
In any case, this pathway is fully dependent on the host hardware and OS. It will also be the longest pathway, with the largest number of constraints, including the lack of transparent USB-C pass-through.

It makes all the sense in the world to create a board with USB-C pass-through, write your own FPGA firmware, and be independent of all these companies’ agendas.
This will take the least amount of effort and time whilst producing the best outcome for both Nreal and the customer.
Why wouldn’t Nreal want to go this way?
Is there a chip coming next year that would allow 6DOF compute onboard the glasses, independent of the host?