Access Camera Output for AI Model

Hello, I am an AI developer interested in experimenting with depth and multimodal analysis for object segmentation and isolation in a personal project built around the glasses.

This is not a public-facing project planned for release, so there are no privacy concerns; I just want to get camera output from the glasses so I can feed it to a model and then isolate and analyze objects in the user's POV.

This is something I plan to develop once my Air 2 Ultra glasses are delivered.

If it is impossible to access RGB camera feeds, which I hope is not the case, is there another way to pipe the feed from a separate camera module (such as a Raspberry Pi camera) into the development environment and send that data to the glasses?

Thank you.

Thank you for your interest. I hope you have carefully reviewed the specifications of the Air 2 Ultra glasses. The Ultra does not have an RGB camera; it only has two grayscale cameras. Accessing the raw data from these grayscale cameras requires contacting our business team and going through an internal process, which can be somewhat complex.

Additionally, the Ultra currently does not support external device data input.

Good to know. What about the XREAL development platform: can you connect a camera to a compute platform that also communicates with the glasses, so that it acts as a kind of intermediary for the data?
@Dorix
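For what it's worth, the relay idea can be sketched independently of any XREAL API: the external camera module serializes each frame into a simple self-describing message, and the compute host parses it back before handing the pixels to a model (or to whatever the glasses SDK accepts). Everything below, including the header layout and function names, is a hypothetical sketch of such a wire format, not XREAL code:

```python
import struct

# Hypothetical wire format for relaying raw frames from a separate
# camera module (e.g. a Raspberry Pi) to the host that talks to the
# glasses. Header: width, height, channels, payload length, in
# network byte order, followed by the raw pixel bytes.
HEADER = struct.Struct("!HHBI")

def pack_frame(width: int, height: int, channels: int, pixels: bytes) -> bytes:
    """Serialize one frame into a self-describing message."""
    assert len(pixels) == width * height * channels, "pixel count mismatch"
    return HEADER.pack(width, height, channels, len(pixels)) + pixels

def unpack_frame(message: bytes):
    """Parse a message produced by pack_frame back into its parts."""
    width, height, channels, length = HEADER.unpack_from(message)
    pixels = message[HEADER.size:HEADER.size + length]
    return width, height, channels, pixels

if __name__ == "__main__":
    # Round-trip a dummy 4x2 single-channel (grayscale) frame.
    raw = bytes(range(8))
    msg = pack_frame(4, 2, 1, raw)
    print(unpack_frame(msg) == (4, 2, 1, raw))  # True
```

In practice the sender would push these messages over TCP or UDP and the host would decode them into arrays for the segmentation model; the length-prefixed header is what lets the host split a byte stream back into frames.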