Hello, I am an AI developer interested in experimenting with depth and multimodal analysis for object segmentation and isolation in a personal project with the glasses.
This is not a public-facing project planned for release, so there are no privacy concerns. I just want to get the camera output from the glasses so I can feed it to a model and then isolate and analyze objects in the user's POV.
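To give a sense of the kind of isolation step I have in mind, here is a minimal sketch of depth-band segmentation on an RGB-D frame. It uses a synthetic frame and a hypothetical `isolate_by_depth` helper of my own; the frame shapes and depth range are assumptions for illustration, not anything from the glasses' actual API.

```python
import numpy as np

def isolate_by_depth(rgb, depth, near, far):
    """Keep RGB pixels whose depth (in meters) falls in [near, far),
    zero everything else, and return the bounding box of the kept region."""
    mask = (depth >= near) & (depth < far)
    isolated = np.where(mask[..., None], rgb, 0)
    ys, xs = np.nonzero(mask)
    bbox = None
    if ys.size:
        bbox = (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))
    return isolated, bbox

# Synthetic 8x8 scene: background at 3.0 m, a 3x3 "object" at 1.0 m.
depth = np.full((8, 8), 3.0)
depth[2:5, 3:6] = 1.0
rgb = np.full((8, 8, 3), 255, dtype=np.uint8)

# Isolate whatever sits between 0.5 m and 2.0 m from the viewer.
isolated, bbox = isolate_by_depth(rgb, depth, near=0.5, far=2.0)
print(bbox)  # (2, 3, 4, 5)
```

In the real pipeline the synthetic arrays would be replaced by the per-frame RGB image and depth map coming off the glasses (or whatever camera source ends up being available), and the crude depth band would be replaced by a learned segmentation model.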
This is something I plan to develop once my Air 2 Ultra glasses are delivered.
If accessing the RGB camera feed is impossible, which I hope is not the case, is there an alternative: could I pipe video from a separate camera module (such as a Raspberry Pi Camera) into the development environment and send that data to the glasses?
Thank you.