I want to develop a platform that uses a camera’s RGB input to segment and analyze elements of a user’s field of view (FOV), so those elements can be discussed with a conversational assistant.
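To make that concrete, this is roughly the per-frame loop I have in mind on the computer side. It is only a sketch: the segmentation model (DeepLabV3 from torchvision) is a placeholder, and the hand-off to the assistant is stubbed out.

```python
import cv2
import torch
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights
from torchvision.transforms.functional import to_tensor

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
labels = weights.meta["categories"]

cap = cv2.VideoCapture(0)  # external RGB camera mounted on the glasses

while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        out = model(to_tensor(rgb).unsqueeze(0))["out"][0]
    classes = out.argmax(0).unique().tolist()
    seen = [labels[c] for c in classes if labels[c] != "__background__"]
    # TODO: hand `seen` (plus masks/boxes) to the conversational assistant
    print("Objects in FOV:", seen)

cap.release()
```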
I understand the Air 2 Ultra’s onboard cameras are grayscale and that developer access to them is limited. Has anyone successfully used another camera as input in a development application that sends data to the Air 2 Ultra?
My plan is to mount a small camera on the glasses, connect it to a computer, and then connect the computer to the glasses, so the computer acts as the intermediary that processes the camera feed and sends results to the glasses.
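For that intermediary step, the idea would be to keep the heavy processing on the computer and only push compact results over the network to the app driving the glasses. A minimal sketch of the computer-side sender, assuming a made-up address and a JSON-over-TCP message format:

```python
import json
import socket
import time

# Hypothetical: address of the device running the Unity/NRSDK app for the glasses.
GLASSES_APP_ADDR = ("192.168.1.50", 9000)

def send_scene_update(sock: socket.socket, objects: list) -> None:
    """Send one newline-delimited JSON message describing the current FOV."""
    msg = {"t": time.time(), "objects": objects}
    sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))

if __name__ == "__main__":
    with socket.create_connection(GLASSES_APP_ADDR) as sock:
        # In the real pipeline this list would come from the segmentation loop above.
        send_scene_update(sock, ["person", "chair", "bottle"])
```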
Where should I start with this: the XREAL dev tools (NRSDK) or Unity?
Thanks