Hi,
I have an application that uses the CameraProxy to capture images that are passed on to another module which does some image processing on them:
// create the RGB camera proxy, request YUV_420_888 frames, and start capturing
m_NativeCameraProxy = CameraProxyFactory.CreateRGBCameraProxy();
m_NativeCameraProxy.SetImageFormat(CameraImageFormat.YUV_420_888);
m_NativeCameraProxy.Play();
and later:
// fetch the latest camera frame
var frame = m_NativeCameraProxy.GetFrame();
... do further processing with frame
This works fine.
Now I want to add RTP video streaming so that others can see on a PC what I am seeing through the glasses. For streaming I basically use the VideoCapture2RTPExample, which internally uses NRVideoCapture.StartVideoModeAsync() to get an RGB image.
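For reference, this is roughly the relevant part of my streaming setup, condensed from VideoCapture2RTPExample. The resolution selection and the rtp:// address are placeholders, and the exact API surface may differ slightly between NRSDK versions:

using System.Linq;
using NRKernal.Record;
using UnityEngine;

public class RTPStreamer : MonoBehaviour
{
    NRVideoCapture m_VideoCapture;

    void Start()
    {
        NRVideoCapture.CreateAsync(false, OnVideoCaptureCreated);
    }

    void OnVideoCaptureCreated(NRVideoCapture videoCapture)
    {
        if (videoCapture == null)
        {
            Debug.LogError("Failed to create NRVideoCapture.");
            return;
        }
        m_VideoCapture = videoCapture;

        // pick the highest supported resolution/framerate, as the sample does
        Resolution resolution = NRVideoCapture.SupportedResolutions
            .OrderByDescending(res => res.width * res.height).First();

        CameraParameters cameraParameters = new CameraParameters();
        cameraParameters.hologramOpacity = 0.0f;
        cameraParameters.frameRate = NRVideoCapture.GetSupportedFrameRatesForResolution(resolution)
            .OrderByDescending(fps => fps).First();
        cameraParameters.cameraResolutionWidth = resolution.width;
        cameraParameters.cameraResolutionHeight = resolution.height;
        cameraParameters.pixelFormat = CapturePixelFormat.BGRA32;
        cameraParameters.blendMode = BlendMode.Blend; // blend the AR overlay into the stream

        m_VideoCapture.StartVideoModeAsync(cameraParameters, OnVideoModeStarted);
    }

    void OnVideoModeStarted(NRVideoCapture.VideoCaptureResult result)
    {
        // "192.168.0.10:5555" is a placeholder for the receiving PC
        m_VideoCapture.StartRecordingAsync("rtp://192.168.0.10:5555", OnRecordingStarted);
    }

    void OnRecordingStarted(NRVideoCapture.VideoCaptureResult result)
    {
        Debug.Log("RTP streaming started: " + result.success);
    }
}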
The problem is that when I use both methods at the same time, the image captured from the NativeCameraProxy looks completely broken, just as if the raw image data were interpreted as a pixel format it is not (e.g. the raw data is RGB but is interpreted as YUV). As a result, my computer vision algorithms fail on those images. The stream that I receive on the PC, however, looks fine.
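For context, this is roughly how I interpret the YUV_420_888 data on my side. The helper is my own (not part of NRSDK) and assumes tightly packed planes; real YUV_420_888 buffers can carry row and pixel strides that would also need to be handled:

using System;

static class YuvSanityCheck
{
    // my own sanity-check helper: converts one pixel of planar YUV_420_888
    // data to RGB with the BT.601 full-range formula, to verify that the
    // buffer really contains YUV data; assumes tightly packed planes
    public static (byte r, byte g, byte b) PixelToRgb(
        byte[] yPlane, byte[] uPlane, byte[] vPlane, int width, int x, int y)
    {
        int Y = yPlane[y * width + x];
        int U = uPlane[(y / 2) * (width / 2) + (x / 2)] - 128; // chroma is subsampled 2x2
        int V = vPlane[(y / 2) * (width / 2) + (x / 2)] - 128;

        int r = (int)Math.Round(Y + 1.402 * V);
        int g = (int)Math.Round(Y - 0.344136 * U - 0.714136 * V);
        int b = (int)Math.Round(Y + 1.772 * U);

        return ((byte)Math.Clamp(r, 0, 255),
                (byte)Math.Clamp(g, 0, 255),
                (byte)Math.Clamp(b, 0, 255));
    }
}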
I have two questions:
- Is there a way to use both capturing methods at the same time without messing up one of the images?
- If the first is not possible, how can I get images that I can pass to a function and at the same time stream them via RTP (with the AR overlay rendered)?
Thank you in advance!