Why video capture might not capture everything + a solution

Hello everyone!

I’m working with point clouds and was having trouble capturing a first-person video. I believe my problem was quite similar to the ones described in these two posts:
post 1 by @segur.opus
post 2 by @bDunph

In my case, I’m rendering a point cloud using a series of custom shaders (both a compute shader and a vertex/fragment shader), and I’m using DrawProceduralIndirect to render my model.
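For context, here is a minimal sketch of what such a setup might look like (all names, like PointCloudRenderer, CSMain and _Points, are illustrative, not my actual project code):

```csharp
using UnityEngine;

// Minimal sketch of a point cloud rendered via DrawProceduralIndirect.
// A compute shader fills a point buffer each frame, then the draw call
// is submitted with a custom point material.
public class PointCloudRenderer : MonoBehaviour
{
    public ComputeShader pointCompute;  // fills _Points (illustrative kernel name below)
    public Material pointMaterial;      // custom vertex/fragment shader reading _Points
    public int pointCount = 100000;

    ComputeBuffer pointBuffer;
    ComputeBuffer argsBuffer;

    void OnEnable()
    {
        pointBuffer = new ComputeBuffer(pointCount, sizeof(float) * 3);
        // Indirect args: vertex count, instance count, start vertex, start instance.
        argsBuffer = new ComputeBuffer(1, 4 * sizeof(uint), ComputeBufferType.IndirectArguments);
        argsBuffer.SetData(new uint[] { (uint)pointCount, 1, 0, 0 });
        pointMaterial.SetBuffer("_Points", pointBuffer);
    }

    void LateUpdate() // this placement is exactly what breaks video capture, see below
    {
        int kernel = pointCompute.FindKernel("CSMain");
        pointCompute.SetBuffer(kernel, "_Points", pointBuffer);
        pointCompute.Dispatch(kernel, Mathf.CeilToInt(pointCount / 64f), 1, 1);

        // Large bounds so the draw is never frustum-culled away by Unity itself.
        Graphics.DrawProceduralIndirect(pointMaterial,
            new Bounds(Vector3.zero, Vector3.one * 1000f),
            MeshTopology.Points, argsBuffer);
    }

    void OnDisable()
    {
        pointBuffer?.Release();
        argsBuffer?.Release();
    }
}
```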

After a few hours losing my mind over why my model wasn’t rendered in the exported video while the rest was, and thanks to the author of post 1, who pointed me towards the blending shaders, I finally solved the problem on my end:

EDIT: this section is wrong, check my next post; there is no problem with the shaders.
-Shader Problem
The shaders used for blending are set to render at Geometry (2000), which will, by default, ignore the transparent and post-process parts of the rendering chain.
In the shaders, I added the tag `"Queue" = "Overlay"`, which seems to be the highest queue ShaderLab defines (4000).
Note that you can always put in a bigger number.
Note also that while digging through the NRSDK, I only found references to NRBackground.shader and NRBackgroundYUV.shader, and those two were enough for it to work, but I may have missed something, so feel free to experiment with the other shaders in the NRSDK/Resources/Record/Shaders folder.
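If you’d rather not edit the SDK shader files, I believe the same queue override can be applied per material at runtime instead; a sketch (how you get hold of the right material depends on your setup):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: overriding a material's render queue at runtime, as an
// alternative to editing the shader's "Queue" tag. Overlay is 4000.
// Note the correction in my next post: for the NRSDK background shaders
// this turned out to be unnecessary anyway.
public class RenderQueueOverride : MonoBehaviour
{
    public Material targetMaterial; // e.g. the material using NRBackground.shader

    void Start()
    {
        targetMaterial.renderQueue = (int)RenderQueue.Overlay; // 4000
    }
}
```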

-Frame Capture Timeline Problem
The way the export works is quite convoluted: rather than capturing from the main camera, it instantiates a prefab with yet another camera and a script.
I can understand why, as you can better manage the video output resolution this way, but be wary if, for some reason, you are rendering only on certain cameras.
Camera.Render() is called manually on this camera during Update!!!
That was a problem for me, as my DrawProceduralIndirect call was in LateUpdate for some (valid) reason. I solved this by putting my draw call in Update and changing the NRKernelUpdater from Update to LateUpdate, as sketched below.
Be careful with that: I don’t know if it has any side effects, as my scene is really simple and only uses plane detection. It is useful to get a few visuals, but it might not be reasonable in a released build.
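In code, the fix amounts to something like this sketch (again with illustrative names; the NRKernelUpdater change itself is made in the SDK sources):

```csharp
using UnityEngine;

// Sketch of the timing fix: the draw call moves from LateUpdate to
// Update, so it is already submitted when the capture camera's manual
// Camera.Render() happens. Separately, NRKernelUpdater is changed in the
// SDK sources to tick in LateUpdate instead of Update.
public class PointCloudRenderer : MonoBehaviour
{
    public Material pointMaterial;
    ComputeBuffer argsBuffer; // created as in the first sketch

    void Update() // was LateUpdate
    {
        Graphics.DrawProceduralIndirect(pointMaterial,
            new Bounds(Vector3.zero, Vector3.one * 1000f),
            MeshTopology.Points, argsBuffer);
    }
}
```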

Lastly, the camera output is not 1:1 with the glasses experience:
The background is a lot more luminous.
It uses a bigger FOV than the glasses. I was doing frustum culling with the main camera to remove points outside the viewport, and it shows in the exported footage. Not sure if it’s a bad thing, but the more you know!
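For reference, the culling I mean is something like this sketch; because the capture camera has a wider FOV than Camera.main, points rejected here can be visibly missing near the edges of the exported footage:

```csharp
using UnityEngine;

// Sketch: testing a point against the main camera's frustum. The capture
// camera sees a wider FOV, so points culled with these planes may be
// absent from the recording even though its camera could have seen them.
public static class FrustumCullingExample
{
    public static bool IsVisible(Camera cam, Vector3 point, float radius)
    {
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(cam);
        var bounds = new Bounds(point, Vector3.one * (radius * 2f));
        return GeometryUtility.TestPlanesAABB(planes, bounds);
    }
}
```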

@XREAL-dev It might be interesting to change the renderQueue of the capture shader to something higher than the Geometry level, and to do the camera rendering later than Update (in OnPreRender or something) so you don’t miss any draw calls.


Okay, some corrections:

The NRBackground shaders are in fact only there to render the background, so there is no need to change the render queue. There is no real blending, just a shader rendering the glasses’ camera image onto a quad. So my first point is quite irrelevant; sorry for being misleading.

I stand by my second point, though. Unity Update calls are, to my knowledge, not ordered across scripts, so your draw calls may or may not be taken into account if they are made during Update. And if they are made in a later event, forget about it.
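If you do need a deterministic order, script execution order can force your Update to run before the capture script’s; a minimal sketch (the -100 value is arbitrary):

```csharp
using UnityEngine;

// Sketch: making Update ordering deterministic. Scripts with a lower
// execution order run first; the same can be configured without code in
// Project Settings > Script Execution Order.
[DefaultExecutionOrder(-100)] // arbitrary, just lower than the default 0
public class EarlyDraw : MonoBehaviour
{
    void Update()
    {
        // Submit draw calls here; any default-order script, including one
        // that manually calls Camera.Render(), will run after this.
    }
}
```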