Rendering in ARDK
Customize how the camera feed is rendered in your app.
Overview
ARDK provides a configurable rendering pipeline. Adding an ARRenderingManager component to your scene, typically on the scene camera object, ensures that the rendering pipeline renders the camera feed to the screen.
The ARRenderingManager component sets up and manages the rendering of native camera images to the screen on every frame update. It can also provide on-demand snapshot textures of the camera image in CPU or GPU memory.
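If you prefer to set things up from code, the following is a minimal sketch; the script and field names are placeholders, and the Niantic.ARDK.Extensions namespace for ARRenderingManager is an assumption to verify against your ARDK version.

using UnityEngine;
using Niantic.ARDK.Extensions;

public class RenderingSetup : MonoBehaviour
{
    [SerializeField]
    private Camera _arCamera; // the camera that should display the AR feed

    private void Awake()
    {
        // Make sure the camera object carries an ARRenderingManager so the
        // camera feed is rendered to the screen every frame.
        if (_arCamera.GetComponent<ARRenderingManager>() == null)
            _arCamera.gameObject.AddComponent<ARRenderingManager>();
    }
}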
Writing Custom Command Buffers
You can use custom Unity graphics command buffers along with ARDK’s rendering pipeline. Custom command buffers should be scheduled to execute after Unity’s CameraEvent.BeforeForwardOpaque or after the renderer’s GPUFence.
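As an illustration, here is a minimal sketch of attaching a custom command buffer to the AR camera with Unity's standard CommandBuffer API. The effect material is a placeholder, and CameraEvent.AfterForwardOpaque is just one event choice that runs after BeforeForwardOpaque; adapt both to your effect.

using UnityEngine;
using UnityEngine.Rendering;

public class CustomRenderCommands : MonoBehaviour
{
    [SerializeField]
    private Camera _arCamera;

    [SerializeField]
    private Material _effectMaterial; // placeholder post-effect material

    private CommandBuffer _commandBuffer;

    private void OnEnable()
    {
        _commandBuffer = new CommandBuffer { name = "Custom AR Commands" };

        // Example work: copy the camera target into a temporary render
        // texture, then blit it back through the custom material.
        int tempId = Shader.PropertyToID("_TempAREffect");
        _commandBuffer.GetTemporaryRT(tempId, -1, -1);
        _commandBuffer.Blit(BuiltinRenderTextureType.CameraTarget, tempId);
        _commandBuffer.Blit(tempId, BuiltinRenderTextureType.CameraTarget, _effectMaterial);
        _commandBuffer.ReleaseTemporaryRT(tempId);

        // Attach the buffer so it executes after the camera feed is rendered.
        _arCamera.AddCommandBuffer(CameraEvent.AfterForwardOpaque, _commandBuffer);
    }

    private void OnDisable()
    {
        _arCamera.RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, _commandBuffer);
        _commandBuffer.Release();
    }
}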
Depth Information and the Rendering Pipeline
In addition to rendering camera feed data, the ARDK render pipeline is responsible for generating z-buffer data when the session is configured to generate depth information. For this reason, if you enable depth generation in your scene, either through an ARDepthManager or by configuring the ARSession with IsDepthEnabled, you must make sure you have an ARRenderingManager in your scene. If you use an ARDepthManager, the ARDepthManager component must be added to the same GameObject as the ARRenderingManager.
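For the configuration route, a minimal sketch might look like the following; IsDepthEnabled is the property referenced above, while ARWorldTrackingConfigurationFactory, the Run call, and the namespaces are assumptions to verify against your ARDK version.

using Niantic.ARDK.AR;
using Niantic.ARDK.AR.Configuration;

// Sketch: request depth generation on the session configuration before
// running the session. The factory and Run call are assumed.
void RunSessionWithDepth(IARSession session)
{
    var configuration = ARWorldTrackingConfigurationFactory.Create();
    configuration.IsDepthEnabled = true;
    session.Run(configuration);
}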
For more information on obtaining depth information from ARDepthManager, see Generating Depth Data.
Aligning Awareness Buffers with the Screen
For the depth and semantic buffers, the pipeline generates and uses buffers that don’t necessarily match the size and resolution of the device camera or screen. ARDK provides convenience methods to align these buffers: DepthBufferProcessor.CopyToAlignedTextureARGB32, DepthBufferProcessor.CopyToAlignedTextureRFloat, and SemanticBufferProcessor.CopyToAlignedTextureARGB32. However, these methods use a per-pixel process that can be slow.
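As a sketch of the convenience route (the parameter list shown here, a ref texture plus the current screen orientation, is an assumption; check the processor’s actual signature in your ARDK version):

// Sketch only: copy the latest depth buffer into a screen-aligned texture.
// The (ref Texture2D, ScreenOrientation) parameters are assumed.
private Texture2D _alignedDepthTexture;

private void UpdateAlignedDepth(ARDepthManager depthManager)
{
    depthManager.DepthBufferProcessor.CopyToAlignedTextureRFloat(
        ref _alignedDepthTexture,
        Screen.orientation);
}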
An alternate, faster approach is to get the SamplerTransform from the appropriate AwarenessProcessor and use it in a custom shader. For example, you could obtain the SamplerTransform from your ARDepthManager:
Matrix4x4 samplerTransform = depthManager.DepthBufferProcessor.SamplerTransform;
Then, get the raw, unaligned texture you want to use, for example the float texture for the depth buffer:
Texture2D depthTexture;
depthBuffer.CreateOrUpdateTextureRFloat(ref depthTexture);
Create a custom shader that takes the raw texture and the transform and applies the transform in the vertex (vert) function:
Shader "Custom/DepthSh" { ... SubShader { // No culling or depth Cull Off ZWrite Off ZTest Always Pass { ... // Transforms used to sample the context awareness textures float4x4 _depthTransform; v2f vert (appdata v) { v2f o; o.vertex = UnityObjectToClipPos(v.vertex); o.uv = v.uv; // Transform UVs for the context awareness textures o.depth_uv = mul(_depthTransform, float4(v.uv, 1.0f, 1.0f)).xyz; return o; } ...
Finally, use this shader in a custom material in your scene as needed. For example, you might use an OnRenderImage() callback to blit the aligned depth buffer texture to the screen:
public Material _shaderMaterial;

....

void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // Pass in our raw depth texture
    _shaderMaterial.SetTexture("_DepthTex", depthTexture);

    // Pass in our transform
    _shaderMaterial.SetMatrix("_depthTransform", samplerTransform);

    // Blit everything with our shader
    Graphics.Blit(source, destination, _shaderMaterial);
}
For step-by-step examples of aligning the depth or semantic buffers, see Intermediate Tutorial: Depth Textures and Intermediate Tutorial: Semantic Segmentation Textures.
Getting AR Updates
This step is not required on iOS, but it is required on Android. On Android, AR updates must be explicitly fetched through a command buffer command (i.e., no ARFrame updates will be surfaced without executing this command). If you want to use a custom command buffer, call:
yourCommandBuffer.IssuePluginEventAndData(yourARSession);
This ARSessionBuffersHelper method only adds a command to your buffer to fetch an AR update; it does not set up execution of that buffer. Since the command buffer triggers AR updates, it needs to be executed repeatedly, for example as part of the rendering loop. ARCore updates at 30 fps, so aim for at least that frequency.
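As a minimal sketch of one way to do this, you can attach the buffer to your rendering camera so it executes once per rendered frame; the namespace for ARSessionBuffersHelper and the exact component wiring are assumptions.

using UnityEngine;
using UnityEngine.Rendering;
using Niantic.ARDK.AR;
using Niantic.ARDK.Rendering;

// Sketch: build a command buffer that fetches AR updates and attach it to
// the camera so it runs every frame. The Niantic.ARDK.Rendering namespace
// for ARSessionBuffersHelper is an assumption.
public class AndroidFrameUpdater : MonoBehaviour
{
    [SerializeField]
    private Camera _camera;

    private CommandBuffer _updateBuffer;

    public void Initialize(IARSession session)
    {
        _updateBuffer = new CommandBuffer { name = "Fetch AR Updates" };

        // ARSessionBuffersHelper extension: adds the fetch command to the buffer.
        _updateBuffer.IssuePluginEventAndData(session);

        // Executing the buffer at a camera event makes the fetch run per frame.
        _camera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, _updateBuffer);
    }

    private void OnDestroy()
    {
        if (_updateBuffer != null)
        {
            _camera.RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, _updateBuffer);
            _updateBuffer.Release();
        }
    }
}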