
Layer image and AOV

It is possible to render a 3D scene using the Layer Image generator: when the Layer Source parameter is a 3D layer, the Layer Image generator produces a render of the 3D scene represented by the layer. Additional options become available with 3D layers.

Layer image with 3D layer

To set up a Layer Image generator that renders a 3D layer:

  1. Set the Layer Source to point to a 3D layer.

  2. Specify the Render delegate to use for rendering the scene. Filament, the default delegate, offers the most features; Storm may be sufficient for less complex renders.

  3. Specify the Camera through which the scene is rendered. Choose Top-Most Active to use the top-most active camera in the stack of 3D layers up to the one selected in Layer Source, or pick a specific camera from the drop-down list.

  4. Adjust Filament-specific settings if the Filament render delegate has been selected. You can enable or disable effects there independently of the 3D settings of the composition. Alternatively, you can inherit the 3D settings of the composition by checking Use Parent Comp 3D Settings.

AOVs

The AOV (arbitrary output variable) parameter controls the kind of image produced by the renderer. Currently, there are two AOV options available:

  • Color: performs a full render and retrieves the final result.
  • Depth: performs a depth-only pass and retrieves the depth buffer, containing depth values in window coordinates, with values ranging between 0 and 1.
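
To get a feel for what these window-coordinate values represent, the following GLSL sketch converts one back to linear eye-space depth. It assumes a standard perspective projection; the near and far values are the camera clip planes and must be supplied by hand, so this is an illustrative sketch rather than a built-in feature:

// Sketch: convert a window-space depth d in [0;1] back to linear eye-space depth.
// Assumes a standard perspective projection with clip planes `near` and `far`
// (these values are supplied by hand; they are not exposed as uniforms here).
float linearEyeDepth(float d, float near, float far) {
    float ndcZ = d * 2.0 - 1.0;  // window [0;1] -> NDC [-1;1]
    return (2.0 * near * far) / (far + near - ndcZ * (far - near));
}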

Using the depth AOV in a composition

The depth AOV, combined with the camera matrices, can be used to reconstruct 3D positions in a Shadertoy modifier. The following GLSL code reconstructs a position in camera space, assuming the depth AOV is bound to iChannel1 via a Layer Image generator and that the Shadertoy modifier uses the same Camera setting:

// Reconstruct the camera-space position given normalized XY window coordinates (in [0;1]) and a depth in window coordinates.
vec4 viewPositionFromDepth(vec2 texcoord, mat4 invProj, float z) {
    // Remap the window coordinates and depth from [0;1] to NDC [-1;1]
    float x = texcoord.x * 2.0 - 1.0;
    float y = texcoord.y * 2.0 - 1.0;
    vec4 vProjectedPos = vec4(x, y, z * 2.0 - 1.0, 1.0);
    // Transform by the inverse projection matrix
    vec4 vPositionVS = invProj * vProjectedPos;
    // Divide by w to get the view-space position
    return vPositionVS / vPositionVS.w;
}

void main(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / vec2(iResolution);

    // Sample the depth texture
    vec4 depthTex = texture(iChannel1, uv);
    // Reconstruct the fragment position in camera space.
    // iCamInvProjMatrix is determined automatically from the Camera setting of the Shadertoy modifier.
    vec4 viewPos  = viewPositionFromDepth(uv, iCamInvProjMatrix, depthTex.r);

    // ...

    fragColor = viewPos;
}

Among other things, this can be used to implement screen-space ray-marching effects on top of a 3D render.
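
For instance, a screen-space marching loop needs the reverse operation: projecting a camera-space point back onto the screen. A minimal sketch is given below; the forward projection matrix it takes is an assumed input (not documented above), named here for illustration only:

// Sketch: project a camera-space position back to normalized window coordinates,
// the counterpart of viewPositionFromDepth() above. The forward projection
// matrix `proj` is an assumed input and must be provided separately.
vec2 screenUVFromViewPos(vec3 viewPos, mat4 proj) {
    vec4 clip = proj * vec4(viewPos, 1.0);  // camera space -> clip space
    vec3 ndc = clip.xyz / clip.w;           // perspective divide -> NDC [-1;1]
    return ndc.xy * 0.5 + 0.5;              // NDC -> window coordinates [0;1]
}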

Tip

The depth AOV may appear empty at first if the Near Plane & Far Plane of the selected camera are not adjusted for the scene. To debug the contents of the AOV, add an Auto-Contrast modifier, or adjust the Near Plane & Far Plane parameters of the camera so that they match the bounds of the scene more closely.
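
As an alternative to the Auto-Contrast modifier, a small contrast stretch inside a Shadertoy modifier can also reveal the contents of the depth AOV. A minimal sketch, assuming the depth AOV is bound to iChannel1 as above:

// Sketch: exaggerate the contrast of the raw depth AOV for a quick visual check.
// With poorly fitted clip planes, most depth values cluster close to 1.0;
// raising them to a high power spreads them back over a visible range.
void main(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / vec2(iResolution);
    float d = texture(iChannel1, uv).r;
    fragColor = vec4(vec3(pow(d, 64.0)), 1.0);
}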