The benefits are so considerable that everyone should just go ahead and use it.

The general idea itself is actually very simple: instead of mapping the interval between the near and far plane \([z_n, z_f]\) to \([0, 1]\), a special projection matrix is constructed so that the interval is mapped to \([1, 0]\) instead.

**Why** this actually **increases the depth buffer precision** is not immediately obvious, but **I will not go into detail here**. I’ve added some references to articles on this topic at the end of this post.

Simply put: this approach works very well in combination with floating point depth formats (16- or ideally 32-bit) and it utilizes the high precision in the numerical area around zero for distant objects. Using it in conjunction with normalized integer formats will **not** yield the expected precision improvements.
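A quick way to see why this works is to look at the spacing between adjacent 32-bit floats at different magnitudes. The following is a small illustrative sketch (not part of any engine code) using `std::nextafter`:

```cpp
#include <cassert>
#include <cmath>

// Spacing between a 32-bit float and the next representable value above it.
// Floats are densest around zero: the gap between neighboring values at
// 0.999 is hundreds of times larger than the gap at 0.001, which is why
// mapping distant objects to the range around zero (Reverse Z) pairs so
// well with floating point depth formats.
inline float floatSpacingAt(float x)
{
    return std::nextafter(x, 2.0f) - x;
}
```

With a normalized integer format the representable values are evenly spaced, so reversing the mapping gains nothing.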

Sadly, the overall knowledge about this approach is still a bit scattered across the net, so I finally thought it would be a good idea to gather the most useful information in a single place. I am going to focus on the following things:

- Setting up the projection matrices
- Adjusting your codebase
- Efficiently linearizing the native depth buffer values (without a matrix multiplication)

… and here we go!

Let’s start out with the well known perspective projection matrices for left- and right-handed coordinate systems:

\[
P_{RH} =
\left(
\begin{array}{cccc}
s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & \frac{z_f}{z_n-z_f} & \frac{z_f z_n}{z_n-z_f} \\
0 & 0 & -1 & 0 \\
\end{array}
\right)
\]

\[
P_{LH} =
\left(
\begin{array}{cccc}
s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & \frac{z_f}{z_f-z_n} & -\frac{z_f z_n}{z_f-z_n} \\
0 & 0 & 1 & 0 \\
\end{array}
\right)
\]
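To make the mapping concrete, here is a minimal toy sketch (not engine code; `sx`/`sy` stand in for the usual focal-length scales) that builds the right-handed matrix above and projects a view-space depth through it:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Row-major 4x4 matrix, just enough for this illustration.
using Mat4 = std::array<std::array<float, 4>, 4>;

// The standard right-handed projection matrix from above.
inline Mat4 makePerspectiveRH(float sx, float sy, float zn, float zf)
{
    return {{{sx, 0.0f, 0.0f, 0.0f},
             {0.0f, sy, 0.0f, 0.0f},
             {0.0f, 0.0f, zf / (zn - zf), zf * zn / (zn - zf)},
             {0.0f, 0.0f, -1.0f, 0.0f}}};
}

// Project a view-space z value and return the normalized depth z/w.
// Only the third and fourth rows matter for the depth mapping.
inline float projectDepth(const Mat4& p, float zVs)
{
    const float zClip = p[2][2] * zVs + p[2][3];
    const float wClip = p[3][2] * zVs + p[3][3];
    return zClip / wClip;
}
```

With \(z_n=15\) and \(z_f=1000\), a point on the near plane (view-space \(z=-15\) in the right-handed convention) lands at depth 0 and a point on the far plane at depth 1, as expected.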

We can remap the depth range from \([0, 1]\) to \([1, 0]\) by applying a simple transformation matrix

\[
M_I = \left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -1 & 1 \\
0 & 0 & 0 & 1 \\
\end{array}
\right)
\]

to both projection matrices respectively, yielding the final *Reverse Z* projection matrices:

\[
M_I P_{RH} = P_{RevRH} =
\left(
\begin{array}{cccc}
s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & \frac{z_n}{z_f-z_n} & \frac{z_f z_n}{z_f-z_n} \\
0 & 0 & -1 & 0 \\
\end{array}
\right)
\]

\[
M_I P_{LH} = P_{RevLH} =
\left(
\begin{array}{cccc}
s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & -\frac{z_n}{z_f-z_n} & \frac{z_f z_n}{z_f-z_n} \\
0 & 0 & 1 & 0 \\
\end{array}
\right)
\]
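A quick sanity check of the reversed mapping, as a small sketch (not engine code) that evaluates only the depth-relevant rows of \(P_{RevRH}\):

```cpp
#include <cassert>
#include <cmath>

// Reversed-Z depth (RH): third row (0, 0, zn/(zf-zn), zf*zn/(zf-zn))
// applied to a view-space z, followed by the perspective divide with
// the fourth row (0, 0, -1, 0).
inline float reverseDepthRH(float zVs, float zn, float zf)
{
    const float zClip = (zn / (zf - zn)) * zVs + (zf * zn) / (zf - zn);
    const float wClip = -zVs;
    return zClip / wClip;
}
```

The near plane now maps to 1, the far plane to 0, and depth decreases monotonically with distance.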

Using the described projection matrices with the near plane distance set to \(z_n=15\) and the far plane distance set to \(z_f=1000\), we can plot the following results:

It’s also possible to completely remove the need for a finite far plane and reduce potential rounding and truncation errors during projection and matrix concatenation to almost zero. To achieve this, we simply assume that the far plane is infinitely far away, yielding:

\[
P_{RevInfRH} = \lim_{z_f\to\infty} P_{RevRH} =
\left(
\begin{array}{cccc}
s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & 0 & z_n \\
0 & 0 & -1 & 0 \\
\end{array}
\right)
\]

\[
P_{RevInfLH} = \lim_{z_f\to\infty} P_{RevLH} =
\left(
\begin{array}{cccc}
s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & 0 & z_n \\
0 & 0 & 1 & 0 \\
\end{array}
\right)
\]
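With the third row collapsed to \((0, 0, 0, z_n)\), the projected depth becomes a single division. A tiny sketch (again, just an illustration of the matrix above):

```cpp
#include <cassert>
#include <cmath>

// Reversed-Z with an infinite far plane (RH): the projected depth is
// simply zn / -zVs. The near plane still maps exactly to 1; depth
// approaches 0 with distance but never reaches it.
inline float reverseInfDepthRH(float zVs, float zn)
{
    const float zClip = zn;   // third row:  (0, 0, 0, zn)
    const float wClip = -zVs; // fourth row: (0, 0, -1, 0)
    return zClip / wClip;
}
```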

Setting the near plane distance to \(z_n=15\) and plotting the results reveals the following:

Note how the function no longer touches the x-axis when reaching the previous far plane distance.

An infinite far plane might not be the perfect fit for everyone. It has to be correctly handled throughout the whole codebase, requiring a bit of extra care and potentially a few hacks in certain scenarios.

Swapping the projection matrix is actually the easy part; depending on the codebase you are working with, there is still a little bit of work left to do. You have to…

- Switch to a floating point depth buffer (if you are currently using a normalized integer format)
- Clear the depth buffer to zero (instead of one)
- Adjust your depth tests from defaulting to *less (or equal)* to *greater (or equal)*
- Adjust all calculations which rely on the assumption that a normalized post-projection depth of zero lies in front of the viewer
- Keep the infinite far plane in mind for all your calculations – if you went down that path. Frustum corners, for example, are often calculated by transforming the screen space coordinates back to view or world space. Obviously, you will have to swap the depth values for the near and far plane, but you will also have to make sure that you won’t end up with infinite values for your far plane corners. Slightly biasing the screen space depth (something like \(0 + \epsilon\)) for the far plane can do the trick here
- Keep in mind that OpenGL uses an NDC depth range of \([-1, 1]\), which requires some extra care. The OpenGL function `glClipControl` is one candidate for the job
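The frustum-corner caveat from the list above can be sketched as follows. This is a hedged illustration (the function and parameter names are mine, not from any particular codebase) using the reversed infinite-far-plane linearization \(z_{vs} = -z_n / z_{ss}\) derived later in this post:

```cpp
#include <cassert>
#include <cmath>

// View-space depth from a reversed-Z screen-space depth with an
// infinite far plane: z_vs = -zn / z_ss.
inline float linearizeReverseInf(float zSs, float zn)
{
    return -zn / zSs;
}

// Unprojecting a screen-space depth of exactly 0 would place the "far"
// corners at infinity. Biasing the depth slightly (0 + epsilon) keeps
// the reconstructed view-space distance finite, if very large.
inline bool farCornerIsFinite(float zn, float epsilon)
{
    return std::isfinite(linearizeReverseInf(0.0f + epsilon, zn));
}
```

The choice of \(\epsilon\) trades reconstruction distance against numerical safety; anything comfortably above denormal range works in practice.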

Multiplying with the inverse of the current projection matrix is a common approach to move the normalized post-projection coordinates back to view space.

But: in a lot of scenarios it is already sufficient to apply this transformation to the \(z\) component only and to skip the complexity of a complete matrix multiplication. This approach is often called *linearizing the native depth values*. We can deduce the transformation of the \(z\) component by applying the inverse of the projection matrix, dividing by the \(w\) component and simplifying the term of the \(z\) component only:

\[
v_{ss} = (0, 0, z_{ss}, 1)
\]

\[
z_{vs}=\frac{(P^{-1} v_{ss})_z}{(P^{-1} v_{ss})_w}
\]

Using this approach it is possible to derive the *depth linearization functions* fitting to each of the projection matrices depicted above:

**Standard projection matrix (RH):**

\[
z_{\text{vs}} = -\frac{z_fz_n}{z_{ss}(z_n-z_f)+z_f}
\]

**Reverse Depth projection matrix (RH):**

\[
z_{\text{vs}} = -\frac{z_fz_n}{z_{ss}(z_f-z_n)+z_n}
\]

**Reverse Depth projection matrix with an infinite far plane (RH):**

\[
z_{\text{vs}} = -\frac{z_n}{z_{ss}}
\]

The left-handed counterparts are simply the negation of the right-handed functions.
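In code, the three right-handed linearization functions might look like this. The sketch is written in C++ for clarity, but it translates one-to-one to shader code:

```cpp
#include <cassert>
#include <cmath>

// Standard projection matrix (RH).
inline float linearizeStandardRH(float zSs, float zn, float zf)
{
    return -(zf * zn) / (zSs * (zn - zf) + zf);
}

// Reverse Depth projection matrix (RH).
inline float linearizeReverseRH(float zSs, float zn, float zf)
{
    return -(zf * zn) / (zSs * (zf - zn) + zn);
}

// Reverse Depth projection matrix with an infinite far plane (RH).
inline float linearizeReverseInfRH(float zSs, float zn)
{
    return -zn / zSs;
}
```

With \(z_n=15\) and \(z_f=1000\), the standard function maps depth 0 to \(-15\) and depth 1 to \(-1000\), while the reversed function maps depth 1 to \(-15\) and depth 0 to \(-1000\).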

As a small side note when using those functions in shaders: make sure to prepare as much as possible on the CPU side and to pass the precomputed constants as uniform or constant buffer values. Taking the Reverse Depth linearization function as an example:

\[
c = (z_fz_n,\, z_f-z_n,\, z_n,\, 0)
\]

\[
z_{\text{vs}} = -\frac{c_x}{z_{ss}c_y+c_z}
\]
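A sketch of this split between CPU and shader side (shown in C++; the struct stands in for a uniform/constant buffer, and the names are mine):

```cpp
#include <cassert>
#include <cmath>

// CPU side: precompute the constants once per frame/camera.
struct LinearizeConstants
{
    float x, y, z; // (zf * zn, zf - zn, zn)
};

inline LinearizeConstants makeLinearizeConstants(float zn, float zf)
{
    return {zf * zn, zf - zn, zn};
}

// "Shader" side: per pixel, one fused multiply-add and one division.
inline float linearizeReverse(float zSs, const LinearizeConstants& c)
{
    return -c.x / (zSs * c.y + c.z);
}
```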

… and that’s all there is to it. To sum it up, the following plot shows the linearization functions with the near plane distance set to \(z_n=15\) and the far plane distance set to \(z_f=1000\):


Intrinsic uses Vulkan as its primary rendering API and incorporates many of the things I’ve learned during the last couple of years while working on the renderer/engine and PS4 backend for “Lords of the Fallen” and also – of course – for “The Surge”.

And most importantly: It tries to improve on all the errors I’ve made in the past.

To get this out of the way: this project is nowhere near feature complete – and most certainly never will be. It’s still in its infancy and not even close to a polished state. I’m also very sure that I’m going to refactor and overhaul things in the coming months, and it sometimes even misses basic things. But: the code base is clean, it’s an efficient foundation and it’s open for many new things and features in the future. That’s the reason why I’ve released Intrinsic under the Apache 2.0 license. It’s available on GitHub – take a peek and try it for yourself!

I’ve invested a lot of time in this project and I’m really sure that I will continue working on it for a long time.
