PHYSX DEFOCUS

PhysX Defocus is a physically based alternative to ZDefocus with a non-uniform bokeh option. If you input real values for focal length, aperture and sensor size, you will get a physically accurate defocus size (provided your 3D scene is at the right scale, of course). It is also equipped with a non-uniform bokeh option to create the kind of swirly bokeh you can see in the example above. The bokeh itself is fully customizable, with a nice default preset that mimics a real bokeh ball with its imperfections. A physically plausible lens breathing option is also included and will create realistic breathing based on the change in focus distance and anamorphic properties.

 

But how does it work? The field of optics is a complex yet fascinating one, and there are plenty of equations describing the many interactions light has with different surfaces. Bokeh is simply the result of a ray of light that is not correctly focused on the surface of the sensor. Its round shape comes from the shape of the lens. The optical term describing the bokeh ball is the circle of confusion, referring to the area over which the light is not focused. A well-known equation describes the radius of this circle of confusion (CoC):

 

        CoC(px) = ((((Depth - Focus_distance) * Focal_length ^ 2) / (Aperture * Depth * (Focus_distance - Focal_length))) * Image_width / Horizontal_sensor_size) / 2

 

(Distances are expressed in mm, image dimensions in pixels.)
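The equation can be sketched as a small Python function; the function and argument names here are my own, not part of the gizmo itself:

```python
def coc_radius_px(depth, focus_distance, focal_length, aperture,
                  image_width, sensor_width):
    """Radius of the circle of confusion in pixels.

    depth, focus_distance, focal_length and sensor_width are in mm;
    image_width is in pixels; aperture is the f-number (dimensionless).
    """
    # CoC diameter in mm on the sensor plane
    coc_mm = ((depth - focus_distance) * focal_length ** 2) / (
        aperture * depth * (focus_distance - focal_length)
    )
    # Convert mm on the sensor to pixels in the image, then diameter -> radius
    return (coc_mm * image_width / sensor_width) / 2

# Example: 50mm lens at f/2.8 focused at 2m, object at 5m,
# on a 2048px-wide image with a 36mm-wide sensor
r = coc_radius_px(5000, 2000, 50, 2.8, 2048, 36)  # ~7.8px radius
```

Note that the result is signed: objects in front of the focus plane give a negative radius, which conveniently distinguishes foreground from background defocus.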

 

This equation is used within PhysX Defocus along with many others to create physically plausible bokeh.

 

Another feature is the non-uniform bokeh. As you probably know, Nuke does not natively support non-uniform convolution operations. To work around this limitation, I used Gilles Vink's technique:

 

To achieve non-uniform bokeh, you have to slice the image into X pieces (let's say 25 here, which corresponds to the Medium quality preset), apply a different ZDefocus to each slice, and then combine them all back together. This would normally be impossible to do on a single frame, since 25 images need to be processed. However, by taking advantage of Nuke's subframe system, it is possible to divide a single frame into 25 subframes and assign one operation to each. You then need a system that scans the 25 slices one by one and assigns a different filter to each depending on its position in the frame; finally, a FrameBlend node can be used to reassemble the image. I will spare you the many weird problems you encounter along the way, but if you would like a deeper explanation, please feel free to contact me!
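The subframe trick can be illustrated with a plain-Python sketch. This is only an analogy for what the node graph does with TimeOffset-style shifts and a FrameBlend; the names and the identity "filter" are mine:

```python
N_SLICES = 25  # the Medium quality preset mentioned above

def slice_index(frame_time, n_slices=N_SLICES):
    """Map a subframe time (e.g. 101.0, 101.04, ... 101.96) to a slice 0..n-1."""
    fraction = frame_time - int(frame_time)
    return round(fraction * n_slices) % n_slices

def render_frame(frame, apply_defocus, n_slices=N_SLICES):
    """Process each slice at its own subframe, then merge them back together."""
    processed = []
    for i in range(n_slices):
        subframe = frame + i / n_slices          # subframe carrying slice i
        idx = slice_index(subframe, n_slices)    # which slice this subframe owns
        processed.append(apply_defocus(idx))     # a different filter per slice
    return processed                             # stand-in for the FrameBlend merge
```

The key point is that the subframe time itself encodes which slice (and therefore which filter size) each pass is responsible for.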

 

One of the key problems you will encounter is performance. Nuke is quite inefficient with convolution operations, and this method requires X of them per frame, so you can expect this node to be pretty slow in non-uniform mode. The only real way to fix this would be to rebuild a ZDefocus node from the ground up. Therefore, to make your life easier, I created render options that allow you to enable non-uniform mode only when rendering, or only on a render farm, as well as to work at a lower quality setting and automatically switch to a higher quality upon rendering. This lets you work with acceptable performance without compromising the final quality, and without having to change those settings before each render. Just set it up once and you are good to go!
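The render-time switching logic could look something like the following. This is a hypothetical plain-Python illustration with made-up names and slice counts (only the Medium value of 25 comes from the text); in the actual gizmo this is wired through knobs and Nuke's render callbacks:

```python
# Illustrative slice counts per quality preset (Medium = 25 per the text;
# the other values are placeholders).
QUALITY_SLICES = {"Low": 9, "Medium": 25, "High": 49}

def resolve_settings(is_rendering, on_farm,
                     work_quality="Low", render_quality="High",
                     farm_only_non_uniform=True):
    """Pick the effective quality and non-uniform state for the current context."""
    # Work interactively at the cheap preset, render at the expensive one
    quality = render_quality if is_rendering else work_quality
    # Non-uniform mode is the expensive path: optionally restrict it to the farm
    non_uniform = is_rendering and (on_farm or not farm_only_non_uniform)
    return {"slices": QUALITY_SLICES[quality], "non_uniform": non_uniform}
```

The design goal is exactly what the paragraph describes: the interactive session and the render resolve to different settings automatically, so nothing has to be toggled by hand before each render.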

 

The rest of the node is pretty standard, so I will not explain each feature in detail. All you need to know is that Python scripting was used to create the dynamically changing menu between the 2D and Depth modes as well as the "Get Selected Camera Data" button, and that an equation similar to the one above was used to create the physically plausible lens breathing.