Wednesday, June 5, 2013

How to recolor the deep buffer data in Nuke 7

The DeepRecolor node merges a deep buffer file (which contains per-sample opacity values) with a standard 2D color image. The node spreads the color across all samples using the per-sample opacity values.

Read in the deep image that contains per-sample opacity values as well as the 2D color image. Add a DeepRecolor node from the Deep menu. Connect the depth input of the DeepRecolor node to the deep image. Next, connect the color input of the DeepRecolor node to the 2D color image.
Note: If the color image is premultiplied, add an Unpremult node between the Read and DeepRecolor nodes.
On selecting the target input alpha check box, the alpha of the color image is distributed among the deep samples. As a result, when you flatten the image later, the resulting alpha will match the alpha of the color image. If this check box is clear, the DeepRecolor node distributes the color to each sample by unpremultiplying by the alpha of the color image and then remultiplying by the alpha of each sample. As a result, the alpha generated by the DeepRecolor node will not match the alpha of the color image.
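The same setup can be scripted from Nuke's Script Editor. Here is a minimal Python sketch with placeholder file paths; the DeepRecolor input order and the script name of the target input alpha knob (written here as target_input_alpha) are assumptions worth checking with recolor.knobs() in your build.

import nuke

# Read the deep image (per-sample opacity) and the 2D color image.
deep = nuke.nodes.DeepRead(file='/path/to/deep_opacity.exr')
color = nuke.nodes.Read(file='/path/to/color.exr')

# If the color image is premultiplied, unpremultiply it first.
unpremult = nuke.nodes.Unpremult()
unpremult.setInput(0, color)

# DeepRecolor spreads the 2D color across the deep samples.
recolor = nuke.nodes.DeepRecolor()
recolor.setInput(0, deep)       # depth input (index assumed)
recolor.setInput(1, unpremult)  # color input (index assumed)

# Distribute the color image's alpha among the deep samples.
recolor['target_input_alpha'].setValue(True)  # knob name assumed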

How to convert a standard 2D image to a deep image using the depth channel

The DeepFromImage node is used to convert a standard 2D image to a deep image with a single sample for each pixel by using the depth.z channel.

Read in the image that you want to convert to a deep image.
Note: If the depth information is not available in the depth.z channel, make sure that you copy the information to the depth.z channel using the Channel nodes.
Select the premultiplied check box if you want to premultiply the input channels. If this check box is clear, the DeepFromImage node assumes that the input stream is already premultiplied. Select the keep zero alpha check box if you want input samples with zero alpha to be kept in the deep output. If you want to manually specify the z depth, select the specify z check box and then specify a value for the z parameter.

You can use the DeepSample node to check the deep data created by the DeepFromImage node.
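As a rough sketch, the same conversion can be assembled with Python; the knob script names used below (premultiplied, keep_zero_alpha) are assumptions, so verify them with node.knobs() in your build.

import nuke

# 2D image that carries depth information in its depth.z channel.
src = nuke.nodes.Read(file='/path/to/image_with_depth.exr')

# Convert to a deep image with a single sample per pixel.
from_image = nuke.nodes.DeepFromImage()
from_image.setInput(0, src)
from_image['premultiplied'].setValue(False)   # input is already premultiplied
from_image['keep_zero_alpha'].setValue(True)  # keep zero-alpha samples

# Inspect the generated deep samples pixel by pixel.
sampler = nuke.nodes.DeepSample()
sampler.setInput(0, from_image)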

How to convert a standard image to a deep image using frames

In Nuke 7, you can use the DeepFromFrames node to create depth samples from a standard 2D image. To understand the concept, follow these steps:

Step - 1
Create a new script in Nuke and then set the format in the Project Settings panel.

Step - 2
Download an image of a sky, refer to Figure 1, and then load the sky image into the Nuke script.
Figure 1
Step - 3
Connect a Reformat node to the Read# node to reformat the sky image.

Step - 4
Connect a Noise node (from the Filter menu) to the Reformat# node. In the Noise# node's properties panel, animate the z parameter and modify the other settings as required to apply fog over the sky image, refer to Figure 2.
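For reference, steps 2 through 4 can also be scripted; the file path is a placeholder, the script name of the Noise node's z knob (written as zoffset below) is an assumption, and the final DeepFromFrames connection simply anticipates where this tutorial is heading.

import nuke

# Step 2: load the downloaded sky image (placeholder path).
sky = nuke.nodes.Read(file='/path/to/sky.jpg')

# Step 3: reformat the sky image to the project resolution.
reformat = nuke.nodes.Reformat()
reformat.setInput(0, sky)

# Step 4: fog from an animated Noise node (Filter menu).
noise = nuke.nodes.Noise()
noise.setInput(0, reformat)
z_knob = noise['zoffset']     # knob name assumed; check noise.knobs()
z_knob.setAnimated()
z_knob.setValueAt(0.0, 1)     # value 0 at frame 1
z_knob.setValueAt(5.0, 100)   # value 5 at frame 100

# The animated image can then feed a DeepFromFrames node.
deep = nuke.nodes.DeepFromFrames()
deep.setInput(0, noise)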

Tuesday, June 4, 2013

Working with deep images in Nuke 7

Nuke's powerful deep compositing tool set gives you the ability to create high-quality digital images faster. Deep compositing is a way to composite images with additional depth data. It helps eliminate artifacts around the edges of objects and reduces the need to re-render the image: you render the background once and can then move the foreground objects to different positions and depths in the scene. Deep images contain multiple samples per pixel at various depths. Each sample stores color, opacity, and depth information.

Deep Read Node
The DeepRead node is used to read deep images into the script. In Nuke, you can read deep images in two formats: DTEX (generated by Pixar's PhotoRealistic RenderMan Pro Server) and scanline OpenEXR 2.0.
Note: Tiled OpenEXR 2.0 files are not supported by Nuke.
The parameters in the DeepRead node properties panel are similar to those of the Read node.

Deep Merge Node
The DeepMerge node is used to merge multiple deep images. It has two inputs, A and B, to which you connect the deep images you want to merge. The options in the operation drop-down in the DeepMerge tab of the DeepMerge node properties panel specify the method for combining the deep images. By default, combine is selected in this drop-down; as a result, Nuke combines samples from the A and B inputs. The drop hidden samples check box is only available when combine is selected in the operation drop-down. When this check box is selected, all samples that have an alpha value of 1 and lie behind other samples are discarded. If you select holdout from the operation drop-down, the samples from the B input are held out by the samples in the A input; as a result, samples in the B input that are occluded by samples in the A input are removed or faded out.
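Here is a minimal Python sketch of this setup, assuming placeholder paths; the A/B input order and the string values accepted by the operation knob ('combine', 'holdout') are assumptions to verify against your build.

import nuke

# Read two scanline OpenEXR 2.0 deep images.
deep_a = nuke.nodes.DeepRead(file='/path/to/foreground_deep.exr')
deep_b = nuke.nodes.DeepRead(file='/path/to/background_deep.exr')

# Merge them; input 0 is assumed to be B and input 1 to be A,
# as with the regular Merge node -- check the input labels.
merge = nuke.nodes.DeepMerge()
merge.setInput(0, deep_b)
merge.setInput(1, deep_a)

# combine is the default; switch to holdout to hold out B by A.
merge['operation'].setValue('holdout')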

Monday, June 3, 2013

How to generate motion vector fields by using the VectorGenerator node

The VectorGenerator node in NukeX is used to create images containing motion vector fields. This node generates two sets of motion vectors for each frame, which are stored in the vector channels. The output of the VectorGenerator node can be used with nodes that take a vector input, such as the Kronos and MotionBlur nodes. The image with the fields contains an offset (x, y) per pixel. These offset values are used to warp a neighboring frame onto the current frame. Most of the frames in a sequence have two neighbors; therefore, two vector fields are generated for each frame: the backward and forward vector fields.

To add a VectorGenerator node to the Node Graph panel, select the node in the Node Graph panel from which you need to generate the fields and then choose VectorGenerator from the Time menu; the VectorGenerator# node will be added to the Node Graph panel. Make sure the VectorGenerator# node is selected and then press 1 to view its output in the Viewer# panel. To view the forward motion vectors, select forward from the Channel Sets drop-down. Select backward from the Channel Sets drop-down to view the backward motion vectors. To view both the backward and forward motion vectors, choose motion from the Channel Sets drop-down. Figures 1 through 4 show the input image, the combined forward and backward motion vectors, the forward motion vectors, and the backward motion vectors, respectively.
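The same node can also be created from the Script Editor. A minimal sketch (NukeX required; the file path is a placeholder):

import nuke

# Build the graph directly instead of going through the Time menu.
src = nuke.nodes.Read(file='/path/to/sequence.####.exr')
vecgen = nuke.nodes.VectorGenerator()
vecgen.setInput(0, src)

# The forward and backward fields are written into the vector channels.
print(vecgen.channels())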

Sunday, June 2, 2013

How to create a position pass in Nuke 7 using the DepthToPosition node

The DepthToPosition node is used to generate a 2D position pass using the depth data available in the input image. The position pass is created by projecting the depth through the camera; the position of each projected point is then stored. This node, along with the PositionToPoints node, can be used to create a point cloud similar to the one the DepthToPoints node generates. In fact, the DepthToPoints node is a gizmo that contains the DepthToPosition and PositionToPoints nodes. In this tutorial, we will generate a position pass and then place a 3D sphere in the scene. To do this, follow these steps.

Step - 1
Navigate to the following link and then download the zip file to your hard-drive: https://www.dropbox.com/s/xo7eemr6qz16icl/nt007.zip. Next, extract the content of the zip file.

Step - 2
Using a Read node, bring in the nt007.rar file; the Read1 node will be inserted in the Node Graph panel.

Step - 3
Connect the Read1 node to the Viewer1 node by selecting the Read1 node and then pressing 1, refer to Figure 1.
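The wiring this tutorial builds toward can be sketched in Python as follows; the camera setup and the input indices of the DepthToPosition node are assumptions to check against the node's input labels.

import nuke

# Image with a depth channel, plus the camera it was rendered through.
read = nuke.nodes.Read(file='/path/to/nt007_image.exr')
cam = nuke.nodes.Camera2()

# Project the depth through the camera to produce a position pass.
to_pos = nuke.nodes.DepthToPosition()
to_pos.setInput(0, read)
to_pos.setInput(1, cam)   # camera input (index assumed)

# Turn the position pass into a 3D point cloud.
to_points = nuke.nodes.PositionToPoints()
to_points.setInput(0, to_pos)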

Saturday, June 1, 2013

How to render position pass in Maya and then use it with the PositionToPoints node

The PositionToPoints node is used to generate a 3D point cloud using the position data contained in an image. In this tutorial, we will first create a position render pass in Maya 2014 and then create a 3D point cloud from the position data in Nuke. Then, we will composite a 3D object into our scene with the help of the 3D point cloud. Let's get started:

Step - 1
Create a project folder in Maya and open the scene that you need to render. Next, create a camera and set the camera angle. Figure 1 displays the scene that we will render.
Figure 1
We will be rendering a 32-bit image, so we first set the framebuffer to 32-bit.

Step - 2
Invoke the Render Settings window and then select mental ray from the Render Using drop-down list.

Step - 3
Now, choose the Quality tab and then enter 1.5 in the Quality edit box.

Step - 4
Scroll down to Framebuffer area in the Quality tab and then select RGBA (Float) 4x32 Bit from the Data Type drop-down list.
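For repeatability, steps 2 through 4 can also be applied from Maya's Script Editor with maya.cmds; the datatype enum index for RGBA (Float) 4x32 Bit is an assumption here, so confirm it in the Attribute Editor for your Maya version.

import maya.cmds as cmds

# Load mental ray if needed and make it the current renderer.
cmds.loadPlugin('Mayatomr', quiet=True)
cmds.setAttr('defaultRenderGlobals.currentRenderer', 'mentalRay', type='string')

# Framebuffer data type: the enum index for RGBA (Float) 4x32 Bit is
# assumed to be 5 -- verify against miDefaultFramebuffer in your version.
cmds.setAttr('miDefaultFramebuffer.datatype', 5)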

Next, you will create layers in the Layer Editor and create layer overrides.

Step - 5
Select everything in the viewport and then choose the Render tab in the Layer Editor. Next, choose the Create new layer and assign selected objects button, refer to Figure 2; the layer1 layer will be created in the Layer Editor.