Jan 21, 2015

I’m working on a production right now that involves sending absolutely enormous animated meshes (6.5 million polygons on average with a changing point count every frame) out of Houdini and into Maya for rendering. Normally it would be best to just render everything directly out of Houdini, but sometimes you don’t have as many Houdini licenses (or artists) as you’d like so you have to make do with what you have.

If the mesh were smaller, or at least not animated, I’d consider using the Alembic format, since V-Ray can render it directly as a VRayProxy. But for a mesh this dense animating over a long sequence, the file sizes would be impossibly huge, since Alembics can’t be output as per-frame sequences. Try loading a 0.5 TB Alembic over a small file server onto 20 machines rendering the sequence at once, and you can see why Alembic might not be ideal for this kind of situation.
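To see why the numbers get so ugly, here’s a napkin-math sketch. Everything in it is an assumption for illustration: the point count, the attribute layout, the doubling for rewritten topology, and especially the 5,000-frame sequence length are hypothetical, and real Alembic files add compression and connectivity overhead on top.

```python
# Back-of-envelope estimate of Alembic size for a changing-topology mesh.
# All numbers are illustrative assumptions, not measured figures.

def alembic_estimate_bytes(num_points, num_frames, floats_per_point=3):
    bytes_per_point = floats_per_point * 4        # 32-bit float positions
    per_frame = num_points * bytes_per_point
    # A changing point count means face connectivity must also be rewritten
    # every frame; assume roughly as much data again for it.
    per_frame *= 2
    return per_frame, per_frame * num_frames

per_frame, total = alembic_estimate_bytes(6_500_000, 5_000)
print(per_frame / 2**20)   # MB per frame
print(total / 2**40)       # TB for the whole sequence
```

Even with these rough assumptions you land comfortably past the half-terabyte mark, which matches the problem described above.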

Hit the jump below to see the solution.

Mar 25, 2014

I’ve been messing around with using Ptex in VRay for Maya and ran into a particularly weird little problem involving a normal map exported from Mudbox. It’s probably easier to show it than to tell about it (Fig. 1):

bad ptex bottle

Fig. 1. Check out the wax wrapper up top… all chunky and gross.

That wax wrapper looks terrible… not at all like the original sculpt. The mesh is set to render as a subdivision surface (using the custom VRay attributes from the Attribute Editor), but the details are all mangled like it’s not subdividing the surface at all.

Even weirder is what happens when I only render a small region of the image (Fig. 2):

bad ptex with region render

Fig. 2. The details are suddenly cleaner! All I changed was enabling region render.

This was really confusing, and although I’m still not 100% sure what’s going on in VRay to cause this (I’m fairly certain this is a bug), there is at least a solution.

What’s probably happening here is that choosing to render this mesh as a subdivision surface is changing the point count of the object BEFORE any Ptex information is applied to the mesh during rendering. Ptex is very sensitive to your geometry… any change in point count could potentially break things. You’re not allowed to polySmooth objects that will have Ptex textures, for example, or Maya won’t know what points to assign Ptex information to.
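A toy illustration of that sensitivity: Ptex stores one texture per face, keyed by face index. The mapping below is a simplified stand-in for that idea, not the real Ptex API, but it shows why any operation that changes the face count leaves most of the geometry with no valid assignment.

```python
# Toy model of per-face Ptex assignment. "tile_N" stands in for the
# texture data Ptex associates with face N; this is illustrative only.

def assign_ptex(face_count):
    return {face_id: f"tile_{face_id}" for face_id in range(face_count)}

original = assign_ptex(4)            # textures painted on the base mesh

# A polySmooth-style subdivision turns each quad into 4 faces, so the face
# count (and every face index) changes and the stored mapping no longer
# lines up with the geometry being rendered.
subdivided_face_count = 4 * 4
missing = [f for f in range(subdivided_face_count) if f not in original]
print(len(missing))   # faces with no valid Ptex assignment
```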

The way to get around this is to put the object into a VRay displacement set. Assign displacement and subdivision attributes to the set like you normally would, but don’t attach a displacement map, and make sure the Displace Amount is set to 0. Rendering this way gets you the correct image (Fig. 3):

correct ptex

Fig. 3. This looks a lot more like the original sculpt from Mudbox.

This seems like buggy behavior to me more than some technical thing I’ve overlooked, but thankfully the workaround is pretty painless. If you have any better insight into what exactly is happening here, or a less hacky way to prevent it, I’d love to hear it.

Oct 25, 2012

A short film that I worked on with a friend of mine, director Dan Blank, is finally out to the public! I was the CG Supervisor, and I handled all of the lighting, materials and rendering in addition to the overall pipeline.

Take a look at this article from The Atlantic magazine:

The short:

And here’s the VFX breakdown:

So excited it’s finally out there and getting good reviews!

Sep 29, 2012

I’m working on a project right now that involves exporting cached geometry from Houdini to Maya. The Alembic node makes that a fairly painless process now that Houdini and Maya both support Alembic import/export, although it turns out that getting any data other than point positions and normals is kind of a hassle. I tried renaming the Cd point attribute in Houdini to all kinds of things in the hopes that Maya would recognize the data, but Maya wasn’t having any of it. That’s when I checked out the script editor in Maya a little more closely and saw this:

// Error: line 0: Connection not made: 'output_AlembicNode.prop[0]' -> 'subnet1.Cd'. Data types of source and destination are not compatible. // 
// Error: output_AlembicNode.prop[0] --> subnet1.Cd connection not made //

Maya is creating this “Cd” attribute on the mesh, but it has no idea what to do with the data so it throws an error. The Alembic node, though, still contains that data, as the 0th index of the array “prop.” Now all you have to do is get that data from the Alembic node onto the vertex color somehow…

The SOuP plugin for Maya has the answer. There is a node called “arrayToPointColor” that will read an array of data and apply it to the point color of the mesh it’s connected to. Create the arrayToPointColor node, and feed it your geometry (mesh.worldMesh[0] –> arrayToPointColor.inGeometry) and then feed it your data array from the AlembicNode (AlembicNode.prop[0] –> arrayToPointColor.inRgbaPP). If you have more than one point attribute exporting from Houdini, you may want to check the script editor to make sure you know which index of AlembicNode.prop you are supposed to be connecting.
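Reduced to plain data, what arrayToPointColor is doing is zipping the per-point array stored on the Alembic node (prop[0], the old Houdini Cd attribute) against the mesh’s points as an RGBA color. The function and names below are illustrative stand-ins, not the SOuP API:

```python
# Sketch of what arrayToPointColor effectively does with the data:
# pair each point with an RGBA value from the incoming attribute array.
# Hypothetical helper, not the real node's interface.

def array_to_point_color(points, rgba_array):
    if len(points) != len(rgba_array):
        # a mismatched point count is exactly what breaks Ptex/vertex data
        raise ValueError("point count and attribute array must match")
    return [{"position": p, "rgba": c} for p, c in zip(points, rgba_array)]

mesh = array_to_point_color(
    [(0, 0, 0), (1, 0, 0)],
    [(1.0, 0.0, 0.0, 1.0), (0.0, 1.0, 0.0, 1.0)],
)
print(mesh[0]["rgba"])
```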

Finally, make a new mesh node and connect arrayToPointColor.outGeometry –> newMesh.inMesh. If you select some vertices and look at their properties in the Component Editor, you should see values attached to the “red,” “green,” and “blue” vertex attributes.

All that’s left to do at this point is to connect this color data to a texture that can read it. In mental ray, you’d create a mentalrayVertexColors node, and connect newMesh.colorSet[0].colorName –> mentalrayVertexColors.cpvSets[0]. If you don’t see a colorSet[0].colorName property on your mesh, try selecting the mesh, then go to Polygons > Colors > Color Set Editor. You should see a colorSet1… just select it, click “Update” and you should have the property you’re looking for. Then connect the mentalrayVertexColors node to any shader. See Fig. 1 for the example network.

Fig. 1: Node Editor network connections to link Alembic attributes to vertex color.

You can also just remove the middleman entirely at this point, and delete the original shape node. Then connect the AlembicNode.outPolyMesh[0] to arrayToPointColor.inGeometry. This is probably a good idea if only because it will stop Maya from throwing annoying errors every time you select the geometry because of that missing “Cd” connection.

Jan 30, 2012

I was just talking to a colleague about his problems using displacement maps with VRay, and then remembered my confusion when I first tried to work with them. So here’s a post about it!

Normally in Maya/mental ray, when you want to apply a displacement map you just create a displacement material and connect it to the shading group you want displaced. If you want to adjust the amount of displacement, you actually grade the image itself by adjusting the color gain and offset on the file node. It’s simple, it works, whatever.

You can still apply displacement like that in VRay, but there is a better and more flexible way to handle it using VRayDisplacement sets. They’re kind of like VRayObjectProperty sets, but they act as a sort of container for displacement settings instead of generic render settings and object IDs. In order to use these sets, you want to select the objects to displace with a single map, and go to Create > V-Ray > Apply single VRayDisplacement node to selection. A set will be created, visible in the Outliner.

Next up is to assign a displacement texture to the set. This means you don’t have to connect a displacement shader to any shading group; the set will handle that connection.  When you select this set, the Attribute Editor will give you just two options: a checkbox saying “override global displacement,” and a plug for a displacement material. Check the box on, and then connect a texture to the displacement material (not a material, but a texture). I usually run a file texture through a Luminance node first to make the connection easier (file.outColor –> luminance.value), unless I’m using a vector displacement map in which case I’m using color information instead of just luminance or alpha.
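The Luminance node is just collapsing RGB down to one scalar for the displacement input. As a sketch, here it is with the standard Rec. 601 luma weights; whether Maya’s luminance node uses exactly these coefficients is an assumption on my part:

```python
# Collapse an RGB color to a single luminance value using Rec. 601
# weights (assumed; Maya's node may use slightly different coefficients).

def luminance(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luminance(1.0, 1.0, 1.0))   # pure white collapses to ~1.0
print(luminance(1.0, 0.0, 0.0))   # pure red contributes only its weight
```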

So where are all the displacement options? You have to add them. If you don’t change anything, the displacement will use the default values set in the VRay render settings under Settings > Default Displacement and Subdivision. This defaults to an edge length of 4 pixels, a maximum subdivision number of 256 (this is a lot of subdivisions!!), and a displacement amount of 1.0 (which is usually way too high). These are terrible values for most scenes: the displacement will look grossly exaggerated, and it will take forever to render.

In order to tweak the settings, you need to add the appropriate attributes to the VRayDisplacement set. With the set selected, open the Attribute Editor and select Attributes > V-Ray > Displacement Control and Attributes > V-Ray > Subdivision and Displacement Quality. Now you have a ton of options to play with, the most important of which are Displacement Amount (color gain), Displacement Shift (color offset), Edge Length, and Max Subdivs. I recommend starting with a displacement amount of 0.1-0.5, and a max subdivs of maybe 16-32 before starting to increase those settings.
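As a rough mental model of what Displacement Amount and Displacement Shift do, assume VRay applies them as a simple gain and offset on the sampled map value (that formula is my assumption, not documented behavior quoted here):

```python
# Sketch of scalar displacement, assuming the form amount * tex + shift.
# The exact internal formula VRay uses is an assumption for illustration.

def displace(position, normal, tex_value, amount=0.1, shift=0.0):
    offset = amount * tex_value + shift
    return tuple(p + n * offset for p, n in zip(position, normal))

# A mid-gray (0.5) sample with the default amount of 1.0 pushes the point
# half a unit along the normal; amount=0.1 gives a far gentler result.
p_default = displace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5, amount=1.0)
p_tamed   = displace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5, amount=0.1)
print(p_default, p_tamed)
```

This is why the default amount of 1.0 looks so exaggerated on most real-world scales.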

I have no idea why these attributes have to be added manually, but hell, it’s still better than mental ray.

Oct 26, 2011

A lot of After Effects compositors like to use a plugin called RE:Map that allows texture substitution on objects. You render out a special color pass from your 3D scene, and then the plugin uses those colors to wrap a flat image onto the object, basically allowing you to re-texture in post.

A lot of people have asked me where to find the material you’re supposed to use to render this pass out of Maya. There are different ways to make it, but by far the easiest is to just use a place2dTexture node and a surface shader (or a VRayLightMtl, if that’s your thing).

Connect the place2dTexture.outU –> surfaceShader.outColorR. Then connect place2dTexture.outV –> surfaceShader.outColorG.

That’s it. Apply the material to everything and you’re done. In VRay, you don’t even need a material if you want to save a render layer; just create a VrayExtraTex element on your render layer and connect the place2dTexture outputs to the ExtraTex element in the same manner.
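Either way, all this pass does is encode the UV coordinate as a color: red = U, green = V, blue unused. In plain terms, each rendered sample comes out as:

```python
# What the place2dTexture -> surface shader wiring produces per sample.
# The node connections themselves are Maya's job; this is just the math.

def uv_pass_color(u, v):
    return (u, v, 0.0)   # red carries U, green carries V, blue unused

print(uv_pass_color(0.25, 0.75))
```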

The setup is really easy, but there are a few things to watch out for. First of all, if your UVs are distorted, then any texture you place on the object is going to be distorted, so you need good UVs. If you’re using simple rectangular billboards, make sure the UVs are normalized (you can normalize UVs in the UV Texture Editor). Also, the image quality will suffer if you aren’t rendering to a floating-point file format: 32-bit floating-point images are best to avoid artifacting.
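Normalizing just means fitting the UV shell’s bounding box to the 0–1 square, which is what the plugin expects for a simple billboard. A sketch of that remap, assuming the UV Texture Editor’s Normalize is a plain bounding-box fit:

```python
# Remap a list of (u, v) coordinates so their bounding box spans 0-1.
# Assumed to match what a bounding-box-based Normalize does.

def normalize_uvs(uvs):
    us = [u for u, _ in uvs]
    vs = [v for _, v in uvs]
    u_min, u_span = min(us), max(us) - min(us)
    v_min, v_span = min(vs), max(vs) - min(vs)
    return [((u - u_min) / u_span, (v - v_min) / v_span) for u, v in uvs]

# a billboard whose UVs sit off in a 2x1 rectangle away from the origin
print(normalize_uvs([(2.0, 2.0), (4.0, 2.0), (4.0, 3.0), (2.0, 3.0)]))
```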

One other subtle thing to watch out for: if you are using a linear workflow when you render (and you should be!), it’s easy to screw up this render and end up with weirdly warped images. This render should NOT be gamma-corrected in any way, so disable your lens shaders if you’re in mental ray, or set your gamma to 1.0 and turn off “linear workflow” if you’re in VRay. It’s hard to see, but take a look at the difference between a linear render of this pass and an sRGB (gamma 2.2) render:

The image on the left is the correct one. Using a color-corrected UV pass will cause your substituted textures in After Effects to appear very warped around certain edges.
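The warping is easy to reproduce numerically. Once gamma is applied, the pixel value no longer equals the UV coordinate it was supposed to encode; a plain gamma-2.2 curve (used here as a stand-in for the full sRGB transfer function) bends 0.5 to roughly 0.73, so the plugin samples the texture well away from the middle:

```python
# Show how gamma correction corrupts a UV pass: the encoded value stops
# matching the original coordinate. Gamma 2.2 is a stand-in for sRGB.

def gamma_encode(value, gamma=2.2):
    return value ** (1.0 / gamma)

linear = 0.5                    # correct: pixel value == U coordinate
corrected = gamma_encode(0.5)   # wrong: lands around 0.73
print(linear, round(corrected, 3))
```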

(I’ve made this mistake way too many times.)

May 21, 2011

If you’re using mental ray as your renderer, chances are that you aren’t going to get a whole lot out of the passes system, especially if you’re trying to write custom color buffers. It’s a slow, buggy, work-intensive process to get a lot of passes out of mental ray that VRay has absolutely no trouble with. You could use render layers instead of custom passes, but mental ray also has a particularly long translation time for complex scenes (think of any scene where you see mental ray hang for about 10 minutes before it even starts to render a frame). So you can’t exactly add render layers haphazardly… you need to condense things as much as possible.

Someone told me about a neat trick they saw at a studio where they were freelancing: three data channels get written to the RGB channels of a single image, almost like an RGB matte pass except with “technical” passes instead of mattes or beauty or whatever. Since each kind of data only needs a single channel, you can write three kinds of data to one image and then split them apart later in post. Simple enough when you think about it…
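Here is the packing trick in miniature: three single-channel passes share one RGB image on the way out of the renderer, then get split back apart in post. The pass names and pixel values are made up for illustration:

```python
# Pack three single-channel passes into RGB pixels and split them apart.
# "depth", "ao", and "mask" are hypothetical pass names.

def pack(depth, ao, mask):
    # each argument is a flat list of single-channel pixel values
    return [(d, a, m) for d, a, m in zip(depth, ao, mask)]

def unpack(rgb):
    depth = [px[0] for px in rgb]
    ao    = [px[1] for px in rgb]
    mask  = [px[2] for px in rgb]
    return depth, ao, mask

packed = pack([0.1, 0.9], [1.0, 0.5], [0.0, 1.0])
print(unpack(packed))
```

In a compositing package the “unpack” step is just a channel-shuffle node per pass.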


May 17, 2011

As promised, a useful post! And probably a long one.

As far as rendering goes, the problem that I see people running into more often than anything else is render layers mysteriously breaking, especially when file referencing is involved. The symptoms are typically either objects disappearing or Maya simply being unwilling to switch to a specific render layer, claiming in the Script Editor that there are “overrides to a node in a missing reference” or something to that effect. A lot of less experienced or just less technical types will try to solve the problem by either importing their references into the scene (which is rarely a good idea), or by screaming obscenities (which is exhilarating but ineffective). There is a better way to get your scene to render with no problems, and be able to use file referencing. The trick is to use shared render layers properly, only allow certain edits to your references in your final scene, and make sure that your references are clean.
