Apr 01, 2016
 

As part of the pipeline I’m currently writing for Timber, I’ve been working on an input/output system that can pass data back and forth between Maya and Houdini. The goal is to have a Maya artist send an Alembic scene to Houdini, process the scene in Houdini, then send the modified scene back to Maya. The tricky part is keeping the scene recognizable on the Maya side while also keeping it easy to work with in Houdini.

There are several steps to making this process work, and you’re better off scripting as much of it as you can, because it’s annoying and repetitive to do by hand. Also, the method I’m using will preserve the scene hierarchy, but it will not preserve pivots or transform information (everything gets baked to world space). Hopefully there will be a better way to manage that soon.

First off, when you import the Alembic, use an Alembic Archive geometry container. Disable “Build Hierarchy Using Subnetworks” and then build the hierarchy. Instead of nesting each child transform inside subnetworks that mirror the whole Alembic hierarchy, you should get a flattened hierarchy inside your Alembic Archive that looks like Fig. 1.

Fig. 1. A flattened Alembic subnet. All the geometry exists inside this subnetwork, with no child subnetworks.
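If you’re scripting this step, a minimal Python sketch along these lines should work. The node type (alembicarchive) is standard, but the parameter names (fileName, buildSubnet, buildHierarchy) are from memory, so confirm them against your Houdini build before trusting this.

# Sketch: import an Alembic into a flattened Alembic Archive.
# Parameter names below are assumptions -- verify with node.parms().
import hou

obj = hou.node("/obj")
archive = obj.createNode("alembicarchive", "alembicarchive1")

# Point the archive at the Alembic coming from Maya (path is illustrative).
archive.parm("fileName").set("$HIP/abc/from_maya.abc")

# Turn off "Build Hierarchy Using Subnetworks" so everything lands in one
# flat subnet, then press the Build Hierarchy button.
archive.parm("buildSubnet").set(0)
archive.parm("buildHierarchy").pressButton()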

The reason you want this flattened hierarchy is so that you can merge everything into another network easily. I prefer to import all my data in one place, then Object Merge it into another network to modify it, which keeps everything nice and readable. Since everything is flat, on the Object Merge node you can just merge in /obj/alembicarchive1/* and everything should appear nicely. In order to preserve the original hierarchy for export, though, we’re going to want to enable “Create Per-Primitive Path” so that a primitive attribute called objname appears on every packed Alembic primitive we’re reading into the scene.
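Scripted, that step might look like the sketch below. The internal name of the “Create Per-Primitive Path” toggle is a guess based on its UI label, so treat it as a placeholder and check the real parameter name in your build.

# Sketch: merge the flattened archive into a working geo network.
import hou

geo = hou.node("/obj").createNode("geo", "abc_work")
merge = geo.createNode("object_merge", "merge_alembic")

# Pull in everything inside the flattened Alembic Archive.
merge.parm("objpath1").set("/obj/alembicarchive1/*")

# Enable "Create Per-Primitive Path" so each packed prim gets an objname
# attribute. The parameter name here is a guess -- hover over the toggle
# in the UI to confirm the internal name before using this.
merge.parm("createprimpath").set(1)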

This objname attribute isn’t storing exactly what we want, though. It’s storing the path to the SOP in Houdini’s terms, not the path the object had in the original export. If you look inside the Alembic Archive you originally created and click on any of the Alembic nodes inside, you can see that there’s an objectPath parameter storing the original Alembic path. We just need to get this value for each primitive, and since we already have objname stored, it’s not too hard to pull objectPath from each packed primitive with a little VEX (don’t worry).

Drop down an Attribute Wrangle after your Object Merge, and try the following code:

// Fetch the Houdini node path the Object Merge stored on this packed primitive.
string objname = prim(geoself(), "objname", @primnum);

// The objectPath parameter on that node holds the original Alembic path.
string channame = objname + "/objectPath";

// Read the parameter value and store it as a primitive string attribute.
s@abcPath = chs(channame);

First we fetch the Houdini object path (stored as objname) on each primitive. Next, we build the name of the channel we want to query, which is objname + "/objectPath", the channel that contains the original Alembic path of the object. We then use chs() to grab the value of that channel and store it as a primitive string attribute called abcPath. Not too bad. If you look at the object in the Geometry Spreadsheet now, you can see that each primitive has string attributes pointing to both its Houdini path and its original Alembic path.

There’s another little catch here. We’re probably going to want to convert this geometry to regular Houdini geometry, since there’s not a lot we can do with packed primitives (or polygon soups, depending on what you’re trying to do). So we’ll need to append an Unpack SOP. When you unpack, though, you’ll notice your primitive attributes are gone. To transfer the primitive attribute onto the newly unpacked polygon soups, just enter abcPath into the Transfer Attributes parameter on the Unpack SOP. Now every primitive should have the correct attribute, and you can go ahead and drop down a Convert SOP to convert everything into standard Houdini geometry.
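If you’re automating this part as well, here’s a quick sketch. Node names are illustrative, and I’m assuming transfer_attributes is the internal name of the Transfer Attributes parameter, so verify that on your Unpack SOP.

# Sketch: unpack the packed prims while carrying abcPath through, then convert.
import hou

geo = hou.node("/obj/abc_work")         # the working network from earlier
wrangle = geo.node("attribwrangle1")    # the wrangle that wrote s@abcPath

unpack = geo.createNode("unpack")
unpack.setFirstInput(wrangle)
# Keep the primitive attribute on the unpacked geometry.
unpack.parm("transfer_attributes").set("abcPath")

# Convert packed geo / polygon soups into standard Houdini polygons.
convert = geo.createNode("convert")
convert.setFirstInput(unpack)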

You can go ahead and smash up the geometry to your heart’s content now, as long as you maintain those primitive attributes. When you’re done and it’s time to export an Alembic, all you have to do on the Alembic ROP is select “Build Hierarchy From Attribute” and type in abcPath as your Path attribute. If you load the exported .abc file back into Maya, you should see that the object names and their hierarchical relationships are unchanged (although your pivots will all be at the origin, and the transforms frozen). Now your Maya team won’t freak out when everything in your Houdini scene comes in as one giant unrecognizable object!
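For completeness, the ROP side can be scripted too. I’m assuming build_from_path and path_attrib are the internal names for “Build Hierarchy From Attribute” and the Path attribute field, and the SOP path below is made up, so verify both:

# Sketch: export the Alembic, rebuilding the hierarchy from abcPath.
import hou

rop = hou.node("/out").createNode("alembic", "abc_to_maya")
rop.parm("filename").set("$HIP/abc/to_maya.abc")

# Export a single SOP's geometry (toggle + path; names and path assumed).
rop.parm("use_sop_path").set(1)
rop.parm("sop_path").set("/obj/abc_work/OUT")

# "Build Hierarchy From Attribute" using our primitive attribute.
rop.parm("build_from_path").set(1)
rop.parm("path_attrib").set("abcPath")

rop.render()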

Mar 21, 2016
 

I’m trying to build a system right now that can automatically substitute environment variables for paths on a per-scene basis. When a user opens a Maya file, I want to parse that file and set temporary environment variables called $SHOW, $SEQ and $SHOT that point to the show’s root folder, the sequence’s folder in that show, and the shot in that sequence, respectively. The variable names and paths I’m trying to get are pretty specific to the current pipeline I’m working on, but this essentially lets me use relative paths in Maya WITHOUT using Maya’s outdated and inflexible workspace system. I don’t want an individual Maya project for every single shot and asset in the entire show, and I don’t want all of my Maya scenes to be forced to live within a single workspace, either.

I’ve solved this problem before by using symbolic links to basically force all of the Maya projects generated for every single shot and asset to link to the same place. This makes for a pretty busy-looking file system, though, and symlinks only work on Unix-based servers (i.e. not Windows). This system I’m building now looks a lot more like Houdini’s global variables… as long as I define $SHOW or $SHOT or whatever, I can path a reference or a texture to $SHOT/assets/whatever.xyz and it doesn’t necessarily have to be within a Maya workspace, and it’s also not an absolute path.
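The full write-up is behind the link, but the core idea is simple enough to sketch: hook a callback to scene-open, derive the show/sequence/shot folders from the scene path, and push the variables into the Maya session. The folder convention below (…/show/seq/shot/scenes/file.ma) is invented for illustration.

# Sketch: set $SHOW / $SEQ / $SHOT whenever a scene is opened.
# The path layout assumed here is .../<show>/<seq>/<shot>/scenes/<file>.ma
import os
import maya.cmds as cmds
import maya.mel as mel

def set_shot_vars(*args):
    scene = cmds.file(query=True, sceneName=True)
    if not scene:
        return
    parts = scene.replace("\\", "/").split("/")
    shot_dir = "/".join(parts[:-2])   # drop "scenes" and the file name
    seq_dir = "/".join(parts[:-3])
    show_dir = "/".join(parts[:-4])
    for var, path in (("SHOW", show_dir), ("SEQ", seq_dir), ("SHOT", shot_dir)):
        os.environ[var] = path
        mel.eval('putenv "{0}" "{1}";'.format(var, path))

# Re-run the parse every time a scene is opened.
cmds.scriptJob(event=["SceneOpened", set_shot_vars])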

Read more below…

Mar 11, 2016
 

A while ago I posted about how to manage viewports in Maya to prevent the screen from redrawing while doing a bake or Alembic export, which slows things down considerably. It turns out there’s a much easier way to handle this, thanks to a hidden (of course) MEL variable that the Maya docs don’t seem to mention anywhere.

The command is:

paneLayout -e -manage false $gMainPane

The variable $gMainPane is a global that refers to Maya’s main pane, the area containing the viewports. You can set the “managed” state of the entire pane to false, which simply disables refreshing it until you set it back to true. Just one line of code!

Here’s another, even easier method that’s actually part of the Python cmds module:

cmds.refresh(suspend=True)

There’s a catch, though. From the docs on the refresh() command:

Suspends or resumes Maya’s handling of refresh events. Specify “on” to suspend refreshing, and “off” to resume refreshing. Note that resuming refresh does not itself cause a refresh — the next natural refresh event in Maya after “refresh -suspend off” is issued will cause the refresh to occur. Use this flag with caution: although it provides opportunities to enhance performance, much of Maya’s dependency graph evaluation in interactive mode is refresh driven, thus use of this flag may lead to slight solve differences when you have a complex dependency graph with interrelations.

So make sure you test each method with different kinds of bakes before you commit to any one solution.
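Whichever method you settle on, it’s worth wrapping the slow operation in a try/finally so a failed bake can’t leave the UI stuck unrefreshed. A minimal sketch of both approaches together (the bake call in the usage comment is just an example):

# Sketch: suspend viewport refreshing around a slow operation, then restore it.
import maya.cmds as cmds
import maya.mel as mel

def run_without_refresh(func):
    # Method 1: suspend refresh events entirely.
    cmds.refresh(suspend=True)
    # Method 2: unmanage the main pane (test each method separately).
    mel.eval('global string $gMainPane; paneLayout -e -manage false $gMainPane;')
    try:
        func()
    finally:
        mel.eval('global string $gMainPane; paneLayout -e -manage true $gMainPane;')
        cmds.refresh(suspend=False)
        cmds.refresh()  # force one redraw so the UI catches up

# Example usage (hypothetical bake):
# run_without_refresh(lambda: cmds.bakeResults("pCube1", simulation=True, t=(1, 100)))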

Feb 25, 2016
 

Last year I worked on a Vikings teaser for King and Country over in Santa Monica. The spot was mostly live-action, but there were a bunch of CG snakes that needed to be added in post, and so I finally got to flex my rigging muscles a little bit.

First, here’s the spot:

Snakes seem like a simple problem to solve, since they don’t have shoulders or hips or any of the other nasty bits that are hard to rig on humans, but the problem lies in the lack of control an animator has over a typical IK spline.

Most simple snake rigs are just that… make an IK spline, cluster the curve, let the animator sort out the rest. Maybe the animator will get lucky and there will be some kind of control hierarchy, but otherwise they’re in for a lot of counter-animating hell. IK splines also suffer from a lack of precise twisting… snakes (especially when you have big piles of them) tend to need different twist rotations along the length of the body, and IK splines can only twist linearly from start to end. Stretching them also typically results in unstable behavior, with the end joint stretching well beyond the intended values, especially when the spline curve is bent sharply.

Click below for more details…

May 18, 2015
 

I’ve implemented most of my planned pipeline, finally. I just want to do a brief walkthrough to show off what this set of tools is capable of doing.

The pipeline uses symlinks to allow each asset or shot to exist inside its own self-contained Maya project workspace, while linking together key directories in those workspaces to all point towards the same repository. This way, textures can all be found in a single location, cache data can be shared between departments (Animation, Lighting, etc), and renders can all be easily found in a single place, without confusing Maya by rendering to paths outside of the workspace.
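The linking itself is nothing exotic; here’s a stripped-down sketch of the idea (the folder names and paths are placeholders, not the pipeline’s actual layout):

# Sketch: point key Maya workspace folders at a shared repository via symlinks.
# Folder names and paths are placeholders for illustration.
import os

def link_shared_dirs(workspace, repo):
    for name in ("sourceimages", "cache", "images"):
        target = os.path.join(repo, name)      # shared repository location
        link = os.path.join(workspace, name)   # inside the Maya project
        if not os.path.isdir(target):
            os.makedirs(target)
        if not os.path.islink(link) and not os.path.exists(link):
            os.symlink(target, link)

# link_shared_dirs("/jobs/show/shots/sh010/maya", "/jobs/show/shared")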

All project-level commands can be run either from a Linux terminal or from the web interface mentioned in the previous post. New project structures are generated using a master XML file that defines folder names, hierarchies, and permission bits for each folder, to prevent regular users from writing files where they shouldn’t.
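As a rough illustration of what that XML-driven setup could look like (the schema and tag names here are invented, not the actual master file):

# Sketch: build a project tree from an XML description of folders/permissions.
# Hypothetical schema: <folder name="assets" mode="775"><folder name="publish" mode="755"/></folder>
import os
import xml.etree.ElementTree as ET

def build_tree(node, parent_path):
    path = os.path.join(parent_path, node.get("name"))
    if not os.path.isdir(path):
        os.makedirs(path)
    # Permission bits come straight from the XML, e.g. "775" -> 0o775.
    os.chmod(path, int(node.get("mode", "775"), 8))
    for child in node.findall("folder"):
        build_tree(child, path)

def create_project(xml_file, project_root):
    root = ET.parse(xml_file).getroot()
    for folder in root.findall("folder"):
        build_tree(folder, project_root)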

The File Manager handles assets and shots. Assets can be “tagged” in order to categorize them in the menu on the left. These tags are arbitrary, can be changed at any time, and don’t affect the file structure at all, so the way assets are organized can evolve fluidly without screwing up references. Asset versions are organized using a public/private system: users do whatever they need to do inside their own work folders, then “publish” the asset, which is automatically assigned a name and version number based on the department it’s being published to (such as Modeling or Rigging). Artists using these assets can automatically receive updates to their references without dealing with a “master” asset constantly being overwritten and screwing up in-progress renders or simulations. Each version has notes associated with it, and published versions get an automatic note tracing the file back to the work file it was originally saved from.

The Assets tab of the File Manager.

The Animation I/O tool handles the import/export of animation data, including cameras. Animation can be exported either as baked curve data (Maya ASCII) or as Alembic caches. The hybrid approach is meant to keep the server footprint light and the workflow flexible, since blindly caching Alembic data for everything can seriously bog down a server, not to mention the time it takes to actually write an Alembic cache in Maya. Animation is written out to shared cache folders named after the shot it was exported from, and animation exported this way can be automatically updated using the import tools. Lighters don’t even have to know what assets are supposed to be in their scenes; the Match Exported Scene button will automatically pull in references as needed when importing data (even when the data is not Alembic). A single click will make sure that a lighter’s scene exactly matches the animator’s scene (minus all the junk that animators like to leave around).

The Look Manager tool handles shading and render layers. A user can shade, apply render layers, per-layer overrides, displacement sets, etc., and then save those customizations to a Look file. This Look can then be applied to an identical (or nearly identical) asset anywhere else in the project. Materials are saved to a separate file that is then referenced into other scenes by the Look Manager. If you have multiple objects sharing the same Look, they will share the shader reference, helping the Hypershade stay as clean as possible. Shading and layer assignments are handled by object name, so if your receiving asset is named slightly differently (Asset_A vs. Asset_B) you can use a find/replace string to substitute object names. Shading in this pipeline is completely independent of lighting and can be adjusted and modified (aside from UV adjustments) anywhere, then saved as a Look for use by others. When Looks are removed or replaced, the Look Manager automatically removes unused render layers and displacement sets, as well as referenced materials files that are no longer in use.

The Look Manager interface.

The Preflight system helps keep assets and scenes conforming to pipeline structure. A series of checks is run, with presets for checks available based on department (Modeling, Rigging, Lighting, etc). Each check will be highlighted if there is a problem, which can then be fixed with an automated function. Each check maintains its own viewable log, so users can see exactly what the problems are and what is fixed if the function is run. The checks are dynamically loaded from a folder, and each check is based on a template. A pipeline operator or power user could add their own checks to this folder as long as they fit the template, and they will be automatically loaded into Preflight at next launch.
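The dynamic loading half of that is a fairly standard plugin pattern; here’s a Python 3-style sketch of how checks might be discovered (the folder layout and the Check class are assumptions, not the actual Preflight code):

# Sketch: load every check module found in a folder at launch.
import importlib.util
import os

def load_checks(checks_dir):
    checks = []
    for filename in sorted(os.listdir(checks_dir)):
        if not filename.endswith(".py") or filename.startswith("_"):
            continue
        name = os.path.splitext(filename)[0]
        spec = importlib.util.spec_from_file_location(
            name, os.path.join(checks_dir, filename))
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # Each check module is expected to follow the template and expose Check.
        if hasattr(module, "Check"):
            checks.append(module.Check())
    return checks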

Renders, once finished, can be quickly loaded into an Auto-Comp generated in Nuke or NukeAssist. The ReadGroup script creates a special group that will take all single-channel passes in a folder and combine the channels together into a single stream. The entire group of Read nodes can be updated simultaneously by picking a new folder on the ReadGroup, rather than having to manually replace each individual channel. Another button on the ReadGroup automatically replaces each main sequence type (Beauty, Tech, and Deep pass groups) with the latest available version. Another click quickly generates a composite out of the channels within, adding or multiplying channels depending on the channel names (this function assumes a VRay pipeline but could be expanded to work with Arnold or mental ray) and creating a fast comp along with a contact sheet to check passes quickly. Lighters can use this tool to make sure their passes rendered correctly, and then use the “Approve Renders” button to automatically move footage from the “incoming” renders directory to the “approved” renders directory, where compositors can use them. Compositors can keep groups of renders linked together easily for version control, and use the auto-comp as a starting point for breaking out channels in their comps.

A generated Auto-Comp, with the interface panel on the right.

The pipeline is built to be as modular as possible, so each piece doesn’t necessarily need to know what the other pieces are doing in order to function. A few core scripts automatically set the user’s workspace when files are opened or saved, and a couple optionVars maintain the current user and project folder. Everything else can be derived from the workspace, so the tools can be used piecemeal in a pipeline (for the most part) if necessary. Most configuration is done in a single settings.py file, which can be edited to configure the pipeline for just about any server setup. The goal was to make a pipeline that could be entirely feature-complete if necessary, or to have the tools operate individually as a layer on top of an existing pipeline.

Sorry about the big long bragpost; I just wanted a place to document what I’ve been spending all these months working on, since it’s hard to put something like this in a demo reel!

Jan 21, 2015
 

I’m working on a production right now that involves sending absolutely enormous animated meshes (6.5 million polygons on average, with a changing point count every frame) out of Houdini and into Maya for rendering. Normally it would be best to just render everything directly out of Houdini, but sometimes you don’t have as many Houdini licenses (or artists) as you’d like, so you have to make do with what you have.

If the mesh were smaller, or at least not animated, I’d consider using the Alembic format, since V-Ray can render it directly as a VRayProxy, but for a mesh this dense animating over a long sequence, the file sizes would be impossibly huge, since Alembics can’t be written out as per-frame sequences. Try loading a 0.5 TB Alembic over a small file server while 20 machines render the sequence and you can see why Alembic might not be ideal for this kind of situation.

Hit the jump below to see the solution.

Mar 25, 2014
 

I’ve been messing around with Ptex in VRay for Maya and ran into a particularly weird little problem involving a normal map exported from Mudbox. It’s probably easier to show than to tell (Fig. 1):

Fig. 1. Check out the wax wrapper up top… all chunky and gross.

That wax wrapper looks terrible… not at all like the original sculpt. The mesh is set to render as a subdivision surface (using the custom VRay attributes from the Attribute Editor), but the details are all mangled like it’s not subdividing the surface at all.

Even weirder is what happens when I only render a small region of the image (Fig. 2):

Fig. 2. The details are suddenly cleaner! All I changed was enabling region render.

This was really confusing, and although I’m still not 100% sure what’s going on in VRay to cause this (I’m fairly certain this is a bug), there is at least a solution.

What’s probably happening here is that choosing to render this mesh as a subdivision surface is changing the point count of the object BEFORE any Ptex information is applied to the mesh during rendering. Ptex is very sensitive to your geometry… any change in point count could potentially break things. You’re not allowed to polySmooth objects that will have Ptex textures, for example, or Maya won’t know what points to assign Ptex information to.

The way to get around this is to put the object into a VRay displacement set. Assign displacement and subdivision attributes to the displacement set like you normally would, but don’t attach a displacement map, and make sure the Displace Amount is set to 0. Rendering this way gets you the more correct image (Fig. 3):

Fig. 3. This looks a lot more like the original sculpt from Mudbox.
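If you’d rather script the workaround than click through the UI, something along these lines should do it. The vray addAttributesFromGroup command and the VRayDisplacement node are real, but the exact attribute names (like vrayDisplacementAmount) can vary between V-Ray versions, so treat those as assumptions and double-check in the Attribute Editor.

# Sketch: drop a Ptex'd mesh into a VRayDisplacement set with zero displacement,
# so subdivision happens at render time without breaking the Ptex mapping.
import maya.cmds as cmds

def make_null_displacement_set(mesh_shape, set_name="ptexSubdivSet"):
    disp_set = cmds.createNode("VRayDisplacement", name=set_name)
    cmds.sets(mesh_shape, add=disp_set)

    # Add the V-Ray subdivision and displacement attribute groups to the set.
    cmds.vray("addAttributesFromGroup", disp_set, "vray_subdivision", 1)
    cmds.vray("addAttributesFromGroup", disp_set, "vray_displacement", 1)

    # No displacement map attached; zero the amount. The attribute name is an
    # assumption -- confirm it under Extra VRay Attributes on the set node.
    if cmds.attributeQuery("vrayDisplacementAmount", node=disp_set, exists=True):
        cmds.setAttr(disp_set + ".vrayDisplacementAmount", 0.0)
    return disp_set

# make_null_displacement_set("bottleShape")  # hypothetical shape name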

This seems like buggy behavior to me, more than some technical thing I’ve overlooked, but thankfully the workaround is pretty painless. If you have any better insight into what exactly is happening here, or if there’s a less hacky way to prevent it, I’d love to hear it.

Apr 15, 2013
 

My latest gigantic Houdini project is finally live! I was the Technical Director for this one, which also means Houdini tube effects, lighting, shading, rendering, etc. Thanks to my fellow Houdini artist, Alvaro Segura, for handling the inky effects, as well as for helping me to refine the tube generator OTL I spent so long constructing.

Feb 11, 2013
 

Many operations in Maya will run faster if Maya doesn’t have to refresh the viewport while running them. For example, if you switch the viewport to only show the Graph Editor before baking animation, or before caching particles or Alembic geometry, the operation will happen much faster than if Maya had to actually display the geometry for each frame as it’s being baked. There is probably a way via the API to tell Maya’s viewport not to refresh, but since I don’t know shit about the API, here’s a workaround using a few of Maya’s less-documented MEL commands and some Python.

I wrote this method assuming that artists would want to cache out information frequently without it disrupting their workflow. That means I needed to first store the user’s current panel layout, then switch it to something that doesn’t require a refresh on every frame (like the Graph Editor), and then restore the previously stored layout once the operation finishes.

I prefer coding in Python, but some of the procedures I’m running are MEL functions that are found in scripts/startup and so they’re not documented and they don’t have Python equivalents. A hybrid approach is the best way to handle it, since MEL is terrible.

Click for some Python code…

Oct 25, 2012
 

A short film that I worked on with a friend of mine, director Dan Blank, is finally out to the public! I was the CG Supervisor, and I handled all of the lighting, materials and rendering in addition to the overall pipeline.

Take a look at this article from The Atlantic magazine:
http://www.theatlantic.com/video/archive/2012/10/monster-roll/263968/

The short:

And here’s the VFX breakdown:

So excited it’s finally out there and getting good reviews!